binary protocol time representation
Marc at facebook.com
Thu Jul 12 19:45:38 UTC 2007
I'd like to see relative time in milliseconds just about everywhere where
time units are specified.
While everyone should run NTP, it is not always practical, and invariably
configuration errors, wonky clocks, &c. will negate the most attentive
admin's ability to keep everything in sync. Time synchronization is
notoriously difficult in distributed applications. A distributed app that
really requires strict ordering of events should use something like a
Lamport logical clock in the distributed protocol itself rather than
relying on synchronized wall clocks.
Regarding precision and units: I don't think there is a need for
microsecond time resolution in network protocols, given that latency is
measured in 100s-1000s of us. Milliseconds allow for finer-grained set and
delete exptimes on those platforms that can support it, which I think is
reasonable. For high-volume systems, a delete exptime of a second is an
eternity. I suspect that, to the degree people use things like
set KEY 0 1 1\r\nV\r\n
delete KEY 1\r\n
it is only because they can't set a lower time. If I need a backoff or
quiesce interval, it's probably for much less than a second.
Are there any places where we either require absolute time or a time
interval greater than 49 days? For the latter, we could simply use a
uint64_t version of the timestamp, but I'd rather that be the exception
than the rule.
On 7/12/07 12:20 PM, "Dan Farina" <drfarina at gmail.com> wrote:
> On Thu, 2007-07-12 at 09:38 -0700, Dustin Sallings wrote:
>> Unix time has no timezone. Timezones are just used when displaying
>> or parsing times.
> Ah! I didn't know that nearly UTC-ness was a for-sure property of UNIX
> time. (Wikipedia says something about leap seconds being a bit wrong,
> but it's good enough for government work)
> However, we still do not dodge the bullet of what happens when the
> client or server clock becomes desynchronized. This may be an issue if
> one wants to expire objects in a few minutes due to a naughty client or
> server, and could lead to subtle bugs.
> In my recollection, in some date-time libraries it can be a veritable
> annoyance to convert to epoch time, meaning you are stuck doing the
> conversions yourself. And (this I would have been guilty of prior to
> this mailing) there may be a tendency to convert to epoch time
> /incorrectly/, especially if you miss the fact that UNIX time is always
> UTC-ish, as I would have.
> On the other hand, I have never seen a date-time library that can't
> easily get the current time and perform subtractions to get number of
> seconds. This would be roughly equivalent to using absolute time, minus
> breaking when synchronization is bad.
> Even in C, from time.h:
>> double difftime(time_t timer2, time_t timer1)
>> Returns the difference in seconds between the two times.
> And I still like the idea of watering down the semantics of the protocol
> as much as possible. Causing the semantics to change on a hard boundary
> is just asking for the possibility of rare "overflow" errors if one is
> doing arithmetic to derive time in the future, resulting in expiration
> times set in the past from the get-go.
> I realize the above objections may seem stupid ("just write the friggen
> library right and get all your machines talking to NTP!"), but I feel we
> can avoid them altogether and make the protocol epsilon simpler at the
> same time.