binary protocol time representation

Dan Farina drfarina at gmail.com
Thu Jul 12 19:20:36 UTC 2007


On Thu, 2007-07-12 at 09:38 -0700, Dustin Sallings wrote:
> 	Unix time has no timezone.  Timezones are just used when displaying  
> or parsing times.

Ah! I didn't know that near-UTC-ness was a guaranteed property of UNIX
time. (Wikipedia says something about leap seconds being handled a bit
wrong, but it's good enough for government work.)

However, we still do not dodge the bullet of what happens when the
client or server clock becomes desynchronized. This can be an issue if
one wants to expire objects a few minutes out: a naughty client or
server clock could lead to subtle bugs.

In my recollection, some date-time libraries make it a veritable
annoyance to convert to epoch time, meaning you are stuck doing the
conversions yourself. And (this I would have been guilty of prior to
this mailing) there may be a tendency to convert to epoch time
/incorrectly/, especially if you miss the fact that UNIX time is always
UTC-ish, as I did.
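
The classic way to get it wrong in C, for instance, is to feed a UTC
broken-down time to mktime(), which reads struct tm as local time. A
minimal sketch (timegm() is a glibc/BSD extension, not ISO C):

    #define _DEFAULT_SOURCE       /* for timegm()/gmtime_r() on glibc */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);

        struct tm utc_wrong, utc_right;
        gmtime_r(&now, &utc_wrong);        /* broken-down time in UTC */
        utc_right = utc_wrong;

        time_t wrong = mktime(&utc_wrong); /* read as *local* time    */
        time_t right = timegm(&utc_right); /* read as UTC (correct)   */

        /* The difference is your local UTC offset: silent clock skew. */
        printf("conversion error: %.0f seconds\n", difftime(wrong, right));
        return 0;
    }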

On the other hand, I have never seen a date-time library that can't
easily get the current time and perform a subtraction to get a number
of seconds. This would be roughly equivalent to using absolute time,
minus the breakage when clock synchronization is bad.

Even in C, from time.h:
> double difftime(time_t timer2, time_t timer1)
>         Returns the difference in seconds between the two times.

And I still like the idea of watering down the semantics of the protocol
as much as possible. Having the semantics change at a hard boundary is
just asking for rare "overflow" errors when one does arithmetic to
derive a time in the future, resulting in expiration times that are in
the past from the get-go.
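
For what it's worth, the text protocol already has such a boundary, as I
understand it: expiration offsets up to 30 days are taken as relative
seconds, and anything larger as an absolute UNIX timestamp. A toy sketch
of how innocent arithmetic crosses it:

    /* Toy sketch, assuming a cutoff like the text protocol's: offsets
     * up to 30 days are relative, larger values are read as absolute
     * UNIX timestamps. */
    #include <stdio.h>
    #include <time.h>

    #define MAX_RELATIVE (60 * 60 * 24 * 30)   /* 30 days, in seconds */

    int main(void)
    {
        time_t now = time(NULL);
        long exptime = MAX_RELATIVE + 86400;   /* "expire in 31 days" */

        if (exptime <= MAX_RELATIVE) {
            printf("relative: expires at %ld\n", (long)now + exptime);
        } else {
            /* Interpreted as an absolute timestamp, 31 days past the
             * epoch lands in early 1970: expired from the get-go.    */
            printf("absolute: expires at %ld, but now is %ld\n",
                   exptime, (long)now);
        }
        return 0;
    }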

I realize the above objections may seem stupid ("just write the friggen
library right and get all your machines talking to NTP!"), but I feel we
can avoid them altogether and make the protocol epsilon simpler at the
same time.

df



