binary protocol time representation

Antonello Provenzano antonello at deveel.com
Thu Jul 12 12:11:53 UTC 2007


Dan,

> My more detailed opinion:
> The epoch time may have been much more useful when people were
> telnetting into memcached, but since the binary protocol will probably
> be used with program support (except in the case of masochists or subtle
> bug hunters) I would suggest just picking a single representation to
> reduce the "hair" on the protocol semantics. While the status-quo is
> relatively harmless, I think a little discussion should be paid to this
> issue while the new protocol gels.

I can't speak for other environments, but converting the .NET "tick"
representation of a date into UNIX epoch time is quite easy and takes
just two lines of code.
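
Just as a sketch (the helper name and the constant are mine, not taken
from any particular client library), the whole conversion in C# looks
something like this:

    // Convert a UTC DateTime to UNIX epoch seconds. Ticks count
    // 100-nanosecond intervals since 0001-01-01T00:00:00; the UNIX
    // epoch (1970-01-01T00:00:00Z) falls at tick 621355968000000000.
    static long ToUnixSeconds(DateTime utc)
    {
        const long UnixEpochTicks = 621355968000000000L;
        return (utc.Ticks - UnixEpochTicks) / TimeSpan.TicksPerSecond;
    }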

The main problem with the binary protocol could be a different one:
byte alignment, which under .NET is quite different from Java or C.
To interoperate with systems that are not running under .NET (or with
COM via Interop), I often need to reassemble raw bytes into integers,
and in many cases the conversion from byte buffers to integers comes
out wrong (.NET integer types are 16, 32, or 64 bits wide, and the
byte order may not match the wire format), and the result corrupts
the whole system.
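
To make that concrete, and assuming the new protocol puts integers on
the wire in network (big-endian) byte order, the safe approach in C#
is to assemble the value byte by byte instead of relying on
BitConverter, which follows the host's byte order (little-endian on
x86). The helper below is just my own sketch:

    // Decode a 32-bit big-endian (network order) value explicitly,
    // rather than via BitConverter.ToUInt32(), whose result depends
    // on the machine's endianness.
    static uint ReadUInt32BigEndian(byte[] buf, int offset)
    {
        return ((uint)buf[offset]     << 24)
             | ((uint)buf[offset + 1] << 16)
             | ((uint)buf[offset + 2] <<  8)
             |  (uint)buf[offset + 3];
    }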

My 2 c.



On 7/12/07, Dan Farina <drfarina at gmail.com> wrote:
> From the docs of the current protocol:
>
> > Some commands involve a client sending some kind of expiration time
> > (relative to an item or to an operation requested by the client) to
> > the server. In all such cases, the actual value sent may either be
> > Unix time (number of seconds since January 1, 1970, as a 32-bit
> > value), or a number of seconds starting from current time. In the
> > latter case, this number of seconds may not exceed 60*60*24*30 (number
> > of seconds in 30 days); if the number sent by a client is larger than
> > that, the server will consider it to be real Unix time value rather
> > than an offset from current time.
>
> Are we going to stick with this behavior? There was no discussion about
> changing it, but perhaps it's come time to consider nixing absolute
> (epoch) time, unless someone has a reason to keep it. It costs the
> client a minuscule bit of complexity and saves the server roughly the
> same amount unless it has to do date conversion on some foreign platform
> that doesn't make UNIX epoch time easily available.
>
> My more detailed opinion:
> The epoch time may have been much more useful when people were
> telnetting into memcached, but since the binary protocol will probably
> be used with program support (except in the case of masochists or subtle
> bug hunters) I would suggest just picking a single representation to
> reduce the "hair" on the protocol semantics. While the status-quo is
> relatively harmless, I think a little discussion should be paid to this
> issue while the new protocol gels.
>
> Justification: Many programming environments use something other than
> epoch time, and nowadays even C programmers are generally discouraged
> from using it in favor of the time struct. Given that everyone has a
> different way of slicing and dicing dates (NTP, UNIX
> Epoch, .NET/Microsoft "ticks", and a host of language-specific
> libraries), the shortest-mean-distance computation for "number of
> seconds in the future from the time entered" is pretty easy for all
> involved, and even in the legacy case conversion from epoch time is of
> minuscule cost.
>
> Finally, and possibly most importantly, seconds from entry time saves
> you the annoying problems of time zones and daylight saving time*
> without annoying/complexity-inducing time-zone+absolute time
> representation schemes. Yuck!
>
> df
>
> * Something the server implementation /ought/ to be immune to
>
>
>
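
For reference, the 30-day rule quoted at the top of Dan's message
amounts to something like this on the server side (a sketch with names
of my own choosing, in C# only to match the snippets above; the actual
server is written in C):

    // Values up to 30 days are offsets from "now"; anything larger
    // is treated as an absolute UNIX timestamp. (In memcached, 0
    // conventionally means "never expires".)
    static long ToAbsoluteExpiry(long value, long nowUnixSeconds)
    {
        const long SecondsIn30Days = 60 * 60 * 24 * 30; // 2592000
        if (value == 0)
            return 0; // never expires
        return value <= SecondsIn30Days
            ? nowUnixSeconds + value // relative offset
            : value;                 // already absolute
    }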

