binary protocol notes from the facebook hackathon

Marc Marc at facebook.com
Tue Jul 10 19:09:57 UTC 2007


Agreed, and as we discussed last night, for UDP relying on the last get is
difficult since you don't know the exact length in advance.  It may exceed
the residual length of the packet you are currently constructing, so then
you need to unwind it and potentially unwind the previous getq and redo it
as a get.  I think we can document that GET uncorks and that for datagram
multigets ECHO is preferable, but both ECHO and GET must be handled.  (It's
no extra work on the server side to handle either.)
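
A rough sketch of that client-side pattern over a stream socket, for anyone
following along: n-1 quiet gets followed by one regular get, so the final
response both uncorks the server and marks the end of the batch.  The opcode
values and header layout below are placeholders, not the final wire format.

import struct

OP_GET  = 0x00   # placeholder opcode, not a final assignment
OP_GETQ = 0x09   # placeholder opcode, not a final assignment

def pack_request(opcode, key):
    # hypothetical fixed header: magic byte, opcode, key length, body length
    return struct.pack("!BBHI", 0x80, opcode, len(key), len(key)) + key

def multiget(sock, keys):
    # first n-1 requests are quiet: a miss simply produces no response
    pipeline = b"".join(pack_request(OP_GETQ, k) for k in keys[:-1])
    # the last request is a plain get, so a response is guaranteed; it marks
    # the end of the batch and uncorks the server.  (for datagram multigets
    # you'd terminate with ECHO instead, as discussed above.)
    pipeline += pack_request(OP_GET, keys[-1])
    sock.sendall(pipeline)
    # ... then read responses until the reply for that final get arrives ...

# e.g. multiget(sock, [b"user:1", b"user:2", b"user:3"]) on a connected
# TCP socket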

On 7/10/07 9:38 AM, "Paul Querna" <chip at corelands.com> wrote:

> Brad Fitzpatrick wrote:
> ....
>> > COMMANDS:  (for cmd byte)
>> > 
>> >   get    - single key get (no more multi-get; clients should pipeline)
>> >   getq   - like get, but quiet.  that is, on a cache miss, return nothing.
>> > 
>> >       Note: clients should implement multi-get (still important for
>> >             reducing network roundtrips!) as n pipelined requests, the
>> >             first n-1 being getq, the last being a regular
>> >             get.  that way you're guaranteed to get a response, and
>> >             you know when the server's done.  you can also do the naive
>> >             thing and send n pipelined gets, but then you could potentially
>> >             get back a lot of "NOT_FOUND!" error code packets.
> 
> This is missing the discussion about the GETQ putting the server into
> 'cork' mode, and any non-GETQ would uncork it.  This would allow the
> server to optimize its own IO without nearly as much pain.
> 
> Also, as a client author, I really would prefer just having a NOOP or
> ECHO command at the end of the bulk GETQ, rather than having to special-case
> the last request. I guess I could just send a GET for a stats key or
> something, but that seems weird.
> 
> -Paul
> 
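
To make the cork/uncork idea above concrete, here's a minimal server-side
sketch: while the client streams quiet gets, hits are only appended to an
output buffer and misses are silent; the first non-quiet command flushes the
whole buffer in a single write.  The class, opcodes, and response encoding
are illustrative stand-ins, not actual server code.

OP_GET, OP_GETQ = 0x00, 0x09        # placeholder opcodes, as in the sketch above

class Connection:
    def __init__(self, sock, cache):
        self.sock = sock            # connected client socket
        self.cache = cache          # plain dict standing in for the real store
        self.outbuf = bytearray()   # responses accumulate here while corked

    def handle(self, opcode, key):
        if opcode == OP_GETQ:
            value = self.cache.get(key)
            if value is not None:
                self.outbuf += value        # stand-in for a real hit frame
            # still corked: misses are silent, nothing is written yet
        else:
            # any non-quiet command uncorks: queue its reply, then flush the
            # whole buffer with one write
            self.outbuf += b"<reply>"       # stand-in for the real reply frame
            self.sock.sendall(self.outbuf)
            self.outbuf.clear()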

