Extensible command syntax

Tomash Brechko tomash.brechko at gmail.com
Thu Nov 8 08:23:31 UTC 2007

On Wed, Nov 07, 2007 at 14:10:35 -0800, Dustin Sallings wrote:
> 	I can't speak for append or prepend.  These are commands I've not  
> used.  I can say that I implemented the binary incr in such a way that  
> flags and expiration are both required, but depending on the  
> expiration value, it may be considered an error if the value doesn't  
> exist.  If it does, it should ignore the flags and expiration and just  
> update an existing value.
> 	That is, they only exist for the case where you're creating a new  
> entry.

But this doesn't make the scheme more flexible, because this way I
can't use INCR to both update the value _and_ refresh the entry's
expiration.  Whatever predetermined approach you choose, you close
the door on other possible uses.
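The semantics Dustin describes can be modeled roughly like this.  This
is a sketch of the described behavior only, not memcached's actual
code; the sentinel value and function shape are assumptions for
illustration:

```python
# Sketch of the described binary incr semantics: flags and expiration
# are always supplied, but only consulted when the key is being
# created; a sentinel expiration means "do not create".
# Models the behavior described above, not memcached's real code.

DO_NOT_CREATE = 0xFFFFFFFF  # assumed sentinel value, for illustration

store = {}  # key -> (value, flags, exptime)

def incr(key, delta, initial, flags, exptime):
    if key in store:
        # Existing entry: flags and exptime are ignored entirely,
        # so there is no way to refresh the expiration here.
        value, old_flags, old_exp = store[key]
        store[key] = (value + delta, old_flags, old_exp)
        return store[key][0]
    if exptime == DO_NOT_CREATE:
        return None  # treated as "not found" error
    # Creating a new entry: flags and exptime are used.
    store[key] = (initial, flags, exptime)
    return initial
```

Note how the update branch has no way to touch the stored expiration,
which is exactly the door being closed.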

I'm not totally against positional parameters; they cover the most
common cases well.  But for rarer uses it's better to also have a
flexible scheme.  It's never hard to implement: just add a new
"flexible" command, and process the rest of it in a flexible manner.
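As a concrete illustration of what "process the rest of it in a
flexible manner" could mean (the command name "setx" and the
name=value syntax below are invented for this sketch, not part of any
memcached protocol):

```python
# Hypothetical illustration: a command whose trailing parameters are
# named "name=value" pairs instead of positional.  "setx" and the
# parameter names are invented, not real memcached syntax.

def parse_flexible(line):
    """Split 'setx <key> name=value ...' into (cmd, key, params)."""
    tokens = line.split()
    cmd, key = tokens[0], tokens[1]
    params = {}
    for tok in tokens[2:]:
        name, _, value = tok.partition("=")
        params[name] = value
    return cmd, key, params

cmd, key, params = parse_flexible("setx mykey flags=3 exptime=60")
```

The server looks parameters up by name, so adding a new one later
doesn't disturb the existing positions.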

Still, I have to agree that those GPerf patches have nothing to do
with it.  However badly I want flexible text commands, I could add
them with '} else if (strcmp(...'.  It's just that I will need GPerf
to look up parameter names anyway, so I decided to use it everywhere.
But if it breaks too much, I can avoid touching the main parser
altogether; it can be reworked later if we want to.

> >For instance, 'get key key
> >key...'  returns nothing for keys that weren't found, while it
> >could return NOT_FOUND.  This alone breaks all pipelining, because
> >there's no direct correspondence between requests and replies, and
> >instead of simply counting the results one has to _compare strings_ to
> >know which result goes where.
> 	No, this get mechanism works fine, and I pipeline everything heavily 
> with great success.  You can get values back in any order and you are  
> notified when all of the results are available.

This depends on how you look at it.  I mean _sequential_ pipelining
(as pipelines actually work), while you are talking about batch
processing.  With sequential pipelining I push requests and fetch
results, and when there is a direct one-to-one correspondence between
requests and responses, I don't need any additional logic on the
client side.  I.e., if I have a list of keys, I can push them to the
server and fetch the results in order.  I don't need a hash on the
client to decide where a particular result belongs.
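The difference can be sketched in a few lines.  The one-reply-per-key
scheme below is hypothetical; the second half mimics the shape of the
real 'get' reply lines as described above:

```python
# Sketch of the pipelining argument: if the server sent exactly one
# reply per requested key (including a not-found marker for misses),
# the client could match results to keys by position alone.  With the
# real multi-get, misses produce no reply line, so the client must
# parse the key out of each reply and compare strings.

keys = ["a", "b", "c"]

# Hypothetical one-reply-per-request scheme: match by position.
replies = ["VALUE 1", "NOT_FOUND", "VALUE 3"]
by_position = dict(zip(keys, replies))  # no key comparison needed

# Multi-get style: only hits are reported, each tagged with its key.
hit_lines = ["VALUE a 1", "VALUE c 3"]
by_name = {}
for line in hit_lines:
    _, key, value = line.split()
    by_name[key] = value  # must compare key strings
missing = [k for k in keys if k not in by_name]
```

In the first scheme a miss is known as soon as its reply arrives; in
the second, a key is only known to be missing once the whole batch
has ended.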

> A get across a couple of thousand keys is a one line response in a
> case where none exist (or one message response in the binary
> protocol).

I'd rather optimize for the "found" case.  Suppose your request has a
large number of keys, and only the last one matches.  The client has
to wait till the very end (batch mode), while with pipelining it
could start processing the not-found entries right away.

> 	I assume users of my client don't generally care and the entire  
> process can occur asynchronously as they throw away the results.  If  
> they do care, I have the answer for them.  I can't infer who does  
> or doesn't care, but I do need to know when the command is processed  
> so the next one in the pipeline can begin.

That's right, no "one-size-fits-all".  But when I don't need the
result and I want to save some traffic, I want to have a way to say
so.
Another advantage of a flexible text protocol is that once it's
there, you don't have to update all text clients (Perl, PHP, etc.)
when you add a new parameter to some command, given that they have
the means to send an arbitrary text request.  I.e., it will always be

  $memcached->set($key, $val, @params);

The same is true for the binary protocol, but there you have to
provide bindings for constructing arbitrary binary requests.
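A pass-through client is easy to sketch.  The request shape below is
an illustration of the idea, not the real memcached 'set' syntax
(which uses positional flags/exptime/bytes fields):

```python
# Sketch of the "don't update every client" argument: if the client
# API forwards whatever extra parameter tokens the caller supplies
# onto the wire verbatim, a new server-side parameter needs no
# client-library release.  Illustration only, not a real client API.

def build_set_command(key, value, *params):
    """Build a text request line, appending caller-supplied
    parameter tokens verbatim after the key."""
    tokens = ["set", key, *params]
    return " ".join(tokens) + "\r\n" + value + "\r\n"

req = build_set_command("k", "v", "flags=1", "exptime=60")
```

The caller can pass any tokens the server understands, today's or
tomorrow's, without the client library knowing what they mean.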

> 	It's a given that the current protocol isn't perfect.  That's
> why we made a new one.  You should complain about that one more.  :)

BTW, is there a description of this binary protocol?

   Tomash Brechko

More information about the memcached mailing list