memcached - text protocol?

Andy Bakun memcached@lists.danga.com
Thu, 15 Jul 2004 19:46:50 -0500


Long time lurker, first (or is it second?) time poster, and I have not
looked at the memcached code in a while... I agree with Anatoly and
Michael.  But you shouldn't trust me just because I agree with them --
here are some more comments on binary vs text protocols.  

One thing you need to be less concerned about with text-based protocols
is input validation: when parsing them, you get rudimentary validation
for free.  Because the input is text, it is limited to a certain subset
of valid characters.  You can't mistake the string "foo" for a number
other than "0" (using proven library routines like atoi).  With a
binary protocol, you have to place a high degree of trust in the
clients to send well-formed data, unless you are going to validate it
anyway, in which case you're still going to be looking at a lot of
memory and doing a lot of compares.
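
For example (just a sketch, with a hypothetical parse_num helper and
fields terminated by a space or the end of the line), the text parser
rejects garbage as a side effect of converting it:

    #include <errno.h>
    #include <stdlib.h>

    /* Hypothetical helper: parse a decimal field from a text command
     * line.  Anything that isn't a well-formed number is rejected
     * outright -- the "rudimentary validation for free" above. */
    static int parse_num(const char *field, unsigned long *out)
    {
        char *end;
        errno = 0;
        unsigned long v = strtoul(field, &end, 10);
        if (errno == ERANGE || end == field
                || (*end != ' ' && *end != '\0'))
            return -1;    /* "foo", "12x", overflow: all rejected */
        *out = v;
        return 0;
    }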

It can also be easier to resync the communication between the client
and server at the field level rather than only at the command level,
thanks to the known delimiters between all the fields.  This may come
up if you were reading byte-by-byte as part of a state machine (grossly
inefficient in this case, I know), rather than processing a big block
of input as a whole.
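
As a rough sketch (assuming the usual \r\n-terminated lines and a
hypothetical resync routine), getting back in sync is just a matter of
scanning for the next delimiter:

    #include <stddef.h>

    /* Hypothetical: after a malformed command, skip ahead to the next
     * line terminator so the following command parses cleanly. */
    static size_t resync_to_newline(const char *buf, size_t len)
    {
        size_t i;
        for (i = 0; i < len; i++)
            if (buf[i] == '\n')
                return i + 1;  /* resume just past the delimiter */
        return len;            /* no delimiter yet; need more input */
    }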

You can also handle simple protocol expansions easily, without having
to create a new version of the protocol that may not be backward
compatible.  Say you need to pass a number and the protocol only allows
2 bytes for it, but the range you now want is larger than 2 bytes can
hold.  With a text protocol, all you need to do is change the
definition of that field in the server -- change longs to long longs
and atoi to atoll or whatever -- and none of the clients need to change
or even know about it.  Clients that want larger numbers just send more
significant digits, and older clients don't need to change or be aware
of any changes at all.  Less work for everyone.
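
To make that server-side change concrete (a sketch only, with a
hypothetical field name; atoll is the C99 counterpart of atoi):

    /* Before: value limited to the range of long.  */
    /*   long value = atol(field);                  */

    /* After: same wire format, wider range.  Old clients keep
     * working; new clients simply send more digits. */
    long long value = atoll(field);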

Thus, a binary protocol needs to be versioned, and a versioned protocol
has more code paths and therefore a greater chance of bugs.

The client could just store everything in a structure and write that
structure directly to the socket, but only if you don't need to worry
about endian-ness issues.
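
A minimal sketch of what that conversion looks like (assuming a
hypothetical fixed-width request header), using the standard byte-order
functions:

    #include <arpa/inet.h>
    #include <stdint.h>

    /* Hypothetical request header for a binary protocol. */
    struct req {
        uint16_t cmd;
        uint32_t keylen;
    };

    /* Writing the struct as-is bakes in the sender's byte order.
     * Converting each field with htons/htonl keeps the wire format
     * identical regardless of host endianness. */
    static void encode_req(struct req *r, uint16_t cmd, uint32_t keylen)
    {
        r->cmd    = htons(cmd);
        r->keylen = htonl(keylen);
    }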

On the performance issue, there is technically no difference between

     strcmp(str1, str2)

and

     str1[0] == str2[0] 
        && str1[1] == str2[1] 
        && str1[2] == str2[2] 
        && str1[2] == 0

And in fact you might end up with the latter anyway if the compiler
does some loop unrolling and optimization (string comparisons could be
optimized to compare a series of 4-byte words rather than a series of
1-byte characters).

What's the difference between a "2 byte command" and a "2 character
command", other than a single compare (to determine the end of the
character string)?  You'd achieve the same "optimization" if all
commands were shortened to two characters.  In either case, the
algorithm to check any of the commands is effectively O(1) (since all
the commands are, by definition, the same length).  This differs from
the current implementation, which at worst is O(length of command) = 7,
and averaged over the lengths of the seven commands is 4.2.  I bet the
shorter commands get used more often, though.  Allowing two bytes per
command could be done now if "decr" and "delete" didn't overlap on the
first two bytes :)

Sure, if you allow the full range of two bytes then you could expand to
65,536 possible commands, while two characters limits you to about 1200
(depending on what you consider a "printable" or "acceptable" character
to be in a command).  The design and goal of memcached limits the
practical usefulness of additional commands, so I doubt it is going to
get close to either of those limits.  If the number of commands did
approach either of those limits, this could be made closer to optimal
for arbitrary-length strings by using a string hashing function (gperf
comes to mind).
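
To illustrate the two-character idea (a sketch only -- the collision
between "decr" and "delete" means it doesn't quite work today), the
first two command bytes can be packed into one 16-bit value so that
dispatch is a single integer compare per case:

    #include <stdint.h>

    /* Pack the first two command characters into a 16-bit value. */
    #define CMD2(a, b)  ((uint16_t)(((a) << 8) | (b)))

    static int dispatch(const char *cmd)
    {
        switch (CMD2(cmd[0], cmd[1])) {
        case CMD2('g', 'e'): return 1;   /* "get" */
        case CMD2('s', 'e'): return 2;   /* "set" */
        case CMD2('a', 'd'): return 3;   /* "add" */
        case CMD2('i', 'n'): return 4;   /* "incr" */
        /* "decr" and "delete" both start with "de", so this scheme
         * would need a third byte (or renamed commands) to tell
         * them apart. */
        default:             return 0;
        }
    }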

If the goal is to save space and time, the following...

      <key>:  1 byte: length of following key, without NULL byte
                   (1-250, inclusive)
              the key
              \0 byte

...is one byte too many.  If you use the ending null byte to determine
when the key ends, you can skip using the length.  If you use the
length, you don't need the null byte.  Plus, it is ambiguous as to if
you can use null bytes in your keys (if you want to use spaces, then is
there a reason to exclude null bytes from being allowed too?)  If the
null byte is there to make passing to the hashing function easier (and
avoid a memcpy), then perhaps the hashing function could be modified to
take a length.  Short of that, this input would still need to be
validated to make sure that there was a null byte at the proper location
in order to avoid a buffer overrun or other memory access exploit
through the hashing function.  Should the clients be trusted in this
case?
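
As a sketch of what a length-taking hash looks like (FNV-1a shown
purely as a stand-in -- it may not be what memcached actually uses),
note that it never walks past the key, so no terminator is needed on
the wire and embedded null bytes are harmless:

    #include <stddef.h>
    #include <stdint.h>

    /* Hash over an explicit length instead of a NUL terminator. */
    static uint32_t hash_key(const char *key, size_t len)
    {
        uint32_t h = 2166136261u;      /* FNV offset basis */
        size_t i;
        for (i = 0; i < len; i++) {
            h ^= (uint8_t)key[i];
            h *= 16777619u;            /* FNV prime */
        }
        return h;
    }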

Now, whether any of these aspects of text protocols apply in this
specific case is up to the implementors to decide.  Security may be
less of a concern -- does anyone run a public memcached server? :) --
but trusting client input may still make it harder to ensure
correctness.

-- 
Andy Bakun: approaching infinity one step at a time 
        <abakun@thwartedefforts.org>