Fault tolerance and cross client compatibility

Mathias Herberts mathias.herberts at gmail.com
Tue Apr 12 05:51:22 PDT 2005


We intend to use memcached as a way of passing request results
between a producer and a consumer. For fault tolerance reasons we
intend to use several memcached servers and rely upon the rehashing
capabilities of the client APIs.

The producer is written in Perl and the consumer in Java, so I looked
at both client implementations and found that the two rehashing
techniques are incompatible.

Has anybody else experienced this and come up with patches to render
the rehashing algorithms compatible?

The code in Java reads:

                        // if we failed to get a socket from this server
                        // then we try again by adding an incrementer to the
                        // current hash and then rehashing 
                        hv += ("" + hv + tries).hashCode();

That is, the hash function used for rehashing is the native
String.hashCode, not the one chosen earlier (NATIVE, COMPAT, etc).

The code in Perl reads:

                       $hv += _hashfunc($tries . $real_key);  # stupid, but works

so the rehashing is not done on the same data: one side hashes
"<hv><tries>", the other hashes "<tries><key>".
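To make the divergence concrete, here is a small sketch in Java that
computes both rehash values for the same key. The perlHash function is
an assumption: it mimics what I understand Cache::Memcached's default
_hashfunc to be (a crc32-based hash); the key name and try count are
hypothetical.

```java
import java.util.zip.CRC32;

public class RehashDemo {
    // Assumption: Cache::Memcached's default _hashfunc is
    // (crc32($key) >> 16) & 0x7fff.
    static long perlHash(String key) {
        CRC32 crc = new CRC32();
        crc.update(key.getBytes());
        return (crc.getValue() >> 16) & 0x7fff;
    }

    // Java client rehash as quoted above:
    // String.hashCode of the concatenation "<hv><tries>".
    static long javaRehash(long hv, int tries) {
        return hv + ("" + hv + tries).hashCode();
    }

    // Perl client rehash as quoted above:
    // _hashfunc of the concatenation "<tries><key>".
    static long perlRehash(long hv, int tries, String key) {
        return hv + perlHash(tries + key);
    }

    public static void main(String[] args) {
        String key = "request:42";  // hypothetical key
        int tries = 1;
        long hv = perlHash(key);    // same start if both sides use COMPAT
        System.out.println("java rehash: " + javaRehash(hv, tries));
        System.out.println("perl rehash: " + perlRehash(hv, tries, key));
    }
}
```

Even when both clients start from the same hv, the two rehash values
are computed with different functions over different inputs, so after
a server failure the producer and consumer will in general pick
different fallback servers for the same key.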

I think this part would need to be normalized somehow, so that
different client APIs behave the same way in case of a server failure
(of course, erratic behaviour might still be observed in case of a
network partition, but that is a different failure scenario).


More information about the memcached mailing list