Fault tolerance and cross client compatibility
Greg Whalin
gwhalin at meetup.com
Tue Apr 12 06:03:13 PDT 2005
This is a bug in the Java client, and a nice catch. I have fixed the
code in the head of the repository so the rehashing matches the Perl
way of doing things. Not tested, but it should work.
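For anyone following along, a minimal sketch of the idea behind the fix: rehash the tries counter concatenated with the original key through the client's configured hash function, as the Perl client does, instead of calling String.hashCode() on the previous hash value. The class and method names below are illustrative, and CRC32 merely stands in for whatever hash function (NATIVE, COMPAT, etc.) both clients would actually be configured to share.

```java
import java.util.zip.CRC32;

public class RehashDemo {
    // Hypothetical stand-in for the client's configured hash function.
    // CRC32 is used here only because both a Perl and a Java client
    // could compute it identically; the real clients offer several
    // selectable hash functions.
    static long hash(String s) {
        CRC32 crc = new CRC32();
        crc.update(s.getBytes());
        return crc.getValue();
    }

    // Rehash the way the Perl client does: hash(tries . key).
    // Any client using the same hash function will then fail over
    // to the same replacement server.
    static long rehash(long hv, int tries, String key) {
        return hv + hash(tries + key);
    }

    public static void main(String[] args) {
        String key = "some_request_id";
        int servers = 3;
        long hv = hash(key);
        // Simulate the first chosen server being down: one retry.
        long newHv = rehash(hv, 1, key);
        System.out.println("initial bucket: " + (hv % servers));
        System.out.println("retry bucket:   " + (newHv % servers));
    }
}
```

Because the rehash input is derived only from the tries counter and the original key (not from a language-specific hashCode), the Perl producer and Java consumer agree on which server a key moves to after a failure.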
Mathias Herberts wrote:
> Hi,
>
> we intend to use memcached as a way of passing requests results
> between a producer and a consumer. For fault tolerance reasons we
> intend to use several memcached servers and rely upon the rehashing
> capabilities of client APIs.
>
> The producer is written in Perl and the consumer in Java, so I looked
> at both client implementations and found that the two rehashing
> techniques are incompatible.
>
> Has anybody else experienced this and come up with patches to render
> the rehashing algorithms compatible?
>
> The code in Java reads:
>
> // if we failed to get a socket from this server
> // then we try again by adding an incrementer to the
> // current hash and then rehashing
> hv += ("" + hv + tries).hashCode();
>
> that is, the hashCode function used is Java's native String one and not
> the one chosen earlier (NATIVE, COMPAT, etc.).
>
> The code in Perl reads:
>
> $hv += _hashfunc($tries . $real_key); # stupid, but works
>
> so the rehashing is not done on the same data.
>
> I think this part would need to be normalized somehow so different
> client APIs will behave the same way in case of a server failure (of
> course erratic behaviour might still be observed in case of network
> partitioning but this is a different failure scenario).
>
> Mathias.