Fault tolerance and cross client compatibility
brad at danga.com
Wed Apr 13 09:40:46 PDT 2005
Anatoly and I are working again on the virtual bucket/tracker thing,
so soon this won't even matter. In managed mode, clients won't do
rehashing and will instead just ask one of the trackers which nodes own
which virtual buckets.
(FYI: Anatoly and I reviewed/solidified the spec yesterday, and he started
work on the memcached side. I'll be doing the first version of the
tracker, initially in Perl, Danga::Socket based (does epoll/kqueue), and
then later in C as part of memcached if we need it...)
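For the curious, the managed-mode lookup could look roughly like this. This is only a sketch of the idea described above; the spec isn't in this message, so the class names, the bucket count, and the hash choice are all invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical managed-mode lookup: instead of rehashing locally,
// a client asks a tracker which node owns a key's virtual bucket.
// All names and numbers here are invented for illustration.
class Tracker {
    private final Map<Integer, String> bucketOwner = new HashMap<>();

    void assign(int bucket, String node) {
        bucketOwner.put(bucket, node);
    }

    // The real tracker would answer this over the wire.
    String ownerOf(int bucket) {
        return bucketOwner.get(bucket);
    }
}

class ManagedClient {
    static final int NUM_BUCKETS = 1024;  // assumed bucket count

    static int bucketFor(String key) {
        // Any hash works, as long as every client uses the same one.
        return Math.floorMod(key.hashCode(), NUM_BUCKETS);
    }

    static String nodeFor(String key, Tracker tracker) {
        return tracker.ownerOf(bucketFor(key));
    }
}
```

The point is that the bucket-to-node mapping lives in one place, so a Perl producer and a Java consumer can't disagree about which server owns a key the way they do with client-side rehashing.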
On Tue, 12 Apr 2005, Mathias Herberts wrote:
> we intend to use memcached as a way of passing requests results
> between a producer and a consumer. For fault tolerance reasons we
> intend to use several memcached servers and rely upon the rehashing
> capabilities of client APIs.
> The producer is written in Perl and the consumer in Java, so I looked
> at both client implementations and found out that both rehashing
> techniques are incompatible.
> Has anybody else experienced this and come up with patches to render
> the rehashing algorithms compatible?
> The code in Java reads:
> // if we failed to get a socket from this server
> // then we try again by adding an incrementer to the
> // current hash and then rehashing
> hv += ("" + hv + tries).hashCode();
> that is, the hashCode function used is the native String one and not
> the one chosen earlier (NATIVE, COMPAT, etc).
> The code in Perl reads:
> $hv += _hashfunc($tries . $real_key); # stupid, but works
> so the rehashing is not done on the same data.
> I think this part would need to be normalized somehow so different
> client APIs will behave the same way in case of a server failure (of
> course erratic behaviour might still be observed in case of network
> partitioning but this is a different failure scenario).
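One way the Java side could be normalized to the Perl scheme is sketched below. Assumptions are mine, not either client's current behavior: CRC32 as the shared hash function (both languages ship one in their standard libraries) and the Perl client's rehash input, _hashfunc($tries . $real_key):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Sketch of a normalized rehash both clients could implement.
// Assumptions: CRC32 as the shared hash, and the Perl client's
// rehash input ($tries . $real_key). Nothing here is in either
// client today.
public class NormalizedRehash {
    static long hash(String s) {
        CRC32 crc = new CRC32();
        crc.update(s.getBytes(StandardCharsets.UTF_8));
        return crc.getValue();
    }

    // alive[i] says whether server i is reachable. Returns the
    // server index every conforming client would agree on, or -1
    // if no live server is found within the retry budget.
    static int serverFor(String key, boolean[] alive) {
        long hv = hash(key);
        for (int tries = 0; tries < 20; tries++) {
            int idx = (int) (hv % alive.length);
            if (alive[idx]) {
                return idx;
            }
            // Mirrors: $hv += _hashfunc($tries . $real_key)
            hv += hash(tries + key);
        }
        return -1;
    }
}
```

With the same key and the same set of dead servers, a Perl client computing CRC32 over the identical byte strings would land on the same fallback server, which is all the producer/consumer setup above needs.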