Better server selection client code?
jeetum at gmail.com
Mon Nov 6 18:15:03 UTC 2006
On Mon, 2006-11-06 at 09:49 -0800, Steven Grimm wrote:
> Jeetendra Mirchandani wrote:
> > Consistent hashing has a shortcoming of not distributing load evenly.
> > Some cases require the number of servers to be a power of 2 always to
> > have perfect distribution of load!
> > The paper mentions keeping more than one copy to counter this. But
> > then, keeping more copies is equivalent to losing half my machines :-O
> The paper suggests putting each server in a number of random locations
> around the unit circle to counter that problem. If you have two servers
> and each one has, say, 50 positions on the unit circle, then you will
> get load distribution that's just as good as if you had 100 servers each
> with one position on the circle.
> Brad's sample implementation does that, with the added twist of
> replacing "random" with "deterministic based on the server address"
> which of course is necessary because each client will be computing its
> own unit circle and they all need to match.
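To make the idea concrete, here is a minimal sketch of that scheme (not Brad's actual code): each server is hashed to many deterministic points derived from its address, so every client builds the same circle independently. The class and helper names are mine, and 50 replicas and MD5 are just illustrative choices.

```python
import bisect
import hashlib

def _point(key: str) -> int:
    # Map a string to a point on the "unit circle" (here, a 32-bit ring).
    return int(hashlib.md5(key.encode()).hexdigest()[:8], 16)

class ConsistentHashRing:
    def __init__(self, servers, replicas=50):
        # Each server gets `replicas` points on the ring, derived
        # deterministically from "host:port#i" instead of randomly,
        # so every client computes an identical circle.
        self._ring = sorted(
            (_point(f"{s}#{i}"), s) for s in servers for i in range(replicas)
        )
        self._points = [p for p, _ in self._ring]

    def server_for(self, key: str) -> str:
        # Walk clockwise to the first server point at or after the key.
        idx = bisect.bisect(self._points, _point(key)) % len(self._ring)
        return self._ring[idx][1]
```

With enough points per server, the arcs each server owns average out, which is how two servers with 50 points each get load distribution comparable to 100 single-point servers.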
Yes, I totally agree.
But my point was in the context of load distribution.
From the paper, section 2.2:
For technical reasons detailed in , it is quite important to make a
small number of copies of each cache point--that is, to map several
copies of each cache to different ``random'' points on the unit circle.
This produces a more uniform distribution of URLs to caches.
I had also suggested a deterministic way of picking a point, based on
the largest available free arc on the unit circle. But as Andrew
pointed out, this distributes load uniformly only when the number of
servers is a power of 2.
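A quick sketch shows why the largest-arc strategy only evens out at powers of 2 (this is my toy reconstruction of the scheme, not the code I originally posted): each new server lands at the midpoint of the biggest free arc, so the arcs only become equal again once every arc has been split.

```python
def arc_lengths(points):
    # Lengths of the arcs between consecutive points on a unit circle.
    pts = sorted(points)
    n = len(pts)
    if n == 1:
        return [1.0]
    return [(pts[(i + 1) % n] - pts[i]) % 1.0 for i in range(n)]

def place(n):
    # Place n servers one at a time, each at the midpoint of the
    # largest free arc, starting from a single point at 0.0.
    points = [0.0]
    for _ in range(n - 1):
        pts = sorted(points)
        arcs = arc_lengths(pts)
        i = max(range(len(arcs)), key=arcs.__getitem__)
        points.append((pts[i] + arcs[i] / 2) % 1.0)
    return sorted(points)
```

With 4 servers every arc is 0.25, but with 3 servers one arc is twice the others (0.5 vs. 0.25), so the server owning the big arc takes double the load.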