Better server selection client code?

Jeetendra Mirchandani jeetum at gmail.com
Fri Oct 20 14:15:45 UTC 2006


On 10/20/06, Andrew Harbick <aharbick at aharbick.com> wrote:
> >> Agreed.  But I don't think it has to be that fancy.  I just position the
> >> servers around the unit circle at even intervals.
> >
> > But when I add a new server, I don't want to move all the servers to
> > keep the intervals the same. That would be the same as doing a modulo
> > function.
> >
> > (To elaborate: the first server moves 1/n, the second moves 2/n... and
> > so on, to make space for the new server.)
> >
> > So as a practical middle ground, I pick the biggest arc, or a random
> > one if they are all the same, resulting in a non-uniform distribution
> > of load.
>
> Ah...  I think I get it.  You're saying that the new server is positioned
> at the mid-point of the longest arc.  The result would be that only about
> half of the keys in that arc -- total_keys * (arc_length / 2), if
> arc_length is the arc's fraction of the circle -- would move.
>
> Interesting.  However, the keys aren't evenly distributed when you add a
> new server.  Go from one server to two and you have servers at 0.0 and
> 0.5.  Then go from two to three and you have servers at 0.0, 0.5, and
> either 0.25 or 0.75.  Right?

Right. At that point it is not balanced, but when you add the fourth, the
servers will be at 0, 0.25, 0.5, and 0.75.
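
To make sure we mean the same scheme, here is a rough sketch in plain
Python (not taken from any client library; the tie-breaking rule is just
my assumption) of "put the new server at the midpoint of the biggest arc":

# Each new server lands at the midpoint of the currently biggest arc on
# the unit circle; ties are broken by taking the first arc found.
def add_server(positions):
    """Return a position in [0, 1) for a new server, given existing ones."""
    if not positions:
        return 0.0
    positions = sorted(positions)
    best_start, best_len = 0.0, -1.0
    for i, start in enumerate(positions):
        end = positions[(i + 1) % len(positions)]
        length = (end - start) % 1.0 or 1.0  # a lone server owns the whole circle
        if length > best_len:
            best_start, best_len = start, length
    return (best_start + best_len / 2.0) % 1.0

ring = []
for _ in range(4):
    ring.append(add_server(ring))
    print(sorted(ring))
# prints [0.0], [0.0, 0.5], [0.0, 0.25, 0.5], [0.0, 0.25, 0.5, 0.75]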

This was exactly my concern - load is not balanced with this consistent
hashing, unless I decide to keep multiple copies of each entry - which
adds a new dimension to the choice of positions for a newly added server!
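
The usual fix described for consistent hashing is to hash each server to
many points on the circle ("virtual nodes") instead of one, so the arcs
even out statistically. A quick sketch of that idea - the server names,
point count, and use of md5 are only illustrative, not any real client's
code:

import bisect
import hashlib

def point(label):
    """Map an arbitrary string to a position in [0, 1)."""
    digest = hashlib.md5(label.encode()).hexdigest()
    return int(digest, 16) / float(16 ** 32)

def build_ring(servers, points_per_server=100):
    """Place every server at many positions on the circle."""
    ring = []
    for server in servers:
        for i in range(points_per_server):
            ring.append((point("%s-%d" % (server, i)), server))
    ring.sort()
    return ring

def server_for(ring, key):
    """Walk clockwise from the key's position to the next server point."""
    i = bisect.bisect(ring, (point(key),))
    return ring[i % len(ring)][1]

ring = build_ring(["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"])
print(server_for(ring, "user:1234"))

Adding a server then means inserting only that server's points, and it
takes a roughly equal share of keys from every existing server instead of
cutting one arc in half.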

Moreover, we *cannot* consider a solution that proactively moves a set of
keys, because there's no way of iterating over all the keys!

>
> >
> >> So assuming your keys
> >> hash with an even distribution around the unit circle, traffic to your
> >> servers is going to be even.  This way of positioning the servers only
> >> ends up moving 1/totalServer keys on the addition of a new server which
>
> This was actually an error; it should be total_keys/total_servers.  I
> meant to say "1/totalServer *percent*".  Even then, that's not entirely
> right either.
>
> Here's a picture of my method:
>     http://aharbick.com/memcached-client.jpg
>
> The blue pie pieces are keys that will end up on the new server and
> represent 35% of the keys.  I haven't worked out the math to figure out
> what percentage of the keys moves for an arbitrary bump from N servers to
> M servers.  Either way, it is less than all of them.
>
> Andy
>
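
For the open question above about what fraction moves under the
even-interval repositioning: I don't have a closed form either, but it is
easy to estimate by simulation. A throwaway sketch, assuming keys hash
uniformly and that server i simply keeps its identity when it moves from
position i/n to i/m:

import random

def owner(pos, n):
    """Index of the arc a key at position pos falls into, with n servers
    sitting at 0, 1/n, 2/n, ... (a modeling choice, not any client's rule)."""
    return int(pos * n)

def fraction_moved(n, m, samples=100000):
    """Estimate the fraction of keys mapping to a different server index
    when going from n to m evenly spaced servers."""
    moved = 0
    for _ in range(samples):
        pos = random.random()  # stands in for a uniformly hashed key
        if owner(pos, n) != owner(pos, m):
            moved += 1
    return moved / float(samples)

for n in range(1, 6):
    print("%d -> %d servers: ~%.0f%% of keys move"
          % (n, n + 1, 100 * fraction_moved(n, n + 1)))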


-- 
Regards,
Jeetu
http://www.cse.iitb.ac.in/~jeetu

"Reality is merely an illusion, albeit a very persistent one."

