Better server selection client code?
Andrew Harbick
aharbick at aharbick.com
Thu Oct 19 15:43:17 UTC 2006
> But here you can have a problem of load distribution. According to the
> algorithm, if 2 servers map to points very close on the unit circle, the
> load distribution will be screwed badly.
>
> possible solution - don't use hashing for mapping servers to the unit
> circle; instead maintain a map, and pick the longest arc to place the
> new server.
Agreed. But I don't think it has to be that fancy. I just position the
servers around the unit circle at even intervals. So assuming your keys
hash with an even distribution around the unit circle, traffic to your
servers will be even too. Positioning the servers this way only moves
1/totalServers of the keys on the addition of a new server, which is
perfect since the new machine should end up with exactly that share of
the keys anyhow. Unless I'm missing something, I don't see why you'd
need anything more complex.
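A minimal sketch of what I mean (the hash function and helper names here are mine, not anything from memcached itself): servers sit at evenly spaced points on [0, 1), keys hash onto the same circle, and a key belongs to the first server point at or after its hash, wrapping around.

```python
import hashlib
from bisect import bisect_left

def server_points(num_servers):
    """Place num_servers at even intervals on the unit circle [0, 1)."""
    return [i / num_servers for i in range(num_servers)]

def key_point(key):
    """Hash a key to a point on the unit circle (md5 is just for illustration)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) / float(1 << 128)

def pick_server(key, points):
    """Walk clockwise from the key's point to the next server point."""
    i = bisect_left(points, key_point(key))
    return i % len(points)  # wrap around past the last point

points = server_points(4)
assignments = {k: pick_server(k, points) for k in ("foo", "bar", "baz")}
```

With an even key distribution each server's arc is the same size, so the load is even without maintaining any arc-length bookkeeping.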
>
> The paper also mentions in section 2.2:
> <snip>
> For technical reasons detailed in [6], it is quite important to make a
> small number of copies of each cache point--that is, to map several
> copies of each cache to different ``random'' points on the unit circle.
> This produces a more uniform distribution of URLs to caches.
> </snip>
>
> This will also provide a good failover, as discussed here:
> http://lists.danga.com/pipermail/memcached/2006-October/002856.html
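The "copies of each cache point" trick from section 2.2 is easy to sketch too (the replica count and the `server#i` naming scheme below are just illustrative, not from the paper): each server gets several hashed points on the circle, which smooths out the distribution and means a dead server's keys spread across the survivors instead of all landing on one neighbor.

```python
import hashlib
from bisect import bisect_left

REPLICAS = 100  # copies of each cache point; more copies -> more uniform spread

def hash_point(s):
    """Hash a string to a point on the unit circle [0, 1)."""
    digest = hashlib.md5(s.encode()).hexdigest()
    return int(digest, 16) / float(1 << 128)

def build_ring(servers, replicas=REPLICAS):
    """Map several 'random' points on the circle back to each server."""
    return sorted((hash_point("%s#%d" % (srv, i)), srv)
                  for srv in servers for i in range(replicas))

def pick_server(key, ring):
    """First server point clockwise from the key's hash, with wraparound."""
    i = bisect_left(ring, (hash_point(key),))
    return ring[i % len(ring)][1]

ring = build_ring(["cache1", "cache2", "cache3"])
```

Note the failover property: dropping a server only removes its points, so any key whose clockwise successor belonged to a surviving server keeps exactly the same assignment.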
Andy
More information about the memcached mailing list