Spreading key/value pairs across multiple memcached servers

Brad Fitzpatrick brad at danga.com
Wed Dec 1 08:41:34 PST 2004


Fortunately (for how much work you'll have to do), your boss and you
misunderstand how memcached works.  :)

Memcached is a two-layer hash.  The client (your app, or rather the
library you're using) does the first layer of hashing: based on the key,
it calculates which server to use.  Then your app (via the library)
contacts that server, and that server has its own hash table inside it.
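As a rough sketch of that first-layer hash (assuming a crc32-plus-modulo
scheme along the lines Greg mentions below; the exact algorithm and the
server addresses here are illustrative, not what any particular client
library does), in Python:

    from zlib import crc32

    # Illustrative server list; the addresses are made up.
    servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

    def pick_server(key):
        # Map the key to one server deterministically: no lookup,
        # no search, no master server to consult.
        return servers[crc32(key.encode()) % len(servers)]

    print(pick_server("user:42"))  # every client computes the same answer

Because every client computes the same mapping, a given key always lands
on the same server, and no single server ever has to answer "where is
this key?" questions for the others.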

There is no search order (because it never searches), and there is no
master or primary memcached server.

You just add as many servers as you want, and the load balancing is
automatic and even.

- Brad


On Wed, 1 Dec 2004, Chris Hartjes wrote:

> I had a chat about this with my boss, and his concern is not that we
> distribute the keys evenly; his concern is distributing the requests to
> find out which server a key is sitting on.  I figure he doesn't want
> the first memcached server getting constantly pounded with requests
> for the location of a key while the second server could be helping
> out in that regard.
>
>
> On Wed, 01 Dec 2004 11:21:59 -0500, Greg Whalin <gwhalin at meetup.com> wrote:
> > A good hashing algorithm should take care of this for you.  I am under
> > the impression that the current default for the perl and php clients
> > is a new algorithm based on a crc32 checksum that gets very even key
> > distribution, so I don't see this as being an issue.
> >
> >
> >
> > Chris Hartjes wrote:
> > > My boss has requested that if we are to implement the use of memcached
> > > for our site, that we need to "load balance" the data we place in
> > > multiple memcached servers.  I guess the concern is that we would be
> > > overtaxing one memcached server while other memcached servers are
> > > being underutilized.
> > >
> > > Thoughts?  Comments?
> > >
> >
>
>
> --
> Chris Hartjes
>
> "I know monkeys, and monkeys are good people!"
>
>