Largest production memcached install?

Paul Lindner lindner at inuus.com
Fri May 4 09:07:59 UTC 2007


On Fri, May 04, 2007 at 06:41:25AM +0100, Just Marc wrote:
> Hi Steve,
> 
> A bit off topic, but I can't help wondering:
> 
> Your memcache nodes are nice and beefy boxes (32G RAM, 4 cores of 
> probably at least 2GHz each -- that's generally a good amount of power 
> for a database).  Maybe they don't have any spindles at all, but suppose 
> they each had a few disks, say up to 4.
> 
> And suppose you split (federated) your database into 100 chunks (the 
> remaining 100 boxes would be hot spares of the first 100 and could even 
> be used to serve reads) -- wouldn't that take care of all your database 
> load and pretty much eliminate the need for memcache? Wouldn't 50 such 
> boxes be enough in reality?
> 
> I do realize that 200 machines with no hard drives cost less both to buy 
> and to maintain. But what about 50? (just throwing out random numbers). 
> In the past you've also said that some of your memcache nodes do 30-60k 
> reqs/sec, which would be very high in db speak, but I assume that's the 
> exception rather than the rule, because 6 to 12 million memcache 
> reqs/sec in aggregate sounds a bit out of this world.

Don't forget about latency.  At Hi5 we cache entire user profiles that
are composed of data from up to a dozen databases.  Each page might
need access to many profiles.  Getting these from cache is about the
only way you can achieve sub-500ms response times, even with the best
DBs.
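
For illustration, here's a rough sketch of that pattern in Python using
the python-memcached client.  The key scheme, server list, TTL, and the
load_profile_from_dbs() stub are made up for the example, not our actual
code:

    import memcache

    mc = memcache.Client(['cache1:11211', 'cache2:11211'])

    def load_profile_from_dbs(uid):
        # Hypothetical stand-in for assembling a profile from the
        # (up to a dozen) backing databases on a cache miss.
        return {'uid': uid}

    def get_profiles(user_ids):
        keys = ['profile:%d' % uid for uid in user_ids]
        cached = mc.get_multi(keys)      # one round trip for many profiles
        profiles = {}
        for uid, key in zip(user_ids, keys):
            profile = cached.get(key)
            if profile is None:
                profile = load_profile_from_dbs(uid)
                mc.set(key, profile, time=300)   # arbitrary 5-minute TTL
            profiles[uid] = profile
        return profiles

The point of get_multi here is that fetching a dozen profiles costs
roughly one cache round trip instead of a dozen DB round trips.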

We're also using memcache as a write-back cache for transient data.
Data is written to memcache, then queued to the DB where it's
eventually written to long-term storage.  The effect is dramatic --
heavy write spikes are greatly diminished and we get predictable
response times.
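
A rough sketch of that write-back flow, again in Python with made-up
names -- persist_to_db() and the in-process queue stand in for whatever
durable queue and worker actually do the job:

    import memcache
    from queue import Queue

    mc = memcache.Client(['cache1:11211'])
    pending = Queue()

    def persist_to_db(key, value):
        # Hypothetical stand-in for the eventual long-term DB write.
        pass

    def write(key, value):
        mc.set(key, value)         # readers see the new value right away
        pending.put((key, value))  # DB write is deferred, off the hot path

    def flush_worker():
        # Drains the queue at its own pace, which is what smooths heavy
        # write spikes into a steady trickle of DB writes.
        while True:
            key, value = pending.get()
            persist_to_db(key, value)
            pending.task_done()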

That said, there are situations where memcached didn't work for our
requirements.  Storing friend-graph relations was one of them; that's
handled by another proprietary in-memory system.  At some point we might
consider merging some of that functionality into memcached, including:

  * Multicast listener/broadcaster protocols
  * Fixed-size data structure storage
    (perhaps done via pluggable hashing algorithms?)
  * Loading the entire contents of one server from another,
    while processing ongoing multicast updates to stay in sync
    (a rough sketch of this warm-up idea follows below)
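
To make that last item concrete, here's a very rough sketch of the
intended ordering -- plain dicts stand in for the cache, and the
hypothetical drain() call stands in for reading the multicast socket:

    def warm_up(source_items, update_stream):
        # source_items: (key, value) pairs streamed from the donor server.
        # update_stream: updates arriving via multicast during the copy;
        #   drain() is assumed to return whatever has arrived so far.
        local = {}
        buffered = []

        # Phase 1: copy the snapshot, buffering updates that arrive meanwhile.
        for key, value in source_items:
            local[key] = value
            buffered.extend(update_stream.drain())

        # Phase 2: replay the buffered updates; they are newer than the
        # snapshot, so applying them last leaves the new server in sync.
        for key, value in buffered:
            local[key] = value
        return local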

I'd be interested in working with others who want to add these types
of features to memcache.

-- 
Paul Lindner        ||||| | | | |  |  |  |   |   |
lindner at inuus.com