Largest production memcached install?

Steve Grimm sgrimm at facebook.com
Fri May 4 00:51:57 UTC 2007


Our db load averages tend to range from 0.25 to 4.5 or so, depending on
which particular hosts you're looking at. More of them at the lower end of
that range than the upper end.

When we need to do more major surgery to our memcached configuration, we do
it at the lowest-usage time of day to minimize the impact on the site. Our
cache is partitioned into different sections so we can take down part of it
at a time (to upgrade to a new memcached build, say) without losing the
whole cache.
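A minimal sketch of that kind of partitioning, assuming a made-up layout of
named pools, each with its own server list; the pool names, host addresses,
and simple modulo hashing below are illustrative only, not Facebook's actual
configuration:

    import hashlib

    # Hypothetical cache sections, each with its own list of memcached hosts.
    # Taking one section down (say, to roll out a new memcached build) only
    # loses that section's entries; the other sections are untouched.
    POOLS = {
        "profiles": ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"],
        "sessions": ["10.0.1.1:11211", "10.0.1.2:11211"],
    }

    def server_for(section, key):
        """Pick a host within a section by hashing the key."""
        servers = POOLS[section]
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return servers[h % len(servers)]

    print(server_for("profiles", "user:12345"))

A production client would normally use consistent hashing rather than plain
modulo, so that removing one host only invalidates that host's share of keys
instead of remapping most of them.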

We consider memcached a critical part of our infrastructure. The benefit of
memcached in a typical setup is to reduce the amount of database hardware
you need to support an application; if you have enough database horsepower
to run unimpaired with most of your memcached servers out of service, then
there's probably no point using memcached at all, since it without a doubt
adds extra complexity to your application code. But if you go that route
you'll probably spend many times as much money and burden yourself with a
great deal more administrative hassle (DB servers typically being more
expensive and more work to keep running smoothly than memcached servers
are).
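A minimal cache-aside sketch of how that offloading works, tying in the "we
rebuild from the database" point quoted below; the dictionaries stand in for
memcached and the database, and a real deployment would call a memcached
client and a DB layer instead:

    # Stand-ins for memcached and the backing database (illustrative only).
    CACHE = {}
    DATABASE = {"user:1": {"name": "alice"}}

    def get_user(key):
        value = CACHE.get(key)
        if value is None:          # cache miss: fall back to the database
            value = DATABASE[key]  # the DB load that spikes after losing a cache box
            CACHE[key] = value     # repopulate so the next read is a hit
        return value

    get_user("user:1")   # miss: reads the DB and warms the cache
    get_user("user:1")   # hit: served from the cache

After a memcached instance is emptied or lost, the first reads for its keys
fall through to the database and warm the cache back up, which is why the DB
load spike described below is temporary.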

-Steve


On 5/3/07 2:16 PM, "Cal Heldenbrand" <cal at fbsdata.com> wrote:

> Steve,
> 
> Just curious what are the OS load averages on your database servers?  Have you
> expanded facebook to the point where losing most of the memcache servers would
> cause your entire application to grind to a halt?
> 
> During my initial thoughts on integrating memcache into our product, I could
> see it eventually becoming a crutch and we wouldn't have enough database
> hardware to support the application anymore.  I wonder if that's a good thing
> or a bad thing? 
> 
> Thanks!
> 
> --Cal
> 
> On 5/3/07, Steve Grimm <sgrimm at facebook.com> wrote:
>> We rebuild from the database. We have enough memcached servers that losing
>> one has a relatively small effect on our cache hit rate. Not to say there's
>> no effect -- our DB load spikes up for a little while when we lose a
>> memcached server -- but we build out our infrastructure such that even at
>> peak load, repopulating an empty memcached instance or two doesn't slow
>> things down noticeably for the users.
>> 
>> -Steve
>> 
>> 
>> 
>> On 5/3/07 12:23 PM, "Murty Chittivenkata" <murty at aol.net> wrote:
>> 
>>> Steve,
>>> 
>>> are you replicating the hash data to hotspares or rebuilding in the event of
>>> failure from backend database?
>>> 
>>> 
>>> Thanks
>>> Murty
>>>>  
>>>>> 
>>>>>  
>>>>> We have a home-built management and monitoring system that keeps track of
>>>>> all our servers, both memcached and other custom backend stuff. Some of
>>>>> our other backend services are written memcached-style with fully
>>>>> interchangeable instances; for such services, the monitoring system knows
>>>>> how to take a hot spare and swap it into place when a live server has a
>>>>> failure. When one of our memcached servers dies, a replacement is always
>>>>> up and running in under a minute.
>>>>>  
>>> 
>>> 
>> 
> 
> 

