Largest production memcached install?

Steve Grimm sgrimm at facebook.com
Thu May 3 18:48:39 UTC 2007


At peak times we see about 35-40% utilization (that's across all 4 CPUs.)
But as you say, that number will vary dramatically depending on how you use
it. The biggest single user of CPU time isn't actually memcached per se;
it's interrupt handling for all the incoming packets.
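
If you want to see where that time goes on your own boxes, Linux breaks CPU
time out by category in /proc/stat, including separate irq and softirq
columns. A rough sketch in Python (assuming a kernel recent enough to expose
those fields) that samples the counters and reports the interrupt share:

import time

def cpu_times():
    # First line of /proc/stat: "cpu  user nice system idle iowait irq softirq ..."
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]

before = cpu_times()
time.sleep(5)
after = cpu_times()

delta = [b - a for a, b in zip(before, after)]
total = sum(delta) or 1
irq_share = 100.0 * (delta[5] + delta[6]) / total   # irq + softirq columns
print("interrupt handling: %.1f%% of CPU time over the sample" % irq_share)

mpstat or the hi/si columns in top will give you the same numbers
interactively.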

-Steve


On 5/3/07 11:41 AM, "Jerry Maldonado" <jerry.maldonado at apollogrp.edu> wrote:

> With the configuration you noted below, what is your CPU utilization? We are
> implementing memcached in our environment and I am trying to get a feel for
> what we will need for production. I realize that it all depends on how we are
> using it, but I am interested to see what it is based on your configuration.
>  
> Thanks,
>  
> Jerry
>  
>>  
>> -----Original Message-----
>> From: memcached-bounces at lists.danga.com
>> [mailto:memcached-bounces at lists.danga.com] On Behalf Of Steve Grimm
>> Sent: Thursday, May 03, 2007 11:33 AM
>> To: Sam Lavery; memcached at lists.danga.com
>> Subject: Re: Largest production memcached install?
>> 
> No clue if we're the largest installation, but Facebook has roughly 200
> dedicated memcached servers in its production environment, plus a small
> number of others for development and so on. A few of those 200 are hot
> spares. They are all 16GB 4-core AMD64 boxes, just because that's where the
> price/performance sweet spot is for us right now (though it looks like 32GB
> boxes are getting more economical lately, so I suspect we'll roll out some of
> those this year.)
> 
> We have a home-built management and monitoring system that keeps track of all
> our servers, both memcached and other custom backend stuff. Some of our other
> backend services are written memcached-style with fully interchangeable
> instances; for such services, the monitoring system knows how to take a hot
> spare and swap it into place when a live server has a failure. When one of
> our memcached servers dies, a replacement is always up and running in under a
> minute.
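
The general idea behind that kind of swap can be sketched without any of the
surrounding tooling: probe each active instance with the memcached text
protocol's version command, and when one stops answering, promote a spare
into its slot. The host names and one-shot loop below are hypothetical
placeholders, not a description of Facebook's system:

import socket

active = ["cache01:11211", "cache02:11211"]   # hypothetical active servers
spares = ["cache-spare01:11211"]              # hypothetical hot spare

def is_alive(hostport, timeout=2.0):
    """Return True if the memcached text protocol answers a 'version' probe."""
    host, port = hostport.rsplit(":", 1)
    try:
        with socket.create_connection((host, int(port)), timeout) as s:
            s.sendall(b"version\r\n")
            return s.recv(64).startswith(b"VERSION")
    except OSError:
        return False

for i, server in enumerate(active):
    if not is_alive(server) and spares:
        active[i] = spares.pop(0)
        # A real monitor would run this continuously and write the updated
        # list back to whatever shared configuration store the clients read.
        print("replaced dead server %s with %s" % (server, active[i]))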
> 
> All our services use a unified database-backed configuration scheme which has
> a Web front-end we use for manual operations like adding servers to handle
> increased load. Unfortunately that management and configuration system is
> highly tailored to our particular environment, but I expect you could
> accomplish something similar on the monitoring side using Nagios or another
> such app.
> 
> All that said, I agree with the earlier comment on this list: start small to
> get some experience running memcached in a production environment. It's easy
> enough to expand later once you have appropriate expertise and code in place
> to make things run smoothly.
> 
> -Steve
> 
> 
> On 5/3/07 8:06 AM, "Sam Lavery" <sam.lavery at gmail.com> wrote:
> 
>  
>> Does anyone know what the largest installation of memcached currently is?
>> I'm considering putting it on 100+ machines (solaris/mod_perl), and would
>> love to hear any tips people have for managing a group of that size (and
>> larger). Additionally, are there any particular patches I should try out
>> for this specific platform?
>>  
>>  
>> Thanks in advance,
>> Sam
>> 
> 
> 

