Largest production memcached install?
bra at fsn.hu
Thu Jun 21 13:09:10 UTC 2007
Have you considered using NAPI(Linux)/polling(FreeBSD)?
Although it can increase response times a little, it helps free the
machine from the interrupt load.
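For reference, a hedged sketch of how one might inspect interrupt load and reduce it with interrupt coalescing on Linux (the device name eth0 and the specific values are illustrative assumptions, not recommendations; NAPI itself is a driver-level feature enabled in the kernel/driver, not a runtime switch):

```shell
# Show how many interrupts each NIC queue is generating (eth0 is an assumed
# device name; adjust for your hardware):
grep eth0 /proc/interrupts

# Interrupt coalescing batches packets per interrupt, trading a little
# latency for much lower interrupt load. Values here are illustrative and
# should be tuned per workload:
ethtool -C eth0 adaptive-rx on
ethtool -C eth0 rx-usecs 100 rx-frames 64
```

Whether polling (FreeBSD) or NAPI (Linux) is available depends on the NIC driver, so the coalescing settings above are only one way to approximate the same effect.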
On 05/03/07 20:48, Steve Grimm wrote:
> At peak times we see about 35-40% utilization (that’s across all 4
> CPUs.) But as you say, that number will vary dramatically depending on
> how you use it. The biggest single user of CPU time isn’t actually
> memcached per se; it’s interrupt handling for all the incoming packets.
> On 5/3/07 11:41 AM, "Jerry Maldonado" <jerry.maldonado at apollogrp.edu> wrote:
> With the configuration you noted below, what is your CPU
> utilization? We are implementing memcached in our environment and
> I am trying to get a feel for what we will need for production. I
> realize that it all depends on how we are using it, but I am
> interested to see what it is based on your configuration.
> -----Original Message-----
> *From:* memcached-bounces at lists.danga.com
> [mailto:memcached-bounces at lists.danga.com] *On Behalf Of
> *Steve Grimm
> *Sent:* Thursday, May 03, 2007 11:33 AM
> *To:* Sam Lavery; memcached at lists.danga.com
> *Subject:* Re: Largest production memcached install?
> No clue if we’re the largest installation, but Facebook has
> roughly 200 dedicated memcached servers in its production
> environment, plus a small number of others for development and so
> on. A few of those 200 are hot spares. They are all 16GB 4-core
> AMD64 boxes, just because that’s where the price/performance sweet
> spot is for us right now (though it looks like 32GB boxes are
> getting more economical lately, so I suspect we’ll roll out some
> of those this year.)
> We have a home-built management and monitoring system that keeps
> track of all our servers, both memcached and other custom backend
> stuff. Some of our other backend services are written
> memcached-style with fully interchangeable instances; for such
> services, the monitoring system knows how to take a hot spare and
> swap it into place when a live server has a failure. When one of
> our memcached servers dies, a replacement is always up and running
> in under a minute.
> All our services use a unified database-backed configuration
> scheme which has a Web front-end we use for manual operations like
> adding servers to handle increased load. Unfortunately that
> management and configuration system is highly tailored to our
> particular environment, but I expect you could accomplish
> something similar on the monitoring side using Nagios or another
> such app.
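The hot-spare swap Steve describes could be approximated with something like the following minimal sketch (this is not Facebook's actual tool; the names `ServerPool`, `check_and_replace`, and the host strings are all illustrative assumptions):

```python
# Hypothetical sketch of a monitor that swaps a hot spare into the active
# memcached server list when a health check reports a server dead.

class ServerPool:
    def __init__(self, active, spares):
        self.active = list(active)   # servers currently serving traffic
        self.spares = list(spares)   # idle hot spares, ready to take over

    def check_and_replace(self, is_alive):
        """Replace each dead active server with a hot spare, if one is free.

        `is_alive` is a callable host -> bool (in practice, a TCP connect
        or memcached "stats" probe). Returns the (dead, spare) swaps made.
        """
        swaps = []
        for i, host in enumerate(self.active):
            if not is_alive(host) and self.spares:
                spare = self.spares.pop(0)
                self.active[i] = spare   # spare takes the dead server's slot
                swaps.append((host, spare))
        return swaps

pool = ServerPool(active=["mc1:11211", "mc2:11211"],
                  spares=["mc-spare1:11211"])
dead = {"mc2:11211"}
swaps = pool.check_and_replace(lambda h: h not in dead)
# swaps == [("mc2:11211", "mc-spare1:11211")]
```

In a real deployment the swap would also have to push the new server list out to all clients (or update the shared configuration database), since a memcached client hashes keys against its own view of the server list.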
> All that said, I agree with the earlier comment on this list:
> start small to get some experience running memcached in a
> production environment. It’s easy enough to expand later once you
> have appropriate expertise and code in place to make things run
> smoothly.
> On 5/3/07 8:06 AM, "Sam Lavery" <sam.lavery at gmail.com> wrote:
> Does anyone know what the largest installation of memcached
> currently is? I'm considering putting it on 100+
> machines (Solaris/mod_perl), and would love to hear any tips
> people have for managing a group of that size (and larger).
> Additionally, are there any particular patches I should try
> out for this specific platform?
> Thanks in advance,
Attila Nagy e-mail: Attila.Nagy at fsn.hu
Free Software Network (FSN.HU) phone: +3630 306 6758