Best practice for a web farm?

dormando dormando at rydia.net
Sat Feb 2 08:48:38 UTC 2008


> 
> Putting memcached on the same nodes as you put your apache workers
> leaves you in a position to run your memcached into swap during a
> request spike/flood and then you may as well just reboot your node
> because the performance has fallen away badly. For example, if you
> expect to run up to 200 apache workers per node with a worker size of
> 20MB, this means 4GB of ram. If you want to dedicate 1G for memcached,
> make sure you have ram leftover for the rest of your OS and cache and
> buffers. However, the longer your workers run, and depending on
> your app settings, expect your apache processes to fatten over time. So
> if your workers grow from 20MB to 60MB (I regularly see 66MB httpd
> processes in my environment), then you've created a situation where your
> workers demand 12GB during a request spike. If you don't have >12GB
> ram...uh...yeah.
> 
> My point: if you want your web nodes to *take a beating* (and I've seen
> this happen repeatedly from spambots and trackback botnets) don't put
> memcache on your webnodes. Put your memcache on nodes that are well
> protected from memory starvation... like dedicated boxes or an NFS server.

Funny, my webnodes don't do that... Regardless, anything I say assumes
the rest of your system is tuned properly. If your webserver will run
out of RAM before it runs out of CPU, you obviously don't have RAM free
to put memcached there.

If you're running 200 parallel apache workers, something else entirely
is probably going to go to shit. That's why we have things like perlbal,
nginx, etc., though those are a little outside the scope here.
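
(For anyone who wants to sanity check this on their own boxes, here's a
rough back-of-the-envelope sketch in Python. The figures are just the
ones from the quoted example above; the OS/buffers headroom is an
assumed number, so adjust for your own setup:)

    # Back-of-the-envelope RAM budget for a combined apache + memcached box.
    workers        = 200        # apache workers per node (from the example)
    worker_rss_mb  = 60         # per-worker size once they fatten up
    memcached_mb   = 1024       # RAM dedicated to memcached
    os_buffers_mb  = 512        # assumed headroom for OS, page cache, etc.
    total_ram_mb   = 4 * 1024   # 4GB box

    needed_mb = workers * worker_rss_mb + memcached_mb + os_buffers_mb
    print("need %dMB, have %dMB, headroom %dMB"
          % (needed_mb, total_ram_mb, total_ram_mb - needed_mb))
    # need 13536MB, have 4096MB, headroom -9440MB -> the box swaps and dies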

> I wouldn't worry about httpd instances thrashing the cpu, because httpd
> workers overload on a multi-cpu box pretty well. I've often watched a
> 4GB, 4-core Xeon 2.6GHz box handle 4000-10000 connections per second,
> under load 10-20 with about 400 apache workers and while it swapped a
> bit, it kept up surprisingly well. (My httpd instances were not as
> large--more like 25MB). I had my memcached instances on my NFS node,
> which never sustained much load. There were also 3 mysql servers behind
> it, too :-) I appreciated that web server a lot.

Image serving? I don't really understand how this is related. If you're
doing something CPU heavy (hence my terminology: CPU node), you're going
to have, at most, 1.5-2 "active" processes per CPU before you're at 0%
idle. Anything else and we're not talking about the same thing.
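
(Rough illustration of that rule of thumb, using the 4-core box from
the quote above; the 2x multiplier is just the upper end of the 1.5-2
figure:)

    # Rule of thumb: a CPU-bound box saturates at roughly 1.5-2 "active"
    # (runnable) processes per CPU before idle hits 0%.
    cpus = 4                 # e.g. the 4-core Xeon mentioned above
    active_per_cpu = 2       # upper end of the 1.5-2 rule of thumb
    print("~%d active processes and you're at 0%% idle"
          % (cpus * active_per_cpu))
    # anything past that just queues; it doesn't buy you throughput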

On the other hand, this is further motivation for me to go write a
really long blog post about why perlbal exists.

-Dormando

