Multithreaded status (Was: Re: best way to upgrade)
Paul Lindner
lindner at inuus.com
Thu Apr 12 14:22:26 UTC 2007
On Wed, Apr 11, 2007 at 05:03:16PM -0700, Steven Grimm wrote:
> Don MacAskill wrote:
> >Sorry to thread hi-jack, but I'd love to hear the status of the
> >multithreaded version and whether it's in heavy production use at
> >Facebook. And, if so, how it's doing, of course. :)
>
> We have been running it exclusively on all our production servers for
> several months (though without the changes that have landed in that
> branch in the last couple weeks). We found one minor bug in late
> February, but aside from that it has been trouble-free for us.
>
> It gets extremely heavy usage; during peak times some instances handle
> over 60,000 requests per second sustained, about 95% of which are "get"
> requests. We haven't hit a bottleneck yet, but at the moment it appears
> that the limiting factor will be the Linux kernel's interrupt handling.
> It appears to handle all the interrupts from incoming packets on one
> CPU, as we can see the system CPU time on one of the four CPUs exceed
> 50% while the other three are more in the 20% range. If anyone knows how
> to spread that load around, I'd be interested in that (admittedly I
> haven't actually gone out and tried to research it yet; for all I know
> it's simple to do.)
You might consider explicitly pinning the interrupt handling for each NIC
to a particular CPU:

  grep ethX /proc/interrupts    # find the IRQ number for your interface
  cd /proc/irq/$IRQ             # $IRQ is the number from the grep above
  echo 01 > smp_affinity        # hex bitmask: 01 pins the IRQ to CPU0

Of course, irqbalance may undo that if it's running.
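For the curious: smp_affinity takes a hexadecimal CPU bitmask, so the mask
for CPU n is just 1 << n. A quick sketch (cpu_mask is my own helper name,
not a standard tool):

```shell
#!/bin/sh
# smp_affinity expects a hex CPU bitmask:
# bit 0 = CPU0, bit 1 = CPU1, and so on.

# cpu_mask: print the hex mask that pins an IRQ to a single CPU.
# (Hypothetical helper, for illustration only.)
cpu_mask() {
    printf '%x\n' $((1 << $1))
}

cpu_mask 0   # 1 -> CPU0 only (what "echo 01 > smp_affinity" does)
cpu_mask 2   # 4 -> CPU2 only
cpu_mask 3   # 8 -> CPU3 only
```

With multiple NICs (or multiqueue drivers) you could hand each IRQ a
different mask to spread the load across CPUs.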
Another thing to consider is turning TSO on for your Ethernet device and
turning TCP checksums off in your /etc/sysctl.conf.
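On the NIC side, the usual way to toggle TSO is ethtool (eth0 here is
just an assumed device name; supported flags vary by driver):

```shell
# Show the current offload settings (assuming the device is eth0)
ethtool -k eth0

# Enable TCP segmentation offload; hardware checksum offload
# (tx/rx) is generally a prerequisite for TSO.
ethtool -K eth0 tx on rx on tso on
```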
On our SuSE 10 systems we're using these sysctl values after a lot of
trial and error. I'd love to hear what other people are using:
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
fs.file-max = 2097152
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_mem = 128000 200000 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 524287
net.core.wmem_default = 524287
net.core.optmem_max = 524287
net.core.netdev_max_backlog = 300000
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
kernel.sysrq = 0
kernel.core_uses_pid = 1
kernel.shmmax = 8589934592
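One thing that bit us while tuning: net.ipv4.tcp_mem is counted in pages,
while the tcp_rmem/tcp_wmem triples are in bytes. A quick sanity check on
the values above (assuming the usual 4 KiB page size):

```shell
#!/bin/sh
# net.ipv4.tcp_mem is in pages (4096 bytes on most x86 kernels);
# net.ipv4.tcp_rmem / tcp_wmem min/default/max are in bytes.
PAGE_SIZE=4096

# High-water mark from the tcp_mem line (262144 pages):
echo $((262144 * PAGE_SIZE))       # 1073741824 bytes = 1 GiB

# Per-socket receive buffer ceiling from the tcp_rmem line:
echo $((16777216 / 1024 / 1024))   # 16 MiB
```

After editing /etc/sysctl.conf, `sysctl -p` applies the values without a
reboot.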
--
Paul Lindner ||||| | | | | | | | | |
lindner at inuus.com