NON VOLATILE MEMORY, PHP extension

Steven Grimm sgrimm at facebook.com
Fri Jan 12 14:53:02 UTC 2007


I haven't observed much performance difference between localhost TCP and 
UNIX-domain sockets, actually. That's probably highly platform-dependent 
so your results may vary.

The point of the UNIX-domain socket patch, if I understand correctly, 
was security; the socket can be hidden inside a non-public directory so 
it's inaccessible to other processes on the same machine. I assume 
that's mostly an issue in shared hosting environments where you don't 
trust the other users on your server.

As to memcached being slower than APC, yes, that's obviously the case. 
Even if memcached did no work at all it would still be far slower than a 
local in-process cache. No matter what memcached does, the client still 
has to hash the key to figure out which server to talk to, formulate a 
request, transmit it over a socket, go to sleep waiting for a response, 
read the response from the socket, and unmarshal the data if it was a 
"get". When it's waiting for the response (if memcached is on the same 
host, and discounting multi-CPU hosts for the moment) you have to wait 
for the kernel to context-switch over to the memcached process. All that 
overhead is unavoidable if you're talking to any kind of external cache, 
no matter how efficient it is, and until you understand that, you are 
not going to be able to meaningfully evaluate memcached's performance.
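
To make that concrete, here is roughly what a single lookup looks like 
from PHP with the pecl/memcache client. The address, key, and fallback 
value below are placeholders for illustration, not anything from a real 
deployment:

    <?php
    // Client-side "get" path sketched above, using the pecl/memcache
    // extension. The server address is a placeholder.
    $mc = new Memcache;
    $mc->addServer('127.0.0.1', 11211);

    // Behind this one call the client hashes the key to pick a server,
    // formulates the request, writes it to the socket, sleeps until the
    // kernel schedules memcached and a response comes back, then reads
    // and unmarshals the value.
    $value = $mc->get('some_key');
    if ($value === false) {
        // Miss: recompute and store. The literal stands in for whatever
        // actually produces the real data.
        $value = 'freshly computed data';
        $mc->set('some_key', $value, 0, 300); // flags = 0, expire = 300s
    }
    ?>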

Whereas with APC, the code path looks more like: hash the key, follow a 
few pointers, and copy the data structure from the shared memory segment 
to local memory. In other words, the memcached code path has to do 
everything the APC code path has to, PLUS wait for a response from a 
separate process.
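
The same lookup against APC, for comparison (again just a sketch with 
placeholder names):

    <?php
    // No socket, no context switch: apc_fetch() follows a few pointers
    // in the shared memory segment and copies the value into this
    // process's memory.
    $value = apc_fetch('some_key');
    if ($value === false) {
        $value = 'freshly computed data';    // placeholder value
        apc_store('some_key', $value, 300);  // TTL in seconds
    }
    ?>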

If your web site fits on one web server and will do so for the 
foreseeable future, and the cache doesn't need to be accessed by non-web 
clients, don't use memcached! Use APC or EHCache or your own hashtable; 
it'll be orders of magnitude faster in some cases.

Local caches become much less compelling as soon as you add a second web 
server and the cache has to stay consistent across machines. As soon as 
you have to check the other host's cache to see if it happened to cache 
the value you want, you'll start seeing memcached pull ahead. Try 
hitting your cache from six or seven hosts with hundreds of requests a 
second each, including writes and deletes which you check for 
consistency across hosts, and you'll start to get a fairer picture.
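
From the client side that multi-host setup is trivial; the hostnames 
here are invented, but the point is that every web server hashes a 
given key to the same node:

    <?php
    // With more than one web server, every client maps a key to the
    // same memcached node, so there is exactly one copy to read,
    // write, or delete.
    $mc = new Memcache;
    $mc->addServer('cache1.example.com', 11211);
    $mc->addServer('cache2.example.com', 11211);

    // This delete is immediately visible from every web server, since
    // they all hash 'user:42' to the same node. With per-host local
    // caches you would have to invalidate on each machine separately.
    $mc->delete('user:42');
    ?>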

That said, it is entirely reasonable to combine the two approaches: use 
a local cache for infrequently-changing data (configuration settings, 
etc.) and memcached for volatile or infrequently-accessed data. That's 
how we do it.
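
In PHP the layering can be as simple as something like this. It's a 
sketch with invented names and TTLs, not our production code:

    <?php
    // Check the in-process APC cache first, fall back to the shared
    // memcached tier.
    function cached_fetch($key, Memcache $mc) {
        $value = apc_fetch($key);    // local: no socket round trip
        if ($value !== false) {
            return $value;
        }
        $value = $mc->get($key);     // shared tier, visible cluster-wide
        if ($value !== false) {
            // Keep the local copy short-lived so cross-host staleness
            // stays bounded.
            apc_store($key, $value, 60);
        }
        return $value;
    }

    $mc = new Memcache;
    $mc->addServer('127.0.0.1', 11211);  // placeholder address
    $settings = cached_fetch('site_config', $mc);
    ?>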

-Steve


Roberto Spadim wrote:
> If we could start memcached with UDP, TCP, and socket files on the 
> same command line (like: memcached -sock /tmp/memcached.sock -U 11211 
> -P 11211), I think that accessing memcached from the local machine 
> using a socket could be faster than with UDP or TCP, no?
>
> Sorry, I was replying just to Howard.
>
> howard chen wrote:
>> On 1/12/07, Roberto Spadim <roberto at spadim.com.br> wrote:
>>> Can memcached use a socket?!
>>
>> There is a patch to do so, as I remember...
>>
>> But think about it: memcached was designed to be a distributed caching
>> system. Do you have a strong reason to use a local UNIX socket?
>>
>> Below is a benchmark from the MySQL Performance Blog; its results show
>> that memcached is about 5 times slower than APC:
>>
>> Cache Performance Comparison
>> http://www.mysqlperformanceblog.com/2006/08/09/cache-performance-comparison/ 
>>
>> regards,
>


