parallelized requests

Jan Pfeifer pfjan at
Thu Jan 11 02:07:46 UTC 2007

Thanks for the Answer Reinis!

If one machine for memcached is enough for our thousands of req/s (we will have thousands, possibly tens of thousands of req/s at some point, hopefully :) ), there are still two compelling reasons for me to want it:

1) redundancy, and transparent recovery in many failover scenarios (or during software upgrades?)

2) parallelized requests: if I need 200 keys (worst-case scenario here) to build a page, and a quarter of the keys live on each of the 4 servers, the library could request them in parallel instead of querying one machine at a time. Do you know if the library does this?

I mean, in this scenario of 4 servers (yes, carefully configured), where the library automatically distributes the keys across the different servers, would the library query the 4 memcached instances in parallel? Or would it serialize the calls?
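For reference, the multi-key fetch in the PECL memcache extension takes an array of keys, so a single call covers keys that hash to different servers. A minimal sketch, assuming placeholder hostnames and reachable memcached daemons:

```php
<?php
// Sketch: multi-key get against a two-server pool (placeholder hosts).
// Requires the PECL memcache extension and running memcached daemons.
$mc = new Memcache;
$mc->addServer('host1', 11211);
$mc->addServer('host2', 11211);

// Passing an array of keys performs one multi-get; keys that hash to
// different servers are fetched from their respective servers. Whether
// those per-server fetches happen in parallel is exactly the question.
$values = $mc->get(array('key1', 'key2', 'key3'));
// $values is an associative array containing only the keys that were found.
```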

thanks :)

- jan

----- Original Message ----
From: Reinis Rozitis <roze at>
To: memcached list <memcached at>
Sent: Thursday, January 11, 2007 1:51:37 AM
Subject: Re: parallelized requests

When you request multiple keys (pass an array of keys to the get() function), 
the PHP extension fetches the items from the different servers (depending on 
the server pool added beforehand).

There are just a few things to note. You have to maintain the same pool when 
you set a cache item as when you retrieve it.

Let's say you have this PHP code:

$mc= new Memcache;
$mc->addServer('host', 11211);
$mc->addServer('host2', 11211);

And if 'host' or 'host2' goes down (or you change the configuration, e.g. 
switch a server or add a third one), the next time you do <? 
$mc->get('somekey'); ?> you will probably miss the item, because the 
calculation of which server is used to store/retrieve an item is done 
by an algorithm (the one used in the PHP PECL extension was mentioned 
before on the mailing list; I can search for it if you want) that takes 
the key's hash and the server pool as arguments.
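To illustrate why a changed pool causes misses, here is a sketch of a simple modulo-based key distribution. This is a guess at the general scheme, not the PECL extension's actual algorithm; the point is that the mapping depends on both the key's hash and the pool size:

```php
<?php
// Hypothetical modulo-based distribution: hash the key, pick a server.
// The PECL extension's real algorithm may differ, but the principle is
// the same: the chosen server depends on the key AND the pool.
function server_for_key($key, array $pool) {
    return $pool[crc32($key) % count($pool)];
}

$pool2 = array('host:11211', 'host2:11211');
$pool3 = array('host:11211', 'host2:11211', 'host3:11211');

// The same key can map to a different server once the pool changes,
// which is why reconfiguring the pool leads to cache misses.
var_dump(server_for_key('somekey', $pool2));
var_dump(server_for_key('somekey', $pool3));
```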

But if you use it as a general cache it doesn't really matter, and there is 
no need even for 4 servers. Just one memcached server can handle a huge 
amount of load (millions of keys, thousands of requests per second, and 
pretty insane traffic: one of our servers writes out nearly ~1.6-2 TB of 
cache per day, which is roughly 20-30 MB/s).

From my own experience, I prefer to separate the different memcached 
servers content-wise (you can even run a few instances on one box on 
different ports or interfaces). That way you always know which server a 
given key or content type actually resides on (you can, of course, write 
your own distribution code). There are also cases where you don't want to 
mix the cached data: for example, we use one memcached instance just for 
PHP sessions for a bunch of web frontends, because when the allowed memory 
is full, memcached evicts the least-used / oldest items; since sessions are 
generated/read/written on every page request, you could otherwise lose more 
useful cached data.
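As an illustration of the sessions-on-a-dedicated-instance setup, the PECL memcache extension ships a session save handler that can be pointed at its own server, separate from the general-purpose pool. A config sketch (the hostname is a placeholder):

```ini
; php.ini sketch: keep PHP sessions on a dedicated memcached instance
session.save_handler = memcache
session.save_path = "tcp://sessions-host:11211"
```

The general-purpose cache pool is then configured separately in application code via addServer(), so session churn never evicts ordinary cached data.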


----- Original Message ----- 
From: "Jan Pfeifer" <pfjan at>
To: "memcached list" <memcached at>
Sent: Wednesday, January 10, 2007 9:26 PM
Subject: parallelized requests

hi all,

I'm setting up a cache layer here at work that will
probably be composed of 4 servers (or maybe 2 dual-core ones). For our
use case, we might need to fetch on average 10 keys to build a web page,
but possibly up to 200 small key/values, or even more in very rare cases.
My question is: do the C or PHP libraries parallelize
the requests for keys across the different cache servers when using a
multi-key request? Or do they fetch the keys one server at a time?

thx in advance for any answers or pointers :)


Apologies if this has been asked before (it seems like a basic question to
me); this is the first time I'm using memcached, and I couldn't find
anything by searching the list.

