parallelized requests

Reinis Rozitis roze at
Thu Jan 11 02:43:00 UTC 2007

1) The transparent recovery (fallbacks - what to do when a key is missed: 
read from another source (DB) and write back to memcache) or redundancy is 
something you have to implement on the client side.. the extension itself 
marks servers as crashed only when it is unable to connect and, depending 
on the failover settings, tries the next server in the pool
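The client-side fallback described above can be sketched roughly like this. This is only an illustration, not the PHP extension's API: `cache`, `db`, and `db.fetch` are hypothetical placeholders for whatever client and data source you actually use.

```python
# Read-through fallback sketch: try the cache, fall back to the DB on a
# miss, and write the value back so the next read hits the cache.
# `cache` (get/set) and `db` (fetch) are assumed stand-in objects.
def get_with_fallback(cache, db, key, ttl=300):
    value = cache.get(key)               # try memcache first
    if value is None:                    # miss: fall back to the DB
        value = db.fetch(key)
        if value is not None:
            cache.set(key, value, ttl)   # update back to memcache
    return value
```

The important point is that none of this is automatic: the extension only reports the miss, and the recovery path is entirely your code.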

2) Actually in some cases all of the 200 keys could be on one memcached 
server. With more than one server this is also up to you - either store 
every item on all of the servers and then use random/balanced gets, or use 
the available memory more efficiently and store each item only once on one 
particular server.
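The "store each item only once" strategy boils down to a deterministic key-to-server mapping. A minimal sketch, assuming a simple CRC32-modulo hash (the real extension's hashing scheme may differ, and the server list is made up):

```python
import zlib

# Illustrative server pool - addresses are placeholders.
servers = ["10.0.0.1:11211", "10.0.0.2:11211",
           "10.0.0.3:11211", "10.0.0.4:11211"]

def server_for(key, servers):
    # Deterministic: the same key always maps to the same server,
    # so each item lives on exactly one server in the pool.
    return servers[zlib.crc32(key.encode()) % len(servers)]
```

With such a mapping the client always knows where a key lives, at the cost of losing that key's data if its server goes down - which is exactly the redundancy trade-off described above.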

But according to the extension code it's done in parallel..

You can check the mmc_exec_retrieval_cmd_multi function in memcache.c..
In short - first the extension loops through the array to work out which 
keys should be requested from which server, and then asks the required 
servers for the data.
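The two-phase multi-get described above can be sketched as follows. This is a conceptual model in Python, not the actual memcache.c code; the hash and the `fetch_from` callback are placeholders:

```python
import zlib
from collections import defaultdict

def group_keys_by_server(keys, servers):
    # Phase 1: loop through the requested keys and bucket each one
    # under the server it maps to.
    buckets = defaultdict(list)
    for key in keys:
        idx = zlib.crc32(key.encode()) % len(servers)
        buckets[servers[idx]].append(key)
    return dict(buckets)

def multi_get(keys, servers, fetch_from):
    # Phase 2: one request per server for all of its keys.
    # fetch_from(server, keys) -> dict is a stand-in for the network
    # call; the extension issues these per-server requests in parallel,
    # here they are simply looped for clarity.
    result = {}
    for server, server_keys in group_keys_by_server(keys, servers).items():
        result.update(fetch_from(server, server_keys))
    return result
```

So even with 200 keys spread over 4 servers, each server sees at most one request per multi-get rather than one request per key.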

Still, that's also something you must test yourself. In some cases, 
according to the PHP Xdebug profiler, we got better results requesting 
each key individually than passing an array with 2 or more items (up to 
100 or so per request).

On the other hand, in my opinion/experience memcached will be the last 
thing you will worry about performance/speed-wise (if you give it enough 
RAM, don't let the system run into swap and tune the TCP settings a bit). 
Profiling our whole PHP application (which relies pretty heavily on mc) we 
haven't seen anything above 0.000x - 0.00x ms ever.. The thing which 
actually causes pain is the DB backends, which always suffer from disk IO 
latency, bad SQL queries not using indexes or doing it totally wrong, or 
just dumb PHP code itself :)


----- Original Message ----- 
From: "Jan Pfeifer" <pfjan at>
To: "Reinis Rozitis" <roze at>; "memcached list" 
<memcached at>
Sent: Thursday, January 11, 2007 4:07 AM
Subject: Re: parallelized requests

Thanks for the answer Reinis!

Even if one machine for memcached is enough for our thousands of req/s (we 
will have thousands, possibly tens of thousands of reqs/s at some point, 
hopefully :) ), there are still two compelling reasons for me to want more 
than one:

1) redundancy, and transparent recovery in many scenarios of failover (or 
upgrade of software?)

2) parallelized requests: if I need 200 keys (worst-case scenario here) to 
build a page, and a quarter of the keys are on each of the 4 servers, the 
library can request them in parallel instead of querying one machine at a 
time. Do you know if the library does that?

I mean, in this scenario of 4 servers (yes, carefully configured), where 
the library automatically distributes the keys across the different 
servers, would the library query the 4 memcache instances in parallel? Or 
would it serialize the calls?

thanks :)

- jan 
