memcached limits

Steven Grimm sgrimm at
Wed Aug 30 16:37:27 UTC 2006

You can fetch as many keys as you like, subject only to memcached's 
available memory; it reads your entire "get" command into a buffer, so 
there has to be room to hold all the keys. There are also some 
structures allocated to hold data about the output, so you'll need 
memory for those too. But memory should be the only limit.
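To make the wire format concrete: a multi-key get is a single command line of space-separated keys, and the server replies with one VALUE block per hit followed by END (misses simply produce no block). Here's a minimal Python sketch of building that request and parsing the reply; the function names and layout are my own illustration, not anything from this thread:

```python
def build_get(keys):
    # A multi-key get is one line: "get key1 key2 ...\r\n".
    return b"get " + b" ".join(k.encode() for k in keys) + b"\r\n"

def parse_get_response(buf):
    # Each hit is "VALUE <key> <flags> <bytes>\r\n<data>\r\n";
    # the reply ends with "END\r\n".  Keys that missed just don't
    # appear, so absent keys are absent from the result dict.
    items = {}
    pos = 0
    while True:
        eol = buf.index(b"\r\n", pos)
        line = buf[pos:eol]
        if line == b"END":
            return items
        _, key, _flags, nbytes = line.split(b" ")
        start = eol + 2                # data begins after the header line
        end = start + int(nbytes)
        items[key.decode()] = buf[start:end]
        pos = end + 2                  # skip the \r\n after the data block
```

Since the whole reply is parsed from one buffer, this also illustrates why the only real limit is memory: both the request line and the response have to fit in buffers on each side.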

As for batching set/delete requests, there's no multi-key set or delete, 
but it is perfectly reasonable to stream more than one request to 
memcached at a time. For deletes (where you know exactly how big each 
request can be, given that memcached has a maximum key size of 250 
bytes) you could do something as simple as

for (i = 0; i < [[[number of deletes that will fit in your TCP send 
window]]]; i++)
    send "delete <key>\r\n"
for (i = 0; i < [[[same number]]]; i++)
    read the one-line "DELETED" / "NOT_FOUND" reply

Memcached won't process the deletes in parallel, but it is perfectly 
fine for there to be another delete command already waiting in its input 
buffer while it's working on the previous one.
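That streaming pattern can be sketched in Python: concatenate all the delete commands into one buffer, write it with a single send, and only then read back the replies, which memcached emits in order. The helper names below are my own illustration, not any real client library's API:

```python
import socket

def build_delete_batch(keys):
    # One "delete <key>\r\n" command per key, concatenated so the
    # whole batch goes out in a single write.
    return b"".join(b"delete %s\r\n" % k.encode() for k in keys)

def parse_delete_replies(buf):
    # Each reply is a single line, "DELETED" or "NOT_FOUND",
    # in the same order as the deletes were sent.
    return buf.rstrip(b"\r\n").split(b"\r\n")

def pipeline_deletes(sock: socket.socket, keys):
    # Stream every delete before reading any reply; memcached queues
    # the later commands in its input buffer while it works on the
    # earlier ones, so only one round-trip is paid for the batch.
    sock.sendall(build_delete_batch(keys))
    buf = b""
    while buf.count(b"\r\n") < len(keys):
        buf += sock.recv(4096)
    return parse_delete_replies(buf)
```

The batch size is the part left to tune: as noted above, you want roughly as many deletes as fit in your TCP send window, so the whole request goes out without blocking on the replies.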

Hope that helps.


Lexington Luthor wrote:
> Hi,
> Is there a limit to the number of keys I can request in a single get 
> request? I have tried a few thousand at a time and things seem to be 
> working fine, but I don't want to hit a sudden unexpected limit. My 
> client code will send as many as it can at a time to minimize the 
> round-trips. Can memcached handle that? (I see in the memcached code 
> that it reads the entire line before responding to any of the keys; 
> is that the best way?)
> Also, will sending many keys on a single request hurt the latency of 
> the response to the other clients connected to that server? Will 
> memcached block other client requests while responding to the get 
> request or will it continue normally?
> My client code is single-threaded and currently most (60%) of its time 
> is spent waiting for readline() on a socket to memcached. Is there a 
> way to batch set and delete requests like get requests to minimize the 
> number of round trips and time spent waiting?
> Thanks,
> LL
