Inconsistent performance

Steven Grimm sgrimm at facebook.com
Fri Apr 20 07:41:45 UTC 2007


Dustin Sallings wrote:
>> Of course, as you say, any multi-get that large is probably a sign of 
>> an application bug.
>
>     Why would this be considered a bug?  My client will automatically 
> combine multiple sequential get requests into a single request, so 
> it's possible that a few multi-key gets or several individual gets 
> around the same time could end up being large.

I did say "probably." Most applications don't do that kind of batching 
of requests, especially batches that span multiple independent client 
requests (which it sounds like you're doing). If you are actually
legitimately accumulating that many keys to request at once, then it's 
not a problem and you should do it. Each get request has a certain 
amount of constant server-side overhead, and by batching lots of 
requests together into one big "get" you will reduce that overhead to an 
insignificant percentage of the total time it takes to process the batch.
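To make that concrete, here's a rough sketch of the kind of client-side coalescing Dustin describes, assuming a backend that accepts a list of keys in one round trip. The names (fetch_batch, the dict standing in for the server) are made up for illustration, not any real client's API:

```python
# Hypothetical sketch of client-side get coalescing. The fixed
# per-request overhead (syscall, request parsing, event dispatch)
# is paid once for the whole batch instead of once per key.

def fetch_batch(store, keys):
    # One "multi-get": a single round trip; missing keys are
    # simply absent from the result, as with memcached's get.
    return {k: store[k] for k in keys if k in store}

# Simulated cache contents standing in for the server.
store = {"user:1": "alice", "user:2": "bob", "user:3": "carol"}

# Several logically separate gets issued close together...
pending = ["user:1", "user:2", "user:3", "user:9"]

# ...coalesced into one request.
result = fetch_batch(store, pending)
# result == {"user:1": "alice", "user:2": "bob", "user:3": "carol"}
```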

And if you're batching that many keys together, you probably don't even 
care if the server blocks, since it's most likely a sign that you're not 
sensitive to small variations in latency. But if you do care, then just 
run the MT version of the server and a big bulk get won't block smaller 
ones.

>     What's a reasonable limit on these?  It sounds like it might be a 
> good idea to prevent my client from being too aggressive in optimizing 
> gets.

With a multithreaded build, you don't necessarily need a limit since you 
can always increase the number of threads to meet any desired level of 
concurrency.
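For what it's worth, with a thread-enabled build that's just the -t option; the exact memory and port values below are placeholders:

```shell
# Start a multithreaded memcached (built with --enable-threads)
# with 4 worker threads, so a big bulk get on one connection
# doesn't block small gets arriving on others.
./memcached -t 4 -m 64 -p 11211
```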

With a non-MT build, it depends on a few things. The tolerance of the 
application for latency variation on responses to small requests is the 
biggie. Another factor is the ratio of the server's CPU speed to the 
network latency. If you have a very low-latency network and a slow 
server CPU, then you'll see high variability in latency depending on 
whether or not a huge request is being handled when a small one comes 
in. On the other
hand, if you have a fast server and a slow network, then your latency 
will usually be dominated by the network delay anyway and it won't 
matter so much how big your requests are.
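If you do decide to cap the client's aggressiveness, a simple approach is to split an accumulated key list into bounded multi-get batches. This is only a sketch; MAX_BATCH is a made-up tuning knob you'd set based on the latency tolerance and CPU/network ratio discussed above:

```python
# Hypothetical batch-size cap for a non-MT server, so no single
# multi-get monopolizes the server for too long.

MAX_BATCH = 100  # assumption: tune for your latency tolerance

def chunked(keys, size=MAX_BATCH):
    """Split a key list into multi-get batches of at most `size` keys."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

batches = list(chunked([f"k{i}" for i in range(250)]))
# 250 keys -> batches of 100, 100, 50
```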

If *all* the client requests are huge, then they're all going to take a 
while to complete no matter what (simply due to finite network 
bandwidth) and it won't matter too much what you do.

-Steve
