setMulti implementation for Java client?

Andreas Vierengel avierengel at
Wed Jun 1 23:19:34 PDT 2005

On Wednesday, 01.06.2005, at 20:39 -0400, Greg Whalin wrote:
> Kevin Burton wrote:
> > Greg Whalin wrote:
> > 
> >> get key1 key2 key3 key4
> >>
> >> as opposed to
> >>
> >> get key1
> >> get key2
> >> get key3
> > 
> > 
> > Crap.. I guess that makes sense.
> > 
> > The question becomes if there's overhead in the pool implementation or 
> > if its just TCP latency.
> > 
> > We have the same issue with the MySQL JDBC driver.  Even though we're 
> > using JDBC pooling, every SQL command takes about 1 ms.
> > 
> > Kevin
> I also wonder if the memcached server is faster at dealing w/ getMulti() 
> vs a bunch of gets (network traffic aside)?

According to my tests, the performance gain inside memcached is negligible,
but it does make a big difference on the client side, because you do one
big write and one big read as opposed to small write, read, write, read, ...
It also depends on the client-side language: the impact with the C API is
far lower than with Perl or Java.
For example, the round-trip time on 100 Mbit/s Ethernet is 0.1-0.3 ms,
so if you issue 10 get commands one after the other, you already have
1-3 ms of delay from the network alone.
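In a Java client, collapsing those 10 round trips into one is just a matter
of building a single multi-key command, since the memcached text protocol
accepts "get key1 key2 ... keyN". A minimal sketch (the helper name is mine,
not from any real client):

```java
import java.util.List;

public class MultiGet {
    // Build one multi-key request per the memcached text protocol:
    // "get <key1> <key2> ... <keyN>\r\n" -- one round trip instead of N.
    static String buildMultiGet(List<String> keys) {
        return "get " + String.join(" ", keys) + "\r\n";
    }

    public static void main(String[] args) {
        // One network write/read for three keys instead of three.
        System.out.print(buildMultiGet(List.of("key1", "key2", "key3")));
    }
}
```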

I have written my own Perl client which supports "set_multi" via
parallel connections. I had to, because we need additional features
which the standard Perl client does not provide (for example, the
failure of one memcached server invalidates the whole cache array).
The standard client is twice as fast on a single get/set, but that
was not the main problem for us.
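The parallel-connection idea would look roughly like this in Java; the
`Conn` interface and `setMulti` helper are hypothetical stand-ins for real
socket connections, just to show how the round trips can overlap:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelSetMulti {
    // Hypothetical per-connection setter; a real client would write a
    // "set" command on its own socket here.
    interface Conn { void set(String key, String value); }

    // Fan the sets out over a fixed pool so the round trips overlap
    // instead of running back to back.
    static void setMulti(Map<String, String> items, Conn conn, int parallelism) {
        ExecutorService pool = Executors.newFixedThreadPool(parallelism);
        items.forEach((k, v) -> pool.submit(() -> conn.set(k, v)));
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        // Stand-in "server": a concurrent map instead of real sockets.
        ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();
        setMulti(Map.of("a", "1", "b", "2", "c", "3"), store::put, 4);
        System.out.println(store.size());
    }
}
```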

We also have a stream which we push directly into memcached via a small
Here, we serialize a configurable number of sets into one request and
afterwards read the 10 answers. In our case we got a dramatic
performance improvement on the client side. That is notable for us
because we do about 3000-6000 sets per second at peak times.
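A sketch of that serialization step in Java, assuming the memcached text
protocol's "set <key> <flags> <exptime> <bytes>" framing (the method name is
illustrative): the client writes this buffer once, then reads the replies,
one "STORED\r\n" per set, in order.

```java
import java.util.Map;

public class PipelinedSets {
    // Serialize several text-protocol "set" commands into one request
    // buffer so the client does a single big write instead of N small
    // write/read round trips.
    static String buildPipelinedSets(Map<String, byte[]> items,
                                     int flags, int exptime) {
        StringBuilder sb = new StringBuilder();
        items.forEach((key, value) -> sb
            .append("set ").append(key).append(' ').append(flags).append(' ')
            .append(exptime).append(' ').append(value.length).append("\r\n")
            .append(new String(value)).append("\r\n"));
        return sb.toString();
    }
}
```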

The only performance hit for memcached could be a bulk request for a lot
of keys in one multi-get (10,000 gets take about 0.8 s on our hardware,
but this blocks everything else during that interval). I can live with
this, because I enhanced my client to split bulk requests into "chunks"
of configurable size and send them to memcached one after the other.
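Splitting a bulk request into chunks could be sketched like this in Java (an
illustration, not the actual client code): each chunk then becomes its own
multi-get, so no single request monopolizes the server for too long.

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedGets {
    // Split a large key list into chunks of at most chunkSize keys;
    // a huge multi-get becomes several smaller ones issued in sequence.
    static List<List<String>> chunk(List<String> keys, int chunkSize) {
        List<List<String>> chunks = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += chunkSize) {
            chunks.add(keys.subList(i, Math.min(i + chunkSize, keys.size())));
        }
        return chunks;
    }
}
```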


More information about the memcached mailing list