Poor performance when inserting - memcached 1.2.5

Brad Fitzpatrick brad at danga.com
Wed May 21 14:11:13 UTC 2008


I see no reason that it requires a new daemon.  It's just a library change.
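The point that batching can live entirely in the client library can be sketched without touching the daemon: the client already knows which node each key maps to, so it can group a list of key/value pairs by destination node and send one batch per node instead of one round trip per key. Below is a minimal illustration of that grouping step; `pick_node` and `set_multi` are hypothetical names (not the actual Cache::Memcached API), and it is written in Python for brevity rather than the Perl the original jobs use:

```python
import hashlib

def pick_node(key, nodes):
    # Hypothetical node selection: hash the key and take it modulo the
    # node count, mirroring how a memcached client maps keys to servers.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

def set_multi(pairs, nodes):
    # Group key/value pairs by destination node so each node can be sent
    # one pipelined batch instead of one network round trip per key.
    batches = {node: [] for node in nodes}
    for key, value in pairs.items():
        batches[pick_node(key, nodes)].append((key, value))
    return batches

nodes = ["10.0.0.1:11211", "10.0.0.2:11211"]
pairs = {"trade:1": "...", "trade:2": "...", "trade:3": "..."}
batches = set_multi(pairs, nodes)
```

A real client would then write each batch down the corresponding socket back-to-back before reading the replies; no controlling daemon is involved, which is why this is "just a library change".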

On Wed, May 21, 2008 at 1:55 AM, Paul McGrath <bytesemantics at gmail.com>
wrote:

> Hi all
>
> I've been evaluating memcached for use in some batch-based trading systems
> at a client of mine.
>
> We have recognised that we are continually reloading a great deal of data
> from the database, so it seemed obvious to employ a system-wide caching
> strategy.
>
> The implementation of these batch jobs is mostly Perl - and so memcached
> seemed like quite a good fit.
>
>
> Unfortunately we have found that the tool is not suitable for our purposes
> in its current form. We are performing a large number of insertions into
> memcached (partly dictated by the limit on key/value pair size) and are
> finding that performance suffers significantly when inserting data into
> the cache.
>
> I'd like to suggest a change to the existing API (a similar request I made
> to the JavaSpace community several years ago, which was adopted): support
> bulk inserts, i.e. the ability to send a list of key/value pairs to be
> inserted into memcached in a single operation. This would reduce the
> overall network latency/overhead, which is what is killing our
> performance.
>
> Given the ability to pool memcached daemons, I believe this would require
> a small amount of re-architecture of memcached, whereby a controlling
> daemon allocates the data to the appropriate memcached node (when
> configured in a multi-node manner).
>
>
> Any thoughts? And (hoping) has anyone implemented this yet?
>
>
> Thanks
> Paul
>
>
