Wed, 17 Dec 2003 12:55:08 -0500
Ah! I thought you meant on the server level.
On the API level this would be rather easy to implement (at least in the PHP
API; I'm sure the Perl API could do it too).
It would just be a matter of writing a new method that (pseudo PHP code; I
don't know how to write it in Perl):
- Took all of the keys in as an array ($del_keys)
- Looped through $del_keys, running the hash method on each key to get its
server, and built an array of keys for each server ($servers[$server_name][]
= $key)
- Then looped through the $servers array: connected to each server, issued
its deletes, then disconnected
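A minimal sketch of those steps in PHP, assuming hypothetical names throughout: key_to_server() stands in for whatever hash method the real API uses (plain crc32 mod server count here), and the connect/delete/disconnect calls are left as comments since the actual API calls would go there:

```php
<?php
// Hypothetical stand-in for the API's hash method: map a key to one
// of the configured servers. The real API's hash may differ.
function key_to_server($key, $server_list) {
    return $server_list[crc32($key) % count($server_list)];
}

// Group the keys by server, then handle each server's batch in turn.
function multi_delete($del_keys, $server_list) {
    $servers = array();
    foreach ($del_keys as $key) {
        $server = key_to_server($key, $server_list);
        $servers[$server][] = $key;   // one bucket of keys per server
    }
    foreach ($servers as $server => $keys) {
        // connect to $server here
        foreach ($keys as $key) {
            // issue delete for $key here
        }
        // disconnect from $server here
    }
    return $servers;  // returned only so the grouping is visible
}
```

The grouping loop is the whole trick: however many keys you pass in, you only touch each server once.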
For speed, though... I don't know if you'd really gain a whole lot.
In the PHP API, you wouldn't gain speed at all, since it connects once to
each server when it's first needed and doesn't disconnect until the end of
the script. If you had multiple deletes for the same server, the connection
would already be open after the first -- there's no setup/teardown delay.
In fact, I think it would be faster to *not* use an aggregating method,
since it would involve a lot of extra looping.
I'm not sure whether the Perl API uses persistent sockets or not (I've
somehow managed to avoid learning Perl after 11 years, even though I know
Python, PHP, and Java... heh). If it doesn't, then a delete-aggregation API
method would be useful, and might gain you a few fractions of a second.
[mailto:firstname.lastname@example.org] On Behalf Of Jon Valvatne
Sent: Wednesday, December 17, 2003 11:19 AM
Subject: RE: Multiple delete
On Wed, 2003-12-17 at 17:13, Justin wrote:
> At least for my configuration, they rarely are. The client distributes
> objects fairly evenly among all six of my (really pathetic, underpowered)
> memcached servers. The way the API works, it takes the key, hashes it, and
> decides what server to store it on from the hash.
I realize this, but assuming a fairly low number of servers and hundreds
of deletes in one go, the API should be able to group any multi-delete
from the user into one (still fairly big) multi-delete for each server.