mdelete and Cache::Memcached::delete_multi()
tomash.brechko at gmail.com
Thu Nov 15 10:10:15 UTC 2007
On Wed, Nov 14, 2007 at 23:35:43 -0700, Timo Ewalds wrote:
> It's not that hard to add multi-delete to the current clients with the
> existing protocol. You can just pipeline the commands, sending all the
> deletes, then reading all the responses.
It's possible to implement streaming of the commands, but not quite as
easily as you describe. You can't send all the commands _first_ and
_then_ read all the replies. When the server receives a command, it
sends the response. The more commands you send, the more outstanding
replies there are. If you aren't reading them, at some point the
server can no longer send the next reply, because the TCP receive
buffers on the client are full. At that point the server is in the
"write_reply" state, and it can't accept new requests in this state
(the state is per-connection, of course). So once the server's own TCP
receive buffers fill up in turn, the client will block, unable to send
the next command. This won't affect other clients, but your connection
will deadlock.
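To make the problem concrete, here is a minimal Python sketch of the naive "send everything, then read everything" approach over the standard text protocol (`delete <key>\r\n`, answered by one line such as `DELETED` or `NOT_FOUND`). The function names are illustrative, not part of any client library; with a large enough batch this code hits exactly the deadlock described above.

```python
import socket

def build_pipelined_deletes(keys):
    # Concatenate standard text-protocol delete commands; the server
    # answers each with one reply line (DELETED or NOT_FOUND).
    return b"".join(b"delete %s\r\n" % k.encode() for k in keys)

def naive_multi_delete(sock, keys):
    # Naive pipelining: write the whole batch first, read replies after.
    # Nothing drains the replies while commands are still being sent,
    # so a large batch can fill both sides' TCP buffers and deadlock.
    sock.sendall(build_pipelined_deletes(keys))
    replies, buf = [], b""
    while len(replies) < len(keys):
        buf += sock.recv(4096)
        while b"\r\n" in buf and len(replies) < len(keys):
            line, _, buf = buf.partition(b"\r\n")
            replies.append(line.decode())
    return replies
```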
You can implement streaming with select()/poll() magic, without using
any threads. All you have to do is send commands and _simultaneously_
read the replies. But to implement this in a library, like a Perl
module for instance, where there's one call per command, you'd have to
complicate the interface (use callbacks when a result arrives) and do
some tricks (for example, on every command, first read all outstanding
responses until there are none left before sending, to be sure the
command will reach the server).
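The send-and-read-simultaneously approach can be sketched with Python's selectors module (a select()/poll() wrapper). This is a sketch under the same text-protocol assumption as above, and the function name is mine, not any client's API: replies are drained whenever the socket is readable, even while commands are still being written, so the buffers never wedge.

```python
import selectors
import socket

def streaming_multi_delete(sock, keys):
    # Interleave writes and reads in one event loop: write more of the
    # pending command bytes when the socket is writable, and drain any
    # outstanding replies whenever it is readable.
    out = b"".join(b"delete %s\r\n" % k.encode() for k in keys)
    replies, buf = [], b""
    sock.setblocking(False)
    sel = selectors.DefaultSelector()
    sel.register(sock, selectors.EVENT_READ | selectors.EVENT_WRITE)
    while len(replies) < len(keys):
        for _key, events in sel.select():
            if events & selectors.EVENT_WRITE and out:
                n = sock.send(out)   # send as much as fits right now
                out = out[n:]
                if not out:          # all commands sent; reads only
                    sel.modify(sock, selectors.EVENT_READ)
            if events & selectors.EVENT_READ:
                buf += sock.recv(4096)
                while b"\r\n" in buf and len(replies) < len(keys):
                    line, _, buf = buf.partition(b"\r\n")
                    replies.append(line.decode())
    sel.unregister(sock)
    return replies
```

Note that this is easy as a standalone loop, but awkward to hide behind a one-call-per-command library interface, which is the complication described above.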
The cleaner way is to implement mdelete ;). I humbly added mdelete
last in the parser, so unless you actually use it, this command won't
affect your performance in any way.