libmemcache(3) 1.1.0rc4...
Sean Chittenden
sean at chittenden.org
Tue Dec 21 18:09:42 PST 2004
> This is awesome Sean. I've been running very stable for days now under
> an extremely high load of around 1 million sets/gets per hour.
Wow! That's fantastic to hear! Be sure to check out the callback
interface, as it may let you make get calls more efficiently if you
structure your app something like:
1) identify data needs
2) acquire data
3) act on data
4) display data
Where in 1), you set up a bunch of callbacks. In 2), you do the actual
mc_get() and your callbacks are executed (they fetch the data from the
database in the event of a cache miss). By 3) and 4),
libmemcache(3) is probably out of the picture. But in steps 1 and 2,
you may have removed a bunch of independent get requests and
consolidated them all into one get request.
> One thing I'm curious about is that there doesn't seem to be any
> attempt
> to detect a server coming back up, once it goes down its deactivated,
> and then its down for good unless I create a new memcache object and
> readd the server.
Yeah... this is a gaping hole in the API at the moment. There are two
potential solutions to this. The first being:
*) Move the server lists to shared memory
*) Provide a tool that administrates the lists in shared memory.
This would work fine and dandy, but is a less than perfect solution as
it requires manual management of server lists. The ideal/correct
solution is spiffy neato and makes all of this kinda foo automagic.
You turn on a new server and all of your clients magically get the
updated server lists. In a few days, once I've finished mdoc'ifying the
binary protocol proposal, it should be very obvious to those who
re-read it how this is going to happen. As things stand, the published
protocol doesn't include support for this, but in a day or three (it'll
probably be an x-mas present to the folks on the list) the tracking
protocol will be completed, along with a fair amount of niftiness and
new features that more than a few will want.
> Any plans to add something to detect servers coming back up?
Yeah, the fix for this is a combination of Brad's tracker idea and my
own dribblings.
> Would this
> be something you would accept a patch for if I do it? To avoid the
> performance hit of always checking, I'd be fine with seeing it as a
> separate function that attempts to check each deactivated server that
> must be explicitly called when you want to check.
Suffice it to say, the server and client are each going to keep
versions of the server maps, and notification of updated maps will pass
automatically from servers to clients. The master servers (known as
trackers) will keep track of the other servers that are available.
When a memcached(8) disconnects from the tracker, the tracker updates
its server map, pushes the updated list to its peer trackers, and then
out to the clients. What's even more nifty is that this will all
happen w/o the cluster invalidating its cache, because keys are hashed
to virtual buckets, not direct server assignments as they are now.
Basically, memcached(8) is going to start to look a lot like a router
in that it will map ranges to servers (kinda like a router maps IPs to
other routers). -sc
--
Sean Chittenden