libmemcache(3) 1.1.0rc4...
John McCaskey
johnm at klir.com
Wed Dec 22 12:06:27 PST 2004
On Wed, 2004-12-22 at 11:21 -0800, Sean Chittenden wrote:
> > Check out these stats for 5pm last night to 9am this morning:
> >
> > stats
> > STAT pid 8269
> > STAT uptime 57909
> > STAT time 1103735953
> > STAT version 1.1.11
> > STAT rusage_user 3596:627229
> > STAT rusage_system 8962:163544
> > STAT curr_items 420162
> > STAT total_items 68031447
> > STAT bytes 91619292
> > STAT curr_connections 1
> > STAT total_connections 584
> > STAT connection_structures 7
> > STAT cmd_get 136062198
> > STAT cmd_set 68031447
> > STAT get_hits 135641850
> > STAT get_misses 420348
> > STAT bytes_read 16627870580
> > STAT bytes_written 38062445503
> > STAT limit_maxbytes 268435456
> > END
>
> Holy crap! That's a 99.7% hit rate... you've got some damn cacheable
> data! What are your stats like during peak hours? Oh, and fwiw,
> you're doing 3524 operations per second, not 2300: (get+set) / uptime.
> :)
>
> (136062198 + 68031447) / 57909 = 3524.385588
Ahh, that's true, I only counted gets, not sets.
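
For the archives, here are both numbers spelled out as a throwaway
snippet (purely illustrative, nothing to do with the library itself):

    #include <stdio.h>

    int main(void)
    {
        /* Values copied from the stats output above. */
        double cmd_get  = 136062198.0;
        double cmd_set  = 68031447.0;
        double get_hits = 135641850.0;
        double uptime   = 57909.0;  /* seconds */

        /* Hit rate: hits as a fraction of all gets. */
        printf("hit rate: %.2f%%\n", 100.0 * get_hits / cmd_get);  /* ~99.69% */

        /* Throughput over the daemon's lifetime: gets plus sets. */
        printf("ops/sec:  %.2f\n", (cmd_get + cmd_set) / uptime);  /* ~3524.39 */
        return 0;
    }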
>
> >> 1) identify data needs
> >> 2) acquire data
> >> 3) act on data
> >> 4) display data
> >>
> >> Where in 1), you set up a bunch of callbacks. In 2), you do the actual
> >> mc_get() and your callbacks are executed (which get the data from the
> >> database in the event of a cache miss). And by 3) and 4),
> >> libmemcache(3) is probably out of the picture. But, for step 1 and 2,
> >> you may have removed a bunch of independent get requests and
> >> consolidated them all into one get request.
> >
> > I'm going to look into this a bit today; right now I just do single
> > gets. But I believe it is actually very easy for me to change my code
> > to perform multi-gets on its own so the callbacks may not be needed in
> > my situation.
>
> Ah, ok... well, I was grouping all of my gets together into a single
> get, then was parsing out the data independently... which was a huge
> waste of CPU cycles and tedious to program. Using the callbacks I
> saved a few hundred lines of code and managed to pick up a bit of a
> boost for performance.
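
That matches what I'm planning to try. Roughly the shape I have in
mind is sketched below -- the request/response names (mc_req_new,
mc_req_add, mc_get, res->bytes, res->val) are my best guess at the
1.1.x API from reading memcache.h and may be off, and use_value() and
fetch_from_db() are placeholders for my own application code:

    #include <stdlib.h>
    #include <string.h>
    #include <memcache.h>

    /* Consolidate N independent gets into one multi-get: one round
     * trip for the whole batch instead of one per key. */
    void fetch_batch(struct memcache *mc, char **keys, size_t nkeys)
    {
        struct memcache_req *req = mc_req_new();
        struct memcache_res **res = malloc(nkeys * sizeof(*res));
        size_t i;

        for (i = 0; i < nkeys; i++)
            res[i] = mc_req_add(req, keys[i], strlen(keys[i]));

        mc_get(mc, req);  /* single round trip to the server(s) */

        for (i = 0; i < nkeys; i++) {
            if (res[i]->bytes > 0)
                use_value(keys[i], res[i]->val, res[i]->bytes);  /* hit */
            else
                use_value(keys[i], fetch_from_db(keys[i]), 0);   /* miss */
        }
        free(res);
        mc_req_free(req);
    }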
>
> >>> [snip]
> >
> > All of this sounds great, but what I had in mind for now is just a
> > simple mcm_reactivate_servers function or such that could be manually
> > called and would go through the list of deactivated servers and attempt
> > to revive them.
>
> Ah, yeah... that'd be easy to do too. I thought I had an AM flight
> today, turns out it's a PM flight, so I'm grounded and here at my colo
> banging out code... I'll add two functions: one that does a global
> reactivate, the other that does a per server reactivate.
Awesome, that would be very helpful to me.
>
> > My application isn't handling user requests; it's a background
> > process that never quits. As such I want to periodically (say, once
> > every 5 minutes) try to restore any dead servers, as we may
> > sometimes need to take one down for maintenance, or hardware may
> > fail. If this happens we do not want to have to restart the daemon
> > that is using libmemcache to get it to see the server after it
> > comes back.
>
> Yup, I appreciate that.
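
To make that concrete, the loop I'm picturing is below.
mc_server_reactivate_all() is just a placeholder for whatever the
global reactivate function ends up being named (it doesn't exist yet),
and do_work() stands in for the daemon's normal processing:

    #include <time.h>
    #include <memcache.h>

    /* Daemon main loop (sketch): revive any deactivated servers every
     * five minutes without ever restarting the process. */
    void main_loop(struct memcache *mc)
    {
        time_t last_revive = time(NULL);

        for (;;) {
            do_work(mc);

            if (time(NULL) - last_revive >= 300) {  /* 5 minutes */
                mc_server_reactivate_all(mc);  /* placeholder name */
                last_revive = time(NULL);
            }
        }
    }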
>
> > I know this presents problems with the location of data in the cache,
> > but the way I see it the worst thing that happens is I get a miss when
> > really it was cached on the other server, and then I restore the cache
> > and in the future get hits. Some memory is wasted, but the extra copy
> > will just never get accessed and as such rotate out of the LRU cache
> > anyway. There's no concern about stale data, since expiration
> > times are properly set. Am I missing some major issue?
>
> Not really... when a server goes down, your whole cache essentially
> gets invalidated (well, not your whole cache... if you have 3 machines,
> and one goes down, you have a 50% chance of a cache hit... if you have
> 10 machines, one goes down, you have an 11% chance of a hit, etc). If
> you have enough load and enough memcache machines, if one goes down,
> your database(s) could see a nice spike in load until the cache reaches
> saturation again.... until the next server list change. That
> fluctuation could be trivial or could be disastrous. You've been
> warned. ;~)
It won't quite be disastrous, but it will be a major hit for me. I'll
keep in mind that I should never take a server down unless absolutely
necessary ;)
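
Out of curiosity, I may actually measure that. A quick simulation
like the one below would do it for naive modulo placement (purely
illustrative; libmemcache's real key-to-server mapping may behave
differently):

    #include <stdio.h>
    #include <stdlib.h>

    /* Estimate what fraction of lookups still hit after one of n
     * servers is dropped, assuming keys are placed by hash % n and
     * lookups then go to hash % (n - 1). */
    int main(void)
    {
        int n;
        for (n = 2; n <= 10; n++) {
            long hits = 0, trials = 1000000, i;
            for (i = 0; i < trials; i++) {
                unsigned h = (unsigned)rand();
                unsigned old_srv = h % n;        /* where the copy lives */
                unsigned new_srv = h % (n - 1);  /* where lookups now go */
                /* Treat server n-1 as the dead one, so the survivors
                 * keep indices 0 .. n-2. */
                if (old_srv != (unsigned)(n - 1) && new_srv == old_srv)
                    hits++;
            }
            printf("%2d servers -> ~%.1f%% of lookups still hit\n",
                   n, 100.0 * hits / trials);
        }
        return 0;
    }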
>
> > I'm probably going to add this either today (or late next week, as I
> > have some time off for xmas). I think this would likely be useful to
> > others as well until the full solution you outlined above is
> > implemented.
>
> Yup... I'll see if I can knock out a 1.1.1 release before 3pm PST
> that'd include this feature. I'm in the midst of sucking in the
> relevant parts of sys/queue.h into memcache.h so that I can remove
> this external dependency, which has been an issue for folks at
> Amazon.com on various leenox distros. *shrug* BSD++ *grin*
Sweet, thanks! Oh, and no problems with sys/queue.h on Linux here...
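
For anyone following along who hasn't run into sys/queue.h: it's
nothing but list/queue macros, so inlining it really is just copying
some macro definitions into memcache.h. A toy example of the TAILQ
flavor (not libmemcache's actual server list):

    #include <sys/queue.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* The macros expand to plain doubly-linked-list code, so they can
     * live in any header with no link-time cost. */
    struct server {
        const char *hostname;
        TAILQ_ENTRY(server) entries;  /* linkage embedded in the node */
    };
    TAILQ_HEAD(server_list, server);

    int main(void)
    {
        struct server_list list;
        struct server *s = calloc(1, sizeof(*s));

        TAILQ_INIT(&list);
        s->hostname = "memcache1";
        TAILQ_INSERT_TAIL(&list, s, entries);

        TAILQ_FOREACH(s, &list, entries)
            printf("%s\n", s->hostname);
        return 0;
    }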
>
> -sc
>
--
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
johnm at klir.com
206.902.2027