some questions that weren't FAQs
Justin Matlock
jmat@shutdown.net
Wed, 26 Nov 2003 19:20:45 -0500
A little off topic...
Brad... in practice, have you found it better to have lots of small
memcached instances (like a 1GB one on each webserver), or two or three
"megacaches" (a few caches, each using the 4GB max)? You guys at
LJ are the only ones I know who have enough hardware -- and enough data
-- to have tried it both ways. :)
On a different note -- is anyone still seeing that funky problem where a
memcached gets into a whacked-out state and starts chewing 100% CPU, or
did that get fixed and I just missed it?
J
>That's a very real possibility. The idea is that your application is
>tolerant of the cache being wrong: you make big, expensive objects stay
>in the cache forever, but make them dependent on smaller objects which
>are quicker to revalidate, and let those expire.
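A minimal sketch of the pattern Brad describes, using the Perl client; the
key names, the 60-second expiry, and the render_expensive_page() helper are
made up for illustration:

    use Cache::Memcached;
    my $memd = Cache::Memcached->new({ servers => ["10.0.0.1:11211"] });

    # Small, cheap key: expires quickly, so it gets revalidated often.
    my $ver = $memd->get("user_ver:1234");
    unless (defined $ver) {
        $ver = time();
        $memd->set("user_ver:1234", $ver, 60);  # 60-second expiry
    }

    # Big, expensive object: stored with no expiry ("forever"), but its
    # key embeds the version, so bumping the version invalidates it.
    my $page = $memd->get("user_page:1234:$ver");
    unless (defined $page) {
        $page = render_expensive_page(1234);    # hypothetical helper
        $memd->set("user_page:1234:$ver", $page);
    }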
>
>However, we also provide a way to turn that off (at least in the latest
>CVS version): you can set $args->{'no_rehash'} when you construct your
>memcached client object, and the re-mapping behavior is disabled. Note
>that this is currently only in the Perl client, as far as I'm aware.
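For reference, that looks something like this when constructing the client
(the server addresses here are placeholders):

    use Cache::Memcached;
    my $memd = Cache::Memcached->new({
        servers   => [ "10.0.0.1:11211", "10.0.0.2:11211" ],
        no_rehash => 1,   # don't re-map a dead host's keys elsewhere
    });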
>
>- Brad
>
>
>On Wed, 26 Nov 2003, Larry Leszczynski wrote:
>
>>Hi Evan -
>>
>>>>1) It looks like the client initializes with a list of servers it knows
>>>>about; how do those servers communicate with each other (assuming they
>>>>do) so that they stay consistent with each other?
>>>>
>>>Servers are independent. A given key is stored on only one server, and
>>>which server a key lives on is decided by the client-side memcache
>>>library code.
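A rough sketch of that client-side decision; this uses simple modulo
hashing, with crc32() from String::CRC32 standing in for the client's
actual hash function, so treat it as an illustration rather than the exact
algorithm:

    use String::CRC32;
    my @servers = ("10.0.0.1:11211", "10.0.0.2:11211");
    my $key     = "foo:1234";
    # Same key + same server list => every client picks the same server.
    my $server  = $servers[ crc32($key) % @servers ];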
>>>
>>Got it, thanks. I saw this line:
>>
>> If a host goes down, the API re-maps that dead host's requests
>> onto the servers that are available.
>>
>>so I was assuming that there was some sort of failover between servers
>>going on.
>>
>>This all looks very slick; I can definitely see places where I could
>>put it to good use. But I'm wondering: since servers don't share data,
>>doesn't this cause runtime inconsistency problems? For example, say I
>>have two boxes that each run my web app plus memcached, and I configure
>>each web app to know about both memcached instances:
>>
>> - Some request from user 1234 hits box A
>> - Web app on A stores some data in the cache with a key "foo:1234"
>> - The client on box A hashes this to either memcache A or memcache B
>> (suppose it picks memcache B)
>> - A later request from user 1234 hits box B
>> - Web app on B looks up data with key "foo:1234"
>> - The client on box B hashes this to either memcache A or memcache B
>> (should look for the data in memcache B)
>>
>>But say there's a network glitch or something and the client on box A
>>thinks the server on box B is dead because it can't be reached. So cache
>>lookups on box A for "foo:1234" will now go to server A, but lookups on
>>box B will still go to server B, won't they? Does the client on box A
>>periodically check to see if server B has come back?
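To make Larry's concern concrete, here is one simple way the divergence
could play out under the modulo-hashing sketch above (again an
illustration, not the client's exact re-mapping logic):

    use String::CRC32;
    # Box A has marked server B dead and dropped it from its list;
    # box B still sees both servers.
    my @live_on_A = ("10.0.0.1:11211");
    my @live_on_B = ("10.0.0.1:11211", "10.0.0.2:11211");
    my $on_A = $live_on_A[ crc32("foo:1234") % @live_on_A ];  # server A
    my $on_B = $live_on_B[ crc32("foo:1234") % @live_on_B ];  # maybe B
    # The two boxes now read and write "foo:1234" on different servers
    # until box A notices that server B is reachable again.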
>>
>>Thanks!
>>Larry Leszczynski
>>larryl@furph.com
>>