some questions that weren't FAQs

russor@msoe.edu
Wed, 26 Nov 2003 16:28:26 -0800 (PST)


Yes, they would, but if there is a network problem preventing box A from
reaching server B, the client on A will rehash and use server A
exclusively for some time, while box B keeps using server B.
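
For illustration, here is a toy sketch of that divergence; the
hash-modulo scheme below is an assumption for the example, not the
actual client's algorithm:

    use String::CRC32 qw(crc32);

    # Each box picks a server by hashing the key over the servers it
    # currently believes are alive.  If box A thinks server B is dead,
    # its list shrinks, and the same key can map to a different server
    # on box A than it does on box B.
    sub pick_server {
        my ($key, @alive) = @_;
        return $alive[ crc32($key) % scalar(@alive) ];
    }

    my @all   = ('serverA:11211', 'serverB:11211');
    my $box_a = pick_server('foo:1234', 'serverA:11211');  # B unreachable
    my $box_b = pick_server('foo:1234', @all);             # both reachable
    print "box A -> $box_a, box B -> $box_b\n";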

> Shouldn't the client API on webapp A and webapp B both hash the key to
> the same server, provided you list the available servers in the same
> order in the client APIs on both servers?
>
> On Wednesday 26 November 2003 11:04 pm, Brad Fitzpatrick wrote:
>> Larry,
>>
>> That's a very real possibility.  The idea is that your application is
>> tolerant of the cache being wrong by doing things like making big,
>> expensive objects stay in the cache forever, but making them dependent
>> on smaller objects which are quicker to revalidate, and have those expire.
>>
>> However, we also provide a way to turn that off (at least in the latest
>> CVS version): set $args->{'no_rehash'} when you construct your memcached
>> client object, and the re-mapping behavior is disabled.  Note that this
>> is only in the Perl client currently, as far as I'm aware.
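
For a concrete illustration of the option described above, constructing
the Perl client with re-mapping disabled might look like the sketch
below (the module name Cache::Memcached and the server addresses are
assumptions for the example; the no_rehash flag is the one Brad
describes):

    use Cache::Memcached;

    # With no_rehash set, a key whose server is unreachable is simply a
    # cache miss; the client never re-maps it onto a surviving server.
    my $memd = Cache::Memcached->new({
        servers   => [ '10.0.0.1:11211', '10.0.0.2:11211' ],
        no_rehash => 1,
    });

    $memd->set('foo:1234', 'some value');  # always hashes to one server
    my $hit = $memd->get('foo:1234');      # undef while that server is down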
>>
>> - Brad
>>
>> On Wed, 26 Nov 2003, Larry Leszczynski wrote:
>> > Hi Evan -
>> >
>> > > > 1) It looks like the client initializes with a list of servers it
>> > > > knows about, how do those servers communicate with each other
>> > > > (assuming they do) so they stay consistent with each other?
>> > >
>> > > Servers are independent.  A given key is stored on only one server,
>> > > and which server a key lives on is decided by the client-side
>> > > memcache library code.
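
A minimal sketch of that client-side decision, assuming a simple
hash-modulo scheme for illustration (the real library's hash function
may differ):

    use String::CRC32 qw(crc32);

    # Every client configured with the same server list, in the same
    # order, computes the same server for a given key -- no
    # server-to-server communication is needed.
    my @servers = ('10.0.0.1:11211', '10.0.0.2:11211');

    sub server_for {
        my ($key) = @_;
        return $servers[ crc32($key) % scalar(@servers) ];
    }

    print server_for('foo:1234'), "\n";    # identical on every box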
>> >
>> > Got it, thanks.  I saw this line:
>> >
>> > 	If a host goes down, the API re-maps that dead host's requests
>> > 	onto the servers that are available.
>> >
>> > so I was assuming that there was some sort of failover between
>> > servers going on.
>> >
>> > This all looks very slick; I can definitely see places where I could
>> > put it to good use.  But I'm wondering: since servers don't share
>> > data, doesn't this cause runtime inconsistency problems?  Like if I
>> > have two boxes that each run my web app plus memcached, and I
>> > configure each web app to know about both of the memcached instances:
>> >
>> >    - Some request from user 1234 hits box A
>> >    - Web app on A stores some data in the cache with a key "foo:1234"
>> >    - The client on box A hashes this to either memcache A or memcache B
>> >      (suppose it picks memcache B)
>> >    - A later request from user 1234 hits box B
>> >    - Web app on B looks up data with key "foo:1234"
>> >    - The client on box B hashes this to either memcache A or memcache B
>> >      (should look for the data in memcache B)
>> >
>> > But say there's a network glitch or something and the client on box A
>> > thinks the server on box B is dead because it can't be reached.  So
>> > cache lookups on box A for "foo:1234" will now go to server A, but
>> > lookups on box B will still go to server B, won't they?  Does the
>> > client on box A periodically check to see if server B has come back?
>> >
>> >
>> > Thanks!
>> > Larry Leszczynski
>> > larryl@furph.com
>