some questions that weren't FAQs
Larry Leszczynski <email@example.com>
Wed, 26 Nov 2003 17:55:28 -0500 (EST)
Hi Evan -
> > 1) It looks like the client initializes with a list of servers it knows
> > about, how do those servers communicate with each other (assuming they do)
> > so they stay consistent with each other?
> Servers are independent. A given key is stored on only one server, and
> which server a key lives on is decided by the client-side memcache
> library code.
Got it, thanks. I saw this line:
If a host goes down, the API re-maps that dead host's requests
onto the servers that are available.
so I was assuming there was some sort of failover between the servers.
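For what it's worth, here's my reading of that re-mapping, as a sketch (this is not the actual library code; the hash function and the server names are just my assumptions):

```python
import zlib

def pick_server(key, servers, dead=()):
    # Drop any host this client believes is dead, then hash the key
    # over the remaining list -- so a dead host's keys get re-mapped
    # onto whichever servers are still available.
    alive = [s for s in servers if s not in dead]
    return alive[zlib.crc32(key.encode()) % len(alive)]

servers = ["memcache_A", "memcache_B"]
pick_server("foo:1234", servers)                       # normal mapping
pick_server("foo:1234", servers, dead=["memcache_B"])  # re-maps onto memcache_A
```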
This all looks very slick; I can definitely see places where I could put
it to good use. But I'm wondering: since servers don't share data,
doesn't this cause runtime inconsistency problems? Like if I have two
boxes that each run my web app plus memcached, and I configure each web
app to know about both of the memcached instances:
- Some request from user 1234 hits box A
- Web app on A stores some data in the cache with a key "foo:1234"
- The client on box A hashes this to either memcache A or memcache B
(suppose it picks memcache B)
- A later request from user 1234 hits box B
- Web app on B looks up data with key "foo:1234"
- The client on box B hashes this to either memcache A or memcache B
(should look for the data in memcache B)
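The steps above work out because the mapping is deterministic, something like this (assuming a simple hash-mod-N scheme, which is just my guess at what the client library does):

```python
import zlib

def pick_server(key, servers):
    # Deterministic: any client handed the same server list
    # picks the same server for a given key.
    return servers[zlib.crc32(key.encode()) % len(servers)]

servers = ["memcache_A", "memcache_B"]
# Box A storing and box B looking up agree, because both hash
# "foo:1234" over the same server list:
assert pick_server("foo:1234", servers) == pick_server("foo:1234", servers)
```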
But say there's a network glitch or something and the client on box A
thinks the server on box B is dead because it can't be reached. So cache
lookups on box A for "foo:1234" will now go to server A, but lookups on
box B will still go to server B, won't they? Does the client on box A
periodically check to see if server B has come back?
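To make the worry concrete with the same kind of sketch: if box A's client drops memcache_B from its list while box B's client keeps it, the two clients can hash the same key to different servers (again, the server names and hash-mod-N scheme are my assumptions, not the real client code):

```python
import zlib

def pick_server(key, alive):
    # Hash over whatever server list this client currently believes in.
    return alive[zlib.crc32(key.encode()) % len(alive)]

key = "foo:1234"
box_a_view = ["memcache_A"]                # box A thinks memcache_B is dead
box_b_view = ["memcache_A", "memcache_B"]  # box B still sees both

pick_server(key, box_a_view)  # always memcache_A
pick_server(key, box_b_view)  # may still be memcache_B -- divergent lookups
```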