BBeuning at corecard.com
Wed Oct 24 16:18:33 UTC 2007
Here is what I am thinking. Please poke holes in this.
We currently have 100 processes, each loading 400 MB, which is 40 GB of RAM.
What if we set up 5 independent memcached servers (not one cache
with 5 machines, but 5 caches each with a full copy of the data)
and we spread the 100 processes across 5 caches. One process
only talks with one cache, and each cache is serving 20 processes.
If one cache fails, the 20 processes using that cache switch to the other
4 caches. When the dead cache comes back, the 20 processes switch
back to it and start reloading data naturally. (If we can arrange that the
clients move back at different times, maybe the storm can be reduced.)
This sounds safe to me.
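To make the plan concrete, here is a rough sketch of the client-side
logic I have in mind (all names here are hypothetical, and the
round-robin details are just one possible arrangement):

```python
# Hypothetical sketch of the "5 independent caches" failover plan.
# 100 processes are pinned to 5 caches, 20 per cache; when a cache
# dies, its 20 processes fall back to the other 4.

CACHES = ["cache1:11211", "cache2:11211", "cache3:11211",
          "cache4:11211", "cache5:11211"]

def home_cache(process_id, caches=CACHES):
    """Pin each process to one cache: 20 processes per cache."""
    return caches[process_id % len(caches)]

def fallback_caches(process_id, caches=CACHES):
    """If the home cache dies, spread this process's traffic over
    the remaining four.  Rotating by process id means the 20
    orphaned processes do not all hammer the same server."""
    home = process_id % len(caches)
    others = [c for i, c in enumerate(caches) if i != home]
    start = process_id % len(others)
    return others[start:] + others[:start]
```

Staggering the switch back to a recovered cache (e.g. each process
waiting a delay derived from its id) would be one way to soften the
reload storm mentioned above.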
A more advanced plan might have one process check 2 caches for data.
If a key is found in one cache but not the other, then it stores the
value in the other cache. (In a failure scenario, this would let us load a
recovered cache from other caches instead of the DB.) If the key is not
in either cache, then it hits the DB and saves the key-value in both caches.
This feels like it is moving into the hard replication issues space.
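For clarity, the "check 2 caches" read path could look roughly like
this (a sketch only; `client_a`/`client_b` stand in for two memcached
clients exposing get/set, and `load_from_db` for the real DB lookup):

```python
# Hypothetical read-repair across two independent caches:
# a hit in one cache backfills the other; a double miss hits
# the DB once and populates both.

def get_with_repair(key, client_a, client_b, load_from_db):
    a = client_a.get(key)
    b = client_b.get(key)
    if a is not None and b is None:
        client_b.set(key, a)      # repair the cache that missed
        return a
    if b is not None and a is None:
        client_a.set(key, b)      # repair the other direction
        return b
    if a is not None:
        return a                  # both caches hit
    value = load_from_db(key)     # both missed: one DB hit
    client_a.set(key, value)
    client_b.set(key, value)
    return value
```

Note this doubles the get traffic per lookup and says nothing about
which copy wins if the two caches disagree, which is exactly where
the hard replication issues start.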
Memcached as it is today provides a good level of reliability
(with blazing performance). Some situations are going to require more
reliability, and one way to get that is replication. If there is another way
to get more reliability, I am very interested in hearing about it.
MySQL without ACID transactions supports a certain level of reliability.
Some situations using MySQL need more reliability, so MySQL added
the ACID backend.
From: Clint Webb [mailto:webb.clint at gmail.com]
Sent: Tuesday, October 23, 2007 9:38 PM
To: Marcus Bointon
Cc: Brian Beuning; Memcached (E-mail)
Subject: Re: Replication
The problem with a replicated cache is figuring out what to do if one fails.
Memcached effectively solves this problem by not doing replication. I
strongly agree with this approach unless you have a VERY good reason not to,
and in that case, memcached is probably not a very good choice.
What I recommend is using multiple memcached instances: even if all your
cached data can fit in one instance, spread it over several. That way, if you
need to stop one instance for any reason, you don't lose your whole cache, you
only lose part of it, which can be recreated from the database.
One of my smaller projects has a finite set of data that is only about 20 MB
in size; it changes frequently but does not get any larger or smaller. I
actually use two memcached instances on different machines (each with 20 MB
allocated), and the keys are distributed over both (by the client).
This has worked quite adequately for me for small and large projects.
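Distributing keys over the instances client-side can be as simple as
hashing the key and taking it modulo the server count; a minimal
sketch, assuming two servers at made-up addresses:

```python
import hashlib

# Minimal client-side key distribution: hash the key, pick a server.
# (Consistent hashing would reduce key reshuffling when the server
# list changes, but plain modulo hashing is the simplest version.)

SERVERS = ["10.0.0.1:11211", "10.0.0.2:11211"]

def server_for(key, servers=SERVERS):
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

The same key always maps to the same server, so losing one instance
loses only the keys that hashed to it.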
Replication sounds like a simple thing, but in implementing it, there are a
LOT of things that become issues.
On 10/24/07, Marcus Bointon <marcus at synchromedia.co.uk> wrote:
On 23 Oct 2007, at 20:17, Brian Beuning wrote:
> One instance of memcached could handle our tiny 400 MB with no
> It can probably even handle the load of 100 processes hitting it.
> But I am
> concerned if memcached went down then we would miss our fixed time
> Ideally we would like to have a few memcached instances each with a
> copy of the 400 MB. The Wiki says memcached does not do replication.
Seems like the memcachedb project mentioned on here recently might be
a good fit for you. It's essentially a memcache front-end with a bdb
back end, so can survive restarts etc, while still serving some scary
Synchromedia Limited: Creators of http://www.smartmessages.net/
UK resellers of info at hand CRM solutions
marcus at synchromedia.co.uk |
"Be excellent to each other"