PECL memcache extension
Brian Moon
brianm at dealnews.com
Wed Feb 8 13:42:02 UTC 2006
The things you are talking about would require the memcached servers to
know about each other, and they do not. That would add massive overhead,
which I believe was avoided on purpose.
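
For illustration only (this is a sketch, not the extension's internals):
with the PECL memcache client the server pool and the key-to-server mapping
live entirely in the client, so the daemons never need to know about each
other. The host names below are made up.

<?php
// Hypothetical pool; the daemons listed here are completely independent.
$servers = array(
    array('host' => '10.0.0.1', 'port' => 11211),
    array('host' => '10.0.0.2', 'port' => 11211),
    array('host' => '10.0.0.3', 'port' => 11211),
);

$memcache = new Memcache();
foreach ($servers as $s) {
    // addServer() only tells the *client* about the server;
    // there is no server-to-server traffic.
    $memcache->addServer($s['host'], $s['port']);
}

// Simplified idea of the client-side mapping. The extension's real hashing is
// internal and configurable (memcache.hash_strategy / memcache.hash_function);
// this function only exists to show where the decision is made.
function pick_server(array $servers, $key) {
    return $servers[crc32($key) % count($servers)];
}

$owner = pick_server($servers, 'user:42');
// Every get/set for 'user:42' is sent to $owner; the other servers never see it.
?>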
Brian Moon
dealnews.com
--------------
How to go broke saving money.
http://dealnews.com/
Joshua Thijssen wrote:
>> Yeah, that's an issue. When you've got, say, 50 memcache servers,
>> though, and one fails, you're only getting 1/50th of those queries
>> passed to the DB. So it's not (at least, for us) that big of a deal.
>> 49/50 queries to memcache is still nice.
>
> Isn't it possible to prioritize a key?
>
> I'm not sure whether memcached knows when a bucket is no longer available.
>
> Suppose machine A breaks and had key X with value 1. Another query would
> place the same key X onto server B with a higher priority and a
> mandatory small timeout. This will trigger the DB more often, but takes
> care of the key after expiration (I'm not sure whether key expiration is
> checked on querying, or whether an internal process just scans all keys
> for expiry), even if the key is set not to expire.
>
> To handle machine A coming back online:
> Machine A comes back up, and a query is made for key X. Memcache sees
> that the key exists in 2 different buckets (is this possible?),
> takes the one with the highest priority, and internally updates all copies
> inside the other (still online) bucket(s) with the new value.
>
> The mandatory timeout could even be left out, since a query knows that
> it's grabbing "slave" data, and it could delete the key entirely after
> syncing the master key on machine A.
>
> I think this would take care of unsynced data and obsolete (slave) data.
>
> Gr,
> Joshua
>
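
The fallback Joshua quotes above (a miss, or a server that can't be reached,
just falls through to the database) usually looks roughly like this with the
PECL client. This is only a sketch: the host names, the users table, and the
mysql_* calls are placeholders, not anything from this thread.

<?php
$memcache = new Memcache();
$memcache->addServer('10.0.0.1', 11211);   // hypothetical pool
$memcache->addServer('10.0.0.2', 11211);

function get_user($memcache, $id) {
    $key = 'user:' . (int)$id;

    // get() returns FALSE on a miss, and a server that is down behaves
    // like a miss, so the dead-server case falls through here too.
    $row = $memcache->get($key);
    if ($row !== false) {
        return $row;
    }

    // Fall through to the database...
    $res = mysql_query('SELECT * FROM users WHERE id = ' . (int)$id);
    $row = mysql_fetch_assoc($res);

    // ...and repopulate the cache with a short expiry (300 seconds, arbitrary).
    $memcache->set($key, $row, 0, 300);

    return $row;
}
?>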