Which way is better for running memcached?
Don MacAskill
don at smugmug.com
Sat Feb 17 00:23:43 UTC 2007
Is your data perfectly divided so that every multi-get never touches
more than one instance?
If so, do you just make sure somehow that each data slice never exceeds
32GB, or whatever your typical memcached instance holds?
If not, how do you deal with data spread across multiple memcached
instances?
Do you do a multi-get on one instance, see what's missing, and issue
single gets for the remaining data, falling back to some other
disk-based store on those misses?
Or issue multi-gets to each memcached instance and combine, then go to
disk for non-cached requests?
Or something else entirely?
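
For the second option, something like the Python sketch below is what I
have in mind. The helper names are hypothetical, and clients such as
python-memcached hash keys to servers themselves, so a single get_multi
call already fans one request out per instance and merges the results:

import memcache

SERVERS = ["10.0.0.1:11211", "10.0.0.2:11211"]   # assumed instance addresses
mc = memcache.Client(SERVERS)

def fetch(keys, load_from_disk):
    """Return {key: value} for every key, trying memcached first.

    load_from_disk(missing_keys) is an assumed application hook that reads
    the remaining rows from the database or other disk-based store.
    """
    found = mc.get_multi(keys)           # client groups keys per server internally
    missing = [k for k in keys if k not in found]
    if missing:
        from_disk = load_from_disk(missing)
        for k, v in from_disk.items():
            mc.set(k, v)                 # repopulate the cache for next time
        found.update(from_disk)
    return found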
Thanks,
Don
Steven Grimm wrote:
> We use it a lot. We divide the data for a given page into "stuff we need
> immediately for the business logic that will change what other data we
> need to fetch," "stuff we need for the business logic that we can
> evaluate in isolation," and "stuff we're going to display." The first
> gets fetched as needed during the execution of the page. The second and
> third, we queue up internally and request all in one big "get" just
> before rendering the page at the end of the request; for the second
> class of data, we have a callback mechanism wrapped around the memcached
> client so that we can run our business logic using some of the returned
> data. There are some additional wrinkles but that's the rough idea.
>
> By the way, it's not really any easier or harder in PHP than in any
> other language; it's about application structure, not language. If we
> were writing our site in Java or Python or C/C++ we'd probably do
> exactly the same thing.
>
> -Steve
>
>
> Russ Garrett wrote:
>> Reading this made me curious... How often do you end up using
>> get_multi on facebook anyway? It seems particularly hard to do in PHP
>> without some rather convoluted coding.
>>
>> Russ Garrett
>> Last.fm Ltd.
>> russ at last.fm
>
>
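
If I'm reading Steve's description right, the "queue up keys, one big get
just before rendering, callbacks on the results" pattern could be sketched
roughly like this in Python. The class and method names are mine, not
Facebook's actual code, and the client interface assumes something like
python-memcached's get_multi:

import memcache

class DeferredCache:
    """Rough sketch of a deferred-get queue with per-key callbacks."""

    def __init__(self, servers):
        self.mc = memcache.Client(servers)
        self.pending = {}    # key -> list of callbacks to run on the value

    def queue(self, key, callback=None):
        # Register interest in a key; optionally attach business logic
        # to run on the value once it arrives.
        self.pending.setdefault(key, [])
        if callback is not None:
            self.pending[key].append(callback)

    def flush(self):
        # Issue a single multi-get for everything queued, then fire callbacks.
        if not self.pending:
            return {}
        values = self.mc.get_multi(list(self.pending))
        for key, callbacks in self.pending.items():
            for cb in callbacks:
                cb(values.get(key))      # callbacks see None on a cache miss
        self.pending = {}
        return values

# Queue keys while building the page, flush once just before rendering.
cache = DeferredCache(["127.0.0.1:11211"])
cache.queue("user:42:name")
cache.queue("user:42:friend_count", callback=lambda n: print("friends:", n))
data = cache.flush()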