Why not let the fs handle this ??
jeremy at belent.com
Tue Jun 6 16:17:43 UTC 2006
Steven Grimm wrote:
>> the fs has another great advantage: it caches only used files in the
>> fs buffer, so this means the RAM usage is very small compared to
>> memcache filled with all data.
> memcached is an LRU cache, same as the filesystem's disk buffers. If
> you size the cache appropriately you will get nearly the same caching
> behavior either way, except that running a backup or a "find /" won't
> destroy your cache performance if you're using memcached. You don't
> have to size memcached to hold every possible value in your database
> if you only get significant performance gains from caching very
> recently accessed data. (Though in general the bigger the better, of
> course, within the limits of your available memory -- and you want to
> have some spare capacity to handle one of the cache servers dying.)
One of the biggest problems with using the filesystem I ran into before
switching to memcached (I was using Cache::Cache
with the FileCache backend) is that you eventually need to clean up
expired entries that are still on the disk.
This can be an expensive operation, and doing so will often cause the
filesystem's disk buffers to be flushed, further degrading
performance. With memcached, you never have to worry about cleanup;
when room is needed for a new key, it'll throw out an old one.
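To make that eviction behavior concrete, here's a toy LRU cache in Python. This is not memcached's actual implementation, just a sketch of the idea: when the cache is full, inserting a new key silently drops the least recently used one, so there is never a separate cleanup pass to run.

```python
from collections import OrderedDict

class LRUCache:
    """Tiny in-memory LRU cache illustrating memcached-style eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)   # mark as most recently used
        return self.items[key]

    def set(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")          # "a" is now most recently used
cache.set("c", 3)       # evicts "b", the least recently used
print(cache.get("b"))   # None -- evicted, no cleanup pass needed
print(cache.get("a"))   # 1
```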
Memcached also makes it very easy to share the cache between web
servers. Instead of using, say, 512MB of RAM on each machine for
filesystem buffers for your cache, leaving your cache to degrade in
performance if it grows beyond 512MB, you set up a 512MB memcached on
each machine, and all the machines share it. If you have 5 machines,
you have 2.5GB worth of room to cache your data before performance
degrades.
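The sharing works because memcached clients hash each key to exactly one server in the pool, so every web server agrees on where a given key lives. Here's a sketch of the simplest such scheme, modulo hashing (real client libraries often use consistent hashing instead, and the server names here are made up for illustration):

```python
import hashlib

# Hypothetical pool -- 5 machines each running a 512MB memcached
# gives roughly 2.5GB of combined cache space.
SERVERS = ["web1:11211", "web2:11211", "web3:11211",
           "web4:11211", "web5:11211"]

def server_for(key):
    """Pick which memcached instance owns this key (simple modulo hashing)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# Every web server computes the same answer for the same key,
# so they all read and write that key on the same instance.
print(server_for("user:42"))
```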