question regarding memory usage

Aaron Stone aaron at serendipity.cx
Fri Sep 7 22:53:57 UTC 2007


On Fri, Sep 7, 2007, Bill Marcum <doggus_98 at yahoo.com> said:

> I'm considering using memcached for our PHP application.  We have
> certain large objects we retrieve on almost every request, and I'd
> like to optimize the retrieval and the memory required to load these
> objects every time.  I see the following on the site:

> > The first thing people generally do is cache things within their web
> > processes. But this means your cache is duplicated multiple times,
> > once for each mod_perl/PHP/etc thread. This is a waste of memory and
> > you'll get low cache hit rates.

> My question: since most requests retrieve these objects, isn't it true
> that I'm still going to duplicate the object multiple times, once
> for each PHP thread?  All I'm doing is replacing a MySQL call with
> a memcached call.

Yes, but the MySQL call is expensive and the memcache call is cheap, and
the per-process copy only lives for the length of one request.
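
In rough terms, the swap looks like this. This is just a sketch against
the pecl/memcache extension, and load_large_object_from_mysql() is a
made-up stand-in for whatever query builds the object today:

    <?php
    $memcache = new Memcache();
    $memcache->connect('localhost', 11211);

    function get_large_object($memcache, $id) {
        $key = "large_object:$id";

        // Cheap round-trip to memcached first.
        $obj = $memcache->get($key);
        if ($obj !== false) {
            return $obj;  // cache hit: no MySQL work at all
        }

        // Cache miss: pay the expensive MySQL cost once...
        $obj = load_large_object_from_mysql($id);

        // ...then publish it for every other PHP process and web
        // server, with a 5-minute expiry (flags 0, expire 300).
        $memcache->set($key, $obj, 0, 300);

        return $obj;
    }

One caveat with this pattern: get() returns FALSE on a miss, so if
FALSE is ever a legitimate cached value you'll need a sentinel.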

> I'm thinking that what I really need is a PHP module that supports
> synchronized read/write access to a single memory object from 
> multiple threads.  But I don't see any PHP modules out there that
> appear to do this.

You'd be trading duplicated data for complex locking mechanisms. I'd take
the duplicated data any day.

> Am I missing the point here?

Locking is incredibly expensive; memory, CPU, and local network bandwidth
are incredibly cheap. Keeping a single copy of the object in memcached,
and having whichever web server in your cluster is currently servicing a
request fetch it on demand, is pretty much the canonical design for these
applications. Unless the object is pathologically large (in which case
you should reconsider the design), you'll pull a copy, use it, then throw
it away, and it will be surprisingly fast.
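
As a sketch of that layout (hypothetical host names, pecl/memcache
again, which hashes each key to one server in the pool):

    <?php
    // Every front-end configures the same pool, so there is exactly
    // one cached copy of the object cluster-wide, not one per PHP
    // process.
    $memcache = new Memcache();
    $memcache->addServer('cache1.example.com', 11211);
    $memcache->addServer('cache2.example.com', 11211);

    // Whichever web server handles the request pulls a transient
    // copy, uses it, and throws it away when the request ends.
    $obj = $memcache->get('large_object:42');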

Aaron

