PHP Daemon marks Memcache as failed every few days

Iain Bapty ibapty+memcached at
Wed May 23 15:11:50 UTC 2007

Thanks Brian,

In my case the daemon feeds Memcache with updated information every 5
seconds so that the XHR requests from the web browsers can return
up-to-date information. Currently the daemon, web server, and Memcache
all run on the same server, so they all communicate over the loopback
device.

The data is all stored in a class instance which grows to 1MB, at
which point it hits the item-size limit and upsets the Memcache PHP
libraries. As for speed, the (anecdotal) evidence suggests that the
slowest part of the process in my case is the serialization of the
class instance by the memcache->set function.
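One cheap sanity check is to measure the serialized payload before it
ever reaches memcached. The PHP client serializes with PHP's own
serializer; the sketch below uses Python's pickle as a stand-in, and the
structure of "updates" is only a hypothetical illustration of the
daemon's data, not the poster's actual class:

```python
import pickle

MAX_ITEM_SIZE = 1024 * 1024  # memcached's default maximum item size (1 MB)


def check_size(obj):
    """Return the serialized size of obj and whether it fits in one item."""
    size = len(pickle.dumps(obj))
    return size, size <= MAX_ITEM_SIZE


# Ten thousand 5-second updates held in one structure, as in the daemon.
updates = [{"t": i * 5, "value": float(i)} for i in range(10000)]
size, fits = check_size(updates)
print(size, fits)
```

Logging this size alongside each store makes it obvious when the
structure is creeping toward the limit, instead of discovering it when
the client starts failing.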

The data stored in the class instance is fine-grained, as each
5-second update is stored separately. That fine granularity matters
less as the updates age, so to reduce the size of the class I'm
currently thinking about reducing the granularity of older data. For
example, data which is 1 day old would be summarised as 1 update every
5 minutes instead of one every 5 seconds.
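The rollup described above can be sketched roughly as follows. This is
a hypothetical helper, not the poster's code: recent points stay at
5-second resolution, while anything older than a day is averaged into
5-minute buckets.

```python
from collections import defaultdict

FINE_STEP = 5        # seconds between raw updates
COARSE_STEP = 300    # 5-minute buckets for older data
CUTOFF = 24 * 3600   # data older than one day gets summarized


def downsample(samples, now):
    """samples: list of (timestamp, value) pairs.

    Points newer than CUTOFF are kept as-is; older points are averaged
    into COARSE_STEP-wide buckets keyed by the bucket start time.
    """
    recent = [(t, v) for t, v in samples if now - t < CUTOFF]
    buckets = defaultdict(list)
    for t, v in samples:
        if now - t >= CUTOFF:
            buckets[t - t % COARSE_STEP].append(v)
    coarse = [(t, sum(vs) / len(vs)) for t, vs in sorted(buckets.items())]
    return coarse + recent
```

With a 5-second feed, each day of old data collapses from 17,280 points
to 288 buckets, which is where most of the size saving comes from.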

On 5/23/07, Brian Moon <brianm at> wrote:
> Iain Bapty wrote:
> > Unless anyone knows of any way of changing the 'max bucket size', I'll
> > look to modify the class instances to store less data. I'll also
> > experiment with the ZLib compression to see if this resolves this
> > issue without adding too much of an overhead.
> My tests have shown that if you can reduce the size of the data, the
> time saved in network traffic for moving smaller objects is well worth
> the tiny amount of cpu to compress and uncompress the data.  We deal
> with about 300k chunks of HTML.  I thought that maybe I could speed it
> up if I skipped compression.  I found the opposite to be true.  Not
> sure about the gz libraries in Java.  But in PHP they are smoking fast.
> --
> Brian Moon
> Senior Developer
> ------------------------------
> It's good to be cheap =)
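Brian's observation is easy to reproduce: for repetitive payloads like
HTML fragments, compression time is tiny next to the byte savings. A
quick sketch with Python's zlib (roughly what PHP's gz functions and
the client's zlib-based compression do; the HTML here is a made-up
stand-in for the ~300k chunks he mentions):

```python
import zlib

# A repetitive ~287 KB chunk, standing in for the HTML fragments above.
html = b"<div class='row'><span>value</span></div>\n" * 7000

compressed = zlib.compress(html, 6)   # level 6 is zlib's default tradeoff
restored = zlib.decompress(compressed)

ratio = len(compressed) / len(html)
print(len(html), len(compressed), round(ratio, 3))
```

Highly repetitive markup routinely compresses by an order of magnitude
or more, so fewer bytes cross the wire (or the loopback) per request.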
