Filling up space for large chunks?

Brion Vibber
Sat, 29 Nov 2003 22:17:15 -0800

Our little memcacheds (1.1.9 running on Linux in [ick] poll mode, up 
for about 25 days) have ceased to accept objects larger than 16k. This 
is putting a slight crimp in our plans to put larger common and cached 
data into memcached where they belong...

Popping up a second fresh memcached on another port, I can freely dump 
large objects into it; restarting should work fine, but I'd rather not 
have to restart the daemons on a regular basis if the problem is going 
to keep recurring.

If I telnet in I see the nice friendly error message:
set bigjunk 0 0 16384
SERVER_ERROR out of memory

Set a size of 16383, and no problem. I know that memcached assigns 
memory in different-sized blocks, but I was also under the impression 
that older chunks would be dumped in favor of new data when room ran 
out. Perhaps it's out of large blocks? Maybe we never actually stored 
anything that size before and now they're gone forever? Hmm...
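
For what it's worth, my understanding is that the slab allocator rounds 
each stored item up to a fixed size class, so a store can fail for one 
class while other classes still have free memory. A rough sketch of 
that rounding, assuming simple power-of-2 size classes (an 
approximation; the real allocator also counts per-item overhead for the 
key and header):

```python
def slab_class(item_size, min_chunk=64, max_chunk=1024 * 1024):
    """Return the chunk size an item of item_size bytes would occupy,
    assuming simple power-of-2 size classes (an approximation)."""
    chunk = min_chunk
    # Double the chunk size until the item fits (or we hit the max).
    while chunk < item_size and chunk < max_chunk:
        chunk *= 2
    return chunk

# A 16383-byte item and a 16384-byte item both land in the 16 KB class
# under this model; one byte more spills into the 32 KB class.
print(slab_class(16383), slab_class(16384), slab_class(16385))
```

If all the pages ever carved up for the big classes have been reassigned 
or were never allocated, every store in those classes would fail while 
smaller items keep working, which matches what we're seeing.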

Perhaps we'll start precaching a few big random data chunks to make 
sure there's space in the future. It'd be real nice to see the 
distribution of block sizes in the stats output.
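
If we do go the precache route, something like this could warm one 
dummy item per large size class at daemon startup. The build_set helper 
and the key names are made up for illustration; only the text-protocol 
"set" line format itself is memcached's:

```python
import random

def build_set(key, value, flags=0, exptime=0):
    """Build a memcached text-protocol 'set' command as raw bytes."""
    header = f"set {key} {flags} {exptime} {len(value)}\r\n".encode("ascii")
    return header + value + b"\r\n"

# Warm one dummy item per large size class so the big chunks get
# allocated up front. (Sizes and keys are placeholders.)
for size in (16384, 32768, 65536):
    payload = bytes(random.randrange(256) for _ in range(size))
    cmd = build_set(f"precache_{size}", payload)
    # sock.sendall(cmd)  # would go to the memcached socket here
```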

STAT pid 14913
STAT uptime 2233354
STAT time 1070172453
STAT version 1.1.9
STAT rusage_user 8272:900000
STAT rusage_system 15267:230000
STAT curr_items 1389986
STAT total_items 42895412
STAT bytes 147956192
STAT curr_connections 1
STAT total_connections 144693848
STAT connection_structures 281
STAT cmd_get 101719226
STAT cmd_set 42896045
STAT get_hits 76572007
STAT get_misses 25147219
STAT bytes_read 7493617394
STAT bytes_written 13115130994
STAT limit_maxbytes 268435456
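
Curiously, by raw bytes the cache is only about half full, which would 
fit the "out of large blocks" theory. A quick sanity check on the 
numbers above:

```python
# Figures copied straight from the stats dump above.
stats = {
    "cmd_get": 101719226,
    "get_hits": 76572007,
    "bytes": 147956192,
    "limit_maxbytes": 268435456,
}

hit_ratio = stats["get_hits"] / stats["cmd_get"]
fill = stats["bytes"] / stats["limit_maxbytes"]
print(f"hit ratio {hit_ratio:.1%}, cache {fill:.1%} full")
```

So roughly a 75% hit ratio, with only ~55% of the 256 MB limit in use, 
yet 16 KB stores fail. The free space must be stranded in the wrong 
size classes.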

-- brion vibber (brion @
