memcached overrides memory limit
Steven Grimm
sgrimm at facebook.com
Thu Nov 2 18:18:37 UTC 2006
You are actually noticing a couple different things. First, the
difference between bytes_used and the maximum size of the cache:
memcached's memory manager is optimized for speed rather than space
efficiency. It allocates memory in 1MB slabs, which are divided into
fixed-size chunks. The possible chunk sizes are determined at startup.
(Run the 1.2 version with "-vv" to see a dump of the sizes.) That means
that, especially for large objects, a certain amount of memory can be
wasted on each item in the cache.
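If it helps to picture it, here is a rough sketch of how those size
classes and the per-item waste come about. This is illustrative Python,
not memcached's actual code, and the 48-byte minimum and 1.25 growth
factor are only what I believe the 1.2 defaults to be -- the -vv dump is
the authoritative list for your build.

    SLAB_SIZE = 1024 * 1024   # every slab is 1MB

    def chunk_sizes(minimum=48, factor=1.25):
        """The fixed chunk sizes, smallest to largest (toy model)."""
        sizes, size = [], minimum
        while size < SLAB_SIZE:
            sizes.append(size)
            size = int(size * factor)
        sizes.append(SLAB_SIZE)
        return sizes

    def wasted_bytes(item_size, sizes):
        # An item lands in the smallest chunk it fits into; the leftover
        # space in that chunk is overhead, and it still counts toward -m.
        chunk = min(s for s in sizes if s >= item_size)
        return chunk - item_size

The real allocator also rounds sizes for alignment and adds per-item
header overhead, so don't treat these numbers as exact; the point is
just to show where the "missing" bytes between bytes_used and the cache
size go.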
220MB of data out of 300MB maximum sounds low to me; we typically see
about 85% memory efficiency. But if your object sizes are just right,
you might get efficiency that low. Try playing with the -f and -n
options in 1.2 to tune the chunk sizes; this can be a huge win
especially if you have a lot of fixed-size objects in your cache.
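To make the tuning point concrete -- again a toy model, not real
memcached output -- here is the same kind of calculation for a
hypothetical 1100-byte object under two growth factors. The smaller
factor gives a much snugger fit, at the cost of more size classes (and
therefore more partially filled slabs):

    def smallest_chunk(item, minimum=48, factor=1.25, slab=1024 * 1024):
        """Smallest chunk size this item would land in (toy model)."""
        size = minimum
        while size < item and size < slab:
            size = min(int(size * factor), slab)
        return size

    for f in (1.25, 1.08):
        chunk = smallest_chunk(1100, factor=f)
        print("factor %.2f: %d-byte chunk, %d bytes wasted"
              % (f, chunk, chunk - 1100))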
As for memcached exceeding its memory limit, the -m option only controls
the maximum amount of memory used by the cache itself. As you guessed,
each incoming connection also eats some memory. If you are sending large
"get" requests for lots of keys, memcached needs to allocate enough
buffer space for the entire request. There is code in 1.2.0 to release
those buffers if they exceed a certain size, but even so, with enough
connections you'll definitely see an increase in memory used. An extra
70MB doesn't sound out of line at all; you don't say how big a "huge"
number of connections actually is, but for example, on the memcached
instances where we aren't using UDP, with about 34000 connections we see
an extra 950MB of memory used above the configured limit. (Which is a
major reason we added UDP support in the first place.)
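The arithmetic on those numbers is nothing official, just division, but
it gives a feel for the per-connection cost:

    # ~950MB of extra memory spread across ~34000 TCP connections works
    # out to roughly 28KB of buffer space held per open connection.
    extra_bytes = 950 * 1024 * 1024
    connections = 34000
    print("%.1f KB per connection" % (extra_bytes / float(connections) / 1024))

So the overhead scales pretty directly with how many connections you
keep open, which is why your extra 70MB will depend heavily on that
"huge" connection count.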
There's one other place where memcached can exceed its memory limit: if
a "set" request comes in whose object size requires a fixed chunk size
that hasn't been seen yet, memcached will always allocate a 1MB slab for
objects of that size, even if the cache has already reached its size
limit. That ensures that you don't get bogus "out of memory" errors
after filling up your cache with, say, 1000-byte objects, then sending
in a 1500-byte object.
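In pseudo-Python, that decision looks roughly like this. It is a sketch
of the behavior just described, not the actual C code, and the argument
names are made up for illustration:

    SLAB_SIZE = 1024 * 1024

    def should_allocate_slab(existing_slabs_for_class, mem_allocated, mem_limit):
        if existing_slabs_for_class == 0:
            # First item ever seen at this chunk size: always grab a 1MB
            # slab, even past the -m limit, so the set doesn't fail with
            # a bogus out-of-memory error.
            return True
        # Otherwise stay within the configured limit; existing slabs of
        # this size get reused (or items evicted) instead.
        return mem_allocated + SLAB_SIZE <= mem_limit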
It's arguable that rather than allocating memory above and beyond the
cache, memcached should instead free stuff from the cache to make room
for connection data, so as to make the best use of the available memory.
Truth be told we would probably prefer it that way, but for now we just
keep track of roughly how big the processes get and lower our limits
accordingly.
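In your case the same workaround would look something like this --
rough arithmetic based on the numbers in your mail, and the real
overhead will move around with traffic:

    # A -m 300 instance that settles around 370MB of process size has
    # roughly 70MB of connection overhead; knocking that off the limit
    # keeps the whole process near the 300MB you actually budgeted.
    budget_mb = 300
    observed_process_mb = 370
    overhead_mb = observed_process_mb - budget_mb
    print("try -m %d" % (budget_mb - overhead_mb))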
Assuming you have enough physical memory, you might try creating, say, a
100MB instance and seeing how big it gets. As far as I know -- and we
use memcached VERY heavily -- it does not actually leak memory, so its
size should reach a stable equilibrium if you let it run for a while.
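If you want to watch that experiment without babysitting top, here is
one hypothetical way to do it: start a throwaway instance with
something like "memcached -m 100" and poll its resident size via ps.

    import subprocess, time

    def rss_kb(pid):
        """Resident set size in KB, as reported by ps (Linux/BSD)."""
        out = subprocess.check_output(["ps", "-o", "rss=", "-p", str(pid)])
        return int(out.strip())

    pid = 12345   # whatever pid your test instance ends up with
    while True:
        print("%d KB resident" % rss_kb(pid))
        time.sleep(300)

If the resident size levels off once the cache fills and the connection
count peaks, you're looking at overhead rather than a leak.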
Memcached's memory allocation is definitely an area that we (Facebook)
will be looking at improving at some point. It is already much better
in 1.2 than in 1.1.x, but that was just a matter of fiddling with some
settings rather than a fundamental algorithmic change. 85% memory
efficiency is good but 99% would be better! We have a few ideas about
how to get there without losing memcached's hard-won CPU efficiency, but
some experimentation will be required.
-Steve
Olga Khenkin wrote:
>
> Hi all,
>
> I have the following problem with memcached: I run it with a certain
> memory limit, let's say -m 300 (300 MB). In a few hours it reaches
> 370 MB and continues to grow.
>
> When I get stats from this instance, it shows max_available between
> 300 and 320 MB, and bytes_used about 220 MB. What happens to the rest
> of the memory?
>
> Our site talks to memcached through the PHP Memcache extension, using
> an array of 20 instances working together. It handles a huge number of
> connections, so I would suspect the connection structures of eating
> the memory, but in times of relatively low traffic, when connections
> are released, the memory is not.
>
> We mainly use version 1.1.12, but today I tried 1.2 and ran into the
> same problem. So if it's a bug in memcached, it hasn't been fixed. But
> maybe I'm just using it the wrong way?
>
> Has anybody else run into the same problem? Or do you have any ideas
> about the cause of this memory issue? We're happy to use memcached,
> but with these memory issues we have to restart it every few days
> and, naturally, lose the whole cache at each restart.
>
> I would be grateful for any ideas before I dive into deep debugging...
>
> Olga.
>