memory fragmentation issues in 1.2?

Paul T pault12345 at yahoo.com
Thu Dec 7 09:31:12 UTC 2006


According to the code below, the deletion is driven by libevent's timer.
Timers are (always) nasty business, and it is never a good idea to attach
substantial logic to a timer handler (that's why univca uses libevent's
timers only to set a single global variable, and for nothing else).
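
Just to illustrate, here is a minimal sketch of that flag-only pattern,
assuming the libevent 1.x API that memcached used at the time; the names
wakeup_flag, wakeup_event and run_deferred_deletes are made up for
illustration:

#include <sys/time.h>
#include <event.h>

static volatile int wakeup_flag = 0;
static struct event wakeup_event;

/* The handler does nothing but set a flag and re-arm itself;
   the actual work runs in the main loop, outside the callback. */
void wakeup_handler(int fd, short which, void *arg) {
    struct timeval t;
    t.tv_sec = 5; t.tv_usec = 0;
    wakeup_flag = 1;
    evtimer_set(&wakeup_event, wakeup_handler, 0);
    evtimer_add(&wakeup_event, &t);
}

/* In the main loop:
       if (wakeup_flag) {
           wakeup_flag = 0;
           run_deferred_deletes();
       }
   so the heavy lifting never runs inside the timer callback. */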

But then again, maybe this is an old version of memcached's code, maybe
it has been fixed, and maybe this is not what causes the leak, etc. etc.
(I also suspect that this fragment of code causes higher CPU consumption
by memcached than it should. The same disclaimers apply.)

http://code.sixapart.com/svn/memcached/trunk/server/memcached.c

struct event deleteevent;

void delete_handler(int fd, short which, void *arg) {
    struct timeval t;
    static int initialized = 0;

    if (initialized) {
        /* some versions of libevent don't like deleting events
           that don't exist, so only delete once we know this
           event has been added. */
        evtimer_del(&deleteevent);
    } else {
        initialized = 1;
    }

    evtimer_set(&deleteevent, delete_handler, 0);
    t.tv_sec = 5; t.tv_usec = 0;
    evtimer_add(&deleteevent, &t);

    {
        int i, j=0;
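        /* Walk the pending-delete queue: items whose delete lock
           has expired are unlinked and released; the rest are kept,
           compacting the array in place. */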
        for (i=0; i<delcurr; i++) {
            item *it = todelete[i];
            if (item_delete_lock_over(it)) {
                assert(it->refcount > 0);
                it->it_flags &= ~ITEM_DELETED;
                item_unlink(it);
                item_remove(it);
            } else {
                todelete[j++] = it;
            }
        }
        delcurr = j;
    }
}

--- Timo Ewalds <timo at tzc.com> wrote:

> Paul T wrote:
> > You are saying that your copy of memcached is leaking memory to
> > the point of going from 800Mb to 2Gb+ per memcached process. That
> > is certainly wrong and should not be happening. It could be that
> > you are the only person on the planet observing this behaviour.
>
> I have seen this before as well, though not recently. I never
> managed to figure out the reason, though I can try to describe the
> circumstances. This was on a fairly recent Debian install, being
> netbooted to about 50 identical machines. They all had 256mb worth
> of memcached configured, with 8 of them in one pool and the rest in
> another. The big pool was used as a general-purpose object cache
> (average size of under 1kb, high read-to-write load). The pool of 8
> was used as a page cache, so every page served was pushed there.
> That gave it a very write-heavy load of fairly big objects (i.e. an
> average size of about 20-30kb?), but with a very low read load
> (about 5% reads, as most pages weren't pulled from cache). The
> memcached daemons on the main pool stayed under 300mb, but the ones
> in the page cache stabilized around 350mb. Every so often the page
> cache ones would grow to 500+mb before being killed by the OOM
> killer (sometimes taking down the machine as the OOM killer picked
> the wrong process). I never managed to figure out why that
> happened, but it was consistently those machines. It is either
> related to writes to the cache, to large objects, or to forceful
> expiry. Since those machines didn't have a higher write/s count
> than the other machines, I doubt it's related to the amount of
> writes. Since it was pushing objects out of the cache at a very
> substantial rate, maybe it's related to that? I haven't seen this
> behaviour in quite a while though, so maybe it's fixed?
>
> Timo
> 



 