Memory management

Jure Petrovic fonz at siol.net
Wed Mar 28 08:24:57 UTC 2007


Of course I agree with you. Multithreading is the only possible way, and
doing a few memcpy's every now and then wouldn't do any real
harm on a modern multicore CPU. After all, we have to move on,
right?

It's just that I am more of a low-level (RT) kind of guy, and having
something unpredictable going on in the background is always a problem for
me. Especially if there are multiple threads accessing shared memory
locations and a special thread doing some packing while requests are in
progress... I just can't help seeing segfaults :-) Don't get me wrong, I
just want to point out that these things must be carefully designed.

Have you also discussed what is going to happen to the current memory
management? Since it is already implemented, could it stay in the
application as an option? For example, one could specify the memory
management method on the command line when invoking memcached...
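Something along these lines; memcached already exposes slab tuning on the command line (-m for the memory limit in MB, -f for the chunk growth factor), so a selector flag would fit that pattern. The --mem-mgmt flag below is purely hypothetical, just to illustrate the suggestion:

```shell
# Real, existing flags: -m (memory limit in MB), -f (slab growth factor).
memcached -m 64 -f 1.25

# Hypothetical flag (does not exist in memcached) selecting the allocator:
memcached -m 64 --mem-mgmt=slab      # keep the current slab allocator
memcached -m 64 --mem-mgmt=compact   # opt into a new compacting scheme
```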

Regards,
Jure

----- Original Message ----- 
From: "Steven Grimm" <sgrimm at facebook.com>
To: "Jure Petrovic" <fonz at siol.net>
Cc: "memcached" <memcached at lists.danga.com>
Sent: Wednesday, March 28, 2007 12:46 AM
Subject: Re: Memory management


> Moving objects around in memory would naturally have a CPU cost. And if 
> your memcached was otherwise fully consuming the available processor(s), 
> then something would have to give: either you'd slow down request 
> processing to allow time for the necessary repacking, or you'd reduce 
> memory efficiency by allowing fragmentation to start happening.
>
> On the other hand, if you're maxing out your CPU with a memcached 
> instance, you have bigger problems to deal with; chances are you are 
> already falling behind on your request processing anyway.
>
> But I don't think that's going to be too common. As a sample data point, 
> as I write this, one of our typical memcached instances is serving about 
> 37,000 requests a second and is eating under 20% of the CPU capacity of 
> the 4-core box it's running on. That's a LOT of headroom you could use to 
> shuffle memory around. Obviously other sites could be running memcached on 
> slower hardware, or with higher request volumes, and thus have less CPU 
> time to spare, but there will probably be at least a little wiggle room. 
> (And in fact I suspect we're probably on the high end of the scale in 
> terms of memcached traffic; we did the multithreading work for a reason!)
>
> -Steve 
