multiple pools in Java

Greg Whalin gwhalin at meetup.com
Thu Apr 7 10:00:48 PDT 2005


I responded to you directly, but will update the list as well.  We saw 
similar behavior simply due to the objects we were caching.  Most of our 
objects were fairly small, but the occasional object would be much 
larger (10-15 times our average).  These objects would show up to be 
cached at random points, generally well after the memcached server had 
"settled in", and we would get out-of-memory errors.

We dealt with this problem in two ways.  First, we stopped relying on 
default Java serialization.  While the default serialization works 
great, it is far from efficient if you care about the size of the 
serialized object.  Instead, we use java.io.Externalizable and 
implemented our own serialization scheme.  Instant 60% savings in the 
size of our objects.
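
For anyone curious, the mechanics look roughly like the sketch below.  The 
class and its fields are made up for illustration (not one of our real 
classes); the point is that writeExternal()/readExternal() write only the 
raw field values, with none of the class descriptors and per-field metadata 
that default serialization adds.

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

// Illustrative cached object; not one of our real classes.
public class MemberProfile implements Externalizable {

    private int memberId;
    private String name;
    private long lastVisit;

    // Externalizable requires a public no-arg constructor.
    public MemberProfile() {
    }

    public MemberProfile(int memberId, String name, long lastVisit) {
        this.memberId = memberId;
        this.name = name;
        this.lastVisit = lastVisit;
    }

    // Write only the raw field values; no class descriptors or
    // per-field metadata like default serialization produces.
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(memberId);
        out.writeUTF(name);
        out.writeLong(lastVisit);
    }

    // Read the fields back in exactly the order they were written.
    public void readExternal(ObjectInput in) throws IOException,
            ClassNotFoundException {
        memberId = in.readInt();
        name = in.readUTF();
        lastVisit = in.readLong();
    }
}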

Second, we turned on gzip compression in the client.  Now, my tests and 
benchmarks show that Java gzip compression is slow ... horribly slow, so 
we really wanted to avoid using it wholesale.  We set our compression 
threshold so that we only compress objects larger than 128K.  This 
combination solved all of our problems.  We have yet to see another 
out-of-memory error from memcached in 8 months of heavy usage.
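
In case it helps anyone else, turning this on in the client looks roughly 
like the following.  The server addresses are placeholders, and this is a 
minimal sketch of the configuration rather than our production setup:

import com.danga.MemCached.MemCachedClient;
import com.danga.MemCached.SockIOPool;

public class CacheConfig {

    public static void main(String[] args) {
        // Set up the socket pool (server addresses are placeholders).
        SockIOPool pool = SockIOPool.getInstance();
        pool.setServers(new String[] { "cache1.example.com:11211",
                                       "cache2.example.com:11211" });
        pool.initialize();

        MemCachedClient mc = new MemCachedClient();

        // Enable gzip compression, but only for values over 128K,
        // since Java's gzip is too slow to run on every object.
        mc.setCompressEnable(true);
        mc.setCompressThreshold(128 * 1024);

        // Small values are stored as-is; anything over the threshold
        // gets compressed before it goes over the wire.
        mc.set("member:1234", "some value");
    }
}

The threshold keeps the slow gzip path off the common case (small objects) 
and reserves it for the handful of oversized values that were blowing out 
memory.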

Now, as to multiple pools.  Currently, the Java memcached client pool is 
a singleton, so it is not possible to create multiple pools.  I think I 
could modify the code to allow for multiple pools fairly easily, but I 
would need to go through everything to make sure all existing code would 
continue to work correctly.  If I get some time, I will look into this 
over the weekend.
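
To make that concrete, here is a purely hypothetical sketch of what the 
caller side might look like if named pools were added.  The 
getInstance(String poolName) overload and the pool-name constructor 
argument do not exist in the client today; they are invented here just to 
illustrate the kind of segmentation Mybrid is describing (routing objects 
over some tunable size X to a dedicated pool):

import com.danga.MemCached.MemCachedClient;
import com.danga.MemCached.SockIOPool;

public class SegmentedCaches {

    public static void main(String[] args) {
        // HYPOTHETICAL: a keyed getInstance() that returns a separate,
        // independently configured pool per name.
        SockIOPool smallPool = SockIOPool.getInstance("small-objects");
        smallPool.setServers(new String[] { "cache1.example.com:11211" });
        smallPool.initialize();

        SockIOPool largePool = SockIOPool.getInstance("large-objects");
        largePool.setServers(new String[] { "cache2.example.com:11211" });
        largePool.initialize();

        // HYPOTHETICAL: clients bound to a named pool instead of the singleton.
        MemCachedClient smallCache = new MemCachedClient("small-objects");
        MemCachedClient largeCache = new MemCachedClient("large-objects");

        // The application could then route by serialized size: anything
        // over some tunable X bytes goes to the large-object pool.
        byte[] value = new byte[200 * 1024];
        MemCachedClient target =
            (value.length > 128 * 1024) ? largeCache : smallCache;
        target.set("some:key", value);
    }
}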

Greg

Mybrid Spalding wrote:
> Hi!
>  Happy Thursday! Thanks so much for replying.
>  OK, here is more detail. We are having excellent success with this 
> very well designed product. One problem we are having is that large 
> objects only have a cache hit rate of 45%, while small objects are 90% 
> plus. We speculate the root cause is that our web site uses objects of 
> every size imaginable and that there is not enough slab space for 
> large objects.  We'd like to experiment with segmenting by writing 
> objects larger than X to one cache pool, where we tune X.
>  Any thoughts on this would be greatly appreciated. JFYI, your cache has 
> helped us nearly double our bandwidth and we really, truly appreciate 
> this product.
> 
> Thanks!
> -Mybrid
> 
> 
> Brad Fitzpatrick wrote:
> 
>> Why don't you just make one pool that's twice as big, then?
>>
>> If a node in a pool dies, the memcached client just routes around it and
>> spreads the load to the nodes that are still alive.
>>
>>
>> On Tue, 5 Apr 2005, Mybrid Spalding wrote:
>>
>>> Hi!
>>>  Greetings from a first-time poster. I'm currently using the Java
>>> client to connect to memcached. For robustness reasons I'm thinking of
>>> segmenting traffic to different caches. That way, if one cache pool goes
>>> down, I can route to another. However, reading the Java docs, it's not
>>> obvious to me how to do this or what a good strategy would be.
>>>
>>>
>>> Thanks!
>>> -Mybrid
>>>
>>>
> 


