Best Java client?

Dustin Sallings dustin at spy.net
Wed May 14 16:43:11 UTC 2008


   I can't imagine what this test does.  Can you post code?

   Our serialization is pretty much the same (there are only so many  
ways to do it), but my default compression threshold is lower.  You  
could be falling into a different slab and pushing out the other  
objects.
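
   If it is the compression threshold, one quick thing to try (a
sketch, not from your code; the host, port, and threshold value here
are assumptions) is raising spymemcached's threshold via a transcoder:

     // imports: java.net.InetSocketAddress,
     //          net.spy.memcached.MemcachedClient,
     //          net.spy.memcached.transcoders.SerializingTranscoder
     MemcachedClient client =
         new MemcachedClient(new InetSocketAddress("localhost", 11211));
     SerializingTranscoder tc = new SerializingTranscoder();
     tc.setCompressionThreshold(30 * 1024); // e.g. match the other client
     Object value = new java.util.HashSet<String>(); // placeholder payload
     client.set("probe", 3600, value, tc);  // pass the transcoder per call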

   But really, I can't imagine what your test is actually doing.  I  
can think of countless ways to not store a lot of data, but if you  
think there's a bug here, I'm going to need your test to reproduce it  
because in all of my tests, I can retrieve everything I store.

-- 
Dustin Sallings (mobile)

On May 14, 2008, at 8:22, Alexander Zaitsev <alexander.zaitsev at webamg.com> wrote:

> Sorry, I was not clear.
>
> We have a tool that generates a flow of objects that are stored in
> memcached. The objects are Java HashSet<String> instances of variable
> size.
> When this tool uses the Danga client, it fits about 490K objects in a
> 128M memcached instance.
> When it uses spymemcached, it fits only about 317K objects.
> Of course, the total number of objects stored over a run is bigger,
> since old ones expire and get removed from the cache by memcached. The
> numbers above (or below) show the average number of objects that fit
> into the cache.
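>
> To make this concrete, the load is roughly like the following sketch
> (simplified, with made-up key names and sizes; this is not the actual
> tool):
>
>      // imports: java.net.InetSocketAddress, java.util.HashSet,
>      //          java.util.Random, net.spy.memcached.MemcachedClient
>      MemcachedClient client =
>          new MemcachedClient(new InetSocketAddress("localhost", 11211));
>      Random rnd = new Random();
>      for (int i = 0; i < 500000; i++) {
>          HashSet<String> value = new HashSet<String>();
>          int n = 1 + rnd.nextInt(50);            // variable-size sets
>          for (int j = 0; j < n; j++) {
>              value.add("member-" + rnd.nextInt());
>          }
>          // block on the future so the sketch does not overrun the
>          // client's write queue
>          client.set("obj-" + i, 3600, value).get();
>      }
>      client.shutdown();
>      // then read curr_items from the server's "stats" output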
>
> The difference makes me think that serialization is implemented quite
> differently in Danga and Spy. But a 1.5x difference does not look good.
>
> --
> Alexander Zaitsev
> AMG Lab Sarl
>
>
>
> Dustin Sallings wrote:
>>
>>  What are you showing here?
>>
>> -- Dustin Sallings (mobile)
>>
>> On May 14, 2008, at 6:18, Alexander Zaitsev <alexander.zaitsev at webamg.com> wrote:
>>
>>> It is interesting that the spymemcached client stores items less
>>> effectively. Under the same conditions, in a 128MB memcached instance:
>>>
>>> spymemcached: STAT curr_items 317886
>>> danga: STAT curr_items 491882
>>>
>>> The Danga client is initialized with the following parameters:
>>>
>>>      memCachedClient = new MemCachedClient();
>>>      memCachedClient.setSanitizeKeys(false);
>>>      memCachedClient.setCompressEnable(false);
>>>      memCachedClient.setPrimitiveAsString(false);
>>>
>>> Spymemcached uses defaults.
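>>>
>>> (That is, roughly just the following; the address is an assumption:)
>>>
>>>      memcachedClient = new MemcachedClient(
>>>          new InetSocketAddress("localhost", 11211));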
>>>
>>> -- 
>>> Alexander Zaitsev
>>> AMG Lab Sarl
>>>
>>>
>>>
>>> Kristian Eide wrote:
>>>> I recently had to choose which memcached Java library to use, and I
>>>> did some simple, informal benchmarking between the Whalin client
>>>> and 'spymemcached'. While limited in scope, I think the results are
>>>> quite clear.
>>>>
>>>> My test consisted of writing 2000 values and then reading them
>>>> back; I repeated this for objects of various sizes. I used only a
>>>> single memcached client object with a compression threshold of
>>>> 16KB, and did not adjust any of the other defaults.
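>>>>
>>>> A rough sketch of the timing loop (illustrative only, not the exact
>>>> benchmark code; the spymemcached call is shown and the names are
>>>> made up):
>>>>
>>>>      int size = 350;                  // one of the sizes tested
>>>>      byte[] payload = new byte[size];
>>>>      long start = System.currentTimeMillis();
>>>>      for (int i = 0; i < 2000; i++) {
>>>>          client.set("key-" + i, 3600, payload); // async store
>>>>      }
>>>>      long storeMs = System.currentTimeMillis() - start;
>>>>
>>>>      start = System.currentTimeMillis();
>>>>      for (int i = 0; i < 2000; i++) {
>>>>          Object o = client.get("key-" + i);     // get() blocks
>>>>      }
>>>>      long fetchMs = System.currentTimeMillis() - start;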
>>>>
>>>> The tests were run on Java 1.6 under Mac OS X, against a single
>>>> instance of memcached 1.2.4 on localhost with 64MB of memory. The
>>>> computer has 4 CPU cores.
>>>>
>>>> Here are the results for the Whalin client (Store and Fetch times
>>>> are in milliseconds):
>>>>
>>>> N: 2000 Size: 2 Store: 980 Fetch: 1372 kbPerSecond: 2 FetchesPerSecond: 1457
>>>> N: 2000 Size: 4 Store: 596 Fetch: 1245 kbPerSecond: 6 FetchesPerSecond: 1605
>>>> N: 2000 Size: 9 Store: 503 Fetch: 1122 kbPerSecond: 15 FetchesPerSecond: 1782
>>>> N: 2000 Size: 19 Store: 391 Fetch: 1007 kbPerSecond: 36 FetchesPerSecond: 1984
>>>> N: 2000 Size: 39 Store: 390 Fetch: 1013 kbPerSecond: 75 FetchesPerSecond: 1973
>>>> N: 2000 Size: 81 Store: 363 Fetch: 992 kbPerSecond: 159 FetchesPerSecond: 2015
>>>> N: 2000 Size: 168 Store: 370 Fetch: 978 kbPerSecond: 335 FetchesPerSecond: 2043
>>>> N: 2000 Size: 350 Store: 357 Fetch: 967 kbPerSecond: 706 FetchesPerSecond: 2067
>>>> N: 2000 Size: 729 Store: 375 Fetch: 1004 kbPerSecond: 1416 FetchesPerSecond: 1990
>>>> N: 2000 Size: 1516 Store: 391 Fetch: 1051 kbPerSecond: 2815 FetchesPerSecond: 1901
>>>> N: 2000 Size: 3154 Store: 398 Fetch: 1034 kbPerSecond: 5956 FetchesPerSecond: 1933
>>>> N: 2000 Size: 6561 Store: 426 Fetch: 1072 kbPerSecond: 11946 FetchesPerSecond: 1864
>>>> N: 2000 Size: 13647 Store: 545 Fetch: 1010 kbPerSecond: 26369 FetchesPerSecond: 1978
>>>> N: 2000 Size: 28388 Store: 2321 Fetch: 1346 kbPerSecond: 41172 FetchesPerSecond: 1485
>>>> N: 2000 Size: 59049 Store: 3681 Fetch: 1347 kbPerSecond: 85571 FetchesPerSecond: 1483
>>>> Total: 28666 ms
>>>>
>>>> And here are the results with 'spymemcached':
>>>>
>>>> N: 2000 Size: 2 Store: 479 Fetch: 1081 kbPerSecond: 3 FetchesPerSecond: 1849
>>>> N: 2000 Size: 4 Store: 196 Fetch: 831 kbPerSecond: 9 FetchesPerSecond: 2405
>>>> N: 2000 Size: 9 Store: 127 Fetch: 624 kbPerSecond: 28 FetchesPerSecond: 3203
>>>> N: 2000 Size: 19 Store: 56 Fetch: 533 kbPerSecond: 69 FetchesPerSecond: 3750
>>>> N: 2000 Size: 39 Store: 66 Fetch: 534 kbPerSecond: 142 FetchesPerSecond: 3740
>>>> N: 2000 Size: 81 Store: 65 Fetch: 470 kbPerSecond: 336 FetchesPerSecond: 4252
>>>> N: 2000 Size: 168 Store: 41 Fetch: 452 kbPerSecond: 725 FetchesPerSecond: 4424
>>>> N: 2000 Size: 350 Store: 42 Fetch: 449 kbPerSecond: 1522 FetchesPerSecond: 4454
>>>> N: 2000 Size: 729 Store: 46 Fetch: 457 kbPerSecond: 3112 FetchesPerSecond: 4371
>>>> N: 2000 Size: 1516 Store: 50 Fetch: 449 kbPerSecond: 6594 FetchesPerSecond: 4454
>>>> N: 2000 Size: 3154 Store: 57 Fetch: 442 kbPerSecond: 13918 FetchesPerSecond: 4518
>>>> N: 2000 Size: 6561 Store: 131 Fetch: 476 kbPerSecond: 26876 FetchesPerSecond: 4194
>>>> N: 2000 Size: 13647 Store: 227 Fetch: 495 kbPerSecond: 53830 FetchesPerSecond: 4039
>>>> N: 2000 Size: 28388 Store: 1452 Fetch: 1044 kbPerSecond: 53074 FetchesPerSecond: 1914
>>>> N: 2000 Size: 59049 Store: 2388 Fetch: 1131 kbPerSecond: 101952 FetchesPerSecond: 1768
>>>> Total: 14909 ms
>>>>
>>>> The total time is almost halved with 'spymemcached'. However, the
>>>> results get even more interesting when you use multiple threads.
>>>> I repeated the test with 4 threads: the total time for the Whalin
>>>> client increased to 101415 ms, while 'spymemcached' only increased
>>>> to 29535 ms. Perhaps you could get better performance out of the
>>>> Whalin client by tweaking some settings, but it does seem that
>>>> 'spymemcached' is the way to go if you care at all about
>>>> performance. The asynchronous set can be especially helpful.
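>>>>
>>>> For instance, spymemcached's set() returns a
>>>> java.util.concurrent.Future right away (a sketch; the key and value
>>>> are placeholders):
>>>>
>>>>      Future<Boolean> f = client.set("key", 3600, value); // no block
>>>>      // ... other work ...
>>>>      boolean stored = f.get(); // block only when the result matters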
>>>>
>>>> I did not look at the CPU usage while the test was running, but I  
>>>> would expect that 'spymemcached' has the edge here as well.
>>>>

