HTTP ?
Greg Whalin
gwhalin at meetup.com
Tue Nov 30 11:15:42 PST 2004
Evan Martin wrote:
> On Tue, Nov 30, 2004 at 11:17:10AM -0500, Greg Whalin wrote:
>
>>Gregory Block wrote:
>>
>>>The only, the *only* thing I can think of to wish for is a better
>>>performing GzipOutputStream, and that's mostly because I haven't gone
>>>looking for either a faster compression stream in Java, or something
>>>backed by hardware to do that compression, and that's just a
>>>nice-to-have rather than a killer.
>>
>>This is the one thing I would like as well! I played around with the
>>basic compression streams in java and the performance was horrible. Our
solution was probably not the most elegant. Since the client supports a
threshold object size below which no compression is attempted, we just
set this value fairly high (128K) and store only our largest objects
>>compressed. Given a typical memcached server is fairly cheap, we just
>>bought more hardware to run as servers.
>
>
> Depending on your data, an alternative approach to squeezing more memory
> out of your memcached is using a tighter serialization format. Not only
> does generic serialization have some overhead:
> % perl -mStorable -e 'print Storable::freeze(\5)' | wc -c
> 14
> % perl -mStorable -e 'print Storable::freeze([1,2])' | wc -c
> 22
>
> But also if you know the details of your data representation, you can
> use (for example) a single byte for small numbers.
> See, uh, get_userpic_info in ljlib.pl:
> http://cvs.livejournal.org/browse.cgi/livejournal/cgi-bin/ljlib.pl?rev=1.799&content-type=text/x-cvsweb-markup
>
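For anyone curious about the threshold approach described above, a minimal sketch in Java might look like the following. The class and method names are hypothetical, and the 128K constant just mirrors the value mentioned in the thread; payloads under the threshold are stored uncompressed to sidestep GZIPOutputStream's cost on small objects.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class ThresholdCompressor {
    // Hypothetical threshold, mirroring the 128K figure from the thread.
    static final int COMPRESS_THRESHOLD = 128 * 1024;

    // Compress only payloads at or above the threshold; smaller
    // payloads are returned as-is, avoiding GZIP overhead entirely.
    static byte[] maybeCompress(byte[] data) throws IOException {
        if (data.length < COMPRESS_THRESHOLD) {
            return data;
        }
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        GZIPOutputStream gz = new GZIPOutputStream(bos);
        gz.write(data);
        gz.finish();
        return bos.toByteArray();
    }
}
```

A real client would also need to record somewhere (e.g. in a flags field) whether a given value was compressed, so reads know whether to decompress.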
True. Though the default Java serialization for most native object types
is decent; it seems to be only when you start serializing complex data
structures that things get inefficient. But yes, I agree that custom
serialization can shrink things quite a bit.
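The size difference is easy to demonstrate in Java, analogous to Evan's Perl/Storable measurement: default `ObjectOutputStream` serialization carries class-descriptor overhead, while a hand-rolled encoding of a small number can be a single byte. The helper names here are illustrative, not from any real client.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class SerializationSize {
    // Bytes produced by default Java serialization for an object.
    static int defaultSize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(o);
        oos.flush();
        return bos.size();
    }

    // Custom encoding: a number known to fit in one byte is written as one.
    static int customSize(int n) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(bos);
        dos.writeByte(n);
        dos.flush();
        return bos.size();
    }
}
```

For a small Integer, `defaultSize` comes out at dozens of bytes (stream header plus class descriptor) versus one byte for the custom encoding, which is exactly the kind of win custom serialization buys.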
Greg