Compression: client or server?
Brad Fitzpatrick
brad@danga.com
Wed, 13 Aug 2003 12:53:20 -0700 (PDT)
I agree.
The issue then is stats, though. It'd be interesting to know how much
physical (compressed) vs. logical (uncompressed) data is in memcache.
Based on that, you could adjust the compression threshold on the clients.
I imagine adding an option to the Perl MemCachedClient API that says,
"Compress anything over this size".
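Something like this, maybe (the option name and constructor details below
are just a sketch of what it might look like, not an existing API):

  use MemCachedClient;
  use Compress::Zlib;   # provides compress()

  # hypothetical "compress_threshold" option: only bother compressing
  # values bigger than this many bytes
  my $memc = MemCachedClient->new({
      servers            => [ "10.0.0.10:11211", "10.0.0.11:11211" ],
      compress_threshold => 10_000,
  });

  # roughly what set() would do internally
  sub maybe_compress {
      my ($val, $threshold) = @_;
      return ($val, 0) if length($val) < $threshold;
      my $gz = compress($val);
      # keep the original if compression didn't actually save anything
      return length($gz) < length($val) ? ($gz, 1) : ($val, 0);
  }

The second return value would become a flag bit on the item so get()
knows whether to uncompress on the way back out.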
At lunch just now, Evan and I hit on another argument for putting it in
the server, though: what if somebody had more memcache servers than web
nodes? Then the servers would have more spare CPU for compression than
the clients do. But we don't buy that argument, because in our experience
you can't just _buy_ a memcache machine: we were unable to find a system
or parts vendor who could sell us a box with a crap processor and tons of
RAM.
So, yeah: compression definitely client-side, but the server should still
do stats, with the client sending along the original (uncompressed) size
so the server knows how much each item shrank. See any problem with that?
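For instance (the extra "original size" field on the set line below is
hypothetical, not part of the current protocol; $sock, $key, $flags,
$exptime and $value are assumed to already be set up):

  use Compress::Zlib;

  my $gz = compress($value);

  # hypothetical fifth field: the logical (uncompressed) length, so the
  # server can track physical vs. logical bytes without ever having to
  # inflate the data itself
  printf $sock "set %s %d %d %d %d\r\n",
               $key, $flags, $exptime, length($gz), length($value);
  print  $sock $gz, "\r\n";

The server would just add length($gz) to its physical-bytes counter and
length($value) to its logical-bytes counter, and otherwise treat the data
as opaque.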
On Wed, 13 Aug 2003, Lisa Marie Seelye wrote:
> On Wed, 2003-08-13 at 14:44, Brad Fitzpatrick wrote:
> > Thoughts?
>
> What CPU resources are required to compress the data? How much lag
> will it introduce to the system?
>
> The best way to go about storing compressed data is for the client to
> _send the data in a compressed format_ and then know well enough to
> decompress it when it gets it back.
>
> The server should store whatever the client sends it and not mess
> with it -- put the burden on the user, not the server.
>
>
> --
> Regards,
> -Lisa
> <Vix ulla tam iniqua pax, quin bello vel aequissimo sit potior>
>