Compressing unencrypted metafiles
gavin at openfusion.com.au
Mon Feb 1 14:43:23 UTC 2010
On Mon, Feb 01, 2010 at 03:17:52PM +0100, Kostas Chatzikokolakis wrote:
> On 01/02/10 14:56, Gavin Carr wrote:
> >On one of my backups - unencrypted, lots and lots of small files, over a
> >high latency link - the storage of the metafile takes 20 minutes, which is
> >up to 2/3 of the time of the whole backup. Gzipping the metafile reduces
> >that to 3 minutes, and reduces the storage requirements on both the client
> >and the server from 30M to 5M.
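Metafiles are highly repetitive text (one stanza per file), which is why gzip gets roughly a 6:1 reduction here. A minimal sketch of the compress-on-store step, in Python rather than brackup's Perl, with a hypothetical `compress_metafile` helper:

```python
import gzip
import shutil

def compress_metafile(path):
    # Write a gzipped copy of the metafile alongside the original.
    # Illustrative only: brackup's real metafile handling is in its
    # Perl codebase (via IO::Compress), and the ".gz" naming here is
    # an assumption, not brackup's actual convention.
    gz_path = path + ".gz"
    with open(path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return gz_path
```

On a high-latency link the smaller upload dominates the compression cost, which is where the 20-minute to 3-minute improvement comes from.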
> Nice feature.
> >- I've made this the default behaviour for unencrypted backups - is that
> > reasonable? Do we need a flag to be able to turn this off?
> Sounds reasonable, but existing uncompressed metafiles should be supported.
Yep, they are.
> One issue that came to my mind: in the restore code, there's an "is
> binary" heuristic somewhere to decide whether a metafile is
> encrypted or not. Won't this break now with compressed metafiles?
> Just saying, I didn't check the code.
Yeah, that test is done after the metafile is slurped, which uses the
uncompress code path, so in the normal case it all Just Works.
The only time it wouldn't work is if we make IO::Compress optional and you
then try to deal with a compressed metafile. In that case you'd get the
metafile wrongly treated as encrypted, when in fact it's compressed.
> >- this adds another dependency, on IO::Compress. Is that ok? Should it be
> > optional, so people who only do encrypted ones don't need to worry about
> > it?
> Optional is better.
Yeah, the only negative is that there's then no indication at perl Makefile.PL
time that IO::Compress might be useful for some use cases. Making it optional
effectively makes it invisible there. It would be really nice if
ExtUtils::MakeMaker supported recommended modules.
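Making the dependency optional usually means probing for it at runtime and degrading gracefully when it's absent (in Perl, wrapping the require in an eval). A sketch of that pattern in Python, using a generic module probe rather than anything brackup-specific:

```python
import importlib.util

def have_module(name: str) -> bool:
    # Runtime probe for an optional dependency, analogous to guarding
    # "require IO::Compress::Gzip" with eval {} in Perl. find_spec()
    # returns None for a missing top-level module without importing it.
    return importlib.util.find_spec(name) is not None
```

Callers can then take the compressed path only when the probe succeeds, and fall back to writing the metafile uncompressed otherwise.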
More information about the brackup mailing list