Compressing unencrypted metafiles

Kostas Chatzikokolakis kostas at
Mon Feb 1 14:17:52 UTC 2010

On 01/02/10 14:56, Gavin Carr wrote:
> On one of my backups - unencrypted, lots and lots of small files, over a
> high latency link - the storage of the metafile takes 20 minutes, which is
> up to 2/3 of the time of the whole backup. Gzipping the metafile reduces
> that to 3 minutes, and reduces the storage requirements on both the client
> and the server from 30M to 5M.

Nice feature.

> Questions:
> - I've made this the default behaviour for unencrypted backups - is that
>    reasonable? Do we need a flag to be able to turn this off?

Sounds reasonable, but existing uncompressed metafiles should be supported.

One issue that comes to mind: somewhere in the restore code there's an 
"is binary" heuristic that decides whether a metafile is encrypted or 
not. Won't compressed metafiles break that, since gzip output is binary 
too? Just a thought -- I haven't checked the code.
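One way around it: gzip streams always start with the magic bytes 0x1f 0x8b, so a compressed metafile can be recognized before falling back to the binary heuristic. A rough sketch of the idea in Python (illustrative only -- brackup is Perl, and the function name and thresholds here are my own invention, not brackup's actual code):

```python
GZIP_MAGIC = b"\x1f\x8b"  # first two bytes of every gzip stream (RFC 1952)

def classify_metafile(path):
    """Classify a metafile as 'gzip', 'text', or 'binary' (likely encrypted).

    Checks the gzip magic bytes first, then falls back to a crude
    printable-bytes heuristic like the one the restore code presumably uses.
    """
    with open(path, "rb") as fh:
        head = fh.read(4096)
    if head.startswith(GZIP_MAGIC):
        return "gzip"
    # Crude "is binary" heuristic: NUL bytes or a low ratio of printable
    # characters suggest encrypted (or otherwise binary) content.
    if b"\x00" in head:
        return "binary"
    printable = sum(32 <= b < 127 or b in (9, 10, 13) for b in head)
    if head and printable / len(head) < 0.9:
        return "binary"
    return "text"
```

With the magic-byte check done first, compressed metafiles never reach the heuristic at all, so the existing encrypted/plaintext logic is untouched.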

> - this adds another dependency, on IO::Compress. Is that ok? Should it be
>    optional, so people who only do encrypted ones don't need to worry about
>    it?

Optional is better.
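The usual shape for an optional dependency is to probe for the module at load time and silently fall back to uncompressed output when it's missing (in Perl that would be an `eval { require IO::Compress::Gzip }` guard). A sketch of the pattern in Python for illustration -- the function name and `.gz` naming are assumptions, not brackup's actual behaviour, and Python's `gzip` happens to be stdlib so the fallback branch is only there to show the structure:

```python
# Probe for optional compression support once, at import time.
try:
    import gzip
    HAVE_GZIP = True
except ImportError:  # compression module not installed: degrade gracefully
    HAVE_GZIP = False

def write_metafile(path, data, compress=True):
    """Write a metafile, gzip-compressed when support is available.

    Returns the path actually written, so callers know which form
    (compressed or plain) ended up on disk.
    """
    if compress and HAVE_GZIP:
        with gzip.open(path + ".gz", "wb") as fh:
            fh.write(data)
        return path + ".gz"
    with open(path, "wb") as fh:
        fh.write(data)
    return path
```

That way people who only do encrypted backups never load (or need) the compression module, while everyone else gets the smaller metafiles by default.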


More information about the brackup mailing list