Writing backup metafile incrementally

Gavin Carr gavin at openfusion.com.au
Sat Aug 2 07:25:01 UTC 2008

On Fri, Aug 01, 2008 at 09:48:59AM -0700, Brad Fitzpatrick wrote:
> Good idea.  The patch looks sane just looking at the diff, but I'd like to
> look at it in context more first.
> Would you mind uploading this to codereview.appspot.com?  They have a
> command-line tool you can run from your svn repo that does the upload.

Done. Actually, I think the patch I posted was missing a chunk
anyway, so the one on codereview is the complete one:



> On Tue, Jul 29, 2008 at 11:54 AM, Gavin Carr <gavin at openfusion.com.au>wrote:
> > I'm messing around with some pretty big backups at the moment (400-800GB
> > trees), and I'm finding that brackup is sucking up huge gobs of RAM while
> > doing this (like 8GB).
> >
> > One of the culprits seems to be the metafile data, which we accumulate
> > over the course of the backup and then write out at the end. My metafiles
> > are coming in at around 600MB on these backups, and the in-core footprint
> > seems to be about 1GB, so that's a big chunk of RAM we're holding for no
> > very good reason.
> >
> > The attached patch writes the metafile incrementally instead, writing to
> > a tempfile, and then renaming at the end (so failures don't leave partial
> > metafiles lying around). The only wrinkle is that we still have to spool
> > entries if we have a CompositeChunk open, because we can't record the
> > metafile entry without the chunk checksum.
> >
> > Does this look sane? Please comment/test.
> >
> > Cheers,
> > Gavin
> >
> >
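For anyone following along, the approach described above can be sketched roughly as follows. This is a hypothetical illustration in Python rather than brackup's actual Perl code; the class and method names (`IncrementalMetafile`, `open_composite`, etc.) are invented for the sketch, and the `{digest}` placeholder stands in for however brackup defers the chunk checksum. Entries are written to a tempfile as the backup progresses, entries recorded while a composite chunk is open are spooled in memory until its checksum is known, and the tempfile is renamed into place only at the end so a failed run never leaves a partial metafile behind:

```python
import os
import tempfile


class IncrementalMetafile:
    """Write metafile entries incrementally, renaming into place at the end."""

    def __init__(self, path):
        self.path = path
        # Create the tempfile in the same directory so the final rename
        # stays on one filesystem and is atomic.
        dirname = os.path.dirname(path) or "."
        fd, self.tmp = tempfile.mkstemp(dir=dirname, suffix=".tmp")
        self.fh = os.fdopen(fd, "w")
        self.spool = []            # entries held back while a composite chunk is open
        self.composite_open = False

    def add(self, entry):
        if self.composite_open:
            # Can't emit the entry yet: the composite chunk's checksum
            # isn't known until the chunk is closed.
            self.spool.append(entry)
        else:
            self.fh.write(entry + "\n")

    def open_composite(self):
        self.composite_open = True

    def close_composite(self, checksum):
        # Chunk is finalized: fill in the checksum and flush spooled entries.
        for entry in self.spool:
            self.fh.write(entry.replace("{digest}", checksum) + "\n")
        self.spool = []
        self.composite_open = False

    def finish(self):
        # Rename only on success, so failures leave no partial metafile.
        self.fh.close()
        os.replace(self.tmp, self.path)
```

The key point is that memory use is now bounded by the size of one open composite chunk's spooled entries rather than the whole metafile.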

More information about the brackup mailing list