patch for memory-frugal backup

Kostas Chatzikokolakis kostas at chatzi.org
Mon Oct 19 12:07:42 UTC 2009


Hello all,

I was recently affected by the memory issue reported by some people,
causing several of my backups to fail, so I tried to figure out
whether there's a memory leak somewhere. It's no secret that brackup
is not very frugal with memory. In particular, on its way from the
source to the target, a chunk passes through the following steps:

1. The chunk is read from the source file into memory
   (Brackup::PositionedChunk::raw_chunkref)
2. The chunk is written from memory to a temp file to be passed to GPG
   (Brackup::Root::encrypt)
3. GPG's output is written to a temp file (child process)
   (Brackup::GPGProcess::new)
4. The temp file is read into memory (main process)
   (Brackup::GPGProcess::chunkref)
5. For files smaller than 'merge_files_under':
   Chunks are concatenated in memory into a composite chunk
   (Brackup::CompositeChunk::append_little_chunk)
6. Chunk (Stored or Composite) is read from memory and stored in the
   target (Brackup::Target::XXX::store_chunk)
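
To illustrate why memory balloons, here is roughly what the old
raw_chunkref interface looks like (a simplified sketch, not the exact
code; the field names are approximate):

    sub raw_chunkref {
        my $self = shift;
        open(my $fh, '<', $self->{file}->fullpath) or die "open: $!";
        binmode($fh);
        seek($fh, $self->{offset}, 0);
        read($fh, my $data, $self->{length});
        return \$data;   # the whole chunk is now held in memory
    }

With a big chunk_size, each such scalar can be hundreds of MB, and
several of them can be alive at once.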

Brackup encrypts at most 5 chunks in parallel, in child processes, and
uploads one chunk at a time. The first 3 steps are performed in the
child process, which then exits and frees its memory. I did some tests
with big files on my laptop, and even though I saw brackup use more
than half a GB of memory (expected, given the steps above), I couldn't
reproduce usage much higher than that.

So I thought that instead of debugging strange memory allocation
scenarios (especially when forking is involved), it should be easier
to change brackup's logic so that it doesn't load all chunks in
memory. Because of brackup's nice structure, it was actually
surprisingly easy to do so. This change involves some design choices,
namely what interface the various parts of the code should use to pass
data around without loading everything into memory. IO::Handle seemed
like a good fit: generic, simple, and without any a priori overhead.
It could also wrap any kind of data source, which would make it
possible to add sources that are not plain files in the future.
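
A toy illustration of why the handle interface is so flexible: the
consumer only needs the generic read interface, so the same loop works
for a real file, a slice of a file, or even in-memory data (this
IO::Scalar example is just an illustration, not part of the patch):

    use IO::Scalar;

    my $data = "any data source can hide behind a handle";
    my $h    = IO::Scalar->new(\$data);   # an in-memory "file"

    while (read($h, my $buf, 4096)) {
        # process $buf without caring where it came from
    }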

So, for PositionedChunk, I replaced raw_chunkref with raw_chunk_h,
which returns an IO::Handle for the chunk's data. IO::InnerFile does
exactly what we want here.
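
In sketch form (simplified, with approximate field names; the real
patch also handles errors and file modes more carefully):

    use IO::InnerFile;

    sub raw_chunk_h {
        my $self = shift;
        open(my $fh, '<', $self->{file}->fullpath) or die "open: $!";
        binmode($fh);
        # IO::InnerFile confines reads to the chunk's byte range,
        # so nothing is pulled into memory up front
        return IO::InnerFile->new($fh, $self->{offset}, $self->{length});
    }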

For GPG, we can feed data from the chunk's handle. On the other hand,
if we want to keep parallelism and still upload chunks to the target
one at a time, I see no other option but to store GPG's output in a
temporary file. So I changed Brackup::Root::encrypt to receive an
IO::Handle and return a temp filename.
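
Roughly like this (a sketch only: the exact gpg options and plumbing
in brackup differ, and $self->{gpg_rcpt} is an assumed field):

    use File::Temp qw(tempfile);

    sub encrypt {
        my ($self, $data_h) = @_;    # $data_h: IO::Handle with plaintext
        my ($tmp_fh, $enc_file) = tempfile();
        close($tmp_fh);
        open(my $gpg, '|-', 'gpg', '--batch', '--yes',
             '--recipient', $self->{gpg_rcpt}, '--encrypt',
             '--output', $enc_file) or die "gpg: $!";
        while (read($data_h, my $buf, 65536)) {
            print $gpg $buf;         # stream; never the whole chunk at once
        }
        close($gpg) or die "gpg failed: $?";
        return $enc_file;            # the caller gets a filename, not data
    }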

For StoredChunk, IO::Handle is again a good choice, so I replaced
chunkref with chunk_h. The problem, however, is that the libraries
used by the target classes (e.g. Net::Amazon::S3 and
Net::Mosso::CloudFiles) might not provide a way to feed them data from
an IO::Handle. On the other hand, they should offer a way to upload
data from a file on disk (e.g. add_key_filename for S3), so a
tempfile interface seems a good option, and we actually have a
tempfile from GPG already. So I added a chunk_file method to
StoredChunk that returns a tempfile containing the chunk's data. (Btw,
it might be possible to use methods like add_key_filename without a
real file by creating a named pipe, but I didn't follow this approach
because it probably doesn't work on Windows and because we do have the
tempfile from GPG anyway.)
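
The two accessors are tiny; in sketch form (assuming the tempfile path
is kept in the object):

    sub chunk_file {
        my $self = shift;
        return $self->{tempfile};    # path of the encrypted chunk on disk
    }

    sub chunk_h {
        my $self = shift;
        open(my $fh, '<', $self->chunk_file) or die "open: $!";
        binmode($fh);
        return $fh;
    }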

CompositeChunk needs to have the same interface as StoredChunk, so to
provide chunk_file I append all little chunks to a tempfile. This has
a small overhead, but it only affects small files anyway. chunk_h is
also provided.
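
Something like this (a sketch with assumed field names; the real code
also tracks each little chunk's offset within the composite):

    sub append_little_chunk {
        my ($self, $schunk) = @_;
        my $h = $schunk->chunk_h;
        # copy in fixed-size blocks so memory use stays bounded
        while (read($h, my $buf, 65536)) {
            print {$self->{composite_fh}} $buf;
        }
    }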

Finally, a target receives a StoredChunk or CompositeChunk and can use
either chunk_h (preferred) or chunk_file to get the data. Using
chunk_h makes it possible to avoid temporary files completely when GPG
is not used. I adapted all targets to this interface: Filesystem and
Sftp use chunk_h; Amazon, CloudFiles and Ftp use chunk_file.
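
The two flavours of store_chunk then look roughly like this
(simplified sketches; path_for and key_for are hypothetical helpers):

    # Streaming target (Filesystem-like): copy from the chunk's handle
    sub store_chunk {
        my ($self, $chunk) = @_;
        my $h    = $chunk->chunk_h;
        my $path = $self->path_for($chunk);
        open(my $out, '>', $path) or die "open: $!";
        binmode($out);
        while (read($h, my $buf, 65536)) {
            print $out $buf;
        }
        close($out) or die "close: $!";
    }

    # File-based target (S3-like): hand the library a filename
    sub store_chunk {
        my ($self, $chunk) = @_;
        $self->{bucket}->add_key_filename($self->key_for($chunk),
                                          $chunk->chunk_file)
            or die $self->{bucket}->err;
    }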

With these changes, the chunk's magical journey to targetland becomes:

1. The chunk is read from the source file (through an IO::Handle)
   and fed to GPG (Brackup::Root::encrypt)
2. GPG outputs to temp file, StoredChunk is created
   (Brackup::Root::encrypt)
3. For files smaller than 'merge_files_under':
   Chunks are written in a composite temp file
   (Brackup::CompositeChunk::append_little_chunk)
4. Chunk (Stored or Composite) is read from temp file and stored in the
   target (Brackup::Target::XXX::store_chunk)

As instructed in HACKING, I uploaded the patch (against r248) here:
  http://codereview.appspot.com/135046
(great tool, btw)
It contains changes in several files, but each change is quite
straightforward. I didn't touch the restore code (which is quite
separate) at all; restore uses less memory anyway, and I wanted to
keep the patch small.

The patched version passes all tests fine. I also did various tests
with S3, creating a backup and restoring with the unpatched version
and vice versa, and everything seems to work fine. As expected, memory
usage is really low, even with huge files and a big chunk_size. Note,
however, that I did *no* tests with CloudFiles and *no* tests
on Windows.

I'll start using the patch in production in the following days for
more testing. It would be great if other people who are having memory
issues could also test it. Of course, I would be very happy to see the
patch eventually merged; let me know if any changes are needed for
that. (Btw, I should have discussed the changes on the list in
advance, but hacking often happens in bursts.)

Cheers,
Kostas


