Kostas Chatzikokolakis' lowmem patches merged
gavin at openfusion.com.au
Wed Jan 13 20:12:59 UTC 2010
On Wed, Jan 13, 2010 at 04:59:05PM +0100, Kostas Chatzikokolakis wrote:
> I had a good look at the latest trunk. I only see one problem: in case
> of encryption, the tempfile containing the encrypted chunk is not
> deleted after uploading to the target, but only automatically when
> brackup ends! This could eat all the free space on disk. The attached
> delete_temp_files.patch solves this issue using the OO interface of
> File::Temp, which is great for controlling when a tempfile is deleted.
> Since we now represent the chunk's data by a filehandle, it makes a lot
> of sense to use it (I would actually suggest using the OO interface
> everywhere). I also changed CompositeChunk for uniformity.
Ah yes, we should unlink the file in GPGProcess::chunkref after we've
re-opened the filehandle. Well spotted.
But I agree, in this case using the OO interface of File::Temp is a
cleaner solution as it ties the unlink to the destruction of the
filehandle. Patch applied.
> Apart from that everything looks good. The only major difference from my
> patch is that you only kept the filehandle interface to get the data of
> a StoredChunk or CompositeChunk, not the tempfile interface
> (_chunk_file). Of course this is cleaner and simplifies the code. On the
> other hand targets that do not support uploading from a filehandle are
> forced to load the chunk in memory. So currently the Amazon, CloudFiles
> and GoogleAppEngine targets still load the chunk in memory, this is
> better than before but still not optimal.
Yes, when I was applying your patch I wasn't convinced the _chunk_file
was required, and so deferred it initially. It seemed wrong to add the
overhead of a tmpfile for targets that didn't need it.
> Still, I'm starting to believe this is the better approach. Keep the
> interface simple and give the target the responsibility of supporting
> filehandles for optimal memory consumption. If that is not possible,
> fall back to loading chunks in memory. I think I can easily patch
> Net::Amazon::S3 to enable uploading from a filehandle.
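As a rough sketch of what "supporting filehandles" means on the target side (a hypothetical helper, not brackup's or Net::Amazon::S3's actual API): stream the chunk in fixed-size reads through a send callback, so only one buffer is ever in memory at a time.

```perl
use strict;
use warnings;

# Stream a chunk from $fh to the target via $send, a callback that
# ships one buffer (e.g. one HTTP body part). Returns bytes sent.
sub stream_upload {
    my ($fh, $send) = @_;
    my $total = 0;
    while (my $n = read($fh, my $buf, 64 * 1024)) {
        $send->($buf);
        $total += $n;
    }
    return $total;
}
```

A target without filehandle support would instead slurp the whole chunk (`local $/; my $data = <$fh>;`), which is the in-memory fallback being discussed.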
Yes, that's where I ended up too - make it the target's problem to handle
the filehandle, and if it can't, at least the extra cost is limited to just
those targets.