Kostas Chatzikokolakis' lowmem patches merged
kostas at chatzi.org
Tue Jan 19 14:25:34 UTC 2010
>> I'd vote for this. I don't like taking up resources that are not really
>> used, even in a controlled way. I'm not sure about the race condition,
>> but tempfile() opens the file in a temp dir that shouldn't be used by
>> any other process.
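The point about tempfile() above can be sketched in Python, which has the same atomic create-and-open guarantee as Perl's File::Temp (the helper name below is made up for illustration; this is not brackup's actual code):

```python
import os
import tempfile

def write_private_tempfile(data: bytes) -> str:
    """Create a temp file race-free and write data into it."""
    # mkstemp opens with O_CREAT|O_EXCL, so no other process can sneak a
    # file in under the chosen name between name selection and open.
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as fh:
        fh.write(data)
    return path
```

Because the name is never predictable-then-opened in two steps, the classic symlink/tmp-race attack does not apply even in a shared temp dir.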
> I'm ambivalent on this one. We *do* want to read from the filehandle, so it's
> not unused, just not used immediately.
I mean that during encryption we only need up to 5 filehandles open
simultaneously, and during upload only 1. So keeping 100 open is a
waste of resources (though a bounded one, if it's capped at 100).
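The throttling idea can be sketched with a counting semaphore. This is a hypothetical Python illustration, not brackup's Perl implementation; `encrypt_chunk` here just copies bytes where the real code would run gpg:

```python
import tempfile
import threading

MAX_OPEN = 5  # cap on filehandles held open during encryption

open_slots = threading.BoundedSemaphore(MAX_OPEN)

def encrypt_chunk(data: bytes):
    """Write an 'encrypted' chunk to a temp file, holding an open slot."""
    open_slots.acquire()           # blocks once MAX_OPEN handles are open
    fh = tempfile.TemporaryFile()  # anonymous temp file, no name race
    fh.write(data)                 # stand-in for real gpg output
    fh.seek(0)
    return fh                      # caller must call release_chunk(fh)

def release_chunk(fh):
    fh.close()
    open_slots.release()           # free the slot for the next chunk
```

The encryptors naturally stall when the uploader falls behind, which is exactly the back-pressure effect described below with the gpg throttling patch.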
> I think throttling on both is ok - they're both real resources with real-world
> limits. Making them configurable might be worth doing too.
> FWIW, I reran my large backup from the weekend with that gpg throttling patch
> applied, and it performed nicely - 29 open files max at any point. It also
> reduced the load over the whole backup, because we weren't (pointlessly) doing
> all the encryption as fast as we could. So I'll probably commit this shortly.
Leaving it like this is ok, I guess. If we make the throttling on chunks
configurable, however, we need to mention in the docs that setting it too
high might cause us to run out of filehandles.
PS. Ever thought of concurrent upload to the target? It might be a really
nice feature to add in the future. For Amazon it should speed up the
backup considerably, especially if you upload lots of relatively small
chunks.
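A minimal sketch of what concurrent upload could look like, assuming a per-chunk upload callable (the names are invented for illustration, not brackup's API). A small worker pool overlaps the network round-trips, which matters most when chunks are small and per-request latency dominates:

```python
from concurrent.futures import ThreadPoolExecutor

def upload_all(chunks, upload_chunk, workers=4):
    """Upload chunks with a bounded worker pool; results in input order."""
    # map() preserves input order while the uploads run concurrently,
    # and the bounded pool keeps connection/filehandle use limited.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(upload_chunk, chunks))
```

The `workers` cap plays the same role as the filehandle throttle: it bounds resource use while still letting several requests be in flight at once.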