Patch and RFC: Perlbal OOM with buffered uploads and limited disk bandwidth. Also other upload weirdness

Jeremy James jbj at forbidden.co.uk
Tue May 15 19:03:30 UTC 2007


We've had problems recently doing large (a few hundred megabyte) POSTs to
test servers on a fast network while perlbal buffers the data onto slow
disks - typically an NFS share.

During such an upload, read_ahead grows large (not massive - typically
around 60MB) and perlbal dies with an 'Out of memory' error rather than
telling the socket to back off. There is, however, support for exactly
this case with chunked uploads, so presumably LiveJournal has come across
it before (about line 590 in ClientProxy.pm, in r617 by bradfitz)?

Attached is a patch that calls watch_read(0) if read_ahead rises above
1MB (and watch_read(1) when it drops back below). With it applied,
things run much better with no crashes - in the worst case the user sees
the upload pause for a few seconds before it continues.
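In case it helps discussion, the shape of the change is below. This is a
sketch rather than the attached diff verbatim - the 1MB constant and the
exact placement at the top of buffered_upload_update are just how I've
been testing it, and the field/method names ($self->{read_ahead},
watch_read) are as I understand them from ClientProxy.pm:

    # Sketch of the idea, not the attached patch verbatim.
    use constant READ_AHEAD_LIMIT => 1024 * 1024;   # back off above ~1MB

    sub buffered_upload_update {
        my Perlbal::ClientProxy $self = shift;

        # If too much of the request body is sitting in memory waiting
        # to reach the buffer file, stop reading from the client;
        # resume once the backlog drains back below the threshold.
        if ($self->{read_ahead} > READ_AHEAD_LIMIT) {
            $self->watch_read(0);
        } else {
            $self->watch_read(1);
        }

        # ... existing logic that flushes the buffered data to disk ...
    }

Note this calls watch_read on every pass through buffered_upload_update,
which is what prompts the cost question below.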

Any thoughts on this? Is watch_read actually an expensive function to
call every time buffered_upload_update runs? Are there any other issues
with buffered uploads and slow disk bandwidth?


With large files such as the above, it can take some time to send the
buffered upload to a backend Apache server (again, if the buffered data
is coming back off NFS). During this time the client often appears to
either drop the connection and never show the result of the POST
(and the confirmation message from Apache), or start the POST again from
the beginning (especially noticeable with the upload tracking - it uses
the same upload_session).

Is this a case where alive_time isn't being updated for the client while
data is being sent to the backend? Obviously this may be more of a
theoretical situation than a practical one - we'll be using fast local
disks and more memory on our production machines - but it would be worth
knowing whether this could become an issue as uploads grow in size.

I've had a poke around with updating alive_time inside
continue_buffered_upload, and with watching when alive_time is updated on
particular sockets, but the results are inconsistent (sometimes perlbal
blocks completely, which makes getting results out of the management
console awkward). Is there anything else I should look at during the
transfer to the backend to help debug this?
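For concreteness, the poking looked roughly like this - a sketch only,
with the surrounding code elided, and whether setting alive_time directly
like this is the right way to mark the client as active is exactly what
I'm unsure about:

    # Experiment only, not proposed as a fix: refresh the client's
    # alive_time each time a chunk of the buffer file is replayed to
    # the backend, so idle/persistence checks don't decide the client
    # has gone stale during a slow disk-to-backend copy.
    sub continue_buffered_upload {
        my Perlbal::ClientProxy $self = shift;

        # ... existing code: read the next chunk from the buffer file
        #     and write it towards the backend ...

        # experimental addition: treat replaying buffered data as
        # activity on the client socket (as far as I can tell, perlbal
        # normally only bumps this in the socket's own read/write
        # event handlers)
        $self->{alive_time} = time();
    }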

Best wishes,
Jeremy
-------------- next part --------------
A non-text attachment was scrubbed...
Name: perlbal-clientproxy-bufferupload-fix.patch
Type: text/x-patch
Size: 395 bytes
Desc: not available
Url : http://lists.danga.com/pipermail/perlbal/attachments/20070515/072e7ec4/perlbal-clientproxy-bufferupload-fix.bin
