Hi,

This is exactly why we've opted to try to fix large file support on the
client side (Ruby mogilefs-client) and figure out a combination of
tracker/storage for the MogileFS setup that supports large files instead
of chunked transfers. We frequently have files of about 500MB that need
to be replicated and streamed to clients. The overhead of chunking is
simply too big, and we would lose X-Sendfile support. Apache2/WebDAV has
worked out well on storage nodes (mogstored currently chokes on files >
100MB).
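For reference, the unchunked flow we're after looks roughly like this
(a minimal sketch against the Ruby mogilefs-client; the domain, class,
key names, and tracker address here are all made up for illustration):

    require 'mogilefs'

    # Tracker address and domain are assumptions -- adjust to your setup.
    mg = MogileFS::MogileFS.new(:domain => 'example.com',
                                :hosts  => ['127.0.0.1:7001'])

    # Store the ~500MB file under a single key, with no client-side
    # chunking. 'video' is an assumed storage class.
    mg.store_file('videos/12345.mpg', 'video', '/tmp/upload/12345.mpg')

    # With one key per file, we can look up the storage-node URLs and
    # hand the request off (e.g. via perlbal's X-REPROXY-URL) instead of
    # proxying the bytes through the application.
    paths = mg.get_paths('videos/12345.mpg')
    puts paths.first

With chunked storage there's no single URL to hand a request off to,
which is where the X-Sendfile-style serving falls apart.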
Gr,
Andy

On 10/30/07, dormando <dormando@rydia.net> wrote:
I'd suggest testing it.

I've been manually disabling the chunked support, given every use case I
have requires the files to be streamed from one place to the other. If
you have a large, contiguous file, you need to reassemble it in order
at some point anyway. If you're spoonfeeding clients, you want to start
and go without having to re-establish an HTTP session in the middle.

So, if you want to pull data in chunks and can process it in parallel,
or you want to very evenly fill every storage device, fine... but I
don't see that happening in any useful way.

It's likely my own lack of imagination here. Someone please prove me
wrong :)
It feels like a bad default at any rate, since you can't serve the large
files back like that presently.

-Dormando

Lance Reed wrote:
> I have yet to deploy a large scale setup of mogilefs but have a small
> test env setup.
>
> The Java client would fit nicely into our application use.
> Most of my experience so far has been with using mogtool, so I assumed
> that it was best to chunk files into 64 MB pieces.
> But I take it from this thread that this is not the case. Do people
> usually just put and take files as-is, without chunking?
> This would save time on the initial write, I assume, but do little for
> the read back. I have not had a chance to test, but do people see a
> performance increase when reading back large files that have been
> chunked, due to parallel reads?
> Or is the overhead of reconstruction too much to really see a benefit?
<br>><br>> I'd love to hear what folks have been seeing in the real world.<br>> I am planning to use mogilefs with files that range from 30 KB to 20 GB,<br>> and am trying to figure out if I really need to put chunking code in.
<br>><br>> But I could see wanting a patch for the java client to do chunking.<br>><br>> Thanks for any thoughts on this.<br>><br>> Lance<br>><br>><br>><br>> dormando wrote:<br>>> Curious now...
>>
>> Does anyone use chunked files for anything?
>>
>> I can't think of any reason why you'd get more performance out of it;
>> the only benefit is being able to stuff files larger than the
>> individual storage nodes into mogile.
>>
>> Believe it's still the default for mogtool with files > 64M, which
>> must be confusing for folks?
>>
>> -Dormando
>>
>> Jared Klett wrote:
>>> We ended up not using the chunked file support for performance
>>> reasons - we just store the whole file and serve it straight off disk.
>>>
>>> Regardless, I'd be happy to release a patch if there's demand.
>>>
>>> cheers,
>>>
>>> - Jared