lreed at boomerang.com
Tue Oct 30 01:08:33 UTC 2007
I have yet to deploy a large-scale MogileFS setup, but I have a small
test environment running.
The Java client would fit nicely into our application use.
Most of my experience so far has been with mogtool, so I assumed it was
best to chunk files into 64 MB pieces.
But I take it from this thread that this is not the case. Do people
usually just put and fetch files as-is, without chunking?
I assume this saves time on the initial write, but does little for the
read back. I have not had a chance to test: do people see a
performance increase when reading back large files that have been
chunked, thanks to parallel reads?
Or is the overhead of reassembly too high to see a real benefit?
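To make the question concrete, here is a rough sketch of the parallel
read-back I'm imagining. It is only an illustration, not real MogileFS
Java client code: each Callable is a stand-in for an HTTP GET against a
storage node, and a real client would stream chunks rather than buffer
them all in memory.

```java
import java.io.ByteArrayOutputStream;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hedged sketch: fetch all chunks of a key concurrently, then write
// them back out in order. Each Callable stands in for an HTTP GET
// against a storage node; a real client would stream, not buffer.
public class ParallelReassembly {
    static byte[] reassemble(List<Callable<byte[]>> chunkFetchers) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            // invokeAll returns futures in the same order as the input
            // list, so the chunks reassemble in the right sequence even
            // though they are fetched concurrently.
            List<Future<byte[]>> futures = pool.invokeAll(chunkFetchers);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            for (Future<byte[]> f : futures) {
                byte[] chunk = f.get();
                out.write(chunk, 0, chunk.length);
            }
            return out.toByteArray();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

Whether this wins in practice presumably depends on whether the chunks
land on different storage nodes, which is exactly what I'd like to hear
real-world numbers on.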
I'd love to hear what folks have been seeing in the real world.
I am planning to use MogileFS with files that range from 30 KB to 20 GB,
and am trying to figure out whether I really need to add chunking code.
But I could see wanting a patch for the Java client to do chunking.
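The write side of such a patch seems simple enough; below is the kind
of thing I had in mind, assuming mogtool's 64 MB default. The
storeChunk callback is hypothetical (it would wrap the client's put),
and the "<key>,<n>" naming is my guess, not mogtool's actual scheme.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.function.BiConsumer;

// Hedged sketch of write-side chunking for a Java client patch: carve
// an input stream into 64 MB pieces and hand each piece, under a
// numbered key, to a store callback. storeChunk is a hypothetical
// stand-in for the client's put, and the "<key>,<n>" key naming is an
// assumption, not mogtool's real scheme.
public class ChunkWriter {
    static final int CHUNK_SIZE = 64 * 1024 * 1024; // 64 MB default

    // How many chunks a file of `length` bytes produces.
    static long chunkCount(long length) {
        return (length + CHUNK_SIZE - 1L) / CHUNK_SIZE;
    }

    // Fill a CHUNK_SIZE buffer from the stream (read() may return
    // short counts), emitting a chunk each time the buffer fills,
    // plus one final short chunk for any remainder.
    static void split(String key, InputStream in,
                      BiConsumer<String, byte[]> storeChunk) throws IOException {
        byte[] buf = new byte[CHUNK_SIZE];
        int n, filled = 0, chunkNo = 0;
        while ((n = in.read(buf, filled, buf.length - filled)) != -1) {
            filled += n;
            if (filled == buf.length) {
                storeChunk.accept(key + "," + chunkNo++, buf.clone());
                filled = 0;
            }
        }
        if (filled > 0) {
            byte[] last = new byte[filled];
            System.arraycopy(buf, 0, last, 0, filled);
            storeChunk.accept(key + "," + chunkNo, last);
        }
    }
}
```

The fiddly part is presumably not this, but tracking the chunk count
somewhere so the reader knows how many pieces to fetch.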
Thanks for any thoughts on this.
> Curious now...
> Does anyone use chunked files for anything?
> I can't think of any reason why you'd get more performance out of it;
> the only benefit is being able to stuff files larger than the
> individual storage nodes into mogile.
> I believe it's still the default for mogtool with files > 64 MB, which
> must be confusing for folks?
> Jared Klett wrote:
>> We ended up not using the chunked file support for performance
>> reasons - we just store the whole file and serve it straight off disk.
>> Regardless, I'd be happy to release a patch if there's demand.
>> - Jared