MogileFS limits...

mike mike503 at gmail.com
Sat Apr 12 20:45:22 UTC 2008


On 4/12/08, dormando <dormando at rydia.net> wrote:

> 'sfine, though billions are debatable. If your hardware's too slow, get
> better hardware. The DB will be the sticking point here, and the schema's
> minuscule.

Has anyone put memcached in front of the tracker DB? It seems like
another layer of caching could be thrown in to save some of the work
(although I don't know what the workload really looks like), and of
course there's sharding, or just using MySQL Proxy with multiple
servers. It seems like eventually, with enough files, you'll have to
scale the tracker DB too, but I suppose that's expected.
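The caching idea above could be sketched as a standard cache-aside lookup. This is only an illustration, not MogileFS code: `TrackerDB`, `get_paths`, and the key naming are all assumptions, and a plain dict-backed stub stands in for a real memcached client (which exposes the same `get`/`set` calls).

```python
# Hypothetical cache-aside layer in front of the tracker DB.
# FakeMemcache mimics a memcached client's get/set; TrackerDB mimics
# the key -> replica-URL lookup the tracker's database would do.

class FakeMemcache:
    """Stand-in for a memcached client (same get/set shape)."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value, time=0):
        # A real client would honor the TTL; the stub ignores it.
        self._store[key] = value

class TrackerDB:
    """Stand-in for the tracker DB: maps file keys to replica URLs."""
    def __init__(self, rows):
        self._rows = rows
        self.queries = 0  # counts how often the DB is actually hit

    def get_paths(self, key):
        self.queries += 1
        return self._rows.get(key, [])

def cached_get_paths(cache, db, key):
    """Try the cache first; on a miss, query the DB and populate it."""
    paths = cache.get("paths:" + key)
    if paths is None:
        paths = db.get_paths(key)
        cache.set("paths:" + key, paths, time=60)  # short TTL: replicas can move
    return paths
```

With this shape, repeated lookups for a hot key hit only the cache, so the tracker DB sees one query per key per TTL window instead of one per request.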

> > How is it for large files (600+ meg) - is there a rough limit as to
> > file sizes before it becomes too segmented or whatever?
> >
>
> Might need a little tuning, but seems to do okay. You have to ensure
> min_free_space is bigger than the largest file you expect to have... unless
> that bug is fixed? (it only ensures you have min_free_space available before
> storing a file... not the length of the file).

Gotcha.
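The min_free_space issue described above can be sketched in a few lines. The names here (`Device`, `pick_device_*`) are illustrative, not MogileFS internals; the point is only the difference between checking the threshold alone and checking it against the incoming file's length.

```python
# Sketch of the device-selection bug described above: checking only that
# min_free_space bytes are free is not enough -- the device also needs
# room for the incoming file itself.

MIN_FREE_SPACE = 100 * 1024 * 1024  # e.g. a 100 MB tunable cushion

class Device:
    """Illustrative storage device with a name and free bytes."""
    def __init__(self, name, free_bytes):
        self.name = name
        self.free_bytes = free_bytes

def pick_device_buggy(devices, file_size):
    """The behavior described above: ignores the file's length entirely."""
    for d in devices:
        if d.free_bytes >= MIN_FREE_SPACE:
            return d
    return None

def pick_device_fixed(devices, file_size):
    """Require room for the file *plus* the min_free_space cushion."""
    for d in devices:
        if d.free_bytes >= file_size + MIN_FREE_SPACE:
            return d
    return None
```

With a 600 MB upload and a device holding only 150 MB free, the buggy check happily selects the device (150 MB > 100 MB cushion) and the write would fail partway; the fixed check correctly rejects it. That is why, absent a fix, min_free_space has to be set larger than the biggest file you expect.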

> For downloads, yeah. That's a straight-up perlbal feature. There was someone
> messing with uploads, but I dunno if it worked? :)

Is Perlbal required?

I have nginx running right now as my mogstored HTTP server. I saw it
can be used in its place, and it was a piece of cake to set up. Where
else would Perlbal be required at this point?

I am looking into suggesting, and trying to get development started on
(if it makes sense), a MogileFS-aware plugin for nginx: something like
the X-Sendfile header in Lighty (or X-Accel-Redirect, which nginx
already has), but where you can give it a MogileFS key instead. That
would feed the file directly to the webserver and release PHP/etc.
from serving it, just like the X-Sendfile stuff does for normal files.
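The handoff described above could look roughly like this on the application side. This is a hedged sketch, not an existing plugin: `lookup_internal_path` and the `/mogile/...` internal location are invented for illustration, standing in for a real tracker lookup and an nginx `internal` location that proxies to a storage node.

```python
# Sketch of the X-Accel-Redirect idea: rather than streaming file bytes
# through the app process, look up an internal URI for a MogileFS key
# and answer with an X-Accel-Redirect header; nginx intercepts that
# header and serves the bytes itself, freeing the app worker.

def lookup_internal_path(key):
    """Hypothetical tracker lookup: MogileFS key -> nginx-internal URI.
    A real version would ask the tracker for the file's replica paths."""
    table = {"photo:1": "/mogile/node1/dev1/0/000/0000123.fid"}
    return table.get(key)

def app(environ, start_response):
    """Minimal WSGI app handing the download off via X-Accel-Redirect."""
    key = environ.get("QUERY_STRING", "")
    internal = lookup_internal_path(key)
    if internal is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"unknown key\n"]
    # nginx sees this header, swallows the empty body, and internally
    # redirects the request to the (internal-only) storage location.
    start_response("200 OK", [("X-Accel-Redirect", internal)])
    return [b""]
```

The app's response body stays empty; the worker is released immediately, which is exactly the benefit X-Sendfile provides for plain files on disk.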

