MogileFS limits...

Nathan Schmidt nschmidt at
Sat Apr 12 21:53:34 UTC 2008

We've got about 50M dkeys tracked on a generic core2duo master-master
setup, and that is starting to get a little claustrophobic -- the
indices for file and file_on are well in excess of the RAM of the
boxes. MySQL does a pretty good job all things considered but we
definitely are working on a new arrangement which will use multiple
independent mogilefs systems to spread the file count around. The main
limiting factors with this kind of dataset are in list_keys and
rebalance operations. We cache at the application level, so path
lookups are rare. We're certainly at one edge of the continuum, with
many millions of small files rather than hundreds of thousands of
large files, and I get the feeling we're operating outside the
mogilefs sweet spot, but it's still quite usable.
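The split across "multiple independent mogilefs systems" described above can be keyed deterministically, so every client agrees on which cluster owns a given dkey. A minimal sketch, in Python; the cluster names are made up and the hash choice is an assumption, not what Nathan's setup actually uses:

```python
import hashlib

# Hypothetical tracker endpoints for three independent MogileFS clusters.
CLUSTERS = ["mog-a.internal:7001", "mog-b.internal:7001", "mog-c.internal:7001"]

def cluster_for_key(key, clusters=CLUSTERS):
    """Map a dkey to one cluster deterministically.

    Hashing (rather than round-robin) means any client can compute the
    owning cluster without shared state, which spreads the file count --
    and thus the file/file_on index size -- across several tracker DBs.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return clusters[int(digest, 16) % len(clusters)]
```

Note that simple modulo hashing reshuffles most keys if a cluster is added; consistent hashing avoids that, at the cost of more bookkeeping.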


On Sat, Apr 12, 2008 at 1:45 PM, mike <mike503 at> wrote:
> On 4/12/08, dormando <dormando at> wrote:
>  > 'sfine, though billions are debatable. If your hardware's too slow, get
>  > better hardware. DB'll be the sticking point here, and the schema's
>  > minuscule.
>  has anyone added in memcached in front of the tracker db? seems like
>  another layer of caching could be thrown in to save some of the work
>  (although i don't know what the workload really looks like...) and of
>  course sharding and just using mysql proxy/multiple servers... seems
>  like eventually, with too many files, you'll have to scale the tracker
>  db too. but i suppose that is expected.
>  > > How is it for large files (600+ meg) - is there a rough limit as to
>  > > file sizes before it becomes too segmented or whatever?
>  > >
>  >
>  > Might need a little tuning, but seems to do okay. You have to ensure
>  > min_free_space is bigger than the largest file you expect to have... unless
>  > that bug is fixed? (it only ensures you have min_free_space available before
>  > storing a file... not the length of the file).
>  gotcha.
>  > For downloads, yeah. That's a straight-up perlbal feature. There was someone
>  > messing with uploads, but I dunno if it worked? :)
>  is perlbal required?
>  i have nginx right now running as my mogstored http server. i saw it
>  can be used in its place... it was a piece of cake to set up. where
>  else would perlbal be required at this point?
>  i am looking into suggesting and trying to get development started (if
>  it makes sense) for a mogilefs-aware plugin to nginx. something like
>  the X-Sendfile header in Lighty (or X-Accel-Redirect in nginx
>  already) but you can give it a mogilefs key instead... feed the file
>  directly to the webserver and release PHP/etc. from it (just like
>  X-Sendfile stuff does for normal files...)
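On mike's memcached question: a cache in front of get_paths would spare the tracker DB repeated lookups for hot keys. A sketch of the shape such a layer could take, with a tiny in-process TTL dict standing in for memcached (the real thing would just swap the dict for a memcached client); all names here are illustrative:

```python
import time

class PathCache:
    """Tiny TTL cache standing in for memcached.

    Caches get_paths results so repeated lookups for the same key
    skip the tracker round trip (and its DB query) entirely.
    """
    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._store = {}  # key -> (expiry, paths)

    def get(self, key):
        hit = self._store.get(key)
        if hit is None:
            return None
        expires, paths = hit
        if time.monotonic() > expires:
            del self._store[key]  # stale: evict and miss
            return None
        return paths

    def set(self, key, paths):
        self._store[key] = (time.monotonic() + self.ttl, paths)

def get_paths_cached(key, cache, tracker_get_paths):
    """tracker_get_paths is a hypothetical callable (key -> list of
    URLs) standing in for the real tracker query."""
    paths = cache.get(key)
    if paths is None:
        paths = tracker_get_paths(key)
        cache.set(key, paths)
    return paths
```

The TTL matters: paths go stale when files are rebalanced or devices die, so a short expiry trades a little extra tracker load for fresher answers.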
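The min_free_space bug dormando describes is that only the free-space floor is checked before a store, not the incoming file's length, so a file larger than min_free_space can overrun a device. A device-selection sketch that guards on both; this is an illustration of the fix, not the actual MogileFS internals, and all names are hypothetical:

```python
def pick_device(devices, file_size, min_free_space):
    """Pick a storage device that can hold the whole file.

    devices: dict mapping device id -> free bytes.
    Requires that, after writing file_size bytes, the device still
    has at least min_free_space left -- checking both values avoids
    the bug where only min_free_space is tested up front.
    Returns a device id, or None if no device qualifies.
    """
    # Prefer the emptiest device; real placement logic is fancier.
    for dev_id, free in sorted(devices.items(), key=lambda kv: -kv[1]):
        if free - file_size >= min_free_space:
            return dev_id
    return None
```

With only the original check (`free >= min_free_space`), a 600 MB upload to a device with 200 MB headroom would be accepted and then fail or fill the disk mid-write.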
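The nginx hand-off mike proposes could look roughly like this as a WSGI app: the app resolves a mogilefs key to a storage-node path, then returns X-Accel-Redirect so nginx streams the bytes and the app worker is released immediately. A sketch under stated assumptions: `lookup_paths` is a hypothetical callable standing in for a real tracker get_paths call, and the `/mogile-internal/` prefix assumes a matching `internal` location in the nginx config:

```python
def make_app(lookup_paths):
    """Build a WSGI app that hands MogileFS reads off to nginx.

    lookup_paths: hypothetical callable, key -> list of storage URLs.
    The response carries X-Accel-Redirect instead of a body, so nginx
    (not the app) serves the file -- the same trick X-Sendfile does
    for local files, applied to a MogileFS path.
    """
    def app(environ, start_response):
        key = environ.get("PATH_INFO", "").lstrip("/")
        paths = lookup_paths(key)
        if not paths:
            start_response("404 Not Found",
                           [("Content-Type", "text/plain")])
            return [b"unknown key\n"]
        # nginx needs an internal location proxying /mogile-internal/
        # to the storage nodes; that prefix is an assumption here.
        start_response("200 OK",
                       [("X-Accel-Redirect",
                         "/mogile-internal/" + paths[0])])
        return [b""]
    return app
```

A mogilefs-aware nginx module could collapse this further by speaking to the tracker itself, which is essentially the plugin being suggested above.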


More information about the mogilefs mailing list