MogileFS limits...

Nathan Schmidt nschmidt at gmail.com
Sat Apr 12 23:04:52 UTC 2008


Dormando,

The most intense performance hits for us have come from losing storage
nodes and the resulting rebalance operations, which put a lot of
read/write load on the tables. The worst case we've seen was almost 20
hours of thrashing. It wasn't really user-affecting, but it did peg a
lot of ops charts.
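
For anyone wanting to watch a rebalance grind along, polling the
tracker DB directly works well enough - a rough sketch, assuming a
MogileFS 2.x schema with its file_to_replicate queue table (check
your own schema before relying on the name):

    -- Pending replication work; this count drains as the rebalance
    -- catches up. Assumes the file_to_replicate table exists.
    SELECT COUNT(*) AS pending FROM file_to_replicate;

    -- Quick look at churn on the big tables in the meantime.
    SHOW TABLE STATUS LIKE 'file%';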

We're far from optimal - we're using UTF-8 collations/charsets, which
seem to unconditionally add a great deal of overhead to index sizes
(MyISAM reserves three bytes per character for utf8 index entries,
versus one for latin1). Our initial bring-up was on a clean Debian box,
so we went with MyISAM, which does have its problems - index
corruption, O(n) repairs, ugh. Functional but non-optimal. Good point
about the OPTIMIZE TABLE routine; we probably haven't run that in a
while.
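
For the record, that routine is just the stock MySQL statement run
against the two big tracker tables (table names from the standard
MogileFS schema - adjust if yours differ):

    -- Defragments the MyISAM data file and rebuilds the indexes; it
    -- locks the table for the duration, so run it in a quiet window.
    OPTIMIZE TABLE file;
    OPTIMIZE TABLE file_on;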

We do a lot of list_keys operations when our caches are cold, because
we're using mogile as our primary datastore for wiki page revisions.
We didn't really anticipate the rest of our infrastructure handling
this kind of data size so well, so mogile's been left alone while the
rest of our system has become much more efficient and
well-distributed. We're on all-commodity hardware, and those boxes max
out at 8GB each, which puts a ceiling on where we can go without
moving to substantially more expensive systems. Our file.MYI and
file_on.MYI are 8.5GB together, which means MySQL is doing some
interesting tricks to keep things running smoothly. That said, we're
getting ready to bite the bullet and bring in somewhat more
substantial boxes for our tracker DBs, oh well.
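
If anyone wants to compare their own index footprint against RAM,
information_schema (MySQL 5.0+) reports it directly - a sketch,
assuming your tracker database is named 'mogilefs':

    -- Index bytes per tracker table; weigh the total against
    -- key_buffer_size and the physical RAM on the box.
    SELECT table_name,
           ROUND(index_length / 1024 / 1024) AS index_mb
      FROM information_schema.tables
     WHERE table_schema = 'mogilefs'   -- assumed database name
       AND table_name IN ('file', 'file_on');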

-n


On Sat, Apr 12, 2008 at 2:58 PM, dormando <dormando at rydia.net> wrote:
> Nathan Schmidt wrote:
>
> > We've got about 50M dkeys tracked on a generic core2duo master-master
> > setup, and that is starting to get a little claustrophobic -- the
> > indices for file and file_on are well in excess of the RAM of the
> > boxes. MySQL does a pretty good job all things considered but we
> > definitely are working on a new arrangement which will use multiple
> > independent mogilefs systems to spread the file count around. The main
> > limiting factors with this kind of dataset are in list_keys and
> > rebalance operations. We cache at the application level so path
> > lookups are rare. We're certainly at one edge of the continuum, with
> > many millions of small files rather than hundreds of thousands of
> > large files and I get the feeling we're operating outside of the
> > mogilefs sweet spot but it's still quite usable.
> >
> > -n
> >
>
>  I've had many more small files than that and the DB still held up. Had to
> add more RAM at some point and run OPTIMIZE TABLE every few months but it
> was okay.
>
>  However we didn't use list_keys or rebalance... just drain/dead operations.
> Any chance you could describe the pain there in a little more detail?
>
>  -Dormando
>

