Mogile Deployment Layout: More Hosts or More Disks.

dormando dormando at rydia.net
Wed Sep 19 03:40:59 UTC 2007


I like having a balance in the number of drives per host. No more than 8 disks 
per host, actually... if you have tons and tons of disks, then losing one 
host could tie up replication for days on end.
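
To put a rough number on that, here's a back-of-envelope sketch (in Python);
the fill level and the aggregate re-replication throughput below are
assumptions for illustration, not measurements from anyone's cluster:

# How long to restore redundancy after losing one dense storage host?
disks_per_host = 8
disk_size_gb = 750
fill_ratio = 0.7            # assume the disks are ~70% full
repl_mb_per_sec = 50        # assumed aggregate re-replication throughput

data_mb = disks_per_host * disk_size_gb * 1024 * fill_ratio
hours = data_mb / repl_mb_per_sec / 3600
print(f"~{hours:.0f} hours (~{hours / 24:.1f} days) to restore redundancy")

At 50 MB/s that already works out to about a day; replication usually competes
with regular traffic for the same spindles and network, so the effective rate
tends to be lower and the same host loss stretches into multiple days.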

-Dormando

Lance Boomerang wrote:
> I am looking to build out a fairly dense setup with nodes having 8 or so 
> SATA drive slots, for about 6 TB per node (8 x 750 GB). Initially I plan 
> to have 6 of these nodes. The initial build-out would be 30 TB or so. 
> I'm debating going for higher or lower density, in either the number of 
> drives per node or the size of the individual drives. The long-term plan 
> is to scale this out to potentially a PB or so. Stability and integrity 
> are more important than performance, but power and space are also tight. 
> I was wondering if anyone has dabbled with a dense solution. I would even 
> consider building out 9 TB nodes, but I'm not sure that is truly feasible. 
> If anyone has thoughts on this I would be very interested.
> 
> Thanks!
> 
> 
> 
> 
> 
> marc at corky.net wrote:
>> I can only comment on what I'd do - I don't run mogile (yet) and am 
>> just observing. We have a similar, home-grown system. I would go for 
>> boxes with more disks if I were you. How much actual storage would you 
>> be needing?
>>
>> Marc
>>
>>
>> Javier Frias wrote:
>>> Hello all,
>>>
>>> At my company we've recently evaluated mogilefs and it seems to meet
>>> our needs (great piece of software, btw). We are now planning to build
>>> out our prod configuration and are having issues deciding on a
>>> hardware layout, so I'm looking for input from people that may have
>>> used mogilefs in a similar way.
>>>
>>> Basically, we will be using mogilefs for long-term text and image
>>> archiving, as well as low-level image serving (it will only be feeding
>>> our CDN rather than taking the brunt of the traffic itself, so while
>>> there will be some performance considerations, they don't look to be
>>> our primary concern right now). From time to time, though, we will also
>>> run batch jobs that fetch tens of thousands of items for reprocessing.
>>>
>>> The main question is: do we go with more hosts and fewer disks each, or
>>> fewer hosts and more disks each?
>>>
>>> Due to hardware standards (self-imposed; we have too many hosts to want
>>> to worry about yet another hardware manufacturer), for the storage nodes
>>> I have the choice of either a host that can handle two 750 GB disks
>>> (Dell 860s or 1435s) or a host that can handle six 750 GB disks (Dell
>>> 2950s). The pricing difference is about 20% in favor of the 6-disk
>>> server solution versus the 2-disk server solution. So is the extra
>>> complexity worth it in terms of performance and redundancy, or will I be
>>> shooting myself in the foot by having 3x the number of storage nodes?
>>> I'm planning a 3-copy policy for most of my files and will need
>>> approximately 5 TB to start, so we are talking about at least 4 six-disk
>>> systems, or 10-12 two-disk systems.
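
As a rough cross-check of those node counts, here's a back-of-envelope sizing
sketch (in Python, using the figures from the post: ~5 TB of unique data, a
3-copy policy, and 750 GB disks; the script itself is only illustrative):

import math

unique_tb = 5        # unique data to store
copies = 3           # planned replication policy: 3 copies of most files
disk_tb = 0.75       # 750 GB disks
raw_tb_needed = unique_tb * copies   # 15 TB of raw space

for disks_per_node in (2, 6):
    node_tb = disks_per_node * disk_tb
    nodes = math.ceil(raw_tb_needed / node_tb)
    print(f"{disks_per_node}-disk nodes: {nodes} needed "
          f"({nodes * node_tb:.1f} TB raw)")

That gives 10 two-disk nodes or 4 six-disk nodes before any headroom, matching
the counts above. With a 3-copy policy you also want at least 3 separate hosts
either way, since copies should end up on different hosts rather than just
different devices.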
>>>
>>> Any input greatly appreciated.
>>>
>>> As a side note, is there any real reason not to run the trackers on the
>>> storage nodes? Also, does anyone have pros/cons on running MySQL
>>> master/slave with InnoDB on DRBD versus running, say, MySQL Cluster?
>>>
>>>
>>> thx
>>>
>>>   
>>
>>
>>
> 
> 


