Save large file problem

Brad Fitzpatrick brad at danga.com
Mon Sep 5 18:45:39 PDT 2005


That warning isn't related to your chunk size, and is actually fixed in
the newest version of Danga::Socket.  100MB chunks are fine.  I have no
rationale for this, but I wouldn't go above 150 or 200MB.
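
If you want to roll the chunking yourself instead of using mogtool, a
minimal sketch with the Perl MogileFS::Client looks roughly like this.
The domain, tracker address, and the "$key,$n" / "$key,count" naming
are placeholders for illustration, not mogtool's actual scheme:

  #!/usr/bin/perl
  # Sketch: store a big file as fixed-size chunks, one MogileFS key per
  # chunk, plus a "count" key so a reader knows how many pieces exist.
  use strict;
  use warnings;
  use MogileFS::Client;

  my ($path, $key) = @ARGV;
  my $chunk_size = 100 * 1024 * 1024;   # 100MB, per the advice above

  my $mogc = MogileFS::Client->new(
      domain => "example",              # placeholder domain
      hosts  => [ "127.0.0.1:7001" ],   # placeholder tracker
  );

  open my $in, '<', $path or die "open $path: $!";
  binmode $in;

  my $n = 0;
  while (read($in, my $buf, $chunk_size)) {
      $n++;
      my $fh = $mogc->new_file("$key,$n")   # hypothetical key scheme
          or die "new_file: " . $mogc->errstr;
      $fh->print($buf);
      $fh->close or die "close: " . $mogc->errstr;
  }
  close $in;

  my $cfh = $mogc->new_file("$key,count") or die $mogc->errstr;
  $cfh->print($n);
  $cfh->close or die $mogc->errstr;

Each chunk stays well under the sizes that gave mogstored trouble, and
re-assembly is just fetching "$key,1" .. "$key,N" in order.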

On Mon, 5 Sep 2005, Yuri Subach wrote:

> Hello Brad,
>
> As you suggested, I split large files into chunks. MogileFS works really well.
> I'm wondering about the optimal chunk size. 10MB chunks work fine, but I'm
> not sure the system will be stable. When I tried 100MB, mogstored showed this error:
>
> epoll() returned fd 11 w/ state 1 for which we have no mapping.  removing.
>
> Can you tell me how to determine the largest possible chunk size? In other words,
> where is the bottleneck in mogstored?
>
> Brad Fitzpatrick wrote:
> > It shouldn't be trying to get it all into memory.  I don't recall any part
> > of MogileFS in particular that would try to do that.
> >
> > But you are pushing past what I'd consider reasonable sizes for MogileFS.  Big
> > files should be broken up into chunks (mogtool can help you out with
> > that).  I'd love it if MogileFS did the auto-chunking for you
> > transparently, but I haven't gotten around to that (or had a need).
> >
> > - Brad
> >
> >
> > On Fri, 2 Sep 2005, Yuri Subach wrote:
> >
> >
> >>Hi,
> >>
> >>I tried to put a large file (650MB) into MogileFS storage, but mogstored died
> >>with an "Out of memory!" message. The machine has 256 MB of RAM. My question: is
> >>it trying to put the whole file in memory before saving it to disk?
> >>
> >>Can someone advise how to use MogileFS with large files?
> >>
> >>--
> >>Yuri Subach  subach at whirix.com
> >>Director of Whirix Ltd.  http://www.whirix.com
> >>
> >>
>
> --
> Yuri Subach  subach at whirix.com
> Director of Whirix Ltd.  http://www.whirix.com
>
>
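
P.S. For anyone finding this in the archives: here is the read-back
half of the sketch above, using the same hypothetical "$key,$n" /
"$key,count" naming. Note that get_file_data slurps a whole chunk into
memory, so keep chunk sizes modest on small boxes:

  #!/usr/bin/perl
  # Sketch: re-assemble a file stored as chunks "$key,1" .. "$key,N".
  use strict;
  use warnings;
  use MogileFS::Client;

  my ($key, $out_path) = @ARGV;

  my $mogc = MogileFS::Client->new(
      domain => "example",              # placeholder domain
      hosts  => [ "127.0.0.1:7001" ],   # placeholder tracker
  );

  # The writer stored the chunk count under "$key,count".
  my $count_ref = $mogc->get_file_data("$key,count")
      or die "no count key: " . $mogc->errstr;

  open my $out, '>', $out_path or die "open $out_path: $!";
  binmode $out;
  for my $i (1 .. $$count_ref) {
      my $ref = $mogc->get_file_data("$key,$i")
          or die "missing chunk $i: " . $mogc->errstr;
      print $out $$ref;
  }
  close $out or die "close $out_path: $!";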

