mogtool storing large file:
Greg Connor
gconnor at nekodojo.org
Thu May 15 16:25:38 UTC 2008
Ask Bjørn Hansen wrote:
>
> What do the backend logs say?
>
> There's a bug in the latest released mogtool that makes the automatic
> retrying on failure not work. Patch below; the change is also in SVN
> (as per r1130).
>
> I forget the details, but I noticed the bug when my backends had some
> sort of configuration problem that made them fail once in a while.
Thank you for the answer... I think that may be the same problem I am
having. I was using the svn version (r1177) on all the servers but had
installed the CPAN release on the clients for ease of distribution. I'm
trying the svn r1177 version of the client and utils now to see if that helps.
I am not sure where to find the logs on the storage nodes... is that
something I need to explicitly turn on?
While on the subject of mogtool, I previously had a problem where the
chunks I stored showed a different checksum during the --verify phase. In
the case of an incoming tar/dd stream, discovering a bad checksum only
after the spawned child has terminated and released its memory buffer is
not an optimal time to catch problems with the saved data, since by then
the original bytes are gone and cannot be re-sent.
One solution to this might be to have the spawned child process wait
around for a bit, read the file back (at least once, possibly twice), and
only then drop its memory buffer. The side effect would be that some of
the --concurrent= threads would be idle rather than constantly sending,
so we might see lower throughput (or might want to add more threads if
memory size permits).
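To make the idea above concrete, here is a minimal sketch in Python (not the actual mogtool code, which is Perl) of what read-back verification could look like. The `store` object with `put`/`get` methods is purely hypothetical and stands in for the MogileFS client; the point is only that the checksum comparison happens while the original buffer is still in memory, so a mismatch can be retried or reported before the data is lost.

```python
import hashlib

def store_and_verify(store, key, data, read_backs=1):
    """Store a chunk, then read it back and compare checksums
    before the caller releases the in-memory buffer.

    `store` is any object with put(key, bytes) and get(key) -> bytes;
    it is an illustrative stand-in for the real storage client.
    Returns True only if every read-back matches the original checksum.
    """
    expected = hashlib.md5(data).hexdigest()
    store.put(key, data)
    for _ in range(read_backs):
        # Verify while `data` is still held, so a mismatch can be
        # handled by re-sending rather than discovered too late.
        if hashlib.md5(store.get(key)).hexdigest() != expected:
            return False
    return True  # safe to drop the buffer now
```

In mogtool terms, each spawned child would run this check (once or twice, per the suggestion above) before exiting and freeing its chunk, at the cost of that child being unavailable for new sends during the read-back.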
I suppose my question here is: 1. has something like this already been
done, and 2. would there be interest in testing such a patch?
thanks
gregc