How to clean up dangling (unretrievable) files entirely?

dormando
Fri Feb 1 06:53:33 UTC 2008

王晓哲 wrote:
> Hi all,
> I have tried MogileFS for a while and did the following things:
> 1. Added a domain and a class in that domain, with mindevcount=3 (i.e.
> keep 3 replicas);
> 2. Inserted a new file into MogileFS, and saw that 3 storage nodes
> (nodes A, B, C) contained that file, according to 'mogadm stats';
> 3. Manually marked one of those nodes (node A) as down and tried to
> retrieve the file: it was retrieved normally, and a new replica was
> created on another node (node D), as expected;
> 4. Manually marked the previously down node alive again, and deleted the file.

Were you marking it as down, or dead?
If _down_, the correct behavior is to retry the delete later (since it 
could not succeed in removing all of the copies). Do you have any rows in 
file_to_delete_later? Also, I don't _think_ the replicator re-replicates 
files when a host or device is simply marked as down. When you mark one 
as dead, though, the reaper process will do an immediate fixup.
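To illustrate the down-vs-dead distinction described above, here is a minimal Python sketch. This is not MogileFS's actual (Perl) tracker code; the `Device` class, `delete_file` function, and the `delete_later` queue are hypothetical stand-ins for the behavior described: a delete that cannot reach a "down" device is queued for retry (analogous to rows in file_to_delete_later), while a "dead" device's copies are dropped from bookkeeping outright.

```python
# Hypothetical sketch of the delete behavior described above; names and
# structure are illustrative, not MogileFS internals.
from dataclasses import dataclass, field


@dataclass
class Device:
    name: str
    state: str = "alive"          # "alive", "down", or "dead"
    files: set = field(default_factory=set)


def delete_file(fid, devices, delete_later):
    """Try to remove fid from every device holding a copy of it."""
    for dev in devices:
        if fid not in dev.files:
            continue
        if dev.state == "alive":
            dev.files.discard(fid)             # delete succeeded
        elif dev.state == "down":
            delete_later.add((fid, dev.name))  # retry once it's back up
        elif dev.state == "dead":
            dev.files.discard(fid)             # reaper-style fixup: copy
                                               # is written off entirely


devices = [Device("A", "down", {"fid1"}),
           Device("B", "alive", {"fid1"}),
           Device("C", "alive", {"fid1"})]
delete_later = set()
delete_file("fid1", devices, delete_later)

# Node A still physically holds the copy; the tracker remembers to retry.
print(delete_later)        # {('fid1', 'A')}
print(devices[1].files)    # set()
```

The point of the sketch is the asymmetry: "down" preserves the pending work so no copy is silently leaked, while "dead" tells the system to stop waiting for that device at all.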

> Here comes the problem: after the file was deleted, the contents were
> gone from nodes B, C, and D, but one copy of the file was still left on
> node A (as seen through 'mogadm stats'). That copy could no longer be
> accessed, but it still occupied storage space. Even after I cleaned up
> the whole storage directory on node A, the mogadm file statistics
> remained unchanged.
> With a long-running system and lots of files, this behavior would be
> very annoying. So is there any formal method to clean up all of these
> dangling files in a MogileFS system?

Yeah, I don't really see this here. I haven't tried very hard to 
reproduce it, but if it's happening I'd consider it a bug.
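While investigating, something like the following may help; it assumes shell access to the tracker's MySQL database and a stock mogadm. The connection details, hostname, and device id are placeholders for your own setup; treat this as a sketch, not an official cleanup procedure.

```shell
# Check whether deletes are queued for retry (the table named above).
mysql -u mogile -p mogilefs -e 'SELECT COUNT(*) FROM file_to_delete_later;'

# Mark the stale device dead (not just down) so the reaper does its
# immediate fixup; "nodeA" and "1" are placeholders.
mogadm device mark nodeA 1 dead

# Re-check the per-device file counts afterwards.
mogadm stats
```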

