How to cleanup dangling (unable to be retrieved) files entirely?
jared at blip.tv
Thu Jan 31 15:17:45 UTC 2008
I believe 'mogadm fsck' should take care of this. Dormando or Mark, feel free to correct me.
On our MogileFS cluster, I run 'mogadm fsck start' at 2 AM and 'mogadm fsck stop' at 7 AM, via cron. So far, it's found a good number of quirks (and fixed them, I assume) from looking at the log ('mogadm fsck printlog').
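The schedule described above could be expressed as two cron entries along these lines; the mogadm path and the exact minute fields are assumptions, since the original message only gives the hours:

```
# Assumed crontab entries -- start a MogileFS consistency check at 2 AM
0 2 * * * /usr/bin/mogadm fsck start
# ...and stop it at 7 AM, before daytime traffic picks up
0 7 * * * /usr/bin/mogadm fsck stop
```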
I'd be interested to hear how other people are using fsck and what their results are.
From: mogilefs-bounces at lists.danga.com [mailto:mogilefs-bounces at lists.danga.com] On Behalf Of ???
Sent: Wednesday, January 30, 2008 9:41 PM
To: mogilefs at lists.danga.com
Subject: How to cleanup dangling (unable to be retrieved) files entirely?
I have tried MogileFS for a while, and did the following things:
1. Added a domain and a class in that domain, with mindevcount=3 (i.e. keep 3 replicas).
2. Inserted a new file into MogileFS; according to 'mogadm stats', 3 storage nodes (nodes A, B, and C) held that file.
3. Manually marked one of those nodes (node A) down, then tried to retrieve the file: it was retrieved normally, and a new replica was made on another node (node D), as expected.
4. Manually marked the previously down node alive again, then deleted the file.
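The steps above can be sketched with the standard MogileFS command-line tools. This is a hypothetical transcript, not taken from the original message: the tracker address, domain, class, key, hostname, and device ID are all assumptions, and it requires a running tracker to execute.

```shell
# 1. Create a domain and a class with mindevcount=3
mogadm --trackers=tracker:7001 domain add testdomain
mogadm --trackers=tracker:7001 class add testdomain testclass --mindevcount=3

# 2. Insert a file (mogupload is part of MogileFS-Utils)
mogupload --trackers=tracker:7001 --domain=testdomain \
          --class=testclass --key=testkey --file=local.dat

# 3. Mark one device on node A down; the replicator should
#    create a new copy on another node
mogadm --trackers=tracker:7001 device mark nodeA 1 down

# 4. Bring the device back up, then delete the key
mogadm --trackers=tracker:7001 device mark nodeA 1 alive
mogdelete --trackers=tracker:7001 --domain=testdomain --key=testkey
```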
Here is the problem: after the file was deleted, there was no content left on nodes B, C, and D, but one copy of the file remained on node A (visible through 'mogadm stats'). That copy can no longer be accessed, yet it still occupies storage space. Even after I cleaned out the whole storage directory on node A, the mogadm file statistics remained unchanged.
With a long-running system and lots of files, this phenomenon becomes very annoying. Is there a formal method to clean up all of these dangling files in a MogileFS system?