From brett at imvu.com Mon Jul 3 18:32:16 2006 From: brett at imvu.com (Brett G. Durrett) Date: Mon Jul 3 18:31:48 2006 Subject: Documentation available for MogileFS installation and setup process In-Reply-To: <20060630162402.2252.FEIHU_ROGER@yahoo.com.cn> References: <44A1B158.50807@imvu.com> <20060630162402.2252.FEIHU_ROGER@yahoo.com.cn> Message-ID: <44A962B0.9000508@imvu.com> Thanks for the feedback. The document already stated that the trackers need to be running to use mogadm, but I guess it was easily missed. I made that line more prominent and added a line to troubleshooting, so hopefully people will catch it. The updated version is available at http://durrett.net/mogilefs_setup.html If you have any other suggestions, please let me know. B- feihu_roger wrote: >Thanks for your doc, it's nice. > >But I found that if you run mogadm, the tracker must be running; >otherwise you get: Unable to retrieve host information from tracker(s). > >So, in the document, Starting Trackers should come before running mogadm. > > > >> >>I documented the installation and setup process for MogileFS. This >>document should enable even a pretty novice system administrator to >>install MogileFS. 
This is an early version of the documentation so I >>welcome comments, corrections or suggestions for improvements. The >>documentation can be found here: >> >> http://durrett.net/mogilefs_setup.html >> >>The area that seems to be the most confusing is the device setup - if >>anybody has a better method, let me know. >> >>Enjoy, >> >>B- >> >> > > From garethdthomas at gmail.com Tue Jul 4 17:20:25 2006 From: garethdthomas at gmail.com (Gareth Thomas) Date: Tue Jul 4 17:19:39 2006 Subject: help with php class Message-ID: <1adcaeb80607041020sdb27c1fhf673548d4f30ed9a@mail.gmail.com> Hi, we are evaluating mogilefs as our image server and have set up a test environment. I have been using the one and only PHP class that I found out there, just trying a simple test to save an image as follows: $host[]="206.188.6.160:6001"; $mfs = MogileFS::NewMogileFS( 'test',$host,'/var/mogdata'); if (!$mfs->saveFile("123","testclass","arrow1.gif")){ echo('Failed='.$mfs->error); } When I run the test I am getting the following error: open failedFailed=ERR unreg_domain The configuration file is set up as follows: db_dsn DBI:mysql:mogile db_user mogile db_pass mogile conf_port 6001 listener_jobs 5 Do I need to put something into the mysql database for this? Thanks for any help. Gareth -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.danga.com/pipermail/mogilefs/attachments/20060704/0080c778/attachment.htm From brad at danga.com Tue Jul 4 17:57:00 2006 From: brad at danga.com (Brad Fitzpatrick) Date: Tue Jul 4 17:57:04 2006 Subject: help with php class In-Reply-To: <1adcaeb80607041020sdb27c1fhf673548d4f30ed9a@mail.gmail.com> References: <1adcaeb80607041020sdb27c1fhf673548d4f30ed9a@mail.gmail.com> Message-ID: Use the 'mogadm' command to create the domain (namespace) that you're trying to use, which I believe is "123". 
You'll also need to create the class ("testclass"). Then you won't get that error. Don't touch MySQL yourself: use mogadm. On Tue, 4 Jul 2006, Gareth Thomas wrote: > Hi, > > we are evaluating mogilefs as our image server and have set up a test > environment. I have been using the one and only PHP class that I found out > there, just trying a simple test to save an image as follows: > > > $host[]="206.188.6.160:6001"; > > $mfs = MogileFS::NewMogileFS( 'test',$host,'/var/mogdata'); > > > if (!$mfs->saveFile("123","testclass","arrow1.gif")){ > echo('Failed='.$mfs->error); > } > When I run the test I am getting the following error: > > open failedFailed=ERR unreg_domain > > The configuration file is set up as follows: > > db_dsn DBI:mysql:mogile > db_user mogile > db_pass mogile > conf_port 6001 > listener_jobs 5 > > > Do I need to put something into the mysql database for this? > > Thanks for any help. > > Gareth > From garethdthomas at gmail.com Wed Jul 5 14:02:30 2006 From: garethdthomas at gmail.com (Gareth Thomas) Date: Wed Jul 5 14:01:31 2006 Subject: help with php class In-Reply-To: References: <1adcaeb80607041020sdb27c1fhf673548d4f30ed9a@mail.gmail.com> Message-ID: <1adcaeb80607050702j47bc2ac0wb1212ef5a53d75f4@mail.gmail.com> Brad, OK, so we installed the client software; next problem: I can't register the domain: mogadm --trackers=206.188.6.160:6001 domain add 123 Unable to retrieve domains from tracker(s). If I run a mogadm check: mogadm --trackers=206.188.6.160:6001 check Checking trackers... 206.188.6.160:6001 ... OK Checking hosts... No devices found on tracker(s). Do I need to define something else in the mogilefsd.conf? Gareth On 7/4/06, Brad Fitzpatrick wrote: > > Use the 'mogadm' command to create the domain (namespace) that you're trying > to use, which I believe is "123". You'll also need to create the class > ("testclass"). > > Then you won't get that error. > > Don't touch MySQL yourself: use mogadm. 
> > > On Tue, 4 Jul 2006, Gareth Thomas wrote: > > > Hi, > > > > we are evaluating mogilefs as our image server and have set up a test > > environment. I have been using the one and only PHP class that I found > out > > there, just trying a simple test to save an image as follows: > > > > > > $host[]="206.188.6.160:6001"; > > > > $mfs = MogileFS::NewMogileFS( 'test',$host,'/var/mogdata'); > > > > > > if (!$mfs->saveFile("123","testclass","arrow1.gif")){ > > echo('Failed='.$mfs->error); > > } > > When I run the test I am getting the following error: > > > > open failedFailed=ERR unreg_domain > > > > The configuration file is set up as follows: > > > > db_dsn DBI:mysql:mogile > > db_user mogile > > db_pass mogile > > conf_port 6001 > > listener_jobs 5 > > > > > > Do I need to put something into the mysql database for this? > > > > Thanks for any help. > > > > Gareth > > > -- Gareth Thomas Cell: 44 (0)7910598004 MSIM: quattrofan@hotmail.com Skype: garethdthomas -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.danga.com/pipermail/mogilefs/attachments/20060705/de9ed3bb/attachment.html From komtanoo.pinpimai at livetext.com Wed Jul 5 14:36:38 2006 From: komtanoo.pinpimai at livetext.com (komtanoo.pinpimai@livetext.com) Date: Wed Jul 5 14:36:04 2006 Subject: help with php class In-Reply-To: <1adcaeb80607050702j47bc2ac0wb1212ef5a53d75f4@mail.gmail.com> References: <1adcaeb80607041020sdb27c1fhf673548d4f30ed9a@mail.gmail.com> <1adcaeb80607050702j47bc2ac0wb1212ef5a53d75f4@mail.gmail.com> Message-ID: <2497.192.168.2.147.1152110198.squirrel@mail01.livetext.com> mogadm uses 'mogilefs.conf'. 'mogilefsd.conf' is for mogilefsd. My mogilefs.conf contains only: trackers = 192.168.2.90:7001,192.168.2.91:7001,192.168.2.92:7001 This way, you don't have to specify the --trackers option. For your error, make sure that mogilefsd is actually running on 206.188.6.160:6001. 
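[To restate the two files being confused in this thread, here is a sketch assuming the conventional /etc/mogilefs/ locations and the addresses used above; paths and ports may differ on your install.]

```shell
# /etc/mogilefs/mogilefsd.conf -- read by the tracker daemon (mogilefsd):
#     db_dsn        DBI:mysql:mogile
#     db_user       mogile
#     db_pass       mogile
#     conf_port     6001
#     listener_jobs 5

# /etc/mogilefs/mogilefs.conf -- read by the mogadm utility:
#     trackers = 206.188.6.160:6001

# With mogilefs.conf in place, mogadm no longer needs --trackers:
mogadm check

# And confirm something is actually listening on the tracker port:
netstat -tln | grep 6001
```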
On Wed, July 5, 2006 9:02 am, Gareth Thomas wrote: > Brad, > > > ok so we installed the client software, next problem I cant register the > domain: > > > mogadm --trackers=206.188.6.160:6001 domain add 123 Unable to retrieve > domains from tracker(s). > > If I run a mogadm check: > > > > mogadm --trackers=206.188.6.160:6001 check Checking trackers... > 206.188.6.160:6001 ... OK > > > Checking hosts... > No devices found on tracker(s). > Do I need to define something else in the mogilefsd.conf? > > > Gareth > > > > > > > > On 7/4/06, Brad Fitzpatrick wrote: > >> >> Use 'mogadm' command to create the domain (namespace) that you're >> trying to use, which I believe is "123". You'll also need to create the >> class ("testclass"). >> >> >> Then you won't get that error. >> >> >> Don't touch MySQL yourself: use mogadm. >> >> >> >> On Tue, 4 Jul 2006, Gareth Thomas wrote: >> >> >>> Hi, >>> >>> >>> we are evaluating mogilefs as our image server and have setup a test >>> environment. I have been using the one and only PHP class that I >>> found >> out >>> there, just trying a simple test to save an image as follows: >>> >>> >>> $host[]="206.188.6.160:6001"; >>> >>> >>> $mfs = MogileFS::NewMogileFS( 'test',$host,'/var/mogdata'); >>> >>> >>> >>> if (!$mfs->saveFile("123","testclass","arrow1.gif")){ >>> echo('Failed='.$mfs->error); } >>> When I run the test I am getting the following error: >>> >>> >>> open failedFailed=ERR unreg_domain >>> >>> The configuration file is setup as follows: >>> >>> >>> db_dsn DBI:mysql:mogile db_user mogile db_pass >>> mogile conf_port 6001 listener_jobs 5 >>> >>> >>> Do I need to put something into the mysql database for this? >>> >>> >>> Thanks for any help. 
>>> >>> >>> Gareth >>> >>> >> > > > > -- > Gareth Thomas > Cell: 44 (0)7910598004 > MSIM: quattrofan@hotmail.com > Skype: garethdthomas > > From garethdthomas at gmail.com Wed Jul 5 15:03:13 2006 From: garethdthomas at gmail.com (Gareth Thomas) Date: Wed Jul 5 15:02:12 2006 Subject: help with php class In-Reply-To: <2497.192.168.2.147.1152110198.squirrel@mail01.livetext.com> References: <1adcaeb80607041020sdb27c1fhf673548d4f30ed9a@mail.gmail.com> <1adcaeb80607050702j47bc2ac0wb1212ef5a53d75f4@mail.gmail.com> <2497.192.168.2.147.1152110198.squirrel@mail01.livetext.com> Message-ID: <1adcaeb80607050803q6dba098fn31a8412ca3a00196@mail.gmail.com> Thanks, created the mogilefs.conf file and added the entry. The daemon is running correctly, so still not sure about that error. G. On 7/5/06, komtanoo.pinpimai@livetext.com wrote: > > mogadm uses 'mogilefs.conf'. 'mogilefsd.conf' is for mogilefsd. My > mogilefs.conf contains only: > > trackers = 192.168.2.90:7001,192.168.2.91:7001,192.168.2.92:7001 > > This way, you don't have to specifiy the --trackers option. > For your error, make sure that mogilefsd is actually running on > 206.188.6.160:6001. > > On Wed, July 5, 2006 9:02 am, Gareth Thomas wrote: > > Brad, > > > > > > ok so we installed the client software, next problem I cant register the > > domain: > > > > > > mogadm --trackers=206.188.6.160:6001 domain add 123 Unable to retrieve > > domains from tracker(s). > > > > If I run a mogadm check: > > > > > > > > mogadm --trackers=206.188.6.160:6001 check Checking trackers... > > 206.188.6.160:6001 ... OK > > > > > > Checking hosts... > > No devices found on tracker(s). > > Do I need to define something else in the mogilefsd.conf? > > > > > > Gareth > > > > > > > > > > > > > > > > On 7/4/06, Brad Fitzpatrick wrote: > > > >> > >> Use 'mogadm' command to create the domain (namespace) that you're > >> trying to use, which I believe is "123". You'll also need to create > the > >> class ("testclass"). 
> >> > >> > >> Then you won't get that error. > >> > >> > >> Don't touch MySQL yourself: use mogadm. > >> > >> > >> > >> On Tue, 4 Jul 2006, Gareth Thomas wrote: > >> > >> > >>> Hi, > >>> > >>> > >>> we are evaluating mogilefs as our image server and have setup a test > >>> environment. I have been using the one and only PHP class that I > >>> found > >> out > >>> there, just trying a simple test to save an image as follows: > >>> > >>> > >>> $host[]="206.188.6.160:6001"; > >>> > >>> > >>> $mfs = MogileFS::NewMogileFS( 'test',$host,'/var/mogdata'); > >>> > >>> > >>> > >>> if (!$mfs->saveFile("123","testclass","arrow1.gif")){ > >>> echo('Failed='.$mfs->error); } > >>> When I run the test I am getting the following error: > >>> > >>> > >>> open failedFailed=ERR unreg_domain > >>> > >>> The configuration file is setup as follows: > >>> > >>> > >>> db_dsn DBI:mysql:mogile db_user mogile db_pass > >>> mogile conf_port 6001 listener_jobs 5 > >>> > >>> > >>> Do I need to put something into the mysql database for this? > >>> > >>> > >>> Thanks for any help. > >>> > >>> > >>> Gareth > >>> > >>> > >> > > > > > > > > -- > > Gareth Thomas > > Cell: 44 (0)7910598004 > > MSIM: quattrofan@hotmail.com > > Skype: garethdthomas > > > > > > -- Gareth Thomas Cell: 44 (0)7910598004 MSIM: quattrofan@hotmail.com Skype: garethdthomas -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.danga.com/pipermail/mogilefs/attachments/20060705/1085052d/attachment.htm From brad at danga.com Wed Jul 5 15:55:22 2006 From: brad at danga.com (Brad Fitzpatrick) Date: Wed Jul 5 15:55:26 2006 Subject: help with php class In-Reply-To: <1adcaeb80607050702j47bc2ac0wb1212ef5a53d75f4@mail.gmail.com> References: <1adcaeb80607041020sdb27c1fhf673548d4f30ed9a@mail.gmail.com> <1adcaeb80607050702j47bc2ac0wb1212ef5a53d75f4@mail.gmail.com> Message-ID: Sounds like you need to configure disks and hosts. 
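[For reference, "configure disks and hosts" has to happen before domains and classes can be added. A hedged sketch of the whole bootstrap, using the tracker address from this thread -- the host name, device number, mogstored port, and mindevcount are illustrative, and mogadm option spellings may vary between MogileFS versions:]

```shell
TRK=206.188.6.160:6001

# 1. Register the storage host (running mogstored) and one device on it.
#    The device must correspond to a devN directory under mogstored's docroot.
mogadm --trackers=$TRK host add storage1 --ip=206.188.6.160 --port=7500 --status=alive
mogadm --trackers=$TRK device add storage1 1

# 2. Only then will domain/class creation succeed:
mogadm --trackers=$TRK domain add 123
mogadm --trackers=$TRK class add 123 testclass --mindevcount=2

# 3. Re-run the health check; devices should now be listed:
mogadm --trackers=$TRK check
```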
On Wed, 5 Jul 2006, Gareth Thomas wrote: > Brad, > > ok so we installed the client software, next problem I cant register the > domain: > > mogadm --trackers=206.188.6.160:6001 domain add 123 > Unable to retrieve domains from tracker(s). > > If I run a mogadm check: > > > mogadm --trackers=206.188.6.160:6001 check > Checking trackers... > 206.188.6.160:6001 ... OK > > Checking hosts... > No devices found on tracker(s). > Do I need to define something else in the mogilefsd.conf? > > Gareth > > > > > > > On 7/4/06, Brad Fitzpatrick wrote: > > > > Use 'mogadm' command to create the domain (namespace) that you're trying > > to use, which I believe is "123". You'll also need to create the class > > ("testclass"). > > > > Then you won't get that error. > > > > Don't touch MySQL yourself: use mogadm. > > > > > > On Tue, 4 Jul 2006, Gareth Thomas wrote: > > > > > Hi, > > > > > > we are evaluating mogilefs as our image server and have setup a test > > > environment. I have been using the one and only PHP class that I found > > out > > > there, just trying a simple test to save an image as follows: > > > > > > > > > $host[]="206.188.6.160:6001"; > > > > > > $mfs = MogileFS::NewMogileFS( 'test',$host,'/var/mogdata'); > > > > > > > > > if (!$mfs->saveFile("123","testclass","arrow1.gif")){ > > > echo('Failed='.$mfs->error); > > > } > > > When I run the test I am getting the following error: > > > > > > open failedFailed=ERR unreg_domain > > > > > > The configuration file is setup as follows: > > > > > > db_dsn DBI:mysql:mogile > > > db_user mogile > > > db_pass mogile > > > conf_port 6001 > > > listener_jobs 5 > > > > > > > > > Do I need to put something into the mysql database for this? > > > > > > Thanks for any help. 
> > > > > > Gareth > > > > > > > > > -- > Gareth Thomas > Cell: 44 (0)7910598004 > MSIM: quattrofan@hotmail.com > Skype: garethdthomas > From dusty at imvu.com Thu Jul 6 17:32:13 2006 From: dusty at imvu.com (Dusty Leary) Date: Thu Jul 6 17:30:46 2006 Subject: perlbal (or something) in front of mogile? In-Reply-To: <81c509c0607051857w2b6f0e1fg48f996082f6518fe@mail.gmail.com> References: <81c509c0607051857w2b6f0e1fg48f996082f6518fe@mail.gmail.com> Message-ID: <81c509c0607061032w7893c331ha1eb9875beae6d08@mail.gmail.com> Hi all, We are moving an existing file store to mogfs. I would like to use the existing paths as the mogfs keys. Right now, we have images with urls like: http://images.imvu.com/userdata/00/12/34/45/images/foo.jpg They are served by squid in front of apache reading from NFS, and it has gotten very painful for the machine serving NFS. I would like to use '/userdata/00/12/34/images/foo.jpg' as the mogfs key to store the file. Then, have perlbal or something listening on userdata.imvu.com, taking a GET /userdata/00/12/34/images/foo.jpg, and doing the mogile thing on the GET uri. I figure this must be a common usage... Is the perlbal plugin already written? From mark at plogs.net Thu Jul 6 18:13:00 2006 From: mark at plogs.net (Mark Smith) Date: Thu Jul 6 18:11:30 2006 Subject: perlbal (or something) in front of mogile? In-Reply-To: <81c509c0607061032w7893c331ha1eb9875beae6d08@mail.gmail.com> References: <81c509c0607051857w2b6f0e1fg48f996082f6518fe@mail.gmail.com> <81c509c0607061032w7893c331ha1eb9875beae6d08@mail.gmail.com> Message-ID: <20060706181300.GB28581@plogs.net> > I figure this must be a common usage... Is the perlbal plugin already > written? Actually, common usage is more like your first description. Requests hit Perlbal, which then proxies them to an Apache instance which handles all of the translation from URI to MogileFS path, which then gets passed back to the Perlbal. 
Then the Perlbal contacts the storage nodes and requests the file, spooning it out to the client, while the Apache instance goes on to serve the next request. I'm not aware of a plugin to do what you want. I can't imagine it'd be hard to write, though. Would have to be careful about not blocking the entire Perlbal thread on the MogileFS lookups... -- Mark Smith mark@plogs.net From dormando at rydia.net Thu Jul 6 19:53:26 2006 From: dormando at rydia.net (dormando) Date: Thu Jul 6 19:52:04 2006 Subject: perlbal (or something) in front of mogile? In-Reply-To: <20060706181300.GB28581@plogs.net> References: <81c509c0607051857w2b6f0e1fg48f996082f6518fe@mail.gmail.com> <81c509c0607061032w7893c331ha1eb9875beae6d08@mail.gmail.com> <20060706181300.GB28581@plogs.net> Message-ID: <44AD6A36.5050500@rydia.net> > > I'm not aware of a plugin to do what you want. I can't imagine it'd be > hard to write, though. Would have to be careful about not blocking the > entire Perlbal thread on the MogileFS lookups... I wanted to write one but scheduling doesn't permit right now :( I was going to base my work off of the non-blocking memcached client Chris from IMVU wrote. The plugin needs to do translation, try memcached for the path, contact tracker non-block, etc. Not too hard but a lot of parts that could trip up perlbal. We currently run things the way Mark described, but I'd love to cut some latency off of image views by plugging it all into perlbal. -Dormando From thusitha at mnetplus.com Fri Jul 7 02:04:11 2006 From: thusitha at mnetplus.com (thusitha) Date: Fri Jul 7 02:32:38 2006 Subject: Mogilefs client scripts does not work In-Reply-To: <44ACE023.9000500@mnetplus.com> References: <44AA1F4F.9050501@mnetplus.com> <44ACE023.9000500@mnetplus.com> Message-ID: <44ADC11B.2030803@mnetplus.com> thusitha wrote: > I didn't get your point exactly from the last reply. > > the result comes from the $mogfs is MogileFS=ARRAY(0x87d9d84) > > Is this an error code. 
> > And when I use the $mogfs to create new files and stick them in the system > > I always get the Unable to allocate filehandle. (at the 2nd line of what > we have put) > > this is how my MogileFS cluster is structured. > > 3 (x86 Intel) PCs where all 3 of them have Red Hat EL4 installed. > IP addresses of them > 192.168.5.141 (MySQL 5.0 cluster Management Server ) > 192.168.5.125 (MySQL 5.0 cluster ndb storage node, mogiletracker server) > 192.168.5.117 (MySQL 5.0 cluster ndb storage node, mogiledb server > connected to mysql db, mogilestorage server) > > _Tracker setup (__/etc/mogilefs/mogilefsd.conf)_ > db_dsn DBI:mysql:mogilefs:192.168.5.117 > db_user mog > db_pass mogpass > conf_port 6001 > listener_jobs 5 > > _mogilestorage setup (_/etc/mogilefs/mogstored.conf) > httplisten=0.0.0.0:7500 > mgmtlisten=0.0.0.0:7501 > docroot=/var/mogdata > > this is what I get from _mogadm --lib=/usr/local/share/perl/5.8.4 > --trackers=192.168.5.125:6001 check_ after running it on the mogstorage > server > Checking trackers... > 192.168.5.125:6001 ... OK > > Checking hosts... > [ 1] mogilestorage ... OK > > Checking devices... > host device size(G) used(G) free(G) use% > ---- --------------- ---------- ---------- ---------- ------ > [ 1] dev1 4.780 4.185 0.595 87.55% > ---- --------------- ---------- ---------- ---------- ------ > total: 4.780 4.185 0.595 87.55% > > _these are the installed packages_ > BSD-Resource-1.28 IO-AIO-1.8 > MogileFS-1.00 Perlbal-XS-HTTPHeaders-0.18 > Danga-Socket-1.48 IO-stringy-2.110 > mogilefs-server-1.00 Sys-Syscall-0.21 > DBD-mysql-3.0006 Linux-AIO-1.9 Perlbal-1.41 > > So what went wrong in the client script? > > Can you please send me a working client perl script for the above > configuration > with which I can create an object, save a file, and delete it over HTTP. > > Thanks. > > Thusitha. > > > > > Brad Fitzpatrick wrote: > >>You should ask the $mogfs object what its last error code was when it >>doesn't give you a $fh. 
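[Brad's advice above can be sketched in the client script like this. This is an untested illustration assembled from the calls already shown in this thread; errstr is the accessor Jay Buffington points to later in the thread, and the domain/class/tracker values are just the ones from this setup.]

```perl
use strict;
use warnings;
use MogileFS;

# Values from this thread's setup; substitute your own.
my $mogfs = MogileFS->new(
    domain => 'testdomain',
    hosts  => [ '192.168.5.125:6001' ],
) or die "Unable to initialize MogileFS object.\n";

my $fh = $mogfs->new_file("file_key", "testclass");
unless ($fh) {
    # Instead of failing blind, report what the tracker said:
    die "new_file failed: " . $mogfs->errstr . "\n";
}
$fh->print("A bunch of text to store");
$fh->close
    or die "Unable to save file to MogileFS: " . $mogfs->errstr . "\n";
```

[In this setup an "unable to allocate filehandle" error typically turns out to be an unregistered domain/class or no usable devices; the errstr output should say which.]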
>> >> >>On Tue, 4 Jul 2006, thusitha wrote: >> >> >> >>>dear Sir >>> >>>I configured the mogile file system in two machines using the Red Hat EL4 >>>platform. db, tracker and storage are working perfectly without any >>>problem. Here I used mysql cluster and it is also working perfectly. >>> >>>But I have a problem with the client script. The only thing I can do is create an >>>object. I used two types of code, and here they are.. >>>_object create_ >>> >>> >>>use MogileFS; >>>my $mogfs = MogileFS->new(domain => 'testdomain', >>> hosts => [ '192.168.5.125:6001' ], >>> # only on NFS/disk based installations >>> root => '/var/mogdata',); >>>die "Unable to initialize MogileFS object.\n" unless $mogfs; >>> >>>above part works OK. >>> >>>_key create >>> >>>_my $fh = $mogfs->new_file("file_key", "testclass"); >>>die "Unable to allocate filehandle.\n" unless $fh; >>>$fh->print($file_contents); >>>die "Unable to save file to MogileFS.\n" unless $fh->close; >>> >>>this gives the error Unable to allocate filehandle at the 2nd line. >>> >>>_other code (ruby) >>> >>>_ # Create a new instance that will communicate with these trackers: >>> hosts = %w[192.168.5.125:6001] >>> mg = MogileFS::MogileFS.new(:domain => 'testdomain', :hosts => hosts >>> :root => '/var/mogdata') >>> >>> # Stores "A bunch of text to store" into 'some_key' with a class of 'text'. >>> mg.store_content 'some_key', 'testclass', "A bunch of text to store" >>> >>>that code gives so many errors >>> >>>I don't know much about perl scripting. I would like to do the client communication using http. I'll be very thankful if you can send me a >>>suitable, known-to-work client script covering file save through delete. >>> >>>Thanks >>> >>>Thusitha A.W. >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://lists.danga.com/pipermail/mogilefs/attachments/20060707/cff18b75/attachment.html From thusitha at mnetplus.com Fri Jul 7 02:15:16 2006 From: thusitha at mnetplus.com (thusitha) Date: Fri Jul 7 02:43:48 2006 Subject: Mogilefs client scripts does not work Message-ID: <44ADC3B4.1090502@mnetplus.com> -------------- next part -------------- An embedded message was scrubbed... From: thusitha Subject: Mogilefs client scripts does not work Date: Fri, 07 Jul 2006 08:04:11 +0600 Size: 15546 Url: http://lists.danga.com/pipermail/mogilefs/attachments/20060707/3f11db87/Mogilefsclientscriptsdoesnotwork-0001.mht From jaybuffington at gmail.com Fri Jul 7 03:05:42 2006 From: jaybuffington at gmail.com (Jay Buffington) Date: Fri Jul 7 03:04:05 2006 Subject: Mogilefs client scripts does not work In-Reply-To: <44ADC3B4.1090502@mnetplus.com> References: <44ADC3B4.1090502@mnetplus.com> Message-ID: On 7/6/06, thusitha wrote: > I didn't get your point exactly from the last reply. > > the result comes from the $mogfs is MogileFS=ARRAY(0x87d9d84) > > Is this an error code. That is not an error code. That is just telling you that $mogfs is an array at some memory address blessed into the MogileFS class. Try this: print $mogfs->errstr(); Jay From epaulson at cs.wisc.edu Fri Jul 7 21:48:20 2006 From: epaulson at cs.wisc.edu (Erik Paulson) Date: Fri Jul 7 21:46:27 2006 Subject: mogilefs with IO::AIO? Message-ID: <20060707214820.GB851@cobalt.cs.wisc.edu> Back in January, Brad mentioned in this post changing the default preference to prefer IO::AIO instead of Linux::AIO: http://lists.danga.com/pipermail/mogilefs/2006-January/000246.html does that mean I can use IO::AIO somehow? MogileFS doesn't seem to mention IO::AIO at all, so I'm not sure how to make it prefer it. I'm having the same problem Paul is having here: http://lists.danga.com/pipermail/mogilefs/2006-January/000251.html where Linux::AIO builds but doesn't pass the test suite. (I'm on a Centos 4.3 machine, Perl 5.8.6). 
IO::AIO installs and passes the test suite, however. Thanks, -Erik From brad at danga.com Fri Jul 7 22:36:54 2006 From: brad at danga.com (Brad Fitzpatrick) Date: Fri Jul 7 22:36:58 2006 Subject: mogilefs with IO::AIO? In-Reply-To: <20060707214820.GB851@cobalt.cs.wisc.edu> References: <20060707214820.GB851@cobalt.cs.wisc.edu> Message-ID: MogileFS doesn't use IO::AIO or Linux::AIO by itself. One part of MogileFS, the storage node, mogstored, uses Perlbal by default, though it's possible with new MogileFS (in svn) to use lighttpd or Apache or any DAV server. Anyway, Perlbal is what uses *::AIO. Just uninstall Linux::AIO and install IO::AIO and it'll be used. On Fri, 7 Jul 2006, Erik Paulson wrote: > Back in January, Brad mentioned in this post changing the > default prefernce to prefer IO::AIO instead of > Linux::AIO: > > http://lists.danga.com/pipermail/mogilefs/2006-January/000246.html > > does that mean I can IO::AIO somehow? MogileFS doesn't seem > mention IO::AIO at all, so I'm not sure how to make it prefer it. > > I'm having the same problem Paul is having here: > http://lists.danga.com/pipermail/mogilefs/2006-January/000251.html > > where Linux::AIO builds but doesn't pass the test suite. (I'm on a > Centos 4.3 machine, Perl 5.8.6). IO::AIO installs and passes the test > suite, however. > > Thanks, > > -Erik > > From jdlewis at xactware.com Wed Jul 12 14:27:47 2006 From: jdlewis at xactware.com (Jeff Lewis) Date: Wed Jul 12 14:24:30 2006 Subject: VMWare Virtual Appliance? Message-ID: <10149100A9B32A45A934362B602ABE5B014EC36F@POSTMASTER.xactware.com> Anyone ever thought of posting a VMWare Virtual Appliance version of mogilefs? You would probably need 2 or 3 Appliances: DB, Tracker, and Storage Nodes. But this would make it extremely easy for anyone to get up-and-running to try out and it might even make sense for production deployment in some scenarios. 
Imagine getting a new storage node machine with any OS on it, installing VMWare and simply copying over and starting the appliance. No further configuration needed... http://www.vmware.com/vmtn/appliances/directory/community.html?from=0 This might also be a good idea for memcached... And yes, I'm asking because I don't think I have time to do it myself. ;-) Honesty is almost always the best policy... -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.danga.com/pipermail/mogilefs/attachments/20060712/c6469782/attachment.htm From jameswork66 at gmail.com Mon Jul 17 21:50:27 2006 From: jameswork66 at gmail.com (James Zheng) Date: Mon Jul 17 21:45:14 2006 Subject: Do you know the reason? References: Message-ID: <020e01c6a9eb$08215a10$0801a8c0@james> I got the error when I try to save a file. Do you know the reason? Jul 18 05:45:14 dev perlbal[7562]: system error: Internal error (error = , path = /var/mogdata//dev1/0/000/000, file = 0000000001.fid) Jul 18 05:46:09 dev perlbal[7562]: system error: Internal error (error = , path = /var/mogdata//dev1/0/000/000, file = 0000000002.fid) # ll /var/mogdata//dev1/ total 4 -rw-rw-rw- 1 mogile mogile 129 Jul 18 04:55 usage Thanks. From brad at danga.com Mon Jul 17 21:52:41 2006 From: brad at danga.com (Brad Fitzpatrick) Date: Mon Jul 17 21:52:46 2006 Subject: Do you know the reason? In-Reply-To: <020e01c6a9eb$08215a10$0801a8c0@james> References: <020e01c6a9eb$08215a10$0801a8c0@james> Message-ID: Did your Linux::AIO or IO::AIO successfully "make test" or did you just blindly "make install" and hope for the best? :) On Tue, 18 Jul 2006, James Zheng wrote: > I got the error when I try to save a file. Do you know the reason? 
> > Jul 18 05:45:14 dev perlbal[7562]: system error: Internal error (error = , > path = /var/mogdata//dev1/0/000/000, file = 0000000001.fid) > Jul 18 05:46:09 dev perlbal[7562]: system error: Internal error (error = , > path = /var/mogdata//dev1/0/000/000, file = 0000000002.fid) > > # ll /var/mogdata//dev1/ > total 4 > -rw-rw-rw- 1 mogile mogile 129 Jul 18 04:55 usage > > > Thanks. > > From jameswork66 at gmail.com Mon Jul 17 22:04:29 2006 From: jameswork66 at gmail.com (James Zheng) Date: Mon Jul 17 21:59:16 2006 Subject: Do you know the reason? References: <020e01c6a9eb$08215a10$0801a8c0@james> Message-ID: <031701c6a9ec$ff08ca60$0801a8c0@james> > Did your Linux::AIO or IO::AIO successfully "make test" or did you just > blindly "make install" and hope for the best? :) Thanks, there was some problem, so I skipped it. ----- Original Message ----- From: "Brad Fitzpatrick" To: "James Zheng" Cc: Sent: Tuesday, July 18, 2006 5:52 AM Subject: Re: Do you know the reason? > Did your Linux::AIO or IO::AIO successfully "make test" or did you just > blindly "make install" and hope for the best? :) > > > On Tue, 18 Jul 2006, James Zheng wrote: > >> I got the error when I try to save a file. Do you know the reason? >> >> Jul 18 05:45:14 dev perlbal[7562]: system error: Internal error (error = >> , >> path = /var/mogdata//dev1/0/000/000, file = 0000000001.fid) >> Jul 18 05:46:09 dev perlbal[7562]: system error: Internal error (error = >> , >> path = /var/mogdata//dev1/0/000/000, file = 0000000002.fid) >> >> # ll /var/mogdata//dev1/ >> total 4 >> -rw-rw-rw- 1 mogile mogile 129 Jul 18 04:55 usage >> >> >> Thanks. >> >> From jaybuffington at gmail.com Mon Jul 17 22:13:15 2006 From: jaybuffington at gmail.com (Jay Buffington) Date: Mon Jul 17 22:08:01 2006 Subject: VMWare Virtual Appliance?
In-Reply-To: <10149100A9B32A45A934362B602ABE5B014EC36F@POSTMASTER.xactware.com> References: <10149100A9B32A45A934362B602ABE5B014EC36F@POSTMASTER.xactware.com> Message-ID: All the cool kids are using Xen (http://xensource.com) On 7/12/06, Jeff Lewis wrote: > Anyone ever thought of posting a VMWare Virtual Appliance version of > mogilefs? From jameswork66 at gmail.com Mon Jul 17 22:29:36 2006 From: jameswork66 at gmail.com (James Zheng) Date: Mon Jul 17 22:24:23 2006 Subject: Do you know the reason? Message-ID: <032801c6a9f0$8077fc30$0801a8c0@james> Would you get some suggestion for this again? Thanks. [root@FC2 Linux-AIO-1.9]# make test PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-e" "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t t/00_load......ok t/01_stat......ok t/02_read......ok t/03_errors....NOK 1# Failed test 1 in t/03_errors.t at line 21 # t/03_errors.t line 21 is: ok($_[0] < 0 && $! == ENOENT); t/03_errors....NOK 7# Failed test 7 in t/03_errors.t at line 40 # t/03_errors.t line 40 is: ok($! == ENOENT); t/03_errors....NOK 10# Failed test 10 in t/03_errors.t at line 49 # t/03_errors.t line 49 is: ok($! == EBADF); t/03_errors....FAILED tests 1, 7, 10 Failed 3/10 tests, 70.00% okay Failed Test Stat Wstat Total Fail Failed List of Failed ------------------------------------------------------------------------------- t/03_errors.t 10 3 30.00% 1 7 10 Failed 1/4 test scripts, 75.00% okay. 3/26 subtests failed, 88.46% okay. make: *** [test_dynamic] Error 255 [root@FC2 Linux-AIO-1.9]# cat t/03_errors.t #!/usr/bin/perl use Fcntl; use Test; use POSIX qw(ENOENT EACCES EBADF); use FindBin; use lib "$FindBin::Bin"; use aio_test_common; BEGIN { plan tests => 10 } Linux::AIO::min_parallel 2; my $tempdir = tempdir(); my $some_dir = "$tempdir/some_dir/"; my $some_file = "$some_dir/some_file"; # create a file in a non-existent directory aio_open $some_file, O_RDWR|O_CREAT|O_TRUNC, 0, sub { ok($_[0] < 0 && $! 
== ENOENT); }; pcb; # now actually make that file ok(mkdir $some_dir); aio_open $some_file, O_RDWR|O_CREAT|O_TRUNC, 0644, sub { my $fd = shift; ok($fd > 0); ok(open (FH, ">&$fd")); print FH "contents."; close FH; ok(-e $some_file); }; pcb; # test error on unlinking non-empty directory aio_unlink "$some_dir/notfound.txt", sub { ok($_[0] < 0); ok($! == ENOENT); }; pcb; # write to file open for reading ok(open(F, $some_file)) or die $!; aio_write *F, 0, 10, "foobarbaz.", 0, sub { my $written = shift; ok($written < 0); ok($! == EBADF); }; ----- Original Message ----- From: "James Zheng" To: "Brad Fitzpatrick" Cc: Sent: Tuesday, July 18, 2006 6:04 AM Subject: Re: Do you know the reason? >> Did your Linux::AIO or IO::AIO successfully "make test" or did you just >> blindly "make install" and hope for the best? :) > > Thanks, there is some problem, i skiped it. > > > > ----- Original Message ----- > From: "Brad Fitzpatrick" > To: "James Zheng" > Cc: > Sent: Tuesday, July 18, 2006 5:52 AM > Subject: Re: Do you know the reason? > > >> Did your Linux::AIO or IO::AIO successfully "make test" or did you just >> blindly "make install" and hope for the best? :) >> >> >> On Tue, 18 Jul 2006, James Zheng wrote: >> >>> I got the error when i try to save a file. Do you know the reason? >>> >>> Jul 18 05:45:14 dev perlbal[7562]: system error: Internal error (error = >>> , >>> path = /var/mogdata//dev1/0/000/000, file = 0000000001.fid) >>> Jul 18 05:46:09 dev perlbal[7562]: system error: Internal error (error = >>> , >>> path = /var/mogdata//dev1/0/000/000, file = 0000000002.fid) >>> >>> # ll /var/mogdata//dev1/ >>> total 4 >>> -rw-rw-rw- 1 mogile mogile 129 Jul 18 04:55 usage >>> >>> >>> Thanks. >>> >>> > From jaybuffington at gmail.com Mon Jul 17 22:46:54 2006 From: jaybuffington at gmail.com (Jay Buffington) Date: Mon Jul 17 22:41:30 2006 Subject: Do you know the reason? 
In-Reply-To: <032801c6a9f0$8077fc30$0801a8c0@james> References: <032801c6a9f0$8077fc30$0801a8c0@james> Message-ID: You could try IO::AIO, but you'll most likely get the same make test failures. My guess is this is a threading bug in your OS. Old RedHat versions have really buggy thread support (You're using Fedora?). I'm not sure what the latest is like. You probably need to upgrade your OS. Jay On 7/17/06, James Zheng wrote: > Would you get some suggestion for this again? > > Thanks. From jameswork66 at gmail.com Tue Jul 18 10:09:04 2006 From: jameswork66 at gmail.com (James Zheng) Date: Tue Jul 18 10:03:38 2006 Subject: Do you know the reason? References: <032801c6a9f0$8077fc30$0801a8c0@james> Message-ID: <042901c6aa52$3713c300$0801a8c0@james> Thanks. I have used the latest version, but it still doesn't work. And I used http://ftp.belnet.be/linux/SuSe/people/mason/utils/ to test the system; it works well. uname -r 2.6.15-1.2054_FC5smp wget http://ftp.belnet.be/linux/SuSe/people/mason/utils/* # gcc -Wall -laio -lpthread -o aio-stress aio-stress.c #./aio-stress -s 300 -m -S -l -L -t 10 file1 dropping thread count to the number of contexts 1 file size 300MB, record size 64KB, depth 64, ios per iteration 8 max io_submit 8, buffer alignment set to 4KB threads 1 files 1 contexts 1 context offset 2MB verification off Running single thread version latency min 1.95 avg 14.38 max 234.36 598 < 100 2 < 250 0 < 500 0 < 1000 0 < 5000 0 < 10000 completion latency min 0.17 avg 32.09 max 326.71 4464 < 100 232 < 250 40 < 500 0 < 1000 0 < 5000 0 < 10000 write on file1 (33.59 MB/s) 300.00 MB in 8.93s thread 0 write totals (32.32 MB/s) 300.00 MB in 9.28s latency min 5.83 avg 11.49 max 81.89 600 < 100 0 < 250 0 < 500 0 < 1000 0 < 5000 0 < 10000 completion latency min 0.17 avg 43.18 max 296.57 4640 < 100 144 < 250 16 < 500 0 < 1000 0 < 5000 0 < 10000 read on file1 (40.84 MB/s) 300.00 MB in 7.34s thread 0 read totals (40.77 MB/s) 300.00 MB in 7.36s latency min 0.77 avg 11.11 max 369.76
595 < 100 4 < 250 1 < 500 0 < 1000 0 < 5000 0 < 10000 completion latency min 0.61 avg 28.28 max 424.04 4570 < 100 222 < 250 8 < 500 0 < 1000 0 < 5000 0 < 10000 random write on file1 (42.88 MB/s) 300.00 MB in 7.00s thread 0 random write totals (39.38 MB/s) 300.00 MB in 7.62s latency min 2.27 avg 15.44 max 188.48 597 < 100 3 < 250 0 < 500 0 < 1000 0 < 5000 0 < 10000 completion latency min 0.16 avg 57.28 max 331.30 3954 < 100 822 < 250 24 < 500 0 < 1000 0 < 5000 0 < 10000 random read on file1 (30.87 MB/s) 300.00 MB in 9.72s thread 0 random read totals (30.83 MB/s) 300.00 MB in 9.73s ----- Original Message ----- From: "Jay Buffington" To: Sent: Tuesday, July 18, 2006 6:46 AM Subject: Re: Do you know the reason? > You could try IO::AIO, but you'll most likely get the same make test > failures. > > My guess is this is a threading bug in your OS. Old RedHat versions > have really buggy thread support (You're using Fedora?). I'm not > sure what the latest is like. > > You probably need to upgrade your OS. > > Jay > > On 7/17/06, James Zheng wrote: >> Would you get some suggestion for this again? >> >> Thanks. From jameswork66 at gmail.com Tue Jul 18 10:09:16 2006 From: jameswork66 at gmail.com (James Zheng) Date: Tue Jul 18 10:03:51 2006 Subject: Do you know the reason? References: <032801c6a9f0$8077fc30$0801a8c0@james> <2683.68.20.5.141.1153206980.squirrel@mail01.livetext.com> Message-ID: <042a01c6aa52$3d6c0f50$0801a8c0@james> Thanks; it seems mogstored can't work without Linux::AIO. ----- Original Message ----- From: To: "James Zheng" Sent: Tuesday, July 18, 2006 3:16 PM Subject: Re: Do you know the reason? > Uninstall Linux::AIO and install IO::AIO instead. > I'd got the same error in Fedora Core 3. FYI, Debian works fine with both > modules. > > On Mon, July 17, 2006 5:29 pm, James Zheng wrote: >> Would you get some suggestion for this again? >> >> Thanks.
>> >> >> [root@FC2 Linux-AIO-1.9]# make test >> PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-e" >> "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t >> t/00_load......ok t/01_stat......ok t/02_read......ok t/03_errors....NOK >> 1# >> Failed test 1 in t/03_errors.t at line 21 >> # t/03_errors.t line 21 is: ok($_[0] < 0 && $! == ENOENT); >> t/03_errors....NOK 7# Failed test 7 in t/03_errors.t at line 40 # >> t/03_errors.t line 40 is: ok($! == ENOENT); t/03_errors....NOK 10# >> Failed test 10 in t/03_errors.t at line 49 >> # t/03_errors.t line 49 is: ok($! == EBADF); >> t/03_errors....FAILED tests 1, 7, 10 Failed 3/10 tests, 70.00% okay >> Failed Test Stat Wstat Total Fail Failed List of Failed >> -------------------------------------------------------------------------- >> ----- >> t/03_errors.t 10 3 30.00% 1 7 10 Failed 1/4 test >> scripts, 75.00% okay. 3/26 subtests failed, 88.46% okay. make: *** >> [test_dynamic] Error 255 >> >> >> >> [root@FC2 Linux-AIO-1.9]# cat t/03_errors.t >> #!/usr/bin/perl >> >> >> use Fcntl; use Test; use POSIX qw(ENOENT EACCES EBADF); use FindBin; use > lib >> "$FindBin::Bin"; >> use aio_test_common; >> >> BEGIN { plan tests => 10 } >> >> >> Linux::AIO::min_parallel 2; >> >> >> my $tempdir = tempdir(); >> >> my $some_dir = "$tempdir/some_dir/"; my $some_file = >> "$some_dir/some_file"; >> >> >> # create a file in a non-existent directory >> aio_open $some_file, O_RDWR|O_CREAT|O_TRUNC, 0, sub { ok($_[0] < 0 && $! >> == >> ENOENT); >> }; >> pcb; >> >> # now actually make that file >> ok(mkdir $some_dir); aio_open $some_file, O_RDWR|O_CREAT|O_TRUNC, 0644, >> sub >> { >> my $fd = shift; ok($fd > 0); ok(open (FH, ">&$fd")); print FH >> "contents."; >> close FH; ok(-e $some_file); }; >> pcb; >> >> # test error on unlinking non-empty directory >> aio_unlink "$some_dir/notfound.txt", sub { ok($_[0] < 0); ok($! 
== >> ENOENT); >> }; >> pcb; >> >> # write to file open for reading >> ok(open(F, $some_file)) or die $!; aio_write *F, 0, 10, "foobarbaz.", 0, >> sub { my $written = shift; ok($written < 0); ok($! == EBADF); }; >> >> >> >> >> ----- Original Message ----- >> From: "James Zheng" >> To: "Brad Fitzpatrick" >> Cc: >> Sent: Tuesday, July 18, 2006 6:04 AM >> Subject: Re: Do you know the reason? >> >> >> >>>> Did your Linux::AIO or IO::AIO successfully "make test" or did you >>>> just blindly "make install" and hope for the best? :) >>> >>> Thanks, there is some problem, i skiped it. >>> >>> >>> >>> >>> ----- Original Message ----- >>> From: "Brad Fitzpatrick" >>> To: "James Zheng" >>> Cc: >>> Sent: Tuesday, July 18, 2006 5:52 AM >>> Subject: Re: Do you know the reason? >>> >>> >>> >>>> Did your Linux::AIO or IO::AIO successfully "make test" or did you >>>> just blindly "make install" and hope for the best? :) >>>> >>>> >>>> On Tue, 18 Jul 2006, James Zheng wrote: >>>> >>>> >>>>> I got the error when i try to save a file. Do you know the reason? >>>>> >>>>> >>>>> Jul 18 05:45:14 dev perlbal[7562]: system error: Internal error >>>>> (error = >>>>> , >>>>> path = /var/mogdata//dev1/0/000/000, file = 0000000001.fid) Jul 18 >>>>> 05:46:09 dev perlbal[7562]: system error: Internal error (error = >>>>> , >>>>> path = /var/mogdata//dev1/0/000/000, file = 0000000002.fid) >>>>> >>>>> # ll /var/mogdata//dev1/ >>>>> total 4 -rw-rw-rw- 1 mogile mogile 129 Jul 18 04:55 usage >>>>> >>>>> >>>>> >>>>> Thanks. >>>>> >>>>> >>>>> >>> >> >> > From gareth at asmallworld.net Tue Jul 18 10:15:39 2006 From: gareth at asmallworld.net (Gareth Thomas) Date: Tue Jul 18 10:10:09 2006 Subject: mogstored memory leak Message-ID: <1adcaeb80607180315w57320d17jd2d4e36937f5f2e7@mail.gmail.com> Has anyone else experienced a memory leak with the mogstored daemon?? We are running on a dev server right now (luckily) but it sucked down so much memory mysql died. 
We were running 4 daemons and it looks like they were eating around 8k per second. We are running the latest server release on Redhat ES3 Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.danga.com/pipermail/mogilefs/attachments/20060718/26169b9e/attachment.html From brad at danga.com Tue Jul 18 13:20:14 2006 From: brad at danga.com (Brad Fitzpatrick) Date: Tue Jul 18 13:20:22 2006 Subject: mogstored memory leak In-Reply-To: <1adcaeb80607180315w57320d17jd2d4e36937f5f2e7@mail.gmail.com> References: <1adcaeb80607180315w57320d17jd2d4e36937f5f2e7@mail.gmail.com> Message-ID: That's disturbing. I've never seen that, otherwise it would've bit us a long time ago. Are you leaking file descriptors? lsof -p Also, why you running 4 mogstoreds on a single box? One is good enough. On Tue, 18 Jul 2006, Gareth Thomas wrote: > Has anyone else experienced a memory leak with the mogstored daemon?? We are > running on a dev server right now (luckily) but it sucked down so much > memory mysql died. We were running 4 daemons and it looks like they were > eating around 8k per second. > > We are running the latest server release on Redhat ES3 > > Thanks. > From gareth at asmallworld.net Tue Jul 18 14:05:53 2006 From: gareth at asmallworld.net (Gareth Thomas) Date: Tue Jul 18 14:00:20 2006 Subject: mogstored memory leak In-Reply-To: References: <1adcaeb80607180315w57320d17jd2d4e36937f5f2e7@mail.gmail.com> Message-ID: <1adcaeb80607180705y596fc6feg7fccb4466bf57ee8@mail.gmail.com> Brad, we are starting the mogstored like this: /usr/bin/mogstored --daemon and then there are 4 processes. One father and 3 child processes: root 29935 0.1 12.9 271468 266412 ? S Jul17 2:28 /usr/bin/perl -w /usr/bin/mogstored --daemon root 29936 0.0 12.9 271468 266412 ? S Jul17 0:00 \_ /usr/bin/perl -w /usr/bin/mogstored --daemon root 29937 0.0 12.9 271468 266412 ? S Jul17 0:00 \_ /usr/bin/perl -w /usr/bin/mogstored --daemon root 29938 0.0 12.9 271468 266412 ? 
S Jul17 0:00 \_ /usr/bin/perl -w /usr/bin/mogstored --daemon as for the file descriptors - it's not leaking file descriptors: lsof -p 29935|wc -l 41 Only 41 open descriptors. On 7/18/06, Brad Fitzpatrick wrote: > > That's disturbing. I've never seen that, otherwise it would've bit us a > long time ago. > > Are you leaking file descriptors? lsof -p > > Also, why you running 4 mogstoreds on a single box? One is good enough. > > > On Tue, 18 Jul 2006, Gareth Thomas wrote: > > > Has anyone else experienced a memory leak with the mogstored daemon?? We > are > > running on a dev server right now (luckily) but it sucked down so much > > memory mysql died. We were running 4 daemons and it looks like they were > > eating around 8k per second. > > > > We are running the latest server release on Redhat ES3 > > > > Thanks. > > > -- Gareth Thomas Cell: 44 (0)7910598004 MSIM: quattrofan@hotmail.com Skype: garethdthomas -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.danga.com/pipermail/mogilefs/attachments/20060718/d09ba57f/attachment.htm From komtanoo.pinpimai at livetext.com Tue Jul 18 14:39:17 2006 From: komtanoo.pinpimai at livetext.com (komtanoo.pinpimai@livetext.com) Date: Tue Jul 18 14:34:07 2006 Subject: mogstored memory leak In-Reply-To: <1adcaeb80607180315w57320d17jd2d4e36937f5f2e7@mail.gmail.com> References: <1adcaeb80607180315w57320d17jd2d4e36937f5f2e7@mail.gmail.com> Message-ID: <2133.192.168.2.147.1153233557.squirrel@mail01.livetext.com> Yes, it does leak a little bit, about 50M per day, so I wrote a script to automatically restart it. I'm using the cvs version on Debian. The mogilefsd seems to leak more than mogstored. My system is 3 nodes of Debian sarge on 100M VMware without virtual memory. Most of the time, they did nothing but idle and could survive only 1 day before being killed by the OS. So I gave 1G of virtual memory to each node to keep them working for 2 weeks before writing a script to restart them every night.
-kem On Tue, July 18, 2006 5:15 am, Gareth Thomas wrote: > Has anyone else experienced a memory leak with the mogstored daemon?? We > are running on a dev server right now (luckily) but it sucked down so much > memory mysql died. We were running 4 daemons and it looks like they were > eating around 8k per second. > > We are running the latest server release on Redhat ES3 > > > Thanks. > > From jameswork66 at gmail.com Tue Jul 18 15:58:26 2006 From: jameswork66 at gmail.com (James Zheng) Date: Tue Jul 18 15:53:00 2006 Subject: Do you know the reason? Message-ID: <00a501c6aa83$068699c0$0801a8c0@james> >> Uninstall Linux::AIO and install IO::AIO instead. >> I'd got the same error in Fedora Core 3. FYI, Debian works fine with both >> modules. Thanks for your help. My conclusion is that it doesn't work on Red Hat Linux (I tested all versions), but it works well on Debian. ----- Original Message ----- From: "James Zheng" To: Cc: Sent: Tuesday, July 18, 2006 6:09 PM Subject: Re: Do you know the reason? > Thanks; it seems mogstored can't work without Linux::AIO. > > > ----- Original Message ----- > From: > To: "James Zheng" > Sent: Tuesday, July 18, 2006 3:16 PM > Subject: Re: Do you know the reason? > > >> Uninstall Linux::AIO and install IO::AIO instead. >> I'd got the same error in Fedora Core 3. FYI, Debian works fine with both >> modules. >> >> On Mon, July 17, 2006 5:29 pm, James Zheng wrote: >>> Would you get some suggestion for this again? >>> >>> >>> Thanks. >>> >>> >>> [root@FC2 Linux-AIO-1.9]# make test >>> PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-e" >>> "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t >>> t/00_load......ok t/01_stat......ok t/02_read......ok t/03_errors....NOK >>> 1# >>> Failed test 1 in t/03_errors.t at line 21 >>> # t/03_errors.t line 21 is: ok($_[0] < 0 && $! == ENOENT); >>> t/03_errors....NOK 7# Failed test 7 in t/03_errors.t at line 40 # >>> t/03_errors.t line 40 is: ok($!
== ENOENT); t/03_errors....NOK 10# >>> Failed test 10 in t/03_errors.t at line 49 >>> # t/03_errors.t line 49 is: ok($! == EBADF); >>> t/03_errors....FAILED tests 1, 7, 10 Failed 3/10 tests, 70.00% okay >>> Failed Test Stat Wstat Total Fail Failed List of Failed >>> -------------------------------------------------------------------------- >>> ----- >>> t/03_errors.t 10 3 30.00% 1 7 10 Failed 1/4 test >>> scripts, 75.00% okay. 3/26 subtests failed, 88.46% okay. make: *** >>> [test_dynamic] Error 255 >>> >>> >>> >>> [root@FC2 Linux-AIO-1.9]# cat t/03_errors.t >>> #!/usr/bin/perl >>> >>> >>> use Fcntl; use Test; use POSIX qw(ENOENT EACCES EBADF); use FindBin; use >> lib >>> "$FindBin::Bin"; >>> use aio_test_common; >>> >>> BEGIN { plan tests => 10 } >>> >>> >>> Linux::AIO::min_parallel 2; >>> >>> >>> my $tempdir = tempdir(); >>> >>> my $some_dir = "$tempdir/some_dir/"; my $some_file = >>> "$some_dir/some_file"; >>> >>> >>> # create a file in a non-existent directory >>> aio_open $some_file, O_RDWR|O_CREAT|O_TRUNC, 0, sub { ok($_[0] < 0 && $! >>> == >>> ENOENT); >>> }; >>> pcb; >>> >>> # now actually make that file >>> ok(mkdir $some_dir); aio_open $some_file, O_RDWR|O_CREAT|O_TRUNC, 0644, >>> sub >>> { >>> my $fd = shift; ok($fd > 0); ok(open (FH, ">&$fd")); print FH >>> "contents."; >>> close FH; ok(-e $some_file); }; >>> pcb; >>> >>> # test error on unlinking non-empty directory >>> aio_unlink "$some_dir/notfound.txt", sub { ok($_[0] < 0); ok($! == >>> ENOENT); >>> }; >>> pcb; >>> >>> # write to file open for reading >>> ok(open(F, $some_file)) or die $!; aio_write *F, 0, 10, "foobarbaz.", 0, >>> sub { my $written = shift; ok($written < 0); ok($! == EBADF); }; >>> >>> >>> >>> >>> ----- Original Message ----- >>> From: "James Zheng" >>> To: "Brad Fitzpatrick" >>> Cc: >>> Sent: Tuesday, July 18, 2006 6:04 AM >>> Subject: Re: Do you know the reason? 
>>> >>> >>> >>>>> Did your Linux::AIO or IO::AIO successfully "make test" or did you >>>>> just blindly "make install" and hope for the best? :) >>>> >>>> Thanks, there is some problem, i skiped it. >>>> >>>> >>>> >>>> >>>> ----- Original Message ----- >>>> From: "Brad Fitzpatrick" >>>> To: "James Zheng" >>>> Cc: >>>> Sent: Tuesday, July 18, 2006 5:52 AM >>>> Subject: Re: Do you know the reason? >>>> >>>> >>>> >>>>> Did your Linux::AIO or IO::AIO successfully "make test" or did you >>>>> just blindly "make install" and hope for the best? :) >>>>> >>>>> >>>>> On Tue, 18 Jul 2006, James Zheng wrote: >>>>> >>>>> >>>>>> I got the error when i try to save a file. Do you know the reason? >>>>>> >>>>>> >>>>>> Jul 18 05:45:14 dev perlbal[7562]: system error: Internal error >>>>>> (error = >>>>>> , >>>>>> path = /var/mogdata//dev1/0/000/000, file = 0000000001.fid) Jul 18 >>>>>> 05:46:09 dev perlbal[7562]: system error: Internal error (error = >>>>>> , >>>>>> path = /var/mogdata//dev1/0/000/000, file = 0000000002.fid) >>>>>> >>>>>> # ll /var/mogdata//dev1/ >>>>>> total 4 -rw-rw-rw- 1 mogile mogile 129 Jul 18 04:55 usage >>>>>> >>>>>> >>>>>> >>>>>> Thanks. >>>>>> >>>>>> >>>>>> >>>> >>> >>> >> > From komtanoo.pinpimai at livetext.com Thu Jul 20 23:51:17 2006 From: komtanoo.pinpimai at livetext.com (komtanoo.pinpimai@livetext.com) Date: Thu Jul 20 23:45:31 2006 Subject: mysql 5.1 cluster and mogilefs Message-ID: <1688.68.79.200.24.1153439477.squirrel@mail01.livetext.com> Hi, MogileFS works with Master-Slave replication right ? I wonder if it works with Mysql 5.1 Cluster, http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster.html, which claimed to solve many concurrency issues. -thanks kem From peng at dig-tech.com Sat Jul 22 18:39:11 2006 From: peng at dig-tech.com (Jeff Peng) Date: Sat Jul 22 18:39:22 2006 Subject: How to init the tracker? 
Message-ID: Hello, list. I'm sorry to ask this newbie question. When I installed and ran 'mogilefsd' and 'mogstored', they seemed to run normally (before this I had set up the database and created config files for these daemons). But how do I initialize the trackers? I installed the mogadm tool, but when I execute it I got: mogadm --trackers=192.168.201.102:7001 device add Can't locate object method "create_device" via package "MogileFS::Admin" at /usr/bin/mogadm line 432. Could you kindly help me? Thank you. (btw: I searched MogileFS.pm but can't find the 'create_device' method. If I don't use mogadm, could anyone tell me how to add/del devices or hosts to the trackers by hand?) From feihu_roger at yahoo.com.cn Sat Jul 22 18:58:25 2006 From: feihu_roger at yahoo.com.cn (feihu_roger) Date: Sat Jul 22 18:58:33 2006 Subject: How to init the tracker? In-Reply-To: References: Message-ID: <20060723025814.C4AA.FEIHU_ROGER@yahoo.com.cn> see http://durrett.net/mogilefs_setup.html __________________________________________________ Register now for Yahoo's extra-large-capacity free mailbox! http://cn.mail.yahoo.com From peng at dig-tech.com Sun Jul 23 05:32:57 2006 From: peng at dig-tech.com (Jeff Peng) Date: Sun Jul 23 05:33:07 2006 Subject: How to init the tracker? In-Reply-To: <20060723025814.C4AA.FEIHU_ROGER@yahoo.com.cn> Message-ID: Thank you a lot, it's very valuable to me. :-) Another question: when I run mogilefsd in the foreground, I get lots of errors on the screen: [monitor(8885)] Port 7500 not listening on otherwise-alive machine 192.168.201.102? Error was: 404 Not Found [monitor(8885)] Port 7500 not listening on otherwise-alive machine 192.168.201.102? Error was: 404 Not Found [monitor(8885)] Port 7500 not listening on otherwise-alive machine 192.168.201.102? Error was: 404 Not Found [monitor(8885)] Port 7500 not listening on otherwise-alive machine 192.168.201.102? Error was: 404 Not Found [monitor(8885)] Port 7500 not listening on otherwise-alive machine 192.168.201.102?
Error was: 404 Not Found [monitor(8885)] Port 7500 not listening on otherwise-alive machine 192.168.201.102? Error was: 404 Not Found [monitor(8885)] Port 7500 not listening on otherwise-alive machine 192.168.201.102? Error was: 404 Not Found [monitor(8885)] Port 7500 not listening on otherwise-alive machine 192.168.201.102? Error was: 404 Not Found But mogstored is indeed running on the host '192.168.201.102:7500'. Could you tell me why this error appeared? >From: feihu_roger >To: mogilefs@lists.danga.com >Subject: Re: How to init the tracker? >Date: Sun, 23 Jul 2006 02:58:25 +0800 > >see http://durrett.net/mogilefs_setup.html > > >__________________________________________________ >Register now for Yahoo's extra-large-capacity free mailbox! >http://cn.mail.yahoo.com From peng at dig-tech.com Sun Jul 23 11:26:39 2006 From: peng at dig-tech.com (Jeff Peng) Date: Sun Jul 23 11:26:49 2006 Subject: How to init the tracker? In-Reply-To: Message-ID: Sorry, I have resolved this problem. I just forgot to create the 'dev1' (device name) directory under '/var/mogdata'. >From: "Jeff Peng" >To: mogilefs@lists.danga.com >Subject: Re: How to init the tracker? >Date: Sun, 23 Jul 2006 05:32:57 +0000 > >Thank you a lot, it's very valuable to me. :-) > >Another question: when I run mogilefsd in the foreground, I get lots of errors on >the screen: > >[monitor(8885)] Port 7500 not listening on otherwise-alive machine >192.168.201.102? Error was: 404 Not Found >[monitor(8885)] Port 7500 not listening on otherwise-alive machine >192.168.201.102? Error was: 404 Not Found >[monitor(8885)] Port 7500 not listening on otherwise-alive machine >192.168.201.102? Error was: 404 Not Found >[monitor(8885)] Port 7500 not listening on otherwise-alive machine >192.168.201.102? Error was: 404 Not Found >[monitor(8885)] Port 7500 not listening on otherwise-alive machine >192.168.201.102? Error was: 404 Not Found >[monitor(8885)] Port 7500 not listening on otherwise-alive machine >192.168.201.102?
Error was: 404 Not Found >[monitor(8885)] Port 7500 not listening on otherwise-alive machine >192.168.201.102? Error was: 404 Not Found >[monitor(8885)] Port 7500 not listening on otherwise-alive machine >192.168.201.102? Error was: 404 Not Found > > >But mogstored is indeed running on the host '192.168.201.102:7500'. Could you tell >me why this error appeared? > > >>From: feihu_roger >>To: mogilefs@lists.danga.com >>Subject: Re: How to init the tracker? >>Date: Sun, 23 Jul 2006 02:58:25 +0800 >> >>see http://durrett.net/mogilefs_setup.html >> >> >>__________________________________________________ >>Register now for Yahoo's extra-large-capacity free mailbox! >>http://cn.mail.yahoo.com > > From komtanoo.pinpimai at livetext.com Wed Jul 26 23:37:50 2006 From: komtanoo.pinpimai at livetext.com (komtanoo.pinpimai@livetext.com) Date: Wed Jul 26 23:38:36 2006 Subject: Large file problem/Split file Message-ID: <1702.192.168.2.148.1153957070.squirrel@mail01.livetext.com> Hi, I got "out of memory!" when inserting files of hundreds of megabytes into MogileFS, so I googled for a while and found the issue had been discussed: http://lists.danga.com/pipermail/mogilefs/2005-October/000199.html. There are many files in my system that are between 50M and 700M, so I'm thinking about splitting each of them into parts of at most 5M, to avoid injecting/replicating problems. My system has Perlbal in the front end, mod_perl server at the back for getting_path of files and reproxying to mogstored via Perlbal. My question is, if I split those files into many parts, do I have to write a special webserver to assemble those files before sending them to Perlbal, so Perlbal has to reproxy to my special webserver that also does the assembly instead of redirecting to mogstored as for small files? Or does mogstored support assembling files? Sounds confusing?... How do you solve this problem?
thanks -kem From eml at guba.com Wed Jul 26 23:48:12 2006 From: eml at guba.com (Eric Lambrecht) Date: Wed Jul 26 23:48:28 2006 Subject: Large file problem/Split file In-Reply-To: <1702.192.168.2.148.1153957070.squirrel@mail01.livetext.com> References: <1702.192.168.2.148.1153957070.squirrel@mail01.livetext.com> Message-ID: <44C7FF3C.3020509@guba.com> komtanoo.pinpimai@livetext.com wrote: > I got "out of memory!" for inserting hundreds megabyte of files into > Mogilefs, so I googled for a while and found the issue had been discussed, > http://lists.danga.com/pipermail/mogilefs/2005-October/000199.html. Was mogstored dying on you or was it your client? > There are many files in my system that's between 50M-700M, so I'm thinking > about splitting each of them by the maximum of 5M, to avoid > injecting/replicating problems. My system has Perlbal in the front end, > mod_perl server at the back for getting_path of files and reproxying to > mogstored via Perlbal. My question is, if I split those files into many > parts, do I have to write a special webserver to assemble those files > before sending it to Perlbal, so Perlbal has to reproxy to my special > webserver that also does the assembly instead of redirecting to mogstored > as small files ? Or does mogstored supports assemblying files. Mogstored currently doesn't support assembling files so, yeah, perlbal will have to reproxy to your special webserver that does the assembly. > Sounds confusing ?... How do you solve this problem ? Since the patch you reference in your email, we've had no problems storing or serving files up to 2GB in size. You just have to watch out for some of the perl client API functions that want to read the entire file into memory - they'll kill your client pretty quickly. Eric... 
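The split-and-reassemble approach kem asks about above needs no special MogileFS support on the storage side: each part can be stored under its own key, and the reader concatenates the parts in order. A minimal sketch of the mechanics, with illustrative paths and a dummy test file (only the 5 MB part size comes from the thread; everything else here is made up for demonstration):

```shell
# Sketch only: split a large file into <= 5 MB parts, then verify that
# concatenating the parts in lexical order reproduces the original bytes.
# In a real deployment each part would be injected as its own MogileFS key
# (e.g. "mykey;part-aa" -- a hypothetical naming scheme) and streamed back
# in the same order by the assembling webserver.
dd if=/dev/urandom of=/tmp/bigfile bs=1048576 count=12 2>/dev/null  # 12 MB test file
split -b 5m /tmp/bigfile /tmp/bigfile.part.        # -> part.aa, part.ab, part.ac
cat /tmp/bigfile.part.* > /tmp/bigfile.rejoined    # lexical glob order == byte order
cmp -s /tmp/bigfile /tmp/bigfile.rejoined && echo "parts reassemble cleanly"
```

The key design point is that `split` names its output so the shell glob returns the parts in the original order, so assembly is a plain concatenation with no index needed beyond the part names.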
From komtanoo.pinpimai at livetext.com Thu Jul 27 00:22:34 2006 From: komtanoo.pinpimai at livetext.com (komtanoo.pinpimai@livetext.com) Date: Thu Jul 27 00:23:21 2006 Subject: Large file problem/Split file In-Reply-To: <44C7FF3C.3020509@guba.com> References: <1702.192.168.2.148.1153957070.squirrel@mail01.livetext.com> <44C7FF3C.3020509@guba.com> Message-ID: <1163.68.20.5.166.1153959754.squirrel@mail01.livetext.com> On Wed, July 26, 2006 6:48 pm, Eric Lambrecht wrote: > komtanoo.pinpimai@livetext.com wrote: >> I got "out of memory!" for inserting hundreds megabyte of files into >> Mogilefs, so I googled for a while and found the issue had been >> discussed, >> http://lists.danga.com/pipermail/mogilefs/2005-October/000199.html. >> > > Was mogstored dying on you or was it your client? > It's mogstored, though the OS killed it. I overlooked that Brad had already provided the patch. I'll try it tomorrow; if it works, that's great, and I won't have to bother splitting files. -thanks kem From komtanoo.pinpimai at livetext.com Thu Jul 27 00:43:50 2006 From: komtanoo.pinpimai at livetext.com (komtanoo.pinpimai@livetext.com) Date: Thu Jul 27 00:44:31 2006 Subject: mogilefs vs SAN Message-ID: <1181.68.20.5.166.1153961030.squirrel@mail01.livetext.com> Hi, Our company website has a section allowing users to upload/download files, and now it's growing faster than our NFS fileserver can scale (we also need to do annoying backups). Most of us agree on using MogileFS/Perlbal/Memcached to solve the problem since it's built for image/file uploading sites. However somebody brought up an alternative, which is a SAN, claiming that the price has dropped and that it is more stable. We are going to have a meeting tomorrow to compare pros/cons of them. Unfortunately, I have only a very slight idea of what a SAN is, but my instinct told me to avoid it, so, does anybody have experience with SANs? What are the benefits of MogileFS over SAN and SAN over MogileFS ?
-kem From garth at sixapart.com Thu Jul 27 01:14:09 2006 From: garth at sixapart.com (Garth Webb) Date: Thu Jul 27 01:14:16 2006 Subject: mogilefs vs SAN In-Reply-To: <1181.68.20.5.166.1153961030.squirrel@mail01.livetext.com> References: <1181.68.20.5.166.1153961030.squirrel@mail01.livetext.com> Message-ID: <1153962849.3659.36.camel@localhost.localdomain> A SAN and MogileFS will both scale your performance. However, a SAN won't scale your reliability as well as MogileFS does. The SAN is a single point of failure; if it goes down, your data is lost, or at least the data added since the last backup is. To make the SAN more reliable you need to buy a more expensive SAN. Even then, I've seen a SAN costing several hundred thousand dollars, with RAID 0+1, die and lose data because its disk controller failed. With MogileFS, if a machine dies, there is no interruption of service and no data is lost. To scale MogileFS you just add more inexpensive machines. There is no single point of failure in MogileFS. Garth On Wed, 2006-07-26 at 19:43 -0500, komtanoo.pinpimai@livetext.com wrote: > Hi, > > Our company website has a section allowing users to upload/download files, > now it's growing faster than our NFS fileserver can scale(we also need to > do annoying backups). Most of us agree on using > MogileFS/Perlbal/Memcached to solve the problem since it's built for > image/file uploading sites. However somebody brought up an alternative, > which is SAN, claiming that the price has dropped and more stable. We are > going to have a meeting tomorrow to compare pros/cons of them. > Unfortunately, I have only a very slightly idea of SAN, but my instinct > told me to avoid it, so, does anybody have experience on SAN ? What are > the benefits of MogileFS over SAN and SAN over MogileFS ? 
> > -kem From feihu_roger at yahoo.com.cn Thu Jul 27 03:04:40 2006 From: feihu_roger at yahoo.com.cn (feihu_roger) Date: Thu Jul 27 03:04:46 2006 Subject: mogilefs vs SAN In-Reply-To: <1153962849.3659.36.camel@localhost.localdomain> References: <1181.68.20.5.166.1153961030.squirrel@mail01.livetext.com> <1153962849.3659.36.camel@localhost.localdomain> Message-ID: <20060727110348.2A10.FEIHU_ROGER@yahoo.com.cn> The MySQL DB of the MogileFS tracker is a single point of failure. > The SAN is a single point of failure; if it goes down your data is lost > or at least the data added since the last backup is lost. To make the > SAN more reliable you need to buy a more expensive SAN. Even then I've > seen a SAN costing several $100K, with 0+1 RAID die and lose data > because the disk controller failed. __________________________________________________ [Register now for Yahoo's extra-large-capacity free mailbox: http://cn.mail.yahoo.com] From peng at dig-tech.com Thu Jul 27 03:53:30 2006 From: peng at dig-tech.com (Jeff Peng) Date: Thu Jul 27 03:53:37 2006 Subject: mogilefs vs SAN In-Reply-To: <20060727110348.2A10.FEIHU_ROGER@yahoo.com.cn> Message-ID: Its website says you can use MySQL replication to avoid the single point of failure. --Jeff Peng http://pobox.com/~jeffpeng >From: feihu_roger >To: mogilefs@lists.danga.com >Subject: Re: mogilefs vs SAN >Date: Thu, 27 Jul 2006 11:04:40 +0800 > >The Mysql DB of mogilefs Tracker is a single point of failure. > > > The SAN is a single point of failure; it goes down your data is lost > > or at least the data added since the last backup is lost. To make the > > SAN more reliable you need to buy a more expensive SAN. Even then I've > > seen a SAN costing several $100K, with 0+1 RAID die and lose data > > because the disk controller failed. > > >__________________________________________________ >[Register now for Yahoo's extra-large-capacity free mailbox:] 
>http://cn.mail.yahoo.com From guppy at techmonkeys.org Thu Jul 27 04:11:06 2006 From: guppy at techmonkeys.org (Jeff Fisher) Date: Thu Jul 27 04:11:16 2006 Subject: mogilefs vs SAN In-Reply-To: <20060727110348.2A10.FEIHU_ROGER@yahoo.com.cn> References: <1181.68.20.5.166.1153961030.squirrel@mail01.livetext.com> <1153962849.3659.36.camel@localhost.localdomain> <20060727110348.2A10.FEIHU_ROGER@yahoo.com.cn> Message-ID: <44C83CDA.3070000@techmonkeys.org> feihu_roger wrote: > The Mysql DB of mogilefs Tracker is a single point of failure. There is always going to be a single point of failure no matter what you do; however, you can take steps to mitigate it. MySQL Cluster can help with this. Jeff From komtanoo.pinpimai at livetext.com Thu Jul 27 15:23:51 2006 From: komtanoo.pinpimai at livetext.com (komtanoo.pinpimai@livetext.com) Date: Thu Jul 27 15:24:41 2006 Subject: Large file problem/Split file / Update In-Reply-To: <1163.68.20.5.166.1153959754.squirrel@mail01.livetext.com> References: <1702.192.168.2.148.1153957070.squirrel@mail01.livetext.com> <44C7FF3C.3020509@guba.com> <1163.68.20.5.166.1153959754.squirrel@mail01.livetext.com> Message-ID: <2676.192.168.2.148.1154013831.squirrel@mail01.livetext.com> Ah, my Danga::Socket from CPAN already has the 5M limit on sysread and it still ran "out of memory", but when I changed the limit to 1M, the error went away. It must be an incompatibility of sysread among systems (mine are Sarge and FC5; they die with this error). -kem On Wed, July 26, 2006 7:22 pm, komtanoo.pinpimai@livetext.com wrote: > On Wed, July 26, 2006 6:48 pm, Eric Lambrecht wrote: > >> komtanoo.pinpimai@livetext.com wrote: >>> I got "out of memory!" for inserting hundreds megabyte of files into >>> Mogilefs, so I googled for a while and found the issue had been >>> discussed, >>> http://lists.danga.com/pipermail/mogilefs/2005-October/000199.html. >>> >>> >> >> Was mogstored dying on you or was it your client? 
>> >> > > It's mogstored, thought the OS killed it. I overlooked Brad had already > provided the patch, I'll try it tomorrow, if it works, that's great, I > won't have to bother splitting file. > > -thanks > kem > From dbcm at co.sapo.pt Fri Jul 28 09:57:59 2006 From: dbcm at co.sapo.pt (Delfim Machado) Date: Fri Jul 28 09:58:28 2006 Subject: some hacks from svn version Message-ID: Hi all, I'm testing MogileFS with big files. After the installation, I made these changes to avoid the "Out of memory" error and others. - In MogileFS/Worker/Reaper.pm:25, I added error to the qw import; run mogilefsd --debug and you will see lots of errors about the error function: use MogileFS::Util qw(every error); - In MogileFS/Worker/Query.pm:735, I changed the line to this one. It's ugly but works OK: $hid = ($dbh->selectrow_array('SELECT MAX(hostid) FROM host') || 0) + 1; I'm still trying to solve these issues: crash log: Can't call method "selectall_arrayref" on an undefined value at /usr/bin/mogilefsd line 643. my $domains = $dbh->selectall_arrayref('SELECT dmid, namespace FROM domain'); Child 13513 (queryworker) died: 0 (UNEXPECTED) Job queryworker has only 99, wants 100, making 1. crash log: send: Cannot determine peer address at /usr/local/share/perl/5.8.8/MogileFS/Worker/Replicate.pm line 353 [replicate(17136)] Error: wrote 131072; expected to write 1048576; failed putting to /dev1/0/000/003/0000003656.fid -- Delfim Machado SMTP: delfim.c.machado@co.sapo.pt XMPP: delfim.c.machado@sapo.pt SPAM: ******.*.*******@**.****.** -------------- next part -------------- An HTML attachment was scrubbed... URL: http://lists.danga.com/pipermail/mogilefs/attachments/20060728/02ad3010/attachment.htm From dbcm at co.sapo.pt Fri Jul 28 11:10:23 2006 From: dbcm at co.sapo.pt (Delfim Machado) Date: Fri Jul 28 11:10:38 2006 Subject: inject script for big(?) files Message-ID: <74FAA96C-89B4-4121-93A8-EC1EA24734BE@co.sapo.pt> Hi, this script lets you inject big files without splitting them. 
The out-of-memory problem is resolved by changing the 5MB block to an 8k block in Danga::Socket; I think this was already discussed here. Brad, when do you plan to release a new version? -- Delfim Machado SMTP: delfim.c.machado@co.sapo.pt XMPP: delfim.c.machado@sapo.pt SPAM: ******.*.*******@**.****.** -------------- next part -------------- Skipped content of type multipart/mixed From komtanoo.pinpimai at livetext.com Fri Jul 28 14:55:52 2006 From: komtanoo.pinpimai at livetext.com (komtanoo.pinpimai@livetext.com) Date: Fri Jul 28 14:56:21 2006 Subject: inject script for big(?) files In-Reply-To: <74FAA96C-89B4-4121-93A8-EC1EA24734BE@co.sapo.pt> References: <74FAA96C-89B4-4121-93A8-EC1EA24734BE@co.sapo.pt> Message-ID: <1876.192.168.2.147.1154098552.squirrel@mail01.livetext.com> Well, I've found out that it's not just a matter of changing the sysread in Danga::Socket to 8k; replication eats a lot of memory when facing 700M files and fails about 90% of the time. Here is what I needed to fix to get it working (MogileFS from CVS): 1. install the new perlbal from svn and patch it with the $self->{alive_time} = time; for PUT. 2. fix Danga::Socket. 3. quick-fix the http_copy subroutine in mogilefsd; it looks like it reads the entire content of a file into memory before writing it to another mogstored. On Fri, July 28, 2006 6:10 am, Delfim Machado wrote: > Hi, > this script lets you inject big files without splitting them. > > The out-of-memory problem is resolved by changing the 5MB block to an 8k > block in Danga::Socket; I think this was already discussed here. > > Brad, when do you plan to release a new version? > > > -- > Delfim Machado > > > SMTP: delfim.c.machado@co.sapo.pt > XMPP: delfim.c.machado@sapo.pt > > > SPAM: ******.*.*******@**.****.** > > > > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: mogilefsdpatch Type: application/octet-stream Size: 3196 bytes Desc: not available Url : http://lists.danga.com/pipermail/mogilefs/attachments/20060728/576d63a5/mogilefsdpatch.obj From andreas.koenig.gmwojprw at franz.ak.mind.de Mon Jul 31 16:24:16 2006 From: andreas.koenig.gmwojprw at franz.ak.mind.de (Andreas J. Koenig) Date: Mon Jul 31 16:24:26 2006 Subject: How large should a mogilefsd process become? Message-ID: <87u04x91zj.fsf@k75.linux.bogus> My mogilefsd processes on 4 machines are about to reach 1 GB in size. This seems a bit greedy to me and I'm worried that I've hit a memory leak somewhere. It's happening on a Debian box and I do not yet have an idea where to start investigating. I'd also be grateful for a suggestion for how to work around the growth without disturbing the boxes too much. Did I perhaps forget to configure some equivalent to MaxRequestsPerChild? :-) # ps auxww|grep mogilefsd www-data 3999 0.0 1.2 46376 40264 ? S May19 62:10 /usr/bin/perl /usr/bin/mogilefsd --daemon www-data 4008 0.1 5.6 979784 175232 ? S May19 121:36 /usr/bin/mogilefsd [replicate] www-data 4009 0.1 5.5 979012 174120 ? S May19 121:39 /usr/bin/mogilefsd [replicate] www-data 4010 0.1 5.6 979368 174576 ? S May19 121:41 /usr/bin/mogilefsd [replicate] www-data 4011 0.0 0.2 33608 8940 ? S May19 12:30 /usr/bin/mogilefsd [delete] www-data 4012 0.0 0.6 38756 19500 ? S May19 6:33 /usr/bin/mogilefsd [queryworker] www-data 4013 0.0 0.9 41908 28848 ? S May19 6:32 /usr/bin/mogilefsd [queryworker] www-data 4014 0.0 0.9 42332 29220 ? S May19 6:32 /usr/bin/mogilefsd [queryworker] www-data 4015 0.0 0.8 39816 26032 ? S May19 6:33 /usr/bin/mogilefsd [queryworker] www-data 4016 0.0 0.8 39512 25844 ? S May19 6:32 /usr/bin/mogilefsd [queryworker] www-data 4017 0.0 0.9 42108 28548 ? S May19 6:32 /usr/bin/mogilefsd [queryworker] www-data 4018 0.0 0.8 39248 25216 ? S May19 6:32 /usr/bin/mogilefsd [queryworker] www-data 4019 0.0 0.8 39276 25312 ? 
S May19 6:33 /usr/bin/mogilefsd [queryworker] www-data 4020 0.0 0.8 39328 25176 ? S May19 6:32 /usr/bin/mogilefsd [queryworker] www-data 4021 0.0 0.7 38024 23220 ? S May19 6:31 /usr/bin/mogilefsd [queryworker] www-data 4022 0.0 0.5 37988 16152 ? S May19 6:34 /usr/bin/mogilefsd [queryworker] www-data 4023 0.0 0.8 39120 25452 ? S May19 6:32 /usr/bin/mogilefsd [queryworker] www-data 4024 0.0 0.5 38804 18564 ? S May19 6:32 /usr/bin/mogilefsd [queryworker] www-data 4025 0.0 0.8 40016 25708 ? S May19 6:34 /usr/bin/mogilefsd [queryworker] www-data 4026 0.0 0.9 41952 29552 ? S May19 6:32 /usr/bin/mogilefsd [queryworker] www-data 4027 0.0 0.8 39500 25848 ? S May19 6:34 /usr/bin/mogilefsd [queryworker] www-data 4028 0.0 0.7 37920 23400 ? S May19 6:30 /usr/bin/mogilefsd [queryworker] www-data 4029 0.0 0.7 38240 23468 ? S May19 6:34 /usr/bin/mogilefsd [queryworker] www-data 4030 0.0 0.8 39248 25724 ? S May19 6:32 /usr/bin/mogilefsd [queryworker] www-data 4031 0.0 0.8 39376 25520 ? S May19 6:30 /usr/bin/mogilefsd [queryworker] www-data 4032 0.0 0.2 35772 8832 ? S May19 18:16 /usr/bin/mogilefsd [monitor] www-data 4006 0.1 5.5 977928 173016 ? S May19 121:42 /usr/bin/mogilefsd [replicate] www-data 4007 0.1 5.6 979600 175188 ? S May19 121:54 /usr/bin/mogilefsd [replicate] Thanks for any pointers, -- andreas From jaybuffington at gmail.com Mon Jul 31 17:12:21 2006 From: jaybuffington at gmail.com (Jay Buffington) Date: Mon Jul 31 17:12:29 2006 Subject: How large should a mogilefsd process become? In-Reply-To: <87u04x91zj.fsf@k75.linux.bogus> References: <87u04x91zj.fsf@k75.linux.bogus> Message-ID: You could use GTop and monitor the memory usage before and after the replicator does a task to determine where the leak is. This example is for mod_perl, but you could apply it to mogile: http://perl.apache.org/docs/1.0/guide/performance.html#Measuring_the_Memory_of_the_Process This only happens when you insert large files? Jay On 7/31/06, Andreas J. 
Koenig wrote: > My mogilefsd processes on 4 machines are about to reach 1 GB size. > > This seems a bit gready to me and I'm worried if I hit a memory leak > somewhere. > > It's happening on a Debian box and I do not yet have an idea where to > start investigating. I'd also be grateful for a suggestion how to work > around the growth without disturbing the boxes too much. > > Did I probably forget to configure some equivalent to > maxrequestperchild? :-) > > # ps auxww|grep mogilefsd > [ps output snipped] > > Thanks for any pointers, > -- > andreas > From komtanoo.pinpimai at livetext.com Mon Jul 31 17:15:57 2006 From: komtanoo.pinpimai at livetext.com (komtanoo.pinpimai@livetext.com) Date: Mon Jul 31 17:16:42 2006 Subject: How large should a mogilefsd process become? In-Reply-To: <87u04x91zj.fsf@k75.linux.bogus> References: <87u04x91zj.fsf@k75.linux.bogus> Message-ID: <2237.192.168.2.147.1154366157.squirrel@mail01.livetext.com> You can see that what really eats memory is the replicate processes. It happened to me when they replicate large files, i.e. if a replicator replicates a 500M file, the process eats 500M of memory. You might want to try this patch. -kem On Mon, July 31, 2006 11:24 am, Andreas J. Koenig wrote: > My mogilefsd processes on 4 machines are about to reach 1 GB size. > > > This seems a bit gready to me and I'm worried if I hit a memory leak > somewhere. > > It's happening on a Debian box and I do not yet have an idea where to > start investigating. I'd also be grateful for a suggestion how to work > around the growth without disturbing the boxes too much. > > Did I probably forget to configure some equivalent to > maxrequestperchild? :-) > > # ps auxww|grep mogilefsd > [ps output snipped] > > Thanks for any pointers, > -- > andreas > -------------- next part -------------- A non-text attachment was scrubbed... Name: mogilefsdpatch Type: application/octet-stream Size: 2858 bytes Desc: not available Url : http://lists.danga.com/pipermail/mogilefs/attachments/20060731/78bc530c/mogilefsdpatch.obj From junior at danga.com Mon Jul 31 18:59:03 2006 From: junior at danga.com (Mark Smith) Date: Mon Jul 31 19:05:52 2006 Subject: How large should a mogilefsd process become? In-Reply-To: <87u04x91zj.fsf@k75.linux.bogus> References: <87u04x91zj.fsf@k75.linux.bogus> Message-ID: <20060731185903.GA13061@plogs.net> > My mogilefsd processes on 4 machines are about to reach 1 GB size. Definitely not that large. This seems to be caused by using large files and the replicator not dealing with them correctly. I haven't looked at the patch emailed out in response to this issue, so I can't recommend it one way or the other... Anyway, MogileFS is undergoing a lot of change right now. I will add investigating replication of large files to my list, and shore up the process to make it work better. -- Mark Smith junior@danga.com From komtanoo.pinpimai at livetext.com Mon Jul 31 20:45:21 2006 From: komtanoo.pinpimai at livetext.com (komtanoo.pinpimai@livetext.com) Date: Mon Jul 31 20:45:34 2006 Subject: sharing query result Message-ID: <3132.192.168.2.147.1154378721.squirrel@mail01.livetext.com> Just another thought. 
I notice that the replicators on the same host could share query results with one another in order to reduce database polling, using some kind of IPC or ithreads. -kem From junior at danga.com Mon Jul 31 21:11:38 2006 From: junior at danga.com (Mark Smith) Date: Mon Jul 31 21:11:45 2006 Subject: sharing query result In-Reply-To: <3132.192.168.2.147.1154378721.squirrel@mail01.livetext.com> References: <3132.192.168.2.147.1154378721.squirrel@mail01.livetext.com> Message-ID: <20060731211138.GB13061@plogs.net> > Just another thought. I notice that the replicators on the same host could > share query results with one another in order to reduce database > polling, using some kind of IPC or ithreads. Realistically, yes. It'd be very nice to have some sort of replication manager that identifies files that need replicating and then passes that information down to the replication workers. Even better, set up the UDP transport between the trackers that we've been wanting since the beginning, and have them elect someone to 'master' the process and pass out tasks to everybody. The blocking factor will be the actual replication, not figuring out what needs replicating. This is actually more feasible given the architecture that we've got going now, but would still need some work... -- Mark Smith junior@danga.com
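The manager/worker split described above can be sketched roughly as a single scanner feeding a shared task queue, so the workers never poll the database themselves. This is an illustration of the design idea only, not MogileFS code: the queue-based layout, the function names, and the `find_underreplicated`/`replicate` callables are all assumptions.

```python
# Sketch of a replication manager: one manager finds files that need
# replicating and hands them to a pool of workers through a queue, so
# only the manager ever queries the database. Illustrative names only.
import queue
import threading

def manager(find_underreplicated, tasks, nworkers):
    """Scan once for under-replicated fids and enqueue them, then send
    one sentinel per worker to signal shutdown."""
    for fid in find_underreplicated():
        tasks.put(fid)
    for _ in range(nworkers):
        tasks.put(None)  # sentinel: no more work

def worker(tasks, replicate, done):
    """Pull fids off the queue and replicate them until the sentinel."""
    while True:
        fid = tasks.get()
        if fid is None:
            break
        replicate(fid)
        done.append(fid)  # list.append is atomic in CPython

def run(find_underreplicated, replicate, nworkers=4):
    tasks = queue.Queue()
    done = []
    workers = [threading.Thread(target=worker, args=(tasks, replicate, done))
               for _ in range(nworkers)]
    for w in workers:
        w.start()
    manager(find_underreplicated, tasks, nworkers)
    for w in workers:
        w.join()
    return done
```

Electing a master across trackers, as Mark suggests, would put the `manager` role on exactly one host and replace `queue.Queue` with the inter-tracker UDP transport, but the producer/consumer shape stays the same.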