From ngaugler@ngworld.net Sun Feb 1 04:58:03 2004
From: ngaugler@ngworld.net (Nick)
Date: Sat, 31 Jan 2004 23:58:03 -0500
Subject: Reservation of memory?
Message-ID: <000a01c3e87f$fa7d1d60$6601a8c0@potstn01.pa.comcast.net>

I have two memcacheds set up to use 512MB; one has been running for about
one month, the other for only a week.  But no matter how long they're
running, the stats command never shows them using more than ~370MB of
memory to store items.  Is this because memcached is reserving this
memory?  Or is this memory used for other purposes to handle the storage
of these items?  I show the memcacheds using just about all 512MB in the
operating system, but they never seem to get close to limit_maxbytes.
Each server has anywhere from 1.3 to 1.4 million items currently stored
in it, under curr_items.  Does that mean there's roughly 114 bytes to
store each item in the daemon?

Nick

STAT pid 776
STAT uptime 170166
STAT time 1075610968
STAT version 1.1.9
STAT rusage_user 690:410000
STAT rusage_system 853:650000
STAT curr_items 1316421
STAT total_items 7998033
STAT bytes 369595318
STAT curr_connections 321
STAT total_connections 247576
STAT connection_structures 734
STAT cmd_get 31288079
STAT cmd_set 7994951
STAT get_hits 28654473
STAT get_misses 2633606
STAT bytes_read 1774989064
STAT bytes_written 9845509166
STAT limit_maxbytes 536870912
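A quick way to sanity-check numbers like these is to pull the same counters
over a raw socket and do the division.  The sketch below is illustrative
only: it assumes a Perl client, IO::Socket::INET, and a server on
localhost:11211, none of which come from the message above.

    #!/usr/bin/perl
    # Hedged sketch: fetch "stats" from a memcached server and report the
    # fill ratio, hit rate, and average stored-item size.
    use strict;
    use IO::Socket::INET;

    my $sock = IO::Socket::INET->new(PeerAddr => 'localhost:11211')
        or die "connect: $!";
    print $sock "stats\r\n";

    my %stat;
    while (my $line = <$sock>) {
        $line =~ s/\r?\n$//;
        last if $line eq 'END';
        $stat{$1} = $2 if $line =~ /^STAT (\S+) (\S+)$/;
    }

    printf "fill:     %.1f%%  (bytes / limit_maxbytes)\n",
        100 * $stat{bytes} / $stat{limit_maxbytes};
    printf "hit rate: %.1f%%  (get_hits / cmd_get)\n",
        100 * $stat{get_hits} / $stat{cmd_get};
    printf "avg item: %d bytes  (bytes / curr_items)\n",
        $stat{bytes} / $stat{curr_items};

On the stats above this works out to roughly 69% of limit_maxbytes holding
item data and about 280 bytes of stored data per item; the gap between that
and the ~512MB the operating system reports is per-item and allocator
overhead, which the reply further down discusses.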
From brad@danga.com Tue Feb 3 18:58:36 2004
From: brad@danga.com (Brad Fitzpatrick)
Date: Tue, 3 Feb 2004 10:58:36 -0800 (PST)
Subject: embarrassing bug in perl module
Message-ID: 

I just found and fixed (in cvs) an embarrassing bug in Cache::Memcached
0.11 related to get_multi.  If a 2k read boundary came in the middle of
an item header, all subsequent items were ignored.  It wasn't always
like this, but we missed it during the later parser changes.  (a test
suite would be nice.... :/)

So your memcached server stats would report high hit rates, but the
Perl module would just throw stuff away.

This bug was a lot easier to hit with a large get_multi query with a
mix of large items and small items.  (large enough to roll over into
multiple 2k regions, but a lot of small items so the headers were
longer than the data and the boundary was likely to be within a
header....)

I'll be putting up a new release once Avva confirms my fix.  (which is
now live on LiveJournal and seems to be kicking ass...)

I like parens.  (too much coffee)

- Brad
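Below is a rough sketch of the kind of regression test the message above
wishes for: store a mix of values large enough to span several 2k read
buffers alongside many small ones, then check that get_multi hands every
one of them back.  The server address, key names, counts, and sizes are
illustrative assumptions, not anything taken from the thread.

    # Hedged get_multi regression sketch.  Against a client with the 2k
    # boundary bug, items after an unlucky boundary silently go missing;
    # against the fixed module everything should come back.
    use strict;
    use Cache::Memcached;

    my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });

    my %expect;
    for my $i (1 .. 500) {
        # mostly tiny values (so "VALUE ..." headers dominate the stream),
        # with an occasional large one to roll over the 2k read regions
        my $val = ($i % 50 == 0) ? ('x' x 8192) : "small-$i";
        $memd->set("gm_test_$i", $val) or warn "set gm_test_$i failed\n";
        $expect{"gm_test_$i"} = $val;
    }

    my $got = $memd->get_multi(keys %expect);
    my @missing = grep { !defined $got->{$_} || $got->{$_} ne $expect{$_} }
                  keys %expect;
    print @missing
        ? "FAIL: " . scalar(@missing) . " of 500 items missing or wrong\n"
        : "OK: all 500 items came back intact\n";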
From brad@danga.com Wed Feb 4 18:13:04 2004
From: brad@danga.com (Brad Fitzpatrick)
Date: Wed, 4 Feb 2004 10:13:04 -0800 (PST)
Subject: memcached on IA64: no problem
Message-ID: 

I currently have access to an IA64 machine for evaluation so I thought
I'd try memcached on it.  No difficulties whatsoever, and look at all
that address space!  :)

[root@localhost memcached-1.1.10]# cat /proc/cpuinfo
processor  : 0
vendor     : GenuineIntel
arch       : IA-64
family     : Itanium 2
model      : 1
revision   : 5
archrev    : 0
features   : branchlong
cpu number : 0
cpu regs   : 4
cpu MHz    : 1396.220994
itc MHz    : 1396.220994
BogoMIPS   : 2088.76

[root@localhost memcached-1.1.10]# make
gcc -DHAVE_CONFIG_H -I. -I. -I. -DNDEBUG -g -O2 -I/include -c memcached.c
gcc -DHAVE_CONFIG_H -I. -I. -I. -DNDEBUG -g -O2 -I/include -c slabs.c
gcc -DHAVE_CONFIG_H -I. -I. -I. -DNDEBUG -g -O2 -I/include -c items.c
gcc -DHAVE_CONFIG_H -I. -I. -I. -DNDEBUG -g -O2 -I/include -c assoc.c
gcc -DNDEBUG -g -O2 -I/include -L/lib -o memcached memcached.o slabs.o items.o assoc.o -levent
[root@localhost memcached-1.1.10]# ./memcached -d -u nobody
[root@localhost memcached-1.1.10]# telnet localhost 11211
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
stats maps
0000000000000000-0000000000004000 r--p 0000000000000000 00:00 0
2000000000000000-200000000002c000 r-xp 0000000000000000 03:02 540676 /lib/ld-2.3.2.so
2000000000034000-2000000000038000 rw-p 0000000000000000 00:00 0
2000000000038000-2000000000040000 rw-p 0000000000028000 03:02 540676 /lib/ld-2.3.2.so
2000000000040000-200000000028c000 r-xp 0000000000000000 03:02 2293764 /lib/tls/libc-2.3.2.so
200000000028c000-2000000000290000 ---p 000000000024c000 03:02 2293764 /lib/tls/libc-2.3.2.so
2000000000290000-20000000002a4000 rw-p 0000000000240000 03:02 2293764 /lib/tls/libc-2.3.2.so
20000000002a4000-20000000002ac000 rw-p 0000000000000000 00:00 0
20000000002ac000-20000000002c4000 r-xp 0000000000000000 03:02 540697 /lib/libnss_files-2.3.2.so
20000000002c4000-20000000002cc000 ---p 0000000000018000 03:02 540697 /lib/libnss_files-2.3.2.so
20000000002cc000-20000000002d4000 rw-p 0000000000010000 03:02 540697 /lib/libnss_files-2.3.2.so
20000000002d4000-2000000000adc000 rw-p 0000000000000000 00:00 0
4000000000000000-4000000000014000 r-xp 0000000000000000 03:02 2555985 /usr/src/memcached-1.1.10/memcached
6000000000000000-6000000000004000 rw-p 0000000000010000 03:02 2555985 /usr/src/memcached-1.1.10/memcached
6000000000004000-6000000000028000 rw-p 0000000000000000 00:00 0
60000fff80000000-60000fff80004000 rw-p 0000000000000000 00:00 0
60000fffffff8000-60000fffffffc000 rw-p 0000000000000000 00:00 0
END
set foo 0 0 3
bar
STORED
get foo
VALUE foo 0 3
bar
END
^]q
Connection closed.
[root@localhost memcached-1.1.10]

From brad@danga.com Wed Feb 4 22:22:06 2004
From: brad@danga.com (Brad Fitzpatrick)
Date: Wed, 4 Feb 2004 14:22:06 -0800 (PST)
Subject: Reservation of memory?
In-Reply-To: <000a01c3e87f$fa7d1d60$6601a8c0@potstn01.pa.comcast.net>
References: <000a01c3e87f$fa7d1d60$6601a8c0@potstn01.pa.comcast.net>
Message-ID: 

So you're getting 72% efficiency.  All of LJ's memcaches are around
69-70%.

See the file doc/memory_management.txt in the memcached distribution
for details on the slab allocation.  We plan to change it at some point
so we have one global LRU and store the items in chunks, but that may
kill performance, so we haven't been too eager to put in all that
effort only to be disappointed.

The reason we use a slab allocator instead of just using malloc() is
because earlier experience with malloc() showed the memory allocator
locking up and getting confused, unable to find enough contiguous
memory for even small items, after weeks/months of production.  A slab
allocator is guaranteed to never lock up, at the cost of wasting
memory.  (not that malloc() is perfectly efficient either.....)

- Brad

On Sat, 31 Jan 2004, Nick wrote:

> I have two memcacheds set up to use 512MB; one has been running for
> about one month, the other for only a week.  But no matter how long
> they're running, the stats command never shows them using more than
> ~370MB of memory to store items.  Is this because memcached is
> reserving this memory?  Or is this memory used for other purposes to
> handle the storage of these items?  I show the memcacheds using just
> about all 512MB in the operating system, but they never seem to get
> close to limit_maxbytes.  Each server has anywhere from 1.3 to 1.4
> million items currently stored in it, under curr_items.  Does that
> mean there's roughly 114 bytes to store each item in the daemon?
>
>
> Nick
>
> STAT pid 776
> STAT uptime 170166
> STAT time 1075610968
> STAT version 1.1.9
> STAT rusage_user 690:410000
> STAT rusage_system 853:650000
> STAT curr_items 1316421
> STAT total_items 7998033
> STAT bytes 369595318
> STAT curr_connections 321
> STAT total_connections 247576
> STAT connection_structures 734
> STAT cmd_get 31288079
> STAT cmd_set 7994951
> STAT get_hits 28654473
> STAT get_misses 2633606
> STAT bytes_read 1774989064
> STAT bytes_written 9845509166
> STAT limit_maxbytes 536870912

From jtitus@postini.com Mon Feb 9 23:01:20 2004
From: jtitus@postini.com (Jason Titus)
Date: Mon, 9 Feb 2004 15:01:20 -0800
Subject: Cache::Memcached 1.0.12?
Message-ID: 

The changelog shows an important fix in version 1.0.12.  Any idea when
it might be packaged up and uploaded to the web site or CPAN?  Are
there known issues or things people should be testing?

Thanks
Jason

From brad@danga.com Mon Feb 9 23:12:15 2004
From: brad@danga.com (Brad Fitzpatrick)
Date: Mon, 9 Feb 2004 15:12:15 -0800 (PST)
Subject: Cache::Memcached 1.0.12?
In-Reply-To: 
References: 
Message-ID: 

No, just me forgetting.  (both to do it, and now my CPAN credentials...)

Let me do the website first, then CPAN shortly thereafter.

On Mon, 9 Feb 2004, Jason Titus wrote:

> The changelog shows an important fix in version 1.0.12.  Any idea when
> it might be packaged up and uploaded to the web site or CPAN?  Are
> there known issues or things people should be testing?
>
> Thanks
> Jason

From brad@danga.com Mon Feb 9 23:26:33 2004
From: brad@danga.com (Brad Fitzpatrick)
Date: Mon, 9 Feb 2004 15:26:33 -0800 (PST)
Subject: Cache::Memcached 1.0.12
Message-ID: 

New Cache::Memcached is available.  Changes:

+2004-02-03
+  * fix bug with 2k read boundaries falling in the middle
+    of "VALUE ..." or "END" lines, thus halting future
+    parsing and responses.  (eek!)
+  * version 1.0.12
+
+2003-12-01
+  * merge stats/stats_reset patch from Jamie McCarthy
+  * trailing whitespace cleanup
+
+2003-11-08
+  * work on Solaris/BSD where there's no MSG_NOSIGNAL.
+    the expense is extra syscalls to change the local
+    SIGPIPE handler all the time.  in the future, it'd
+    be nice to have an option so Solaris/BSD callers
+    can say, "Hey, I've turned off SIGPIPE globally,
+    don't worry about it."

And also, the "set_norehash" constructor attribute and method.

---------- Forwarded message ----------
Date: Tue, 10 Feb 2004 00:24:28 +0100
From: PAUSE
Reply-To: cpan-testers@perl.org
To: Brad Fitzpatrick
Subject: CPAN Upload: B/BR/BRADFITZ/Cache-Memcached-1.0.12.tar.gz

The URL

    http://www.danga.com/memcached/dist/Cache-Memcached-1.0.12.tar.gz

has entered CPAN as

  file: $CPAN/authors/id/B/BR/BRADFITZ/Cache-Memcached-1.0.12.tar.gz
  size: 11006 bytes
   md5: c2a5c93fdcdb2311cfe170815f18af93

No action is required on your part
Request entered by: BRADFITZ (Brad Fitzpatrick)
Request entered on: Mon, 09 Feb 2004 23:23:24 GMT
Request completed: Mon, 09 Feb 2004 23:24:28 GMT

Thanks,
-- paused, v460

From perrin@elem.com Mon Feb 9 23:30:15 2004
From: perrin@elem.com (Perrin Harkins)
Date: Mon, 09 Feb 2004 18:30:15 -0500
Subject: Cache::Memcached 1.0.12?
In-Reply-To: 
References: 
Message-ID: <1076369414.6851.53.camel@localhost.localdomain>

On Mon, 2004-02-09 at 18:12, Brad Fitzpatrick wrote:
> No, just me forgetting.

If you like, you could set up a cron that will upload it to CPAN every
time you make a new release.
There is some code for automating a CPAN upload here:
http://www.stonehenge.com/merlyn/UnixReview/col50.html

- Perrin

From jtitus@postini.com Mon Feb 9 23:30:57 2004
From: jtitus@postini.com (Jason Titus)
Date: Mon, 9 Feb 2004 15:30:57 -0800
Subject: Cache::Memcached 1.0.12?
Message-ID: 

Great.  I grabbed it and will start banging on it a bit.

Keep up the good work,
Jason

-----Original Message-----
From: Brad Fitzpatrick [mailto:brad@danga.com]
Sent: Monday, February 09, 2004 3:12 PM
To: Jason Titus
Cc: memcached@lists.danga.com
Subject: Re: Cache::Memcached 1.0.12?

No, just me forgetting.  (both to do it, and now my CPAN credentials...)

Let me do the website first, then CPAN shortly thereafter.

On Mon, 9 Feb 2004, Jason Titus wrote:

> The changelog shows an important fix in version 1.0.12.  Any idea when
> it might be packaged up and uploaded to the web site or CPAN?  Are
> there known issues or things people should be testing?
>
> Thanks
> Jason

From jtitus@postini.com Tue Feb 10 06:18:29 2004
From: jtitus@postini.com (Jason Titus)
Date: Mon, 9 Feb 2004 22:18:29 -0800
Subject: RHEL 3?
Message-ID: 

Anyone running memcached on RedHat Enterprise Linux 3?  I'm having a
hard time getting epoll working on this distro.  Seems like the glibc
supports it, but the kernel doesn't, and I can't find a patch that will
go against 2.4.21-EL cleanly (or even 2.4.21).  I suppose we could go
for the vanilla kernel, but that would mean a bunch of time to harden
it.

Anyone gotten epoll (or rtsig) working with RHEL?  We want to deploy on
RHEL 3 w/ Opterons.  Should be a nice memcached server if we can get a
fast libevent setup...

Thanks for any help,
Jason

p.s. - For the record, memcached works fine with poll on the Opterons.
Loaded up a couple of gigs of data and hammered it for a while.  I just
imagine it will slow down a bit when we give it more connections.

From brad@danga.com Tue Feb 10 06:23:45 2004
From: brad@danga.com (Brad Fitzpatrick)
Date: Mon, 9 Feb 2004 22:23:45 -0800 (PST)
Subject: RHEL 3?
In-Reply-To: 
References: 
Message-ID: 

> Anyone running memcached on RedHat Enterprise Linux 3?  I'm having a
> hard time getting epoll working on this distro.  Seems like the
> glibc supports it, but the kernel doesn't, and I can't find a patch that
> will go against 2.4.21-EL cleanly (or even 2.4.21).  I suppose we could
> go for the vanilla kernel, but that would mean a bunch of time to harden
> it.

Weird... I'd have thought RHEL would do epoll by default.  I guess very
little code uses it yet, so it's not considered a big deal.

Here's the patch for 2.4.21 vanilla:

http://www.xmailserver.org/linux-patches/epoll-lt-2.4.21-0.18.diff

What parts don't merge cleanly?  Usually resolving it by hand isn't so
hard.

- Brad

From jtitus@postini.com Tue Feb 10 06:50:33 2004
From: jtitus@postini.com (Jason Titus)
Date: Mon, 9 Feb 2004 22:50:33 -0800
Subject: RHEL 3?
Message-ID: 

Doesn't look good against linux-2.4.21-4.EL....

[root src]# patch -p0 < /root/epoll-lt-2.4.21-0.18.diff
can't find file to patch at input line 4
Perhaps you used the wrong -p or --strip option?
The text leading up to this was:
--------------------------
|diff -Nru linux-2.4.21.vanilla/arch/i386/kernel/entry.S linux-2.4.21.epoll/arch/i386/kernel/entry.S
|--- linux-2.4.21.vanilla/arch/i386/kernel/entry.S  2003-06-13 07:51:29.000000000 -0700
|+++ linux-2.4.21.epoll/arch/i386/kernel/entry.S  2003-08-24 12:37:17.000000000 -0700
--------------------------
File to patch: 
[root@rcdev1mx5 src]# mv linux-2.4.21-4.EL-bkp linux-2.4.21.epoll
[root@rcdev1mx5 src]# patch -p0 < /root/epoll-lt-2.4.21-0.18.diff
patching file linux-2.4.21.epoll/arch/i386/kernel/entry.S
Hunk #1 FAILED at 658.
1 out of 1 hunk FAILED -- saving rejects to file linux-2.4.21.epoll/arch/i386/kernel/entry.S.rej
patching file linux-2.4.21.epoll/arch/ia64/ia32/ia32_entry.S
Hunk #1 FAILED at 401.
1 out of 1 hunk FAILED -- saving rejects to file linux-2.4.21.epoll/arch/ia64/ia32/ia32_entry.S.rej
patching file linux-2.4.21.epoll/arch/ia64/ia32/sys_ia32.c
Hunk #2 succeeded at 4053 (offset 106 lines).
patching file linux-2.4.21.epoll/arch/ia64/kernel/ivt.S
Hunk #1 FAILED at 847.
1 out of 1 hunk FAILED -- saving rejects to file linux-2.4.21.epoll/arch/ia64/kernel/ivt.S.rej
patching file linux-2.4.21.epoll/arch/sparc64/solaris/timod.c
Hunk #1 succeeded at 665 (offset 14 lines).
patching file linux-2.4.21.epoll/fs/eventpoll.c
patching file linux-2.4.21.epoll/fs/file_table.c
Hunk #4 FAILED at 107.
1 out of 4 hunks FAILED -- saving rejects to file linux-2.4.21.epoll/fs/file_table.c.rej
patching file linux-2.4.21.epoll/fs/Makefile
Hunk #1 FAILED at 14.
1 out of 1 hunk FAILED -- saving rejects to file linux-2.4.21.epoll/fs/Makefile.rej
patching file linux-2.4.21.epoll/fs/ncpfs/sock.c
patching file linux-2.4.21.epoll/fs/select.c
Hunk #1 FAILED at 19.
Hunk #2 FAILED at 53.
Hunk #3 succeeded at 229 with fuzz 1 (offset 148 lines).
Hunk #4 succeeded at 118 (offset 5 lines).
Hunk #5 succeeded at 322 (offset 148 lines).
Hunk #6 succeeded at 193 (offset 5 lines).
Hunk #7 succeeded at 543 (offset 148 lines).
Hunk #8 succeeded at 430 (offset 5 lines).
2 out of 8 hunks FAILED -- saving rejects to file linux-2.4.21.epoll/fs/select.c.rej
patching file linux-2.4.21.epoll/fs/smbfs/sock.c
patching file linux-2.4.21.epoll/include/asm-i386/unistd.h
Hunk #1 FAILED at 257.
1 out of 1 hunk FAILED -- saving rejects to file linux-2.4.21.epoll/include/asm-i386/unistd.h.rej
patching file linux-2.4.21.epoll/include/asm-ppc/unistd.h
Hunk #1 FAILED at 238.
1 out of 1 hunk FAILED -- saving rejects to file linux-2.4.21.epoll/include/asm-ppc/unistd.h.rej
patching file linux-2.4.21.epoll/include/linux/eventpoll.h
patching file linux-2.4.21.epoll/include/linux/fs.h
Hunk #1 succeeded at 602 (offset 59 lines).
The next patch would create the file linux-2.4.21.epoll/include/linux/hash.h,
which already exists!  Assume -R? [n] 
Apply anyway? [n] 
Skipping patch.
1 out of 1 hunk ignored -- saving rejects to file linux-2.4.21.epoll/include/linux/hash.h.rej
patching file linux-2.4.21.epoll/include/linux/kernel.h
Hunk #1 succeeded at 209 (offset 35 lines).
patching file linux-2.4.21.epoll/include/linux/poll.h
Hunk #1 FAILED at 10.
1 out of 1 hunk FAILED -- saving rejects to file linux-2.4.21.epoll/include/linux/poll.h.rej
patching file linux-2.4.21.epoll/include/linux/sched.h
Hunk #1 succeeded at 157 with fuzz 2 (offset 22 lines).
patching file linux-2.4.21.epoll/include/linux/wait.h
Hunk #2 FAILED at 35.
Hunk #3 succeeded at 149 with fuzz 2 (offset 8 lines).
Hunk #4 FAILED at 186.
Hunk #5 succeeded at 217 (offset 16 lines).
2 out of 5 hunks FAILED -- saving rejects to file linux-2.4.21.epoll/include/linux/wait.h.rej
patching file linux-2.4.21.epoll/kernel/ksyms.c
Hunk #1 succeeded at 301 (offset 42 lines).
patching file linux-2.4.21.epoll/kernel/sched.c
Hunk #1 FAILED at 714.
1 out of 1 hunk FAILED -- saving rejects to file linux-2.4.21.epoll/kernel/sched.c.rej
----

Seems pretty hard to work out by hand.  Bummer that RedHat hasn't
decided to include epoll in their distro, although the header and man
files are there in a few places:

[root src]# locate epoll
/usr/share/man/man2/epoll_create.2.gz
/usr/share/man/man2/epoll_ctl.2.gz
/usr/share/man/man2/epoll_wait.2.gz
/usr/share/man/man4/epoll.4.gz
/usr/include/sys/epoll.h
/usr/lib/perl5/5.8.0/i386-linux-thread-multi/sys/epoll.ph

Jason

-----Original Message-----
From: Brad Fitzpatrick [mailto:brad@danga.com]
Sent: Mon 2/9/2004 10:23 PM
To: Jason Titus
Cc: memcached@lists.danga.com
Subject: Re: RHEL 3?

> Anyone running memcached on RedHat Enterprise Linux 3?  I'm having a
> hard time getting epoll working on this distro.  Seems like the
> glibc supports it, but the kernel doesn't, and I can't find a patch that
> will go against 2.4.21-EL cleanly (or even 2.4.21).  I suppose we could
> go for the vanilla kernel, but that would mean a bunch of time to harden
> it.

Weird... I'd have thought RHEL would do epoll by default.  I guess very
little code uses it yet, so it's not considered a big deal.

Here's the patch for 2.4.21 vanilla:

http://www.xmailserver.org/linux-patches/epoll-lt-2.4.21-0.18.diff

What parts don't merge cleanly?  Usually resolving it by hand isn't so
hard.

- Brad

From yusufg@outblaze.com Tue Feb 10 07:41:37 2004
From: yusufg@outblaze.com (Yusuf Goolamabbas)
Date: Tue, 10 Feb 2004 15:41:37 +0800
Subject: RHEL 3?
In-Reply-To: 
References: 
Message-ID: <20040210074137.GF13465@outblaze.com>

> > Anyone running memcached on RedHat Enterprise Linux 3?  I'm having a
> > hard time getting epoll working on this distro.  Seems like the
> > glibc supports it, but the kernel doesn't, and I can't find a patch that
> > will go against 2.4.21-EL cleanly (or even 2.4.21).  I suppose we could
> > go for the vanilla kernel, but that would mean a bunch of time to harden
> > it.
>
> Weird... I'd have thought RHEL would do epoll by default.  I guess very
> little code uses it yet, so it's not considered a big deal.

The RHEL glibc has weak symbols for epoll, so they will do the right
thing with the appropriate kernel.  But don't you violate your RHEL
support/warranty if you muck around with your own kernel?  Try getting
a Fedora Core x86-64 build and compiling 2.6.x on it.

--
If you're not using Firefox, you're not surfing the web, you're suffering it
http://www.mozilla.org/products/firefox/why/

From jamie@mccarthy.vg Tue Feb 10 15:38:55 2004
From: jamie@mccarthy.vg (Jamie McCarthy)
Date: Tue, 10 Feb 2004 10:38:55 -0500
Subject: Cache::Memcached 1.0.12?
In-Reply-To: <1076369414.6851.53.camel@localhost.localdomain>
Message-ID: 

perrin@elem.com (Perrin Harkins) writes:

> If you like, you could set up a cron that will upload it to CPAN
> every time you make a new release.  There is some code for
> automating a CPAN upload here:
> http://www.stonehenge.com/merlyn/UnixReview/col50.html

Or try Module::Release.

--
Jamie McCarthy
http://mccarthy.vg/
jamie@mccarthy.vg

From jamie@mccarthy.vg Tue Feb 10 15:53:55 2004
From: jamie@mccarthy.vg (Jamie McCarthy)
Date: Tue, 10 Feb 2004 10:53:55 -0500
Subject: Red Hat 9 and epoll/libevent?
In-Reply-To: <20040210074137.GF13465@outblaze.com> Message-ID: I don't use Red Hat at home so I'm not familiar with it. Has anyone else had problems getting epoll support working on "Red Hat Linux release 9 (Shrike)"? uname -a: Linux foo.com 2.4.23 #2 SMP Mon Jan 19 16:38:49 PST 2004 i686 i686 i386 GNU= /Linux rpm -qi glibc: Name : glibc Relocations: (not relocateable) Version : 2.3.2 Vendor: Red Hat, Inc. Release : 27.9.7 Build Date: Wed 12 Nov 2003 05:= 01:36 PM PST Install Date: Fri 14 Nov 2003 01:37:34 PM PST Build Host: porky.devel.= redhat.com Group : System Environment/Libraries Source RPM: glibc-2.3.2-27.9.7= =2Esrc.rpm ls -l /dev/epoll: crw-r--r-- 1 root root 10, 124 Jan 19 17:02 /dev/epoll I don't have root on this box so I have to trust that the admins applied the epoll kernel patch correctly. The problem in compiling libevent is the same one I've seen on other boxes when the patch is not applied: =2E./libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not impl= emented and will always fail Anyone have any ideas? # ./configure checking for a BSD compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking whether make sets ${MAKE}... yes checking for working aclocal-1.4... found checking for working autoconf... found checking for working automake-1.4... found checking for working autoheader... found checking for working makeinfo... found checking whether to enable maintainer-specific portions of Makefiles... no checking for gcc... gcc checking for C compiler default output... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for executable suffix...=20 checking for object suffix... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for ranlib... ranlib checking for a BSD compatible install... /usr/bin/install -c checking whether ln -s works... yes checking for socket in -lsocket... no checking how to run the C preprocessor... gcc -E checking for ANSI C header files... yes checking for inttypes.h... yes checking for stdint.h... yes checking for poll.h... yes checking for signal.h... yes checking for unistd.h... yes checking for sys/epoll.h... yes checking for sys/time.h... yes checking for sys/queue.h... yes checking for sys/event.h... no checking for TAILQ_FOREACH in sys/queue.h... no checking for timeradd in sys/time.h... yes checking whether time.h and sys/time.h may both be included... yes checking for gettimeofday... yes checking for select... yes checking for poll... yes checking for epoll_ctl... yes checking for err... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... (cached) yes checking for stdint.h... (cached) yes checking for unistd.h... (cached) yes checking for pid_t... yes checking for size_t... yes checking for u_int64_t... yes checking for u_int32_t... yes checking for u_int16_t... yes checking for u_int8_t... yes checking for socklen_t... yes configure: creating ./config.status config.status: creating Makefile config.status: creating test/Makefile config.status: creating sample/Makefile config.status: creating config.h # make make all-recursive make[1]: Entering directory `/usr/local/src/libevent-0.7c' Making all in . 
make[2]: Entering directory `/usr/local/src/libevent-0.7c' gcc -DHAVE_CONFIG_H -I. -I. -I. -Icompat -Wall -g -O2 -c event.c gcc -DHAVE_CONFIG_H -I. -I. -I. -Icompat -Wall -g -O2 -c select.c gcc -DHAVE_CONFIG_H -I. -I. -I. -Icompat -Wall -g -O2 -c poll.c gcc -DHAVE_CONFIG_H -I. -I. -I. -Icompat -Wall -g -O2 -c epoll.c gcc -DHAVE_CONFIG_H -I. -I. -I. -Icompat -Wall -g -O2 -c signal.c rm -f libevent.a ar cru libevent.a event.o select.o poll.o epoll.o signal.o ranlib libevent.a make[2]: Leaving directory `/usr/local/src/libevent-0.7c' Making all in sample make[2]: Entering directory `/usr/local/src/libevent-0.7c/sample' gcc -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -c event-test.c gcc -I../compat -o event-test event-test.o -L.. -levent=20 =2E./libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not impl= emented and will always fail =2E./libevent.a(epoll.o)(.text+0x394): In function `epoll_add': /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not impleme= nted and will always fail =2E./libevent.a(epoll.o)(.text+0x1ca): In function `epoll_dispatch': /usr/local/src/libevent-0.7c/epoll.c:179: warning: epoll_wait is not implem= ented and will always fail gcc -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -c time-test.c gcc -I../compat -o time-test time-test.o -L.. -levent=20 =2E./libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not impl= emented and will always fail =2E./libevent.a(epoll.o)(.text+0x394): In function `epoll_add': /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not impleme= nted and will always fail =2E./libevent.a(epoll.o)(.text+0x1ca): In function `epoll_dispatch': /usr/local/src/libevent-0.7c/epoll.c:179: warning: epoll_wait is not implem= ented and will always fail gcc -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -c signal-test.c gcc -I../compat -o signal-test signal-test.o -L.. -levent=20 =2E./libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not impl= emented and will always fail =2E./libevent.a(epoll.o)(.text+0x394): In function `epoll_add': /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not impleme= nted and will always fail =2E./libevent.a(epoll.o)(.text+0x1ca): In function `epoll_dispatch': /usr/local/src/libevent-0.7c/epoll.c:179: warning: epoll_wait is not implem= ented and will always fail make[2]: Leaving directory `/usr/local/src/libevent-0.7c/sample' Making all in test make[2]: Entering directory `/usr/local/src/libevent-0.7c/test' gcc -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -Wall -g -O2 -c test-init.= c gcc -I../compat -Wall -g -O2 -o test-init test-init.o -L.. -levent=20 =2E./libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not impl= emented and will always fail =2E./libevent.a(epoll.o)(.text+0x394): In function `epoll_add': /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not impleme= nted and will always fail =2E./libevent.a(epoll.o)(.text+0x1ca): In function `epoll_dispatch': /usr/local/src/libevent-0.7c/epoll.c:179: warning: epoll_wait is not implem= ented and will always fail gcc -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -Wall -g -O2 -c test-eof.c gcc -I../compat -Wall -g -O2 -o test-eof test-eof.o -L.. 
-levent=20 =2E./libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not impl= emented and will always fail =2E./libevent.a(epoll.o)(.text+0x394): In function `epoll_add': /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not impleme= nted and will always fail =2E./libevent.a(epoll.o)(.text+0x1ca): In function `epoll_dispatch': /usr/local/src/libevent-0.7c/epoll.c:179: warning: epoll_wait is not implem= ented and will always fail gcc -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -Wall -g -O2 -c test-weof.= c gcc -I../compat -Wall -g -O2 -o test-weof test-weof.o -L.. -levent=20 =2E./libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not impl= emented and will always fail =2E./libevent.a(epoll.o)(.text+0x394): In function `epoll_add': /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not impleme= nted and will always fail =2E./libevent.a(epoll.o)(.text+0x1ca): In function `epoll_dispatch': /usr/local/src/libevent-0.7c/epoll.c:179: warning: epoll_wait is not implem= ented and will always fail gcc -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -Wall -g -O2 -c test-time.= c gcc -I../compat -Wall -g -O2 -o test-time test-time.o -L.. -levent=20 =2E./libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not impl= emented and will always fail =2E./libevent.a(epoll.o)(.text+0x394): In function `epoll_add': /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not impleme= nted and will always fail =2E./libevent.a(epoll.o)(.text+0x1ca): In function `epoll_dispatch': /usr/local/src/libevent-0.7c/epoll.c:179: warning: epoll_wait is not implem= ented and will always fail gcc -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -Wall -g -O2 -c regress.c gcc -I../compat -Wall -g -O2 -o regress regress.o -L.. -levent=20 =2E./libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not impl= emented and will always fail =2E./libevent.a(epoll.o)(.text+0x394): In function `epoll_add': /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not impleme= nted and will always fail =2E./libevent.a(epoll.o)(.text+0x1ca): In function `epoll_dispatch': /usr/local/src/libevent-0.7c/epoll.c:179: warning: epoll_wait is not implem= ented and will always fail Running tests: KQUEUE Skipping test POLL test-eof: OKAY test-weof: OKAY test-time: OKAY regress: OKAY SELECT test-eof: OKAY test-weof: OKAY test-time: OKAY regress: OKAY RTSIG Skipping test EPOLL Skipping test make[2]: Leaving directory `/usr/local/src/libevent-0.7c/test' make[1]: Leaving directory `/usr/local/src/libevent-0.7c' --=20 Jamie McCarthy http://mccarthy.vg/ jamie@mccarthy.vg From joachim.bauernberger@friendscout24.de Tue Feb 10 16:18:26 2004 From: joachim.bauernberger@friendscout24.de (Joachim Bauernberger) Date: Tue, 10 Feb 2004 17:18:26 +0100 Subject: Red Hat 9 and epoll/libevent? In-Reply-To: References: Message-ID: <200402101718.26507.joachim.bauernberger@friendscout24.de> Hi I just patched a 2.4.24 kernel and am having similar problems. 
Using the instructions on: http://epoll.hackerdojo.com/

This site says that one should have a menuconfig option, under
character devices, after applying the patch.  However I do not see
anything that suggests CONFIG_EPOLL in the kernel config after
patching.

The following code fails in the configure script of libevent:

#include <stdint.h>
#include <stdlib.h>
#include <sys/param.h>
#include <sys/types.h>
#include <sys/syscall.h>
#include <unistd.h>

int
epoll_create(int size)
{
    return (syscall(__NR_epoll_create, size));
}

int
main(int argc, char **argv)
{
    int epfd;

    epfd = epoll_create(256);
    exit (epfd == -1 ? 1 : 0);
}

the actual failure is:

In function `epoll_create':
`__NR_epoll_create' undeclared (first use in this function)
(Each undeclared identifier is reported only once
for each function it appears in.)

I didn't have time to try out 2.6.x on my testbox that runs memcached
at the moment.  I will play more with getting epoll to work once we
move to production.

Any ideas in the meantime?

cheers,
~/joachim

On Tuesday 10 February 2004 16:53, Jamie McCarthy wrote:
> I don't use Red Hat at home so I'm not familiar with it.  Has anyone
> else had problems getting epoll support working on "Red Hat Linux
> release 9 (Shrike)"?
>
> uname -a:
>
> Linux foo.com 2.4.23 #2 SMP Mon Jan 19 16:38:49 PST 2004 i686 i686 i386 GNU/Linux
>
> rpm -qi glibc:
>
> Name        : glibc                  Relocations: (not relocateable)
> Version     : 2.3.2                  Vendor: Red Hat, Inc.
> Release     : 27.9.7                 Build Date: Wed 12 Nov 2003 05:01:36 PM PST
> Install Date: Fri 14 Nov 2003 01:37:34 PM PST   Build Host: porky.devel.redhat.com
> Group       : System Environment/Libraries    Source RPM: glibc-2.3.2-27.9.7.src.rpm
>
> ls -l /dev/epoll:
>
> crw-r--r--    1 root     root      10, 124 Jan 19 17:02 /dev/epoll
>
> I don't have root on this box so I have to trust that the admins
> applied the epoll kernel patch correctly.  The problem in compiling
> libevent is the same one I've seen on other boxes when the patch is
> not applied:
>
> ../libevent.a(epoll.o)(.text+0x5d): In function `epoll_init':
> /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not implemented and will always fail
>
> Anyone have any ideas?
>
> # ./configure
> checking for a BSD compatible install... /usr/bin/install -c
> checking whether build environment is sane... yes
> checking whether make sets ${MAKE}... yes
> checking for working aclocal-1.4... found
> checking for working autoconf... found
> checking for working automake-1.4... found
> checking for working autoheader... found
> checking for working makeinfo... found
> checking whether to enable maintainer-specific portions of Makefiles... no
> checking for gcc... gcc
> checking for C compiler default output... a.out
> checking whether the C compiler works... yes
> checking whether we are cross compiling... no
> checking for executable suffix... 
> checking for object suffix... o
> checking whether we are using the GNU C compiler... yes
> checking whether gcc accepts -g... yes
> checking for ranlib... ranlib
> checking for a BSD compatible install... /usr/bin/install -c
> checking whether ln -s works... yes
> checking for socket in -lsocket... no
> checking how to run the C preprocessor... gcc -E
> checking for ANSI C header files... yes
> checking for inttypes.h... yes
> checking for stdint.h... yes
> checking for poll.h... yes
> checking for signal.h... yes
> checking for unistd.h... yes
> checking for sys/epoll.h... yes
> checking for sys/time.h... yes
> checking for sys/queue.h... yes
> checking for sys/event.h...
no > checking for TAILQ_FOREACH in sys/queue.h... no > checking for timeradd in sys/time.h... yes > checking whether time.h and sys/time.h may both be included... yes > checking for gettimeofday... yes > checking for select... yes > checking for poll... yes > checking for epoll_ctl... yes > checking for err... yes > checking for sys/types.h... yes > checking for sys/stat.h... yes > checking for stdlib.h... yes > checking for string.h... yes > checking for memory.h... yes > checking for strings.h... yes > checking for inttypes.h... (cached) yes > checking for stdint.h... (cached) yes > checking for unistd.h... (cached) yes > checking for pid_t... yes > checking for size_t... yes > checking for u_int64_t... yes > checking for u_int32_t... yes > checking for u_int16_t... yes > checking for u_int8_t... yes > checking for socklen_t... yes > configure: creating ./config.status > config.status: creating Makefile > config.status: creating test/Makefile > config.status: creating sample/Makefile > config.status: creating config.h > > # make > make all-recursive > make[1]: Entering directory `/usr/local/src/libevent-0.7c' > Making all in . > make[2]: Entering directory `/usr/local/src/libevent-0.7c' > gcc -DHAVE_CONFIG_H -I. -I. -I. -Icompat -Wall -g -O2 -c event.c > gcc -DHAVE_CONFIG_H -I. -I. -I. -Icompat -Wall -g -O2 -c select.c > gcc -DHAVE_CONFIG_H -I. -I. -I. -Icompat -Wall -g -O2 -c poll.c > gcc -DHAVE_CONFIG_H -I. -I. -I. -Icompat -Wall -g -O2 -c epoll.c > gcc -DHAVE_CONFIG_H -I. -I. -I. -Icompat -Wall -g -O2 -c signal.c > rm -f libevent.a > ar cru libevent.a event.o select.o poll.o epoll.o signal.o > ranlib libevent.a > make[2]: Leaving directory `/usr/local/src/libevent-0.7c' > Making all in sample > make[2]: Entering directory `/usr/local/src/libevent-0.7c/sample' > gcc -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -c event-test.c > gcc -I../compat -o event-test event-test.o -L.. -levent > ../libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': > /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not > implemented and will always fail ../libevent.a(epoll.o)(.text+0x394): In > function `epoll_add': > /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not > implemented and will always fail ../libevent.a(epoll.o)(.text+0x1ca): In > function `epoll_dispatch': /usr/local/src/libevent-0.7c/epoll.c:179: > warning: epoll_wait is not implemented and will always fail gcc > -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -c time-test.c > gcc -I../compat -o time-test time-test.o -L.. -levent > ../libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': > /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not > implemented and will always fail ../libevent.a(epoll.o)(.text+0x394): In > function `epoll_add': > /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not > implemented and will always fail ../libevent.a(epoll.o)(.text+0x1ca): In > function `epoll_dispatch': /usr/local/src/libevent-0.7c/epoll.c:179: > warning: epoll_wait is not implemented and will always fail gcc > -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -c signal-test.c > gcc -I../compat -o signal-test signal-test.o -L.. 
-levent > ../libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': > /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not > implemented and will always fail ../libevent.a(epoll.o)(.text+0x394): In > function `epoll_add': > /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not > implemented and will always fail ../libevent.a(epoll.o)(.text+0x1ca): In > function `epoll_dispatch': /usr/local/src/libevent-0.7c/epoll.c:179: > warning: epoll_wait is not implemented and will always fail make[2]: > Leaving directory `/usr/local/src/libevent-0.7c/sample' > Making all in test > make[2]: Entering directory `/usr/local/src/libevent-0.7c/test' > gcc -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -Wall -g -O2 -c > test-init.c gcc -I../compat -Wall -g -O2 -o test-init test-init.o -L.. > -levent ../libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': > /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not > implemented and will always fail ../libevent.a(epoll.o)(.text+0x394): In > function `epoll_add': > /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not > implemented and will always fail ../libevent.a(epoll.o)(.text+0x1ca): In > function `epoll_dispatch': /usr/local/src/libevent-0.7c/epoll.c:179: > warning: epoll_wait is not implemented and will always fail gcc > -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -Wall -g -O2 -c test-eof.c g= cc > -I../compat -Wall -g -O2 -o test-eof test-eof.o -L.. -levent > ../libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': > /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not > implemented and will always fail ../libevent.a(epoll.o)(.text+0x394): In > function `epoll_add': > /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not > implemented and will always fail ../libevent.a(epoll.o)(.text+0x1ca): In > function `epoll_dispatch': /usr/local/src/libevent-0.7c/epoll.c:179: > warning: epoll_wait is not implemented and will always fail gcc > -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -Wall -g -O2 -c test-weof.c > gcc -I../compat -Wall -g -O2 -o test-weof test-weof.o -L.. -levent > ../libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': > /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not > implemented and will always fail ../libevent.a(epoll.o)(.text+0x394): In > function `epoll_add': > /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not > implemented and will always fail ../libevent.a(epoll.o)(.text+0x1ca): In > function `epoll_dispatch': /usr/local/src/libevent-0.7c/epoll.c:179: > warning: epoll_wait is not implemented and will always fail gcc > -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -Wall -g -O2 -c test-time.c > gcc -I../compat -Wall -g -O2 -o test-time test-time.o -L.. -levent > ../libevent.a(epoll.o)(.text+0x5d): In function `epoll_init': > /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not > implemented and will always fail ../libevent.a(epoll.o)(.text+0x394): In > function `epoll_add': > /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not > implemented and will always fail ../libevent.a(epoll.o)(.text+0x1ca): In > function `epoll_dispatch': /usr/local/src/libevent-0.7c/epoll.c:179: > warning: epoll_wait is not implemented and will always fail gcc > -DHAVE_CONFIG_H -I. -I. -I.. -I../compat -Wall -g -O2 -c regress.c gc= c=20 > -I../compat -Wall -g -O2 -o regress regress.o -L.. 
-levent
> ../libevent.a(epoll.o)(.text+0x5d): In function `epoll_init':
> /usr/local/src/libevent-0.7c/epoll.c:115: warning: epoll_create is not implemented and will always fail
> ../libevent.a(epoll.o)(.text+0x394): In function `epoll_add':
> /usr/local/src/libevent-0.7c/epoll.c:274: warning: epoll_ctl is not implemented and will always fail
> ../libevent.a(epoll.o)(.text+0x1ca): In function `epoll_dispatch':
> /usr/local/src/libevent-0.7c/epoll.c:179: warning: epoll_wait is not implemented and will always fail
>
> Running tests:
> KQUEUE
> Skipping test
> POLL
> test-eof: OKAY
> test-weof: OKAY
> test-time: OKAY
> regress: OKAY
> SELECT
> test-eof: OKAY
> test-weof: OKAY
> test-time: OKAY
> regress: OKAY
> RTSIG
> Skipping test
> EPOLL
> Skipping test
> make[2]: Leaving directory `/usr/local/src/libevent-0.7c/test'
> make[1]: Leaving directory `/usr/local/src/libevent-0.7c'

--
Phone: +49 (0) 89 490 267 726
Fax: +49 (0) 89 490 267 701
Mobile: +49 (0) 179 674 3611
mailto: joachim.bauernberger@friendscout24.de
Web: http://www.friendscout24.de

From: mortis@voicenet.com (Kyle R. Burton)
Subject: Distributed Mutex
Message-ID: <20040210184353.GX22116@paranoia.neverlight.com>

I've had my ear to the ground on the memcached list for a few months.
The project looks like it works extremely well for its intended
purpose.

I have an alternate, but possibly similar, need for functionality for
some of the distributed processing I'm currently working on.  I'm
performing a distributed, parallelized task that requires a
synchronization point on a specific key.  Currently we're using a
table lock in a database (Oracle) as the mutex.  This was a simpler
solution to code, but it locks out a large amount of the potential
parallelism we could achieve.

The task involves processing work units and caching the results (into
a database currently).  Each result is assigned a unique result id
(currently from a sequence in the database).  The software's
requirements are such that if two workers produce the same result they
are to both produce output, and the output must have the same result
id.

After thinking about it, I remembered memcached.  What I'm looking for
in a distributed mutex may not be that much of an extension to what
memcached is already designed to do.

There are two modes of mutex that I'd need.  The first is a simple
mutual exclusion based on a key specific to each work unit, which
corresponds to the above scenario.  The api for this could possibly be
as simple as:

  int mutex_lock( char* mutex_key, time_t duration );
  int mutex_unlock( char* mutex_key );
  int mutex_lock_nonblock( char *mutex_key, time_t duration );

Where mutex_lock could block until the lock is obtained, and
mutex_lock_nonblock would return an indicator of whether or not the
lock could be obtained.  The duration parameter is a maximum time that
the lock would be held, which could be short-circuited by calling
mutex_unlock before the duration expired.  The duration would be a
maximum expected lock time - so if a lock is obtained, but the worker
that obtained the lock never unlocked it, the lock would be released
and processing could otherwise continue.  This is basically to support
unreliable workers, which is a feature we need.

The second mode I'd be looking for would be for supporting distributed
'distincting' types of operations.  A unique list of all keys would be
built (stored in the server) as processing progressed, where duplicate
additions to the key list would be blocked.
This type of api could be fairly simple as well, perhaps something
like:

  int distinct_list_set_up( char *session, time_t duration );
  int distinct_list_insert( char *session, char *key );
  int distinct_list_tear_down( char *session );

Do you feel that either of these features might be useful extensions
to memcached?  Or would they just add unnecessary complexity to
memcached and therefore be inappropriate additions?

I look forward to your feedback.

Kyle R. Burton

--
------------------------------------------------------------------------------
Wisdom and Compassion are inseparable.
                            -- Christmas Humphreys
mortis@voicenet.com                          http://www.voicenet.com/~mortis
------------------------------------------------------------------------------

From tony2001@phpclub.net Wed Feb 11 14:21:30 2004
From: tony2001@phpclub.net (Antony Dovgal)
Date: Wed, 11 Feb 2004 17:21:30 +0300
Subject: PHP extension announce
Message-ID: <20040211172130.56c168bf.tony2001@phpclub.net>

Hi all.

I want to announce a memcache extension for PHP.
This extension allows you to interact with memcached a little bit
faster than the other PHP APIs presented at the memcached site.

Sources of the extension can be found here:
http://tony2001.phpclub.net/memcache-0.1.1.tar.gz

It's still under development, but this version was tested by some
people and seems to be working.
The extension doesn't depend on any libraries and interacts with the
daemon using PHP streams.

I would be very grateful for any comments and suggestions (and
testing, of course).

---
WBR,
Antony Dovgal aka tony2001
tony2001@phpclub.net

From rg@tcslon.com Wed Feb 11 14:40:57 2004
From: rg@tcslon.com (Russell Garrett)
Date: Wed, 11 Feb 2004 14:40:57 +0000
Subject: PHP extension announce
In-Reply-To: <20040211173412.14b041cd.tony2001@phpclub.net>
References: <20040211172130.56c168bf.tony2001@phpclub.net>
	<1076509677.584.1.camel@russ>
	<20040211173412.14b041cd.tony2001@phpclub.net>
Message-ID: <1076510457.584.11.camel@russ>

On Wed, 2004-02-11 at 14:34, Antony Dovgal wrote:
> > Fantastic, this is exactly what we needed - sockets functions in PHP
> > don't work particularly well when multibyte support is enabled.
> hmmm..
> this sounds strange.
> can you file a bug report at bugs.php.net ?

It's simply because both the PHP memcached classes use strlen() when
they need to count the number of bytes.  On our install, strlen() is
overloaded to the multibyte version, so it's all broken.  The bug in
PHP was that it was designed, through severe lack of foresight, for
single-byte character sets, and multibyte support is a real hack.

Russ Garrett
russ@last.fm

From russ@garrett.co.uk Wed Feb 11 14:34:23 2004
From: russ@garrett.co.uk (Russ Garrett)
Date: Wed, 11 Feb 2004 14:34:23 +0000
Subject: PHP extension announce
In-Reply-To: <20040211172130.56c168bf.tony2001@phpclub.net>
References: <20040211172130.56c168bf.tony2001@phpclub.net>
Message-ID: <1076510063.584.3.camel@russ>

(note to self: reply-to-all is your friend)

Fantastic, this is exactly what we needed - sockets functions in PHP
don't work particularly well when multibyte support is enabled.  I was
going to write one myself once the C library came out.

Russ

On Wed, 2004-02-11 at 14:21, Antony Dovgal wrote:
> Hi all.
>
> I want to announce a memcache extension for PHP.
> This extension allows you to interact with memcached a little bit
> faster than the other PHP APIs presented at the memcached site.
>
> Sources of the extension can be found here:
> http://tony2001.phpclub.net/memcache-0.1.1.tar.gz
>
> It's still under development, but this version was tested by some
> people and seems to be working.
> The extension doesn't depend on any libraries and interacts with the
> daemon using PHP streams.
>
> I would be very grateful for any comments and suggestions (and
> testing, of course).
>
> ---
> WBR,
> Antony Dovgal aka tony2001
> tony2001@phpclub.net

From brad@danga.com Wed Feb 11 18:05:23 2004
From: brad@danga.com (Brad Fitzpatrick)
Date: Wed, 11 Feb 2004 10:05:23 -0800 (PST)
Subject: PHP extension announce
In-Reply-To: <20040211172130.56c168bf.tony2001@phpclub.net>
References: <20040211172130.56c168bf.tony2001@phpclub.net>
Message-ID: 

Whoa, this looks hard-core.

Avva, take a look at this.... it's written mostly in C.

On Wed, 11 Feb 2004, Antony Dovgal wrote:

> Hi all.
>
> I want to announce a memcache extension for PHP.
> This extension allows you to interact with memcached a little bit
> faster than the other PHP APIs presented at the memcached site.
>
> Sources of the extension can be found here:
> http://tony2001.phpclub.net/memcache-0.1.1.tar.gz
>
> It's still under development, but this version was tested by some
> people and seems to be working.
> The extension doesn't depend on any libraries and interacts with the
> daemon using PHP streams.
>
> I would be very grateful for any comments and suggestions (and
> testing, of course).
>
> ---
> WBR,
> Antony Dovgal aka tony2001
> tony2001@phpclub.net

From brad@danga.com Thu Feb 12 04:19:09 2004
From: brad@danga.com (Brad Fitzpatrick)
Date: Wed, 11 Feb 2004 20:19:09 -0800 (PST)
Subject: Distributed Mutex
In-Reply-To: <20040210184353.GX22116@paranoia.neverlight.com>
References: <20040210184353.GX22116@paranoia.neverlight.com>
Message-ID: 

I've thought about the mutex case myself (wanting it for LJ) but each
time I realize memcached already provides enough primitives to build
that API on top of it.  (well, sort of)

The idea is you use the "add" command to set a key (mutex name) to
some value.  The add command returns "STORED" or "NOT_STORED", but
it's atomic, so two callers won't both pass.  Then your app can just
poll (re-trying the lock once a second) until it gets it.  And when
it's done, die.

Now, this is lame for a couple reasons:

-- it polls.

-- the key/value pair in memcached you're using as your mutex state
   could be pushed out of memory.  there's no way to pin items inside
   memcached.  (that'd be easy to add, though)

-- if the client dies before unlocking the mutex, it's never unlocked.
   (except of course by time).

A better solution (if you need something quick) is to run a bunch of
MySQL servers and use GET_LOCK() and RELEASE_LOCK(), which already have
the semantics you want, and are Free.  (unlike Oracle)

I don't quite follow your second need, but it sounds like you're just
trying to build lists of items?  You can already do that with memcached
as well... no changes should be necessary.  You just want to build
named sets you can enumerate?  (a set is a bag with the constraint
that items are unique)

Just use keys, say:

   set_name_items = 2
   set_name_item_1 = key_foo
   set_name_item_2 = key_bar
   set_name_key_foo =
   set_name_key_bar =

Now, to add to a set:

   "add set_name_key_baz 0 0 0 ... "

If it returns "NOT_STORED", the item is already in the set.

If it returns "STORED", then do:

   "add set_name_items = 0"   (initialize the item count)
   "incr set_name_items"      (increment the item count)

Now, the "incr" command returns the new value.
So say it returns "3".  Now:

   "set set_name_item_3 = key_baz"

(pseudo protocol talk above)

So yeah, you can already do that.  The only concern, again, is items
falling out.  Maybe we need an attribute (or server-global setting) to
not dump items.  Would that be sufficient?

On Tue, 10 Feb 2004, Kyle R. Burton wrote:

> I've had my ear to the ground on the memcached list for a few months.
> The project looks like it works extremely well for its intended
> purpose.
>
> I have an alternate, but possibly similar, need for functionality for
> some of the distributed processing I'm currently working on.  I'm
> performing a distributed, parallelized task that requires a
> synchronization point on a specific key.  Currently we're using a
> table lock in a database (Oracle) as the mutex.  This was a simpler
> solution to code, but it locks out a large amount of the potential
> parallelism we could achieve.
>
> The task involves processing work units and caching the results (into
> a database currently).  Each result is assigned a unique result id
> (currently from a sequence in the database).  The software's
> requirements are such that if two workers produce the same result they
> are to both produce output, and the output must have the same result
> id.
>
> After thinking about it, I remembered memcached.  What I'm looking for
> in a distributed mutex may not be that much of an extension to what
> memcached is already designed to do.
>
> There are two modes of mutex that I'd need.  The first is a simple
> mutual exclusion based on a key specific to each work unit, which
> corresponds to the above scenario.  The api for this could possibly be
> as simple as:
>
>   int mutex_lock( char* mutex_key, time_t duration );
>   int mutex_unlock( char* mutex_key );
>   int mutex_lock_nonblock( char *mutex_key, time_t duration );
>
> Where mutex_lock could block until the lock is obtained, and
> mutex_lock_nonblock would return an indicator of whether or not the
> lock could be obtained.  The duration parameter is a maximum time that
> the lock would be held, which could be short-circuited by calling
> mutex_unlock before the duration expired.  The duration would be a
> maximum expected lock time - so if a lock is obtained, but the worker
> that obtained the lock never unlocked it, the lock would be released
> and processing could otherwise continue.  This is basically to support
> unreliable workers, which is a feature we need.
>
> The second mode I'd be looking for would be for supporting distributed
> 'distincting' types of operations.  A unique list of all keys would be
> built (stored in the server) as processing progressed, where duplicate
> additions to the key list would be blocked.  This type of api could be
> fairly simple as well, perhaps something like:
>
>   int distinct_list_set_up( char *session, time_t duration );
>   int distinct_list_insert( char *session, char *key );
>   int distinct_list_tear_down( char *session );
>
> Do you feel that either of these features might be useful extensions
> to memcached?  Or would they just add unnecessary complexity to
> memcached and therefore be inappropriate additions?
>
> I look forward to your feedback.
>
> Kyle R. Burton
>
> --
> ------------------------------------------------------------------------------
> Wisdom and Compassion are inseparable.
>                             -- Christmas Humphreys
> mortis@voicenet.com                          http://www.voicenet.com/~mortis
> ------------------------------------------------------------------------------
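What follows is a minimal sketch, in Perl, of the add()-based lock Brad
describes above; the "lock:" key prefix, the expiry values, and the
one-second polling interval are illustrative assumptions, not anything
specified in the thread.

    # Hedged sketch of a lock built on memcached's atomic "add", via the
    # Perl client.  Key naming and timings are illustrative.
    use strict;
    use Cache::Memcached;

    my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });

    sub mutex_lock {
        my ($name, $duration) = @_;
        # add() stores only if the key is absent; the server answers
        # STORED / NOT_STORED atomically, so only one caller wins.
        until ($memd->add("lock:$name", 1, $duration)) {
            sleep 1;   # poll, as described in the message above
        }
        return 1;
    }

    sub mutex_lock_nonblock {
        my ($name, $duration) = @_;
        return $memd->add("lock:$name", 1, $duration) ? 1 : 0;
    }

    sub mutex_unlock {
        my ($name) = @_;
        return $memd->delete("lock:$name");
    }

    # usage: guard a per-work-unit critical section for at most 30 seconds
    mutex_lock("workunit-42", 30);
    # ... assign the result id, write the output ...
    mutex_unlock("workunit-42");

The caveats Brad lists still apply: the lock key can be evicted before it
expires, and a worker that crashes while holding the lock blocks everyone
else until the expiry runs out.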
From jmat@shutdown.net Thu Feb 12 05:04:04 2004
From: jmat@shutdown.net (Justin Matlock)
Date: Thu, 12 Feb 2004 00:04:04 -0500
Subject: PHP extension announce
In-Reply-To: <20040211172130.56c168bf.tony2001@phpclub.net>
Message-ID: <200402120504.i1C545MM008562@mail.shutdown.net>

Any plans to put this in PECL?

Very cool, by the way.  I was hoping someone who knew C would do this.  :)

J

-----Original Message-----
From: memcached-admin@lists.danga.com
[mailto:memcached-admin@lists.danga.com] On Behalf Of Antony Dovgal
Sent: Wednesday, February 11, 2004 9:22 AM
To: memcached@lists.danga.com
Subject: PHP extension announce

Hi all.

I want to announce a memcache extension for PHP.
This extension allows you to interact with memcached a little bit
faster than the other PHP APIs presented at the memcached site.

Sources of the extension can be found here:
http://tony2001.phpclub.net/memcache-0.1.1.tar.gz

It's still under development, but this version was tested by some
people and seems to be working.
The extension doesn't depend on any libraries and interacts with the
daemon using PHP streams.

I would be very grateful for any comments and suggestions (and
testing, of course).

---
WBR,
Antony Dovgal aka tony2001
tony2001@phpclub.net

From jmat@shutdown.net Thu Feb 12 05:11:26 2004
From: jmat@shutdown.net (Justin Matlock)
Date: Thu, 12 Feb 2004 00:11:26 -0500
Subject: PHP extension announce
Message-ID: <200402120511.i1C5BRMM008733@mail.shutdown.net>

Doh.  Nevermind.  I didn't open package.xml  *hides* :)

J

-----Original Message-----
From: Justin Matlock
Sent: Thursday, February 12, 2004 12:04 AM
To: memcached@lists.danga.com
Subject: RE: PHP extension announce

Any plans to put this in PECL?

Very cool, by the way.  I was hoping someone who knew C would do this.  :)

J

-----Original Message-----
From: memcached-admin@lists.danga.com
[mailto:memcached-admin@lists.danga.com] On Behalf Of Antony Dovgal
Sent: Wednesday, February 11, 2004 9:22 AM
To: memcached@lists.danga.com
Subject: PHP extension announce

Hi all.

I want to announce a memcache extension for PHP.
This extension allows you to interact with memcached a little bit
faster than the other PHP APIs presented at the memcached site.

Sources of the extension can be found here:
http://tony2001.phpclub.net/memcache-0.1.1.tar.gz

It's still under development, but this version was tested by some
people and seems to be working.
The extension doesn't depend on any libraries and interacts with the
daemon using PHP streams.

I would be very grateful for any comments and suggestions (and
testing, of course).

---
WBR,
Antony Dovgal aka tony2001
tony2001@phpclub.net

From tony2001@phpclub.net Thu Feb 12 07:31:22 2004
From: tony2001@phpclub.net (Antony Dovgal)
Date: Thu, 12 Feb 2004 10:31:22 +0300
Subject: PHP extension announce
In-Reply-To: <200402120504.i1C545MM008562@mail.shutdown.net>
References: <20040211172130.56c168bf.tony2001@phpclub.net>
	<200402120504.i1C545MM008562@mail.shutdown.net>
Message-ID: <20040212103122.16a7533b.tony2001@phpclub.net>

On Thu, 12 Feb 2004 00:04:04 -0500
"Justin Matlock" wrote:
> Any plans to put this in PECL?
it's already there =) http://pecl.php.net/package/memcache --- WBR, Antony Dovgal aka tony2001 tony2001@phpclub.net From joachim@bauernberger.org Thu Feb 12 18:40:27 2004 From: joachim@bauernberger.org (Joachim Bauernberger) Date: Thu, 12 Feb 2004 19:40:27 +0100 Subject: syslog patch Message-ID: <200402121940.27300.joachim.bauernberger@friendscout24.de> Hi, I have done a small patch which adds syslog functionality as a configure=20 option, since I needed this for a project.=20 (--with-syslog=3Dyes) As syslog-facility I use LOG_DAEMON. (edit log.c if you want to change this) Instead of perror() log() is called which makes the decision whether to log= =20 via syslog or write to stderr.=20 log() understands printf() like arguments in case you want to use it ... The patch is here: http://www.bauernberger.org/patches/memcached-1.1.10_syslog.patch.gz Thanks for this great piece of software, ~/joachim =2D-=20 Phone: +49 (0) 89 1553874 =46ax: +49 (0) 89 1553874 Mobile: +49 (0) 179 674 3611 mailto: joachim@bauernberger.org =A0 =A0 =A0 =A0=20 Web: http://www.bauernberger.org From brad@danga.com Thu Feb 12 19:13:28 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Thu, 12 Feb 2004 11:13:28 -0800 (PST) Subject: syslog patch In-Reply-To: <200402121940.27300.joachim.bauernberger@friendscout24.de> References: <200402121940.27300.joachim.bauernberger@friendscout24.de> Message-ID: Joachim, Can you submit a cleaner patch that just touches the distributed files, and not the auto-generated files? I'm having a hard time reading this patch, trying to ignore all the junk. Thanks! - Brad On Thu, 12 Feb 2004, Joachim Bauernberger wrote: > Hi, > > I have done a small patch which adds syslog functionality as a configure > option, since I needed this for a project. > (--with-syslog=3Dyes) > > As syslog-facility I use LOG_DAEMON. (edit log.c if you want to change th= is) > > Instead of perror() log() is called which makes the decision whether to l= og > via syslog or write to stderr. > log() understands printf() like arguments in case you want to use it ... > > The patch is here: > http://www.bauernberger.org/patches/memcached-1.1.10_syslog.patch.gz > > Thanks for this great piece of software, > ~/joachim > > -- > Phone: +49 (0) 89 1553874 > Fax: +49 (0) 89 1553874 > Mobile: +49 (0) 179 674 3611 > mailto: joachim@bauernberger.org =A0 =A0 =A0 =A0 > Web: http://www.bauernberger.org > > From joachim@bauernberger.org Thu Feb 12 20:52:25 2004 From: joachim@bauernberger.org (Joachim Bauernberger) Date: Thu, 12 Feb 2004 21:52:25 +0100 Subject: syslog patch In-Reply-To: References: <200402121940.27300.joachim.bauernberger@friendscout24.de> Message-ID: <200402122152.25572.joachim@bauernberger.org> Hi, On Thursday 12 February 2004 20:13, you wrote: > Joachim, > > Can you submit a cleaner patch that just touches the distributed files, > and not the auto-generated files? > Sorry for the messy patch. Forgot to delete all irrelevant autoconf,automake, etc stuff before doing the diff.... I uploaded a cleaner version: http://www.bauernberger.org/patches/memcached-1.1.10_syslog.patch.gz You might have to run aclocal/atoconf/auotoheader/automake after applying it, since configure.ac changed ... Regards, ~/joachim > I'm having a hard time reading this patch, trying to ignore all the junk. > > Thanks! 
> > - Brad -- http://www.bauernberger.org/ mailto:joachim@bauernberger.org Tel/Fax: +(49)-0-89/1588 3874 HP: +(49)-0-179/674 3611
From joachim.bauernberger@friendscout24.de Fri Feb 13 17:39:35 2004 From: joachim.bauernberger@friendscout24.de (Joachim Bauernberger) Date: Fri, 13 Feb 2004 18:39:35 +0100 Subject: Perl API and error handling Message-ID: <200402131839.35040.joachim.bauernberger@friendscout24.de> Hi, I was wondering how you guys trap errors in the perl API. If for example $memd->set or $memd->add fails it says that undef is returned, but will I know the reason for the failure (maybe in $!). perldoc Cache::Memcached does not mention this. Thanks & regards, ~/joachim -- Phone: +49 (0) 89 490 267 726 Fax: +49 (0) 89 490 267 701 Mobile: +49 (0) 179 674 3611 mailto: joachim.bauernberger@friendscout24.de Web: http://www.friendscout24.de
From joachim.bauernberger@friendscout24.de Fri Feb 13 18:19:29 2004 From: joachim.bauernberger@friendscout24.de (Joachim Bauernberger) Date: Fri, 13 Feb 2004 19:19:29 +0100 Subject: Perl API one more question Message-ID: <200402131919.29965.joachim.bauernberger@friendscout24.de> Hi, the $memd->delete function wasn't mentioned in the perldoc, but I found it implemented in Memcached.pm. Is this because the docs just haven't been updated, or is there an issue that makes this not officially supported, etc.? Thanks & best regards. ~/joachim -- Phone: +49 (0) 89 490 267 726 Fax: +49 (0) 89 490 267 701 Mobile: +49 (0) 179 674 3611 mailto: joachim.bauernberger@friendscout24.de Web: http://www.friendscout24.de
From cpisto@nmxs.com Fri Feb 13 20:11:56 2004 From: cpisto@nmxs.com (Cody Pisto) Date: Fri, 13 Feb 2004 13:11:56 -0700 Subject: C Api Message-ID: <402D2F8C.20905@nmxs.com> Hi everyone, Just wanted to try and get a consensus on what is happening with efforts around the C Client API. I see there are several people interested in it & several possibly working on separate implementations. I'm interested in forming an organized group to get it done, any takers? :-) --- Cody Pisto
From brad@danga.com Tue Feb 17 18:55:02 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Tue, 17 Feb 2004 10:55:02 -0800 (PST) Subject: Perl API one more question In-Reply-To: <200402131919.29965.joachim.bauernberger@friendscout24.de> References: <200402131919.29965.joachim.bauernberger@friendscout24.de> Message-ID: Documented now in CVS. That was an oversight. Thanks!
"delete"
$memd->delete($key[, $time]);
Deletes a key. You may optionally provide an integer time value (in seconds) to tell the memcached server to block new writes to this key for that many seconds. (Sometimes useful as a hacky means to prevent races.) Returns true if key was found and deleted, and false otherwise.
On Fri, 13 Feb 2004, Joachim Bauernberger wrote: > Hi, > > the $memd->delete function wasn't mentioned in the perldoc, but I found it > implemented in Memcached.pm. > > Is this because the docs just haven't been updated, or is there an issue > that makes this not officially supported, etc.? > > Thanks & best regards.
> ~/joachim > -- > Phone: +49 (0) 89 490 267 726 > Fax: +49 (0) 89 490 267 701 > Mobile: +49 (0) 179 674 3611 > mailto: joachim.bauernberger@friendscout24.de =A0 =A0 =A0 =A0 > Web: http://www.friendscout24.de > > From brad@danga.com Tue Feb 17 18:57:53 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Tue, 17 Feb 2004 10:57:53 -0800 (PST) Subject: Perl API and error handling In-Reply-To: <200402131839.35040.joachim.bauernberger@friendscout24.de> References: <200402131839.35040.joachim.bauernberger@friendscout24.de> Message-ID: Generally I've never needed the return code for failing adds/sets. I'd take a patch, though, if you need it. I'm not sure $! is the best place, however. Perhaps $memc->err / $memc->errstr like DBI? On Fri, 13 Feb 2004, Joachim Bauernberger wrote: > Hi, > > I was wondering how you guys trap errors in the perl API. > If for example $memd->set or $memd->add fails it says that undef is retur= ned, > but will I know the reason for the failure (maybe in $!). > > perldoc Cache::Memcached does not mention this. > > Thanks & regards, > ~/joachim > > -- > Phone: +49 (0) 89 490 267 726 > Fax: +49 (0) 89 490 267 701 > Mobile: +49 (0) 179 674 3611 > mailto: joachim.bauernberger@friendscout24.de =A0 =A0 =A0 =A0 > Web: http://www.friendscout24.de > > From brad@danga.com Tue Feb 17 18:59:29 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Tue, 17 Feb 2004 10:59:29 -0800 (PST) Subject: C Api In-Reply-To: <402D2F8C.20905@nmxs.com> References: <402D2F8C.20905@nmxs.com> Message-ID: I keep telling Avva I'll pay him to write one (my motivation being I want the Perl module to use it, if available) but it hasn't happened yet. On Fri, 13 Feb 2004, Cody Pisto wrote: > Hi everyone, > > Just wanted to try and get a consensus on what is happening with > efforts around the C Client API. I see there are several people > interested in it & several possibly working on separate implementations. > I'm interesting in forming an organized group to get it done, any > takers? :-) > > --- > Cody Pisto > > From cpisto@nmxs.com Tue Feb 17 20:18:44 2004 From: cpisto@nmxs.com (Cody Pisto) Date: Tue, 17 Feb 2004 13:18:44 -0700 Subject: C Api In-Reply-To: References: <402D2F8C.20905@nmxs.com> Message-ID: <40327724.9050100@nmxs.com> I will second the offer of cash, I can probably even get my employer to add additional monetary incentive and coding time... Brad Fitzpatrick wrote: > I keep telling Avva I'll pay him to write one (my motivation being I want > the Perl module to use it, if available) but it hasn't happened yet. > > > On Fri, 13 Feb 2004, Cody Pisto wrote: > > >>Hi everyone, >> >> Just wanted to try and get a consensus on what is happening with >>efforts around the C Client API. I see there are several people >>interested in it & several possibly working on separate implementations. >>I'm interesting in forming an organized group to get it done, any >>takers? :-) >> >>--- >>Cody Pisto >> >> > From brad@danga.com Tue Feb 17 21:08:40 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Tue, 17 Feb 2004 13:08:40 -0800 (PST) Subject: syslog patch In-Reply-To: <200402122152.25572.joachim@bauernberger.org> References: <200402121940.27300.joachim.bauernberger@friendscout24.de> <200402122152.25572.joachim@bauernberger.org> Message-ID: Avva, Go ahead and check this in after you review. I skimmed it and it seems fine. 
On Thu, 12 Feb 2004, Joachim Bauernberger wrote: > Hi, > On Thursday 12 February 2004 20:13, you wrote: > > Joachim, > > > > Can you submit a cleaner patch that just touches the distributed files, > > and not the auto-generated files? > > > > Sorry for the messy patch. Forgot to delete all irrelevant autoconf,automake, > etc stuff before doing the diff.... > > I uploaded a cleaner version: > http://www.bauernberger.org/patches/memcached-1.1.10_syslog.patch.gz > > You might have to run aclocal/atoconf/auotoheader/automake after applying it, > since configure.ac changed ... > > Regards, > ~/joachim > > > I'm having a hard time reading this patch, trying to ignore all the junk. > > > > Thanks! > > > > - Brad > > -- > http://www.bauernberger.org/ > mailto:joachim@bauernberger.org > Tel/Fax: +(49)-0-89/1588 3874 > HP: +(49)-0-179/674 3611 > > From martine@danga.com Tue Feb 17 22:23:49 2004 From: martine@danga.com (Evan Martin) Date: Tue, 17 Feb 2004 14:23:49 -0800 Subject: C Api In-Reply-To: <40327724.9050100@nmxs.com> References: <402D2F8C.20905@nmxs.com> <40327724.9050100@nmxs.com> Message-ID: <20040217222349.GA23409@danga.com> On Tue, Feb 17, 2004 at 01:18:44PM -0700, Cody Pisto wrote: > I will second the offer of cash, I can probably even get my employer to > add additional monetary incentive and coding time... > > Brad Fitzpatrick wrote: > >I keep telling Avva I'll pay him to write one (my motivation being I want > >the Perl module to use it, if available) but it hasn't happened yet. Crikey, the one I wrote was mostly working... I wonder if I can still find the code for it. -- Evan Martin martine@danga.com http://neugierig.org From joachim.bauernberger@friendscout24.de Wed Feb 18 08:27:17 2004 From: joachim.bauernberger@friendscout24.de (Joachim Bauernberger) Date: Wed, 18 Feb 2004 09:27:17 +0100 Subject: Perl API and error handling In-Reply-To: References: <200402131839.35040.joachim.bauernberger@friendscout24.de> Message-ID: <200402180927.17610.joachim.bauernberger@friendscout24.de> Hi Brad, On Tuesday 17 February 2004 19:57, Brad Fitzpatrick wrote: > Generally I've never needed the return code for failing adds/sets. I'd > take a patch, though, if you need it. > One can check the serverlogs if errors are happening inside memcached, howe= ver=20 this requires a separate script that constantly monitors the memcached logs. So I think it's nice if the client can take proper action in case if the=20 memcached server returned out of memory, unable to open fd's or=20 whatever ...... > I'm not sure $! is the best place, however. Perhaps > $memc->err / $memc->errstr like DBI? Yes that would be neat and surely more elegant than $! Thanks, ~/joachim > > On Fri, 13 Feb 2004, Joachim Bauernberger wrote: > > Hi, > > > > I was wondering how you guys trap errors in the perl API. > > If for example $memd->set or $memd->add fails it says that undef is > > returned, but will I know the reason for the failure (maybe in $!). > > > > perldoc Cache::Memcached does not mention this. 
> > > > Thanks & regards, > > ~/joachim > > > > -- > > Phone: +49 (0) 89 490 267 726 > > Fax: +49 (0) 89 490 267 701 > > Mobile: +49 (0) 179 674 3611 > > mailto: joachim.bauernberger@friendscout24.de =A0 =A0 =A0 =A0 > > Web: http://www.friendscout24.de =2D-=20 Phone: +49 (0) 89 490 267 726 =46ax: +49 (0) 89 490 267 701 Mobile: +49 (0) 179 674 3611 mailto: joachim.bauernberger@friendscout24.de =A0 =A0 =A0 =A0=20 Web: http://www.friendscout24.de From joachim.bauernberger@friendscout24.de Wed Feb 18 09:32:00 2004 From: joachim.bauernberger@friendscout24.de (Joachim Bauernberger) Date: Wed, 18 Feb 2004 10:32:00 +0100 Subject: make install Message-ID: <200402181032.00921.joachim.bauernberger@friendscout24.de> Hi, another minor thing I noticed is that when I do make install the memcached.= 1=20 man page does not get installed, probably it's been fortgotten in=20 Makefile.am ... regards, ~/joachim =2D-=20 Phone: +49 (0) 89 490 267 726 =46ax: +49 (0) 89 490 267 701 Mobile: +49 (0) 179 674 3611 mailto: joachim.bauernberger@friendscout24.de =A0 =A0 =A0 =A0=20 Web: http://www.friendscout24.de From Tom.Keeble@citadelgroup.com Wed Feb 18 14:40:40 2004 From: Tom.Keeble@citadelgroup.com (Keeble, Tom) Date: Wed, 18 Feb 2004 08:40:40 -0600 Subject: Object creation/locking issue Message-ID: <74113FC499DAF240AF7A61B15512604E01506338@CORPEMAIL.citadelgroup.com> This is a multi-part message in MIME format. ------_=_NextPart_001_01C3F62D.2E58D504 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit I'd like to ask about the feasibility of a couple of things, including blocking and cache reuse strategy. 1. Requirement for blocking on access to a key: Here is a scenario -- taking usage out of the pure web-interface implementation: Supposing I wish to cache objects that are computationally expensive to generate, ie to the order of a few seconds up to 15 minutes. I can have unique keys, put them in namespaces, and otherwise identify them, but I really do not want to be generating them in parallel, as that chews up resources. So, if a process discovers that the object is not in the cache, it needs to 'lock' that key before going off to generate the object, so that any other process looking up that object are blocked until it has been put in to the cache. Clearly, the block would have to be dropped if that other process dies (or unlocks the key) without generating the object. 2. Caching on mfu, creation expense and time Having a cache capacity limit that drops objects based on their popularity as well as age would be a strong advantage when there are limited memory resources for caching, or under-utilised objects are stored. Better still, storing a 'construction expense' value with an object would permit relatively cheap objects to be purged in order to maintain the most expensive cache elements longer. As it stands, I can't see memcache catering to either of these requirements - is it planned for the future, or is this just not the tool for the job? Regards, Tom ------------------------------------------------------------------------ --------------------- The information contained in this transmission and any attached documents is privileged, confidential and intended only for the use of the individual or entity named above. 
If the reader of this message is not the intended recipient, you are hereby directed not to read the contents of this transmission, and are hereby notified that any disclosure, copying, distribution, dissemination or use of the contents of this transmission, including any attachments, or the taking of any action in reliance thereon, is strictly prohibited. If you have received this communication in error, please notify the sender and/or Citadel Investment Group (Europe) Ltd immediately by telephone at +44 (0) 20 7645 9700 and destroy any copy of this transmission. Citadel Investment Group (Europe) Ltd is authorised and regulated by the Financial Services Authority (FSA Firm Ref No 190260). Registered in England. Registration No. 3666898. Registered Office: 10th Floor, 2 George Yard, Lombard Street, London EC3V 9DH

I'd like to ask about the feasibility = of a couple of things, including blocking and cache reuse = strategy.


1. Requirement for blocking on access = to a key:

Here is a scenario -- taking usage out = of the pure web-interface implementation:

Supposing I wish to cache objects that = are computationally expensive to generate, ie to the order of a few = seconds up to 15 minutes.  I can have unique keys, put them in = namespaces, and otherwise identify them, but I really do not want to be = generating them in parallel, as that chews up = resources…

So, if a process discovers that the = object is not in the cache, it needs to 'lock' that key before going off = to generate the object, so that any other process looking up that object = are blocked until it has been put in to the cache.  Clearly, the = block would have to be dropped if that other process dies (or unlocks = the key) without generating the object…


2. Caching on mfu, creation expense and = time

Having a cache capacity limit that = drops objects based on their popularity as well as age would be a strong = advantage when there are limited memory resources for caching, or = under-utilised objects are stored.  Better still, storing a = 'construction expense' value with an object would permit relatively = cheap objects to be purged in order to maintain the most expensive cache = elements longer.


As it stands, I can't see memcache = catering to either of these requirements - is it planned for the future, = or is this just not the tool for the job?


Regards,
Tom





-------------------------------------------------------------------------= --------------------

The information contained in this transmission and any attached = documents is privileged, confidential and intended only for the use of = the individual or entity named above. If the reader of this message is = not the intended recipient, you are hereby directed not to read the = contents of this transmission, and are hereby notified that any = disclosure, copying, distribution, dissemination or use of the contents = of this transmission, including any attachments, or the taking of any = action in reliance thereon, is strictly prohibited. If you have = received this communication in error, please notify the sender and/or = Citadel Investment Group (Europe) Ltd immediately by telephone at +44 = (0) 20 7645 9700 and destroy any copy of this transmission.

Citadel Investment Group (Europe) Ltd is authorised and regulated by the = Financial Services Authority (FSA Firm Ref No 190260).
Registered in England. Registration No. 3666898.
Registered Office: 10th Floor, 2 George Yard, Lombard Street, London = EC3V 9DH
------_=_NextPart_001_01C3F62D.2E58D504-- From martine@danga.com Wed Feb 18 17:16:53 2004 From: martine@danga.com (Evan Martin) Date: Wed, 18 Feb 2004 09:16:53 -0800 Subject: Object creation/locking issue In-Reply-To: <74113FC499DAF240AF7A61B15512604E01506338@CORPEMAIL.citadelgroup.com> References: <74113FC499DAF240AF7A61B15512604E01506338@CORPEMAIL.citadelgroup.com> Message-ID: <20040218171653.GC25266@danga.com> On Wed, Feb 18, 2004 at 08:40:40AM -0600, Keeble, Tom wrote: > Here is a scenario -- taking usage out of the pure web-interface > implementation: > > Supposing I wish to cache objects that are computationally expensive to > generate, ie to the order of a few seconds up to 15 minutes. I can have > unique keys, put them in namespaces, and otherwise identify them, but I > really do not want to be generating them in parallel, as that chews up > resources. > > So, if a process discovers that the object is not in the cache, it needs > to 'lock' that key before going off to generate the object, so that any > other process looking up that object are blocked until it has been put > in to the cache. Clearly, the block would have to be dropped if that > other process dies (or unlocks the key) without generating the object. When process A locks key K, I assume process B wants to be able to see that A has locked it and have it choose some other process, right? (Otherwise, you may as well have both A and B compute it because they're both going to be waiting that long anyway.) So just add the key "K_lock" to "A", and have all your processes check that before calculating K. (More specifically: "add" will not replace an existing key, so to prevent races each process should add the key, then immediately get it to verify they were the one who managed to grab the lock.) If A decides to give up, it clears K_lock without setting K. Detecting when another process dies is more complicated, and I'm not sure how any system could help you. :) (I guess if there was some way to bind K_lock's lifetime to the lifetime of the A's TCP connection? But memcache definitely doesn't do that.) Just thinking aloud: - One option would be to set K_lock to a timestamp and have B decide A has died after some amount of time. - Another would be to set K_lock to "A", and have A periodically update "A_alive" to the current time. Whenever B pokes K_lock, it follows it to A_alive to make sure that A is still processing. -- Evan Martin martine@danga.com http://neugierig.org From brad@danga.com Wed Feb 18 17:17:10 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Wed, 18 Feb 2004 09:17:10 -0800 (PST) Subject: Object creation/locking issue In-Reply-To: <74113FC499DAF240AF7A61B15512604E01506338@CORPEMAIL.citadelgroup.com> References: <74113FC499DAF240AF7A61B15512604E01506338@CORPEMAIL.citadelgroup.com> Message-ID: Tom, > 1. Requirement for blocking on access to a key: > > Here is a scenario -- taking usage out of the pure web-interface > implementation: > > Supposing I wish to cache objects that are computationally expensive to > generate, ie to the order of a few seconds up to 15 minutes. I can have > unique keys, put them in namespaces, and otherwise identify them, but I > really do not want to be generating them in parallel, as that chews up > resources. > > So, if a process discovers that the object is not in the cache, it needs > to 'lock' that key before going off to generate the object, so that any > other process looking up that object are blocked until it has been put > in to the cache. 
Clearly, the block would have to be dropped if that > other process dies (or unlocks the key) without generating the object. LiveJournal has the same issue. (where we don't want to compute something expensive in parallel). Since we use MySQL, we use MySQL's GET_LOCK() and RELEASE_LOCK(). Our code often looks like:

   object = memcache_get
   if (object) return object

   GET_LOCK("making_object_name")

   object = memcache_get
   if (object) {   # maybe it was made in the meantime
       RELEASE_LOCK()
       return object;
   }

   # make object
   ....

   memcache_set(object)
   RELEASE_LOCK()
   return object

What database are you using? This is probably a common enough requirement that we could push it into memcached. Any volunteers? :) > 2. Caching on mfu, creation expense and time > > Having a cache capacity limit that drops objects based on their > popularity as well as age would be a strong advantage when there are Memcached currently drops the items that were accessed the longest time ago. (well, in each size class, but if you keep your size classes balanced, it's effectively the same thing.) > limited memory resources for caching, or under-utilised objects are > stored. Better still, storing a 'construction expense' value with an > object would permit relatively cheap objects to be purged in order to > maintain the most expensive cache elements longer. Could work. The details would need to be worked out, though. I'm not sure how useful it'd end up being. So you'd do some weighted combination of size, access pattern, and construction expense to discard objects? Could get painful. - Brad
From brad@danga.com Wed Feb 18 17:21:21 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Wed, 18 Feb 2004 09:21:21 -0800 (PST) Subject: Object creation/locking issue In-Reply-To: <20040218171653.GC25266@danga.com> References: <74113FC499DAF240AF7A61B15512604E01506338@CORPEMAIL.citadelgroup.com> <20040218171653.GC25266@danga.com> Message-ID: > Detecting when another process dies is more complicated, and I'm not > sure how any system could help you. :) > (I guess if there was some way to bind K_lock's lifetime to the lifetime > of the A's TCP connection? But memcache definitely doesn't do that.) That'd be easy to add, and would make implementing named locks easiest. We'd make a special flag to set/add/replace that adds the item to the connection's "to delete" list. Conn dies, we go delete all the items it touched in that mode. Then the lock acquirer could just poll. (which is ghetto, but we're talking about the easiest way to implement this) - Brad
From jtitus@postini.com Thu Feb 19 02:20:25 2004 From: jtitus@postini.com (Jason Titus) Date: Wed, 18 Feb 2004 18:20:25 -0800 Subject: Large memory support Message-ID: Looks like we just stumbled onto some 32 bit limits in memcached. Just wondering how you guys want to deal with supporting >2GB per cache. We have experimented with switching some of the unsigned ints into unsigned long longs and it seemed to work (>4GB seemed ok. No thorough testing yet though). A cleaner solution might be adjusting the memory related variables to size_t or some such so that they work well on 32 and 64 bit systems. What makes the most sense to you folks?
Jason From martine@danga.com Thu Feb 19 04:46:47 2004 From: martine@danga.com (Evan Martin) Date: Wed, 18 Feb 2004 20:46:47 -0800 Subject: C API help Message-ID: <20040219044647.GA20714@danga.com> I have my old client that seems to do get and set tolerably well, but I'm not sure it's the API we want: /* this will take in a server list */ MemCacheClient* memcachec_new(void); void* memcachec_get(MemCacheClient *mc, char *key, int *rlen); int memcachec_set(MemCacheClient *mc, char *key, void *val, int len, time_t exp); The problem here is that memcachec_get returns allocated memory, which is fine for your typical day to day applications (and is what Perl does, after all) but for C-level languages it's probably more appropriate to use a client-provided buffer. (Isn't this what Brian Aker wanted for MySQL?) I can think of a few ways to do it: - int memcachec_get(MemCacheClient *mc, char *key, void *buf, int max); which would read at most max bytes and store them into buf, returning how many bytes were read in the int. This is fine if you know how much data you're getting, but what if you don't? - int memcachec_start_get(MemCacheClient *mc, char *key); starts the get, returns number of bytes in the key. int memcachec_read(MemCacheClient *mc, char *key, void *buf, int len); which must be called after start_get, and repeatedly called until you've retrieved len bytes. but that's weird because of all the "you musts". - int memcachec_get(MemCacheClient *mc, char *key, int (*cb)(void *env, void *data, int len), void *env); which repeatedly calls cb with more and more data. or something like that. This looks pretty ugly, but we'll definitely need a function like this for get_multi, unless we want to have it return something like a linked lists of structs (key, value pairs). Maybe I should elaborate on that get_multi comment. If we follow the form of the topmost API, where the we return allocated memory: int get_multi(MemCacheClient *mc, char **keys, int keycount, int (*cb)(void *env, char *key, void *val, int len), void *env); The enduser could easily hook this up to, for example, a hash table, by providing the appropriate cb and env. But callbacks are annoying. But C is annoying, when you're spoiled by nicer languages. :) Any thoughts? -- Evan Martin martine@danga.com http://neugierig.org From martine@danga.com Thu Feb 19 04:51:52 2004 From: martine@danga.com (Evan Martin) Date: Wed, 18 Feb 2004 20:51:52 -0800 Subject: C API help In-Reply-To: <20040219044647.GA20714@danga.com> References: <20040219044647.GA20714@danga.com> Message-ID: <20040219045152.GA21283@danga.com> On Wed, Feb 18, 2004 at 08:46:47PM -0800, Evan Martin wrote: > I have my old client that seems to do get and set tolerably well, but > I'm not sure it's the API we want: Whoops, just noticed the thread from a month ago where this was discussed already. (http://lists.danga.com/pipermail/memcached/2004-January/000450.html) -- Evan Martin martine@danga.com http://neugierig.org From brad@danga.com Thu Feb 19 05:37:36 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Wed, 18 Feb 2004 21:37:36 -0800 (PST) Subject: Large memory support In-Reply-To: References: Message-ID: Jason, I've built and run memcached on 64-bit machines. On a 32-bit machine you won't be able to store more memory than your operating system is giving you address space for. (Not sure if your OS is giving you a 2G/2G, 3G/1G, or 3.5G/0.5G split) I assume you're on 32-bit? Otherwise, isn't an "unsigned long" just a uint64 on 64-bi machines? 
Or should we be using void* somewhere we're using unsigned int? Let me know your OS, arch, and the contents of "stats maps" (if on Linux): $ telnet localhost 11211 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. stats maps 08048000-08050000 r-xp 00000000 03:03 2167117 /usr/local/bin/memcached-1.1.10pre2 08050000-08051000 rw-p 00008000 03:03 2167117 /usr/local/bin/memcached-1.1.10pre2 08051000-09c02000 rwxp 00000000 00:00 0 40000000-40011000 r-xp 00000000 03:03 165006 /lib/ld-2.3.1.so 40011000-40012000 rw-p 00011000 03:03 165006 /lib/ld-2.3.1.so 40017000-4011f000 r-xp 00000000 03:03 165009 /lib/libc-2.3.1.so 4011f000-40125000 rw-p 00107000 03:03 165009 /lib/libc-2.3.1.so 40125000-40128000 rw-p 00000000 00:00 0 40128000-40131000 r-xp 00000000 03:03 165014 /lib/libnss_compat-2.3.1.so 40131000-40132000 rw-p 00009000 03:03 165014 /lib/libnss_compat-2.3.1.so 40132000-40142000 r-xp 00000000 03:03 165013 /lib/libnsl-2.3.1.so 40142000-40143000 rw-p 00010000 03:03 165013 /lib/libnsl-2.3.1.so 40143000-80946000 rw-p 00000000 00:00 0 bfffc000-c0000000 rwxp ffffd000 00:00 0 END - Brad On Wed, 18 Feb 2004, Jason Titus wrote: > Looks like we just stumbled onto some 32 bit limits in memcached. Just > wondering how you guys want to deal with supporting >2GB per cache. We > have experimented with switching some of the unsigned ints into unsigned > long longs and it seemed to work (>4GB seemed ok. No thorough testing > yet though). A cleaner solution might be adjusting the memory related > variables to size_t or some such so that they work well on 32 and 64 bit > systems. > > What makes the most sense to you folks? > > Jason > > From jtitus@postini.com Thu Feb 19 05:41:35 2004 From: jtitus@postini.com (Jason Titus) Date: Wed, 18 Feb 2004 21:41:35 -0800 Subject: Large memory support Message-ID: Actually, this was on RedHat Enterprise Linux 64 on a dual Opteron = system. No matter what I entered as a memory size, it listed 2GB as = maxbytes in STATS. When I added more records after it reached 1.9GB it = would start tossing things. When I replaced the unsigned ints with unsigned long longs, it would = grow past 2GB. Seems like switching to size_t would be the best path. What do you = think? Jason p.s. - Our goal is to have a ~5GB cache with an entire database. We = want to disable expiration (i.e. nothing getting removed out of the = cache) and have the cache be the definitive read only source. It would = have a syncing process to keep it up to date with any changes. -----Original Message----- From: Brad Fitzpatrick [mailto:brad@danga.com] Sent: Wed 2/18/2004 9:37 PM To: Jason Titus Cc: memcached@lists.danga.com Subject: Re: Large memory support =20 Jason, I've built and run memcached on 64-bit machines. On a 32-bit machine you won't be able to store more memory than your operating system is giving you address space for. (Not sure if your OS is giving you a 2G/2G, 3G/1G, or 3.5G/0.5G split) I assume you're on 32-bit? Otherwise, isn't an "unsigned long" just a uint64 on 64-bi machines? Or should we be using void* somewhere we're using unsigned int? Let me know your OS, arch, and the contents of "stats maps" (if on = Linux): $ telnet localhost 11211 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. 
stats maps 08048000-08050000 r-xp 00000000 03:03 2167117 = /usr/local/bin/memcached-1.1.10pre2 08050000-08051000 rw-p 00008000 03:03 2167117 = /usr/local/bin/memcached-1.1.10pre2 08051000-09c02000 rwxp 00000000 00:00 0 40000000-40011000 r-xp 00000000 03:03 165006 /lib/ld-2.3.1.so 40011000-40012000 rw-p 00011000 03:03 165006 /lib/ld-2.3.1.so 40017000-4011f000 r-xp 00000000 03:03 165009 /lib/libc-2.3.1.so 4011f000-40125000 rw-p 00107000 03:03 165009 /lib/libc-2.3.1.so 40125000-40128000 rw-p 00000000 00:00 0 40128000-40131000 r-xp 00000000 03:03 165014 = /lib/libnss_compat-2.3.1.so 40131000-40132000 rw-p 00009000 03:03 165014 = /lib/libnss_compat-2.3.1.so 40132000-40142000 r-xp 00000000 03:03 165013 /lib/libnsl-2.3.1.so 40142000-40143000 rw-p 00010000 03:03 165013 /lib/libnsl-2.3.1.so 40143000-80946000 rw-p 00000000 00:00 0 bfffc000-c0000000 rwxp ffffd000 00:00 0 END - Brad On Wed, 18 Feb 2004, Jason Titus wrote: > Looks like we just stumbled onto some 32 bit limits in memcached. = Just > wondering how you guys want to deal with supporting >2GB per cache. = We > have experimented with switching some of the unsigned ints into = unsigned > long longs and it seemed to work (>4GB seemed ok. No thorough testing > yet though). A cleaner solution might be adjusting the memory related > variables to size_t or some such so that they work well on 32 and 64 = bit > systems. > > What makes the most sense to you folks? > > Jason > > From brad@danga.com Thu Feb 19 05:52:06 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Wed, 18 Feb 2004 21:52:06 -0800 (PST) Subject: Large memory support In-Reply-To: References: Message-ID: Jason, I have access to an Opteron right now, but I haven't done extensive testing on it past building it. I'll try and work on that. What would help, though, is if you could get me: -- a test script to fill up memcached with 5GB of data. (or hell, this is like 10 lines, so I could do it) -- a patch to change things to size_t (or void* or whatever's appropriate) Then I'll test it before/after and review other changes and get it committed. I don't want to commit without personally testing, though. So unsigned int is really just 32-bits on 64 bit archs? Can anybody point me at a reference to what types map to on different archs? - Brad On Wed, 18 Feb 2004, Jason Titus wrote: > Actually, this was on RedHat Enterprise Linux 64 on a dual Opteron system. No matter what I entered as a memory size, it listed 2GB as maxbytes in STATS. When I added more records after it reached 1.9GB it would start tossing things. > > When I replaced the unsigned ints with unsigned long longs, it would grow past 2GB. > > Seems like switching to size_t would be the best path. What do you think? > > Jason > > p.s. - Our goal is to have a ~5GB cache with an entire database. We want to disable expiration (i.e. nothing getting removed out of the cache) and have the cache be the definitive read only source. It would have a syncing process to keep it up to date with any changes. > > -----Original Message----- > From: Brad Fitzpatrick [mailto:brad@danga.com] > Sent: Wed 2/18/2004 9:37 PM > To: Jason Titus > Cc: memcached@lists.danga.com > Subject: Re: Large memory support > > Jason, > > I've built and run memcached on 64-bit machines. > > On a 32-bit machine you won't be able to store more memory than your > operating system is giving you address space for. (Not sure if > your OS is giving you a 2G/2G, 3G/1G, or 3.5G/0.5G split) > > I assume you're on 32-bit? 
Otherwise, isn't an "unsigned long" just a > uint64 on 64-bi machines? Or should we be using void* somewhere we're > using unsigned int? > > Let me know your OS, arch, and the contents of "stats maps" (if on Linux): > > $ telnet localhost 11211 > Trying 127.0.0.1... > Connected to localhost. > Escape character is '^]'. > stats maps > 08048000-08050000 r-xp 00000000 03:03 2167117 /usr/local/bin/memcached-1.1.10pre2 > 08050000-08051000 rw-p 00008000 03:03 2167117 /usr/local/bin/memcached-1.1.10pre2 > 08051000-09c02000 rwxp 00000000 00:00 0 > 40000000-40011000 r-xp 00000000 03:03 165006 /lib/ld-2.3.1.so > 40011000-40012000 rw-p 00011000 03:03 165006 /lib/ld-2.3.1.so > 40017000-4011f000 r-xp 00000000 03:03 165009 /lib/libc-2.3.1.so > 4011f000-40125000 rw-p 00107000 03:03 165009 /lib/libc-2.3.1.so > 40125000-40128000 rw-p 00000000 00:00 0 > 40128000-40131000 r-xp 00000000 03:03 165014 /lib/libnss_compat-2.3.1.so > 40131000-40132000 rw-p 00009000 03:03 165014 /lib/libnss_compat-2.3.1.so > 40132000-40142000 r-xp 00000000 03:03 165013 /lib/libnsl-2.3.1.so > 40142000-40143000 rw-p 00010000 03:03 165013 /lib/libnsl-2.3.1.so > 40143000-80946000 rw-p 00000000 00:00 0 > bfffc000-c0000000 rwxp ffffd000 00:00 0 > END > > > - Brad > > > On Wed, 18 Feb 2004, Jason Titus wrote: > > > Looks like we just stumbled onto some 32 bit limits in memcached. Just > > wondering how you guys want to deal with supporting >2GB per cache. We > > have experimented with switching some of the unsigned ints into unsigned > > long longs and it seemed to work (>4GB seemed ok. No thorough testing > > yet though). A cleaner solution might be adjusting the memory related > > variables to size_t or some such so that they work well on 32 and 64 bit > > systems. > > > > What makes the most sense to you folks? > > > > Jason > > > > > > From mellon@pobox.com Thu Feb 19 06:21:49 2004 From: mellon@pobox.com (Anatoly Vorobey) Date: Thu, 19 Feb 2004 08:21:49 +0200 Subject: Large memory support In-Reply-To: References: Message-ID: <20040219062149.GA17672@pobox.com> On Wed, Feb 18, 2004 at 09:52:06PM -0800, Brad Fitzpatrick wrote: > So unsigned int is really just 32-bits on 64 bit archs? Can anybody point > me at a reference to what types map to on different archs? This seems to be a good doc: http://www.unix-systems.org/version2/whatsnew/lp64_wp.html It appears that Linux on 64bit archs is always LP64 and not ILP64. This suggests that it should be enough to change unsigned int to unsigned long (*not* unsigned long long as Jason did!) in maxbytes and perhaps other limits-related variables in memcache. This wouldn't change anything on 32-bit systems and will give 64-bit limits on 64-bit systems. Warning: I didn't test anything, I'm just theoretizing from my armchair. It's a comfy armchair. -- avva From jtitus@postini.com Thu Feb 19 07:39:38 2004 From: jtitus@postini.com (Jason Titus) Date: Wed, 18 Feb 2004 23:39:38 -0800 Subject: Large memory support Message-ID: Seems like there is much confusion on the zie of a unsigned int w/ = regard to 32 bit/64 bit systems. But we found that on RHEL 3 AMD64, w/ = gcc 3.2.3 20030502 the unsigned ints are 32 bits (the same as RHEL 3 = x86). Other folks seem to have stumbled upon the same info: "> I see you have a background in environments where you move between = 16-=20 > and 32-bit machines. 
Guess what, in Linux the major movement is=20 > between 32- and 64-bit machines, and "unsigned int" is consistent,=20 > whereas "unsigned long" isn't (long is 32 bits on 32-bit machines, 64=20 > bits on 64-bit machines.)=20 " http://www.ussg.iu.edu/hypermail/linux/kernel/0112.1/0186.html I'd propose making it clearer and using size_t for the memory size = related values. From Sun's web site = (http://developers.sun.com/solaris/articles/solarisupgrade/64bit/Convert.= html) : " Derived Types Using the system derived types helps make code 32-bit and 64-bit safe, = since the derived types themselves must be safe for both the ILP32 and = LP64 data models. In general, using derived types to allow for change is = good programming practice. Should the data model change in the future, = or when porting to a different platform, only the system derived types = need to change rather than the application. The system include files and , which contain = constants, macros, and derived types that are helpful in making = applications 32-bit and 64-bit safe. An application source file that includes makes the = definitions of _LP64 and _ILP32 available through inclusion of = . This header also contains a number of basic derived = types that should be used whenever appropriate. In particular, the = following are of special interest: Type Purpose clock_t - Represents the system times in clock ticks.=A0 dev_t - Used for device numbers.=A0=A0 off_t - Used for file sizes and offsets.=A0 ptrdiff_t - The signed integral type for the result of subtracting two = pointers.=A0 size_t - The size, in bytes, of objects in memory.=A0 ssize_t - Used by functions that return a count of bytes or an error = indication. time_t - Used for time in seconds.=A0 " Here is a first pass diff: [linux]$ diff memcached-bigmem-1.1.10/ memcached-1.1.10 diff memcached-bigmem-1.1.10/memcached.c memcached-1.1.10/memcached.c 339c339 < pos +=3D sprintf(pos, "STAT limit_maxbytes %llu\r\n", = settings.maxbytes); --- > pos +=3D sprintf(pos, "STAT limit_maxbytes %u\r\n", = settings.maxbytes); 1282c1282 < settings.maxbytes =3D (size_t) atoi(optarg)* (size_t) = 1024* (size_t) 1024; --- > settings.maxbytes =3D atoi(optarg)*1024*1024; diff memcached-bigmem-1.1.10/memcached.h memcached-1.1.10/memcached.h 13c13 < size_t curr_bytes; --- > unsigned long long curr_bytes; 27c27 < size_t maxbytes; --- > unsigned int maxbytes; 150c150 < void slabs_init(size_t limit); --- > void slabs_init(unsigned int limit); 154c154 < unsigned int slabs_clsid(size_t size); --- > unsigned int slabs_clsid(unsigned int size); 157c157 < void *slabs_alloc(size_t size); --- > void *slabs_alloc(unsigned int size); 160c160 < void slabs_free(void *ptr, size_t size); --- > void slabs_free(void *ptr, unsigned int size); diff memcached-bigmem-1.1.10/slabs.c memcached-1.1.10/slabs.c 52,53c52,53 < static size_t mem_limit =3D 0; < static size_t mem_malloced =3D 0; --- > static unsigned int mem_limit =3D 0; > static unsigned int mem_malloced =3D 0; 55c55 < unsigned int slabs_clsid(size_t size) { --- > unsigned int slabs_clsid(unsigned int size) { 70c70 < void slabs_init(size_t limit) { --- > void slabs_init(unsigned int limit) { 91c91 < size_t new_size =3D p->list_size ? p->list_size * 2 : 16; --- > unsigned int new_size =3D p->list_size ? 
p->list_size * 2 : = 16; 123c123 < void *slabs_alloc(size_t size) { --- > void *slabs_alloc(unsigned int size) { 163c163 < void slabs_free(void *ptr, size_t size) { --- > void slabs_free(void *ptr, unsigned int size) { Not sure if all of the changes are necessary, but combined they seem to = work. I just changed all of the areas that seemed to deal with memory = limits. Jason -----Original Message----- From: Brad Fitzpatrick [mailto:brad@danga.com] Sent: Wed 2/18/2004 9:52 PM To: Jason Titus Cc: memcached@lists.danga.com Subject: RE: Large memory support =20 Jason, I have access to an Opteron right now, but I haven't done extensive testing on it past building it. I'll try and work on that. What would help, though, is if you could get me: -- a test script to fill up memcached with 5GB of data. (or hell, this = is like 10 lines, so I could do it) -- a patch to change things to size_t (or void* or whatever's = appropriate) Then I'll test it before/after and review other changes and get it committed. I don't want to commit without personally testing, though. So unsigned int is really just 32-bits on 64 bit archs? Can anybody = point me at a reference to what types map to on different archs? - Brad On Wed, 18 Feb 2004, Jason Titus wrote: > Actually, this was on RedHat Enterprise Linux 64 on a dual Opteron = system. No matter what I entered as a memory size, it listed 2GB as = maxbytes in STATS. When I added more records after it reached 1.9GB it = would start tossing things. > > When I replaced the unsigned ints with unsigned long longs, it would = grow past 2GB. > > Seems like switching to size_t would be the best path. What do you = think? > > Jason > > p.s. - Our goal is to have a ~5GB cache with an entire database. We = want to disable expiration (i.e. nothing getting removed out of the = cache) and have the cache be the definitive read only source. It would = have a syncing process to keep it up to date with any changes. > > -----Original Message----- > From: Brad Fitzpatrick [mailto:brad@danga.com] > Sent: Wed 2/18/2004 9:37 PM > To: Jason Titus > Cc: memcached@lists.danga.com > Subject: Re: Large memory support > > Jason, > > I've built and run memcached on 64-bit machines. > > On a 32-bit machine you won't be able to store more memory than your > operating system is giving you address space for. (Not sure if > your OS is giving you a 2G/2G, 3G/1G, or 3.5G/0.5G split) > > I assume you're on 32-bit? Otherwise, isn't an "unsigned long" just a > uint64 on 64-bi machines? Or should we be using void* somewhere we're > using unsigned int? > > Let me know your OS, arch, and the contents of "stats maps" (if on = Linux): > > $ telnet localhost 11211 > Trying 127.0.0.1... > Connected to localhost. > Escape character is '^]'. 
> stats maps > 08048000-08050000 r-xp 00000000 03:03 2167117 = /usr/local/bin/memcached-1.1.10pre2 > 08050000-08051000 rw-p 00008000 03:03 2167117 = /usr/local/bin/memcached-1.1.10pre2 > 08051000-09c02000 rwxp 00000000 00:00 0 > 40000000-40011000 r-xp 00000000 03:03 165006 /lib/ld-2.3.1.so > 40011000-40012000 rw-p 00011000 03:03 165006 /lib/ld-2.3.1.so > 40017000-4011f000 r-xp 00000000 03:03 165009 /lib/libc-2.3.1.so > 4011f000-40125000 rw-p 00107000 03:03 165009 /lib/libc-2.3.1.so > 40125000-40128000 rw-p 00000000 00:00 0 > 40128000-40131000 r-xp 00000000 03:03 165014 = /lib/libnss_compat-2.3.1.so > 40131000-40132000 rw-p 00009000 03:03 165014 = /lib/libnss_compat-2.3.1.so > 40132000-40142000 r-xp 00000000 03:03 165013 /lib/libnsl-2.3.1.so > 40142000-40143000 rw-p 00010000 03:03 165013 /lib/libnsl-2.3.1.so > 40143000-80946000 rw-p 00000000 00:00 0 > bfffc000-c0000000 rwxp ffffd000 00:00 0 > END > > > - Brad > > > On Wed, 18 Feb 2004, Jason Titus wrote: > > > Looks like we just stumbled onto some 32 bit limits in memcached. = Just > > wondering how you guys want to deal with supporting >2GB per cache. = We > > have experimented with switching some of the unsigned ints into = unsigned > > long longs and it seemed to work (>4GB seemed ok. No thorough = testing > > yet though). A cleaner solution might be adjusting the memory = related > > variables to size_t or some such so that they work well on 32 and 64 = bit > > systems. > > > > What makes the most sense to you folks? > > > > Jason > > > > > > From jmat@shutdown.net Thu Feb 19 16:59:56 2004 From: jmat@shutdown.net (Justin Matlock) Date: Thu, 19 Feb 2004 11:59:56 -0500 Subject: epoll In-Reply-To: Message-ID: <200402191659.i1JGxw1I007284@mail.shutdown.net> Is there anyway to easily tell if memcached is actually using epoll? I'm using the gentoo emerge, and this is what I saw during libevent's compile, which concerned me... RTSIG Skipping test EPOLL Skipping test Kernel: Linux cartman 2.6.3-rc3-gentoo #6 SMP Thu Feb 19 02:11:26 GMT 2004 i686 AMD Athlon(tm) MP 2400+ AuthenticAMD GNU/Linux Libevent: libevent-0.7c Memcached from gentoo: 1.10 From jamie@mccarthy.vg Thu Feb 19 18:08:44 2004 From: jamie@mccarthy.vg (Jamie McCarthy) Date: Thu, 19 Feb 2004 13:08:44 -0500 Subject: epoll In-Reply-To: <200402191659.i1JGxw1I007284@mail.shutdown.net> Message-ID: jmat@shutdown.net (Justin Matlock) writes: > Is there anyway to easily tell if memcached is actually using epoll? Yep, set the EVENT_SHOW_METHOD environment var: $ EVENT_SHOW_METHOD=3D1 /usr/local/bin/memcached -l 11212 libevent using: epoll --=20 Jamie McCarthy http://mccarthy.vg/ jamie@mccarthy.vg From mellon@pobox.com Thu Feb 19 18:29:20 2004 From: mellon@pobox.com (Anatoly Vorobey) Date: Thu, 19 Feb 2004 20:29:20 +0200 Subject: epoll In-Reply-To: <200402191659.i1JGxw1I007284@mail.shutdown.net> References: <200402191659.i1JGxw1I007284@mail.shutdown.net> Message-ID: <20040219182920.GA20906@pobox.com> On Thu, Feb 19, 2004 at 11:59:56AM -0500, Justin Matlock wrote: > Is there anyway to easily tell if memcached is actually using epoll? Yes, run it with EVENT_SHOW_METHOD set in the environment: $ EVENT_SHOW_METHOD=1 ./memcached It'll print the method used, whether epoll or anything else. 
-- avva From brad@danga.com Thu Feb 19 19:40:08 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Thu, 19 Feb 2004 11:40:08 -0800 (PST) Subject: make install In-Reply-To: <200402181032.00921.joachim.bauernberger@friendscout24.de> References: <200402181032.00921.joachim.bauernberger@friendscout24.de> Message-ID: I'll take a patch. I don't do the autofoo. On Wed, 18 Feb 2004, Joachim Bauernberger wrote: > Hi, > > another minor thing I noticed is that when I do make install the memcache= d.1 > man page does not get installed, probably it's been fortgotten in > Makefile.am ... > > regards, > ~/joachim > -- > Phone: +49 (0) 89 490 267 726 > Fax: +49 (0) 89 490 267 701 > Mobile: +49 (0) 179 674 3611 > mailto: joachim.bauernberger@friendscout24.de =A0 =A0 =A0 =A0 > Web: http://www.friendscout24.de > > From jamie@mccarthy.vg Thu Feb 19 19:53:20 2004 From: jamie@mccarthy.vg (Jamie McCarthy) Date: Thu, 19 Feb 2004 14:53:20 -0500 Subject: Any other uses of memcached? Message-ID: I'm talking about memcached to my local LUG next week. I wondered if anyone had any ideas for less-than-obvious uses for it. Its most obvious use is caching DB query results. A second use that I can think of is storing data for large-scale numerical processing, perhaps scientific modelling, where many CPUs have to work together on gigabytes of data, and the results of the number-crunching are small enough to be written to disk periodically (so if the power goes out, the most you lose is a few hours' work). Any other ideas? --=20 Jamie McCarthy http://mccarthy.vg/ jamie@mccarthy.vg From Kyle R. Burton" References: <20040210184353.GX22116@paranoia.neverlight.com> Message-ID: <20040219200333.GK22116@paranoia.neverlight.com> > I've thought about the mutex case myself (wanting it for LJ) but each time > I realize memcached already provides enough primitives to build that API > on top of it. > > (well, sort of) > > The idea is you use the "add" command to set a key (mutex name) to some > value. The add command returns "STORED" or "NOT_STORED", but it's atomic, > so two callers won't both pass. Then your app can just poll (re-trying > the lock once a second) until it gets it. And when it's done, die. > > Now, this is lame for a couple reasons: > > -- it polls. Polling works, but is, as you have said, an undesirable solution. Polling doesn't tend to scale with larger numbers of workers - the 'pings' will increase with the number of workers. Though in most scenarios, mutex/key collisions should be infrequent. Also, the poll frequency basically ends up being designed-in latency in the system, which is also undesirable for cluster computing where those types of delays start to add up. > -- the key/value pair in memcached you're using as your mutex state > could be pushed out of memory. there's no way to pin items > inside memcached. (that'd be easy to add, though) > > -- if the client dies before unlocking the mutex, it's never unlocked. > (except of course by time). > > A better solution (if you need something quick) is run a bunch of MySQL > servers and use GET_LOCK() and RELEASE_LOCK() which have the semantics you > want already, and is Free. (unlike Oracle) That is very interesting, I was unaware of the GET_LOCK/RELEASE_LOCK functions in MySQL. They provide a very simple, easy to use arbitrary mutex system. Thank you for making me aware of them. 
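A rough Perl sketch of the pattern Brad describes, using DBI plus Cache::Memcached (the server address, DSN, lock name, and build_object() below are placeholders for illustration, not code from either of our systems):

   use DBI;
   use Cache::Memcached;

   # placeholders: adjust servers and DSN for your setup
   my $memd = Cache::Memcached->new({ servers => [ "10.0.0.1:11211" ] });
   my $dbh  = DBI->connect("DBI:mysql:database=test;host=dbhost", "user", "pass",
                           { RaiseError => 1 });

   sub get_expensive_object {
       my $key = shift;

       # fast path: someone else already built and cached it
       my $obj = $memd->get($key);
       return $obj if defined $obj;

       # serialize builders on a named MySQL lock (wait up to 30 seconds)
       my ($got) = $dbh->selectrow_array("SELECT GET_LOCK(?, 30)", undef, "making_$key");
       die "couldn't get lock for $key" unless $got;

       # maybe it was built while we waited for the lock
       $obj = $memd->get($key);
       unless (defined $obj) {
           $obj = build_object($key);   # the expensive part (placeholder)
           $memd->set($key, $obj);
       }

       $dbh->selectrow_array("SELECT RELEASE_LOCK(?)", undef, "making_$key");
       return $obj;
   }

Since GET_LOCK() blocks inside MySQL rather than requiring the client to poll, it sidesteps the poll-latency concern raised earlier.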
I suppose for scaling you could do key hashing as the memcached client api does to support multiple servers...though clients having to know the list of all servers is also somewhat undesirable. > I don't quite follow your second need, but it sounds like you're just > trying to build lists of items? You can already do that with memcached as > well... no changes should be necessary. Yes, unique lists of items. The process is analogous to a 'SELECT DISTINCT' database operation. In the cluster environment, basically the first worker to start processing a given key marks it, then any other workers that receive that same key as a unit of work should discard it. That way only the distinct set of unique (based on the key) work units is processed. > You just want to build named sets you can enumerate? (a set is a bag > with the constraint that items are unique) For our purposes, we have a [large] set of work units to be processed. Part of processing each individual work unit is generating a unique key based on the content (it is expensive enough an operation that performance is better if the key generation is farmed out), and then other calculations are performed on the data. Once both parts are finished, the results are accumulated by a central process. > Just use keys, say: > > _items = 2 > _item_1 = key_foo > _item_2 = key_bar > _key_foo = > _key_bar = > > Now, to add to a set: > > "add set_name_key_baz 0 0 0 ... " > > if it returns "NOT_STORED", the item is already in the set. if it returns > "STORED", then do: > > "add set_names_items = 0" (initialize the item count) > "incr set_name_items" (increment the item count) > > Now, "incr" command returns the new value. So say it returns "3" > > Now: > > "set set_name_item_3 = key_baz" > > (pseudo protocol talk above) > > So yeah, you can already do that. The only concern, again, is items > falling out. Maybe we need an attribute (or server-global setting) to not > dump items. > > Would that be sufficient? For enumeration yes, but enumeration does not exactly meet my needs. Thank you for your reply. Kyle R. Burton -- ------------------------------------------------------------------------------ Wisdom and Compassion are inseparable. -- Christmas Humphreys mortis@voicenet.com http://www.voicenet.com/~mortis ------------------------------------------------------------------------------ From jmat@shutdown.net Thu Feb 19 20:17:20 2004 From: jmat@shutdown.net (Justin Matlock) Date: Thu, 19 Feb 2004 15:17:20 -0500 Subject: epoll In-Reply-To: <20040219182920.GA20906@pobox.com> Message-ID: <200402192017.i1JKHM1I015061@mail.shutdown.net> Yuck. Okay.. Thanks. Apparently the gentoo portage install uses standard poll, even on 2.6.3. *grumble* :) J -----Original Message----- From: memcached-admin@lists.danga.com [mailto:memcached-admin@lists.danga.com] On Behalf Of Anatoly Vorobey Sent: Thursday, February 19, 2004 1:29 PM To: memcached@lists.danga.com Subject: Re: epoll On Thu, Feb 19, 2004 at 11:59:56AM -0500, Justin Matlock wrote: > Is there anyway to easily tell if memcached is actually using epoll? Yes, run it with EVENT_SHOW_METHOD set in the environment: $ EVENT_SHOW_METHOD=1 ./memcached It'll print the method used, whether epoll or anything else. -- avva From jmat@shutdown.net Thu Feb 19 20:21:02 2004 From: jmat@shutdown.net (Justin Matlock) Date: Thu, 19 Feb 2004 15:21:02 -0500 Subject: Any other uses of memcached? In-Reply-To: Message-ID: <200402192021.i1JKL51I015218@mail.shutdown.net> I've used it for sharing temporary data between webservers. 
For example, should I need to flip the site into read-only mode, and write update/insert/delete SQL down to a disklog instead of the database (mySQL replication seems to screw up a lot, so I have to do this every now and then), I set a key "SITE_READ_ONLY" to "1"; the separate webservers check for this on every page load. Since they're already querying memcached for userdata, there's no real performance hit here. It's a lot faster than doing a seek on a NFS-based file. Granted, it's not the smartest thing to do, since that key could theoretically disappear on its own... :) I *was* using it for session data, until Brad pointed out how stupid of an idea that was. ;) J -----Original Message----- From: memcached-admin@lists.danga.com [mailto:memcached-admin@lists.danga.com] On Behalf Of Jamie McCarthy Sent: Thursday, February 19, 2004 2:53 PM To: memcached@lists.danga.com Subject: Any other uses of memcached? I'm talking about memcached to my local LUG next week. I wondered if anyone had any ideas for less-than-obvious uses for it. Its most obvious use is caching DB query results. A second use that I can think of is storing data for large-scale numerical processing, perhaps scientific modelling, where many CPUs have to work together on gigabytes of data, and the results of the number-crunching are small enough to be written to disk periodically (so if the power goes out, the most you lose is a few hours' work). Any other ideas? -- Jamie McCarthy http://mccarthy.vg/ jamie@mccarthy.vg From jmat@shutdown.net Thu Feb 19 20:46:17 2004 From: jmat@shutdown.net (Justin Matlock) Date: Thu, 19 Feb 2004 15:46:17 -0500 Subject: epoll In-Reply-To: <20040219182920.GA20906@pobox.com> Message-ID: <200402192046.i1JKkJ1I015727@mail.shutdown.net> Never mind - problem solved. Gentoo doesn't automatically update the headers in /usr/include/linux to match the current kernel, so I was still running 2.4 headers with the 2.6 kernel (I'm surprised more things didn't break). After I emerged linux-headers-2.6.1 and re-emerging libevent and memcached, it started using epoll. J -----Original Message----- From: memcached-admin@lists.danga.com [mailto:memcached-admin@lists.danga.com] On Behalf Of Anatoly Vorobey Sent: Thursday, February 19, 2004 1:29 PM To: memcached@lists.danga.com Subject: Re: epoll On Thu, Feb 19, 2004 at 11:59:56AM -0500, Justin Matlock wrote: > Is there anyway to easily tell if memcached is actually using epoll? Yes, run it with EVENT_SHOW_METHOD set in the environment: $ EVENT_SHOW_METHOD=1 ./memcached It'll print the method used, whether epoll or anything else. -- avva From brad@danga.com Thu Feb 19 21:22:23 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Thu, 19 Feb 2004 13:22:23 -0800 (PST) Subject: Any other uses of memcached? In-Reply-To: References: Message-ID: I'm giving a few talks on memcached and writing an article for LinuxJournal coming up here, so I too would be interested in who's using it, and how.... On Thu, 19 Feb 2004, Jamie McCarthy wrote: > I'm talking about memcached to my local LUG next week. I wondered > if anyone had any ideas for less-than-obvious uses for it. > > Its most obvious use is caching DB query results. > > A second use that I can think of is storing data for large-scale > numerical processing, perhaps scientific modelling, where many CPUs > have to work together on gigabytes of data, and the results of the > number-crunching are small enough to be written to disk periodically > (so if the power goes out, the most you lose is a few hours' work). 
> > Any other ideas? > -- > Jamie McCarthy > http://mccarthy.vg/ > jamie@mccarthy.vg > > From chris@paymentonline.com Thu Feb 19 21:44:14 2004 From: chris@paymentonline.com (Chris Ochs) Date: Thu, 19 Feb 2004 13:44:14 -0800 Subject: Any other uses of memcached? References: Message-ID: <01bc01c3f731$84a686b0$250a8b0a@chris> We started using memcached in our development branch a couple of months ago, due to go into production next month when we release a new version of our service. For the most part we are using it as a database cache for the various address/fraud checks that our system does during a transaction (where speed is critical), and for caching static configuration data. Chris Ochs Payment Online Inc ----- Original Message ----- From: "Brad Fitzpatrick" To: "Jamie McCarthy" Cc: Sent: Thursday, February 19, 2004 1:22 PM Subject: Re: Any other uses of memcached? > I'm giving a few talks on memcached and writing an article for > LinuxJournal coming up here, so I too would be interested in who's using > it, and how.... > > > > On Thu, 19 Feb 2004, Jamie McCarthy wrote: > > > I'm talking about memcached to my local LUG next week. I wondered > > if anyone had any ideas for less-than-obvious uses for it. > > > > Its most obvious use is caching DB query results. > > > > A second use that I can think of is storing data for large-scale > > numerical processing, perhaps scientific modelling, where many CPUs > > have to work together on gigabytes of data, and the results of the > > number-crunching are small enough to be written to disk periodically > > (so if the power goes out, the most you lose is a few hours' work). > > > > Any other ideas? > > -- > > Jamie McCarthy > > http://mccarthy.vg/ > > jamie@mccarthy.vg > > > > > From rj@last.fm Thu Feb 19 22:01:04 2004 From: rj@last.fm (Richard Jones) Date: Thu, 19 Feb 2004 22:01:04 +0000 Subject: Any other uses of memcached? In-Reply-To: References: Message-ID: <40353220.2030707@last.fm> Hi We're using memcache at www.last.fm to pass data between a java streaming server and the php website. The streamer puts the name of the song it's playing in memcache with the username as part of the key; php reads it on every page load to show you what you are listening to. We used to have to hit the database to do this, which was rather obscene. Hitting memcache on every page load is much nicer :) We also use it for the audioscrobbler.com submission system: because we receive a high volume of submissions, we use memcache to store authentication details and session keys in memory; again, this was a massive strain on the database before memcache came along. RJ From martine@danga.com Thu Feb 19 22:46:11 2004 From: martine@danga.com (Evan Martin) Date: Thu, 19 Feb 2004 14:46:11 -0800 Subject: make install In-Reply-To: References: <200402181032.00921.joachim.bauernberger@friendscout24.de> Message-ID: <20040219224611.GA21335@danga.com> On Thu, Feb 19, 2004 at 11:40:08AM -0800, Brad Fitzpatrick wrote: > I'll take a patch. I don't do the autofoo. Now in CVS. -- Evan Martin martine@danga.com http://neugierig.org From jtitus@postini.com Fri Feb 20 17:19:02 2004 From: jtitus@postini.com (Jason Titus) Date: Fri, 20 Feb 2004 09:19:02 -0800 Subject: Large memory support Message-ID: Did I give you what you needed for the large memory patch? We are using it now and it seems to work well w/ caches of 4GB and more. Should be clean as well since the size_t type will still be 32 bits on 32 bit architectures.
Did you still need a script to fill up a cache with more than 2GB of data? Let me know if I need to get you anything else. Thanks for the great tool, Jason p.s. - here is a cleaner patch I made with 'diff -c -b -r memcached-1.1.10 memcached-bigmem-1.1.10/' -------- diff -c -b -r memcached-1.1.10/memcached.c memcached-bigmem-1.1.10/memcached.c *** memcached-1.1.10/memcached.c 2003-12-04 09:50:57.000000000 -0800 --- memcached-bigmem-1.1.10/memcached.c 2004-02-18 23:27:39.000000000 -0800 *************** *** 336,342 **** pos += sprintf(pos, "STAT get_misses %u\r\n", stats.get_misses); pos += sprintf(pos, "STAT bytes_read %llu\r\n", stats.bytes_read); pos += sprintf(pos, "STAT bytes_written %llu\r\n", stats.bytes_written); ! pos += sprintf(pos, "STAT limit_maxbytes %u\r\n", settings.maxbytes); pos += sprintf(pos, "END"); out_string(c, temp); return; --- 336,342 ---- pos += sprintf(pos, "STAT get_misses %u\r\n", stats.get_misses); pos += sprintf(pos, "STAT bytes_read %llu\r\n", stats.bytes_read); pos += sprintf(pos, "STAT bytes_written %llu\r\n", stats.bytes_written); ! pos += sprintf(pos, "STAT limit_maxbytes %llu\r\n", settings.maxbytes); pos += sprintf(pos, "END"); out_string(c, temp); return; *************** *** 1279,1285 **** settings.port = atoi(optarg); break; case 'm': ! settings.maxbytes = atoi(optarg)*1024*1024; break; case 'c': settings.maxconns = atoi(optarg); --- 1279,1285 ---- settings.port = atoi(optarg); break; case 'm': ! settings.maxbytes = (size_t) atoi(optarg)* (size_t) 1024* (size_t) 1024; break; case 'c': settings.maxconns = atoi(optarg); diff -c -b -r memcached-1.1.10/memcached.h memcached-bigmem-1.1.10/memcached.h *** memcached-1.1.10/memcached.h 2003-12-04 09:50:57.000000000 -0800 --- memcached-bigmem-1.1.10/memcached.h 2004-02-18 23:11:28.000000000 -0800 *************** *** 10,16 **** struct stats { unsigned int curr_items; unsigned int total_items; ! unsigned long long curr_bytes; unsigned int curr_conns; unsigned int total_conns; unsigned int conn_structs; --- 10,16 ---- struct stats { unsigned int curr_items; unsigned int total_items; ! size_t curr_bytes; unsigned int curr_conns; unsigned int total_conns; unsigned int conn_structs; *************** *** 24,30 **** }; struct settings { ! unsigned int maxbytes; int maxconns; int port; struct in_addr interface; --- 24,30 ---- }; struct settings { ! size_t maxbytes; int maxconns; int port; struct in_addr interface; *************** *** 147,163 **** /* slabs memory allocation */ /* Init the subsystem. The argument is the limit on no. of bytes to allocate, 0 if no limit */ ! void slabs_init(unsigned int limit); /* Given object size, return id to use when allocating/freeing memory for object */ /* 0 means error: can't store such a large object */ ! unsigned int slabs_clsid(unsigned int size); /* Allocate object of given length.
0 on error */ ! void *slabs_alloc(unsigned int size); /* Free previously allocated object */ ! void slabs_free(void *ptr, unsigned int size); /* Fill buffer with stats */ char* slabs_stats(int *buflen); --- 147,163 ---- /* slabs memory allocation */ /* Init the subsystem. The argument is the limit on no. of bytes to allocate, 0 if no limit */ ! void slabs_init(size_t limit); /* Given object size, return id to use when allocating/freeing memory for object */ /* 0 means error: can't store such a large object */ ! unsigned int slabs_clsid(size_t size); /* Allocate object of given length. 0 on error */ ! void *slabs_alloc(size_t size); /* Free previously allocated object */ ! void slabs_free(void *ptr, size_t size); /* Fill buffer with stats */ char* slabs_stats(int *buflen); Only in memcached-bigmem-1.1.10/: mem.log diff -c -b -r memcached-1.1.10/slabs.c memcached-bigmem-1.1.10/slabs.c *** memcached-1.1.10/slabs.c 2003-09-05 15:37:36.000000000 -0700 --- memcached-bigmem-1.1.10/slabs.c 2004-02-18 23:08:52.000000000 -0800 *************** *** 49,58 **** } slabclass_t; static slabclass_t slabclass[POWER_LARGEST+1]; ! static unsigned int mem_limit = 0; ! static unsigned int mem_malloced = 0; ! unsigned int slabs_clsid(unsigned int size) { int res = 1; if(size==0) --- 49,58 ---- } slabclass_t; static slabclass_t slabclass[POWER_LARGEST+1]; ! static size_t mem_limit = 0; ! static size_t mem_malloced = 0; ! unsigned int slabs_clsid(size_t size) { int res = 1; if(size==0) *************** *** 67,73 **** return res; } ! void slabs_init(unsigned int limit) { int i; int size=1; --- 67,73 ---- return res; } ! void slabs_init(size_t limit) { int i; int size=1; *************** *** 88,94 **** static int grow_slab_list (unsigned int id) { slabclass_t *p = &slabclass[id]; if (p->slabs == p->list_size) { ! unsigned int new_size = p->list_size ? p->list_size * 2 : 16; void *new_list = realloc(p->slab_list, new_size*sizeof(void*)); if (new_list == 0) return 0; p->list_size = new_size; --- 88,94 ---- static int grow_slab_list (unsigned int id) { slabclass_t *p = &slabclass[id]; if (p->slabs == p->list_size) { ! size_t new_size = p->list_size ? p->list_size * 2 : 16; void *new_list = realloc(p->slab_list, new_size*sizeof(void*)); if (new_list == 0) return 0; p->list_size = new_size; *************** *** 120,126 **** return 1; } ! void *slabs_alloc(unsigned int size) { slabclass_t *p; unsigned char id = slabs_clsid(size); --- 120,126 ---- return 1; } ! void *slabs_alloc(size_t size) { slabclass_t *p; unsigned char id = slabs_clsid(size); *************** *** 160,166 **** return 0; /* shouldn't ever get here */ } ! void slabs_free(void *ptr, unsigned int size) { unsigned char id = slabs_clsid(size); slabclass_t *p; --- 160,166 ---- return 0; /* shouldn't ever get here */ } ! void slabs_free(void *ptr, size_t size) { unsigned char id = slabs_clsid(size); slabclass_t *p; Only in memcached-bigmem-1.1.10/: stamp-h From brad@danga.com Fri Feb 20 17:24:29 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Fri, 20 Feb 2004 09:24:29 -0800 (PST) Subject: Large memory support In-Reply-To: References: Message-ID: Would you mind resending this in diff -u format? On Fri, 20 Feb 2004, Jason Titus wrote: > Did I give you what you needed for the large memory patch? We are using it now and it seems to work well w/ caches of 4GB and more. Should be clean as well since the size_t type will still be 32 bits on 32 bit architectures. > > Did you still need a script to fill up a cache with more than 2GB of data? > > Let me know if I need to get you anything else. > > Thanks for the great tool, > Jason > > p.s.
- here is a cleaner patch I made with 'diff -c -b -r memcached-1.1.10 memcached-bigmem-1.1.10/' > > -------- > > diff -c -b -r memcached-1.1.10/memcached.c memcached-bigmem-1.1.10/memcached.c > *** memcached-1.1.10/memcached.c 2003-12-04 09:50:57.000000000 -0800 > --- memcached-bigmem-1.1.10/memcached.c 2004-02-18 23:27:39.000000000 -0800 > *************** > *** 336,342 **** > pos += sprintf(pos, "STAT get_misses %u\r\n", stats.get_misses); > pos += sprintf(pos, "STAT bytes_read %llu\r\n", stats.bytes_read); > pos += sprintf(pos, "STAT bytes_written %llu\r\n", stats.bytes_written); > ! pos += sprintf(pos, "STAT limit_maxbytes %u\r\n", settings.maxbytes); > pos += sprintf(pos, "END"); > out_string(c, temp); > return; > --- 336,342 ---- > pos += sprintf(pos, "STAT get_misses %u\r\n", stats.get_misses); > pos += sprintf(pos, "STAT bytes_read %llu\r\n", stats.bytes_read); > pos += sprintf(pos, "STAT bytes_written %llu\r\n", stats.bytes_written); > ! pos += sprintf(pos, "STAT limit_maxbytes %llu\r\n", settings.maxbytes); > pos += sprintf(pos, "END"); > out_string(c, temp); > return; > *************** > *** 1279,1285 **** > settings.port = atoi(optarg); > break; > case 'm': > ! settings.maxbytes = atoi(optarg)*1024*1024; > break; > case 'c': > settings.maxconns = atoi(optarg); > --- 1279,1285 ---- > settings.port = atoi(optarg); > break; > case 'm': > ! settings.maxbytes = (size_t) atoi(optarg)* (size_t) 1024* (size_t) 1024; > break; > case 'c': > settings.maxconns = atoi(optarg); > diff -c -b -r memcached-1.1.10/memcached.h memcached-bigmem-1.1.10/memcached.h > *** memcached-1.1.10/memcached.h 2003-12-04 09:50:57.000000000 -0800 > --- memcached-bigmem-1.1.10/memcached.h 2004-02-18 23:11:28.000000000 -0800 > *************** > *** 10,16 **** > struct stats { > unsigned int curr_items; > unsigned int total_items; > ! unsigned long long curr_bytes; > unsigned int curr_conns; > unsigned int total_conns; > unsigned int conn_structs; > --- 10,16 ---- > struct stats { > unsigned int curr_items; > unsigned int total_items; > ! size_t curr_bytes; > unsigned int curr_conns; > unsigned int total_conns; > unsigned int conn_structs; > *************** > *** 24,30 **** > }; > > struct settings { > ! unsigned int maxbytes; > int maxconns; > int port; > struct in_addr interface; > --- 24,30 ---- > }; > > struct settings { > ! size_t maxbytes; > int maxconns; > int port; > struct in_addr interface; > *************** > *** 147,163 **** > /* slabs memory allocation */ > > /* Init the subsystem. The argument is the limit on no. of bytes to allocate, 0 if no limit */ > ! void slabs_init(unsigned int limit); > > /* Given object size, return id to use when allocating/freeing memory for object */ > /* 0 means error: can't store such a large object */ > ! unsigned int slabs_clsid(unsigned int size); > > /* Allocate object of given length. 0 on error */ > ! void *slabs_alloc(unsigned int size); > > /* Free previously allocated object */ > ! void slabs_free(void *ptr, unsigned int size); > > /* Fill buffer with stats */ > char* slabs_stats(int *buflen); > --- 147,163 ---- > /* slabs memory allocation */ > > /* Init the subsystem. The argument is the limit on no. of bytes to allocate, 0 if no limit */ > ! void slabs_init(size_t limit); > > /* Given object size, return id to use when allocating/freeing memory for object */ > /* 0 means error: can't store such a large object */ > ! unsigned int slabs_clsid(size_t size); > > /* Allocate object of given length. 0 on error */ > ! 
void *slabs_alloc(size_t size); > > /* Free previously allocated object */ > ! void slabs_free(void *ptr, size_t size); > > /* Fill buffer with stats */ > char* slabs_stats(int *buflen); > Only in memcached-bigmem-1.1.10/: mem.log > diff -c -b -r memcached-1.1.10/slabs.c memcached-bigmem-1.1.10/slabs.c > *** memcached-1.1.10/slabs.c 2003-09-05 15:37:36.000000000 -0700 > --- memcached-bigmem-1.1.10/slabs.c 2004-02-18 23:08:52.000000000 -0800 > *************** > *** 49,58 **** > } slabclass_t; > > static slabclass_t slabclass[POWER_LARGEST+1]; > ! static unsigned int mem_limit = 0; > ! static unsigned int mem_malloced = 0; > > ! unsigned int slabs_clsid(unsigned int size) { > int res = 1; > > if(size==0) > --- 49,58 ---- > } slabclass_t; > > static slabclass_t slabclass[POWER_LARGEST+1]; > ! static size_t mem_limit = 0; > ! static size_t mem_malloced = 0; > > ! unsigned int slabs_clsid(size_t size) { > int res = 1; > > if(size==0) > *************** > *** 67,73 **** > return res; > } > > ! void slabs_init(unsigned int limit) { > int i; > int size=1; > > --- 67,73 ---- > return res; > } > > ! void slabs_init(size_t limit) { > int i; > int size=1; > > *************** > *** 88,94 **** > static int grow_slab_list (unsigned int id) { > slabclass_t *p = &slabclass[id]; > if (p->slabs == p->list_size) { > ! unsigned int new_size = p->list_size ? p->list_size * 2 : 16; > void *new_list = realloc(p->slab_list, new_size*sizeof(void*)); > if (new_list == 0) return 0; > p->list_size = new_size; > --- 88,94 ---- > static int grow_slab_list (unsigned int id) { > slabclass_t *p = &slabclass[id]; > if (p->slabs == p->list_size) { > ! size_t new_size = p->list_size ? p->list_size * 2 : 16; > void *new_list = realloc(p->slab_list, new_size*sizeof(void*)); > if (new_list == 0) return 0; > p->list_size = new_size; > *************** > *** 120,126 **** > return 1; > } > > ! void *slabs_alloc(unsigned int size) { > slabclass_t *p; > > unsigned char id = slabs_clsid(size); > --- 120,126 ---- > return 1; > } > > ! void *slabs_alloc(size_t size) { > slabclass_t *p; > > unsigned char id = slabs_clsid(size); > *************** > *** 160,166 **** > return 0; /* shouldn't ever get here */ > } > > ! void slabs_free(void *ptr, unsigned int size) { > unsigned char id = slabs_clsid(size); > slabclass_t *p; > > --- 160,166 ---- > return 0; /* shouldn't ever get here */ > } > > ! void slabs_free(void *ptr, size_t size) { > unsigned char id = slabs_clsid(size); > slabclass_t *p; > > Only in memcached-bigmem-1.1.10/: stamp-h > > From jtitus@postini.com Fri Feb 20 17:29:21 2004 From: jtitus@postini.com (Jason Titus) Date: Fri, 20 Feb 2004 09:29:21 -0800 Subject: Large memory support Message-ID: No problem. Not sure if every one of the variables needed to be changed = (or that I got all of them), but it seemed like they were all the ones = that would hold memory size in them. 
Jason ------------ Common subdirectories: memcached-1.1.10/doc and memcached-bigmem-1.1.10/doc Only in memcached-bigmem-1.1.10/: Makefile diff -u memcached-1.1.10/memcached.c memcached-bigmem-1.1.10/memcached.c --- memcached-1.1.10/memcached.c 2003-12-04 09:50:57.000000000 -0800 +++ memcached-bigmem-1.1.10/memcached.c 2004-02-18 23:27:39.000000000 -0800 @@ -336,7 +336,7 @@ pos += sprintf(pos, "STAT get_misses %u\r\n", stats.get_misses); pos += sprintf(pos, "STAT bytes_read %llu\r\n", stats.bytes_read); pos += sprintf(pos, "STAT bytes_written %llu\r\n", stats.bytes_written); - pos += sprintf(pos, "STAT limit_maxbytes %u\r\n", settings.maxbytes); + pos += sprintf(pos, "STAT limit_maxbytes %llu\r\n", settings.maxbytes); pos += sprintf(pos, "END"); out_string(c, temp); return; @@ -1279,7 +1279,7 @@ settings.port = atoi(optarg); break; case 'm': - settings.maxbytes = atoi(optarg)*1024*1024; + settings.maxbytes = (size_t) atoi(optarg)* (size_t) 1024* (size_t) 1024; break; case 'c': settings.maxconns = atoi(optarg); diff -u memcached-1.1.10/memcached.h memcached-bigmem-1.1.10/memcached.h --- memcached-1.1.10/memcached.h 2003-12-04 09:50:57.000000000 -0800 +++ memcached-bigmem-1.1.10/memcached.h 2004-02-18 23:11:28.000000000 -0800 @@ -10,7 +10,7 @@ struct stats { unsigned int curr_items; unsigned int total_items; - unsigned long long curr_bytes; + size_t curr_bytes; unsigned int curr_conns; unsigned int total_conns; unsigned int conn_structs; @@ -24,7 +24,7 @@ }; struct settings { - unsigned int maxbytes; + size_t maxbytes; int maxconns; int port; struct in_addr interface; @@ -147,17 +147,17 @@ /* slabs memory allocation */ /* Init the subsystem. The argument is the limit on no. of bytes to allocate, 0 if no limit */ -void slabs_init(unsigned int limit); +void slabs_init(size_t limit); /* Given object size, return id to use when allocating/freeing memory for object */ /* 0 means error: can't store such a large object */ -unsigned int slabs_clsid(unsigned int size); +unsigned int slabs_clsid(size_t size); /* Allocate object of given length. 0 on error */ -void *slabs_alloc(unsigned int size); +void *slabs_alloc(size_t size); /* Free previously allocated object */ -void slabs_free(void *ptr, unsigned int size); +void slabs_free(void *ptr, size_t size); /* Fill buffer with stats */ char* slabs_stats(int *buflen); Common subdirectories: memcached-1.1.10/scripts and memcached-bigmem-1.1.10/scripts diff -u memcached-1.1.10/slabs.c memcached-bigmem-1.1.10/slabs.c --- memcached-1.1.10/slabs.c 2003-09-05 15:37:36.000000000 -0700 +++ memcached-bigmem-1.1.10/slabs.c 2004-02-18 23:08:52.000000000 -0800 @@ -49,10 +49,10 @@ } slabclass_t; static slabclass_t slabclass[POWER_LARGEST+1]; -static unsigned int mem_limit = 0; -static unsigned int mem_malloced = 0; +static size_t mem_limit = 0; +static size_t mem_malloced = 0; -unsigned int slabs_clsid(unsigned int size) { +unsigned int slabs_clsid(size_t size) { int res = 1; if(size==0) @@ -67,7 +67,7 @@ return res; } -void slabs_init(unsigned int limit) { +void slabs_init(size_t limit) { int i; int size=1; @@ -88,7 +88,7 @@ static int grow_slab_list (unsigned int id) { slabclass_t *p = &slabclass[id]; if (p->slabs == p->list_size) { - unsigned int new_size = p->list_size ? p->list_size * 2 : 16; + size_t new_size = p->list_size ?
p->list_size * 2 : 16; void *new_list = realloc(p->slab_list, new_size*sizeof(void*)); if (new_list == 0) return 0; p->list_size = new_size; @@ -120,7 +120,7 @@ return 1; } -void *slabs_alloc(unsigned int size) { +void *slabs_alloc(size_t size) { slabclass_t *p; unsigned char id = slabs_clsid(size); @@ -160,7 +160,7 @@ return 0; /* shouldn't ever get here */ } -void slabs_free(void *ptr, unsigned int size) { +void slabs_free(void *ptr, size_t size) { unsigned char id = slabs_clsid(size); slabclass_t *p; Only in memcached-bigmem-1.1.10/: stamp-h From jtitus@postini.com Fri Feb 20 19:21:06 2004 From: jtitus@postini.com (Jason Titus) Date: Fri, 20 Feb 2004 11:21:06 -0800 Subject: Large memory support In-Reply-To: References: Message-ID: <40365E22.7060108@postini.com> This is a multi-part message in MIME format. --------------090907020104020700080008 Content-Type: text/plain; charset=us-ascii; format=flowed Content-Transfer-Encoding: 7bit OK, one last time but this time from a non-Microsoft mail client (to keep from garbling the patch). Sorry to send so many emails about this, but I do think folks will be happy to have >2GB caches! Jason --------------090907020104020700080008 Content-Type: text/plain; name="bigmem.patch" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="bigmem.patch" Only in memcached-bigmem-1.1.10/: config.h Only in memcached-bigmem-1.1.10/: config.log Only in memcached-bigmem-1.1.10/: config.status Common subdirectories: memcached-1.1.10/doc and memcached-bigmem-1.1.10/doc Only in memcached-bigmem-1.1.10/: Makefile diff -u memcached-1.1.10/memcached.c memcached-bigmem-1.1.10/memcached.c --- memcached-1.1.10/memcached.c 2003-12-04 09:50:57.000000000 -0800 +++ memcached-bigmem-1.1.10/memcached.c 2004-02-18 23:27:39.000000000 -0800 @@ -336,7 +336,7 @@ pos += sprintf(pos, "STAT get_misses %u\r\n", stats.get_misses); pos += sprintf(pos, "STAT bytes_read %llu\r\n", stats.bytes_read); pos += sprintf(pos, "STAT bytes_written %llu\r\n", stats.bytes_written); - pos += sprintf(pos, "STAT limit_maxbytes %u\r\n", settings.maxbytes); + pos += sprintf(pos, "STAT limit_maxbytes %llu\r\n", settings.maxbytes); pos += sprintf(pos, "END"); out_string(c, temp); return; @@ -1279,7 +1279,7 @@ settings.port = atoi(optarg); break; case 'm': - settings.maxbytes = atoi(optarg)*1024*1024; + settings.maxbytes = (size_t) atoi(optarg)* (size_t) 1024* (size_t) 1024; break; case 'c': settings.maxconns = atoi(optarg); diff -u memcached-1.1.10/memcached.h memcached-bigmem-1.1.10/memcached.h --- memcached-1.1.10/memcached.h 2003-12-04 09:50:57.000000000 -0800 +++ memcached-bigmem-1.1.10/memcached.h 2004-02-18 23:11:28.000000000 -0800 @@ -10,7 +10,7 @@ struct stats { unsigned int curr_items; unsigned int total_items; - unsigned long long curr_bytes; + size_t curr_bytes; unsigned int curr_conns; unsigned int total_conns; unsigned int conn_structs; @@ -24,7 +24,7 @@ }; struct settings { - unsigned int maxbytes; + size_t maxbytes; int maxconns; int port; struct in_addr interface; @@ -147,17 +147,17 @@ /* slabs memory allocation */ /* Init the subsystem. The argument is the limit on no. of bytes to allocate, 0 if no limit */ -void slabs_init(unsigned int limit); +void slabs_init(size_t limit); /* Given object size, return id to use when allocating/freeing memory for object */ /* 0 means error: can't store such a large object */ -unsigned int slabs_clsid(unsigned int size); +unsigned int slabs_clsid(size_t size); /* Allocate object of given length.
0 on error */ -void *slabs_alloc(unsigned int size); +void *slabs_alloc(size_t size); /* Free previously allocated object */ -void slabs_free(void *ptr, unsigned int size); +void slabs_free(void *ptr, size_t size); /* Fill buffer with stats */ char* slabs_stats(int *buflen); Common subdirectories: memcached-1.1.10/scripts and memcached-bigmem-1.1.10/scripts diff -u memcached-1.1.10/slabs.c memcached-bigmem-1.1.10/slabs.c --- memcached-1.1.10/slabs.c 2003-09-05 15:37:36.000000000 -0700 +++ memcached-bigmem-1.1.10/slabs.c 2004-02-18 23:08:52.000000000 -0800 @@ -49,10 +49,10 @@ } slabclass_t; static slabclass_t slabclass[POWER_LARGEST+1]; -static unsigned int mem_limit = 0; -static unsigned int mem_malloced = 0; +static size_t mem_limit = 0; +static size_t mem_malloced = 0; -unsigned int slabs_clsid(unsigned int size) { +unsigned int slabs_clsid(size_t size) { int res = 1; if(size==0) @@ -67,7 +67,7 @@ return res; } -void slabs_init(unsigned int limit) { +void slabs_init(size_t limit) { int i; int size=1; @@ -88,7 +88,7 @@ static int grow_slab_list (unsigned int id) { slabclass_t *p = &slabclass[id]; if (p->slabs == p->list_size) { - unsigned int new_size = p->list_size ? p->list_size * 2 : 16; + size_t new_size = p->list_size ? p->list_size * 2 : 16; void *new_list = realloc(p->slab_list, new_size*sizeof(void*)); if (new_list == 0) return 0; p->list_size = new_size; @@ -120,7 +120,7 @@ return 1; } -void *slabs_alloc(unsigned int size) { +void *slabs_alloc(size_t size) { slabclass_t *p; unsigned char id = slabs_clsid(size); @@ -160,7 +160,7 @@ return 0; /* shouldn't ever get here */ } -void slabs_free(void *ptr, unsigned int size) { +void slabs_free(void *ptr, size_t size) { unsigned char id = slabs_clsid(size); slabclass_t *p; Only in memcached-bigmem-1.1.10/: stamp-h --------------090907020104020700080008-- From chris@paymentonline.com Sun Feb 22 06:32:14 2004 From: chris@paymentonline.com (Chris Ochs) Date: Sat, 21 Feb 2004 22:32:14 -0800 Subject: Cache::Memcached with mod perl Message-ID: <008201c3f90d$9c88d4d0$b9042804@chris2> Running under a mod perl handler. I am calling Cache::Memcached new every time the handler is run and there is only one memcached server running. I am getting a lot of calls to _dead_sock when I call set. If I call set repeatedly memcached calls _dead_sock every time. If I wait about 5-10 seconds it stops calling _dead_sock and works fine. This behavior only shows up under mod perl, and only when I set a value and keep setting it every few seconds. It also does this when running apache with -X. Anything obvious I should be doing before I spend a bunch more time debugging? I must be missing something about how the perl client works that makes it do this under mod perl. Chris From brad@danga.com Sun Feb 22 07:10:21 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Sat, 21 Feb 2004 23:10:21 -0800 (PST) Subject: Cache::Memcached with mod perl In-Reply-To: <008201c3f90d$9c88d4d0$b9042804@chris2> References: <008201c3f90d$9c88d4d0$b9042804@chris2> Message-ID: Chris, We use the module under mod_perl, so I wouldn't point any fingers at mod_perl in particular. Have you written a minimal script outside of mod_perl and seen what it does? I suspect the same thing. I can't predict why, though. - Brad On Sat, 21 Feb 2004, Chris Ochs wrote: > > Running under a mod perl handler. I am calling Cache::Memcached new every > time the handler is run and there is only one memcached server running. > > I am getting a lot of calls to _dead_sock when I call set. 
If I call set > repeatedly memcached calls _dead_sock every time. If I wait about 5-10 > seconds it stops calling _dead_sock and works fine. This behavior only > shows up under mod perl, and only when I set a value and keep setting it > every few seconds. > > It also does this when running apache with -X. > > Anything obvious I should be doing before I spend a bunch more time > debugging? I must be missing something about how the perl client works that > makes it do this under mod perl. > > Chris > > From jtitus@postini.com Tue Feb 24 21:56:36 2004 From: jtitus@postini.com (Jason Titus) Date: Tue, 24 Feb 2004 13:56:36 -0800 Subject: PATCH - Add command line flag to not push items out of the cache Message-ID: <403BC894.7070701@postini.com> This is a multi-part message in MIME format. --------------060504020100020708060107 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Adds a '-M' flag to turn off tossing items from the cache. This makes it so that memcached will error on a set when memory is full rather than tossing an old item. This is useful if you depend on items staying in the cache. Jason --------------060504020100020708060107 Content-Type: text/plain; name="memcached-no-evict.patch" Content-Transfer-Encoding: 7bit Content-Disposition: inline; filename="memcached-no-evict.patch" Common subdirectories: memcached-1.1.10/doc and memcached-1.1.10-no-evict/doc diff -u memcached-1.1.10/items.c memcached-1.1.10-no-evict/items.c --- memcached-1.1.10/items.c 2003-08-11 09:14:07.000000000 -0700 +++ memcached-1.1.10-no-evict/items.c 2004-02-24 13:44:41.000000000 -0800 @@ -55,6 +55,13 @@ it = slabs_alloc(ntotal); if (it == 0) { + + /* If requested to not push old items out of cache when memory runs out, + * we're out of luck at this point... 
+ */ + + if (!settings.evict_to_free) return 0; + /* * try to get one off the right LRU * don't necessariuly unlink the tail because it may be locked: refcount>0 diff -u memcached-1.1.10/memcached.c memcached-1.1.10-no-evict/memcached.c --- memcached-1.1.10/memcached.c 2003-12-04 09:50:57.000000000 -0800 +++ memcached-1.1.10-no-evict/memcached.c 2004-02-24 13:47:23.000000000 -0800 @@ -92,6 +92,7 @@ settings.maxconns = 1024; /* to limit connections-related memory to about 5MB */ settings.verbose = 0; settings.oldest_live = 0; + settings.evict_to_free = 1; /* push old items out of cache when memory runs out */ } conn **freeconns; @@ -1177,6 +1178,7 @@ printf("-d run as a daemon\n"); printf("-u assume identity of (only when run as root)\n"); printf("-m max memory to use for items in megabytes, default is 64 MB\n"); + printf("-M return error on memory exhausted (rather than removing items)\n"); printf("-c max simultaneous connections, default is 1024\n"); printf("-k lock down all paged memory\n"); printf("-v verbose (print errors/warnings while in event loop)\n"); @@ -1273,7 +1275,7 @@ settings_init(); /* process arguments */ - while ((c = getopt(argc, argv, "p:m:c:khivdl:u:")) != -1) { + while ((c = getopt(argc, argv, "p:m:Mc:khivdl:u:")) != -1) { switch (c) { case 'p': settings.port = atoi(optarg); @@ -1281,6 +1283,9 @@ case 'm': settings.maxbytes = atoi(optarg)*1024*1024; break; + case 'M': + settings.evict_to_free = 0; + break; case 'c': settings.maxconns = atoi(optarg); break; diff -u memcached-1.1.10/memcached.h memcached-1.1.10-no-evict/memcached.h --- memcached-1.1.10/memcached.h 2003-12-04 09:50:57.000000000 -0800 +++ memcached-1.1.10-no-evict/memcached.h 2004-02-24 13:46:27.000000000 -0800 @@ -30,6 +30,7 @@ struct in_addr interface; int verbose; time_t oldest_live; /* ignore existing items older than this */ + int evict_to_free; }; extern struct stats stats; Common subdirectories: memcached-1.1.10/scripts and memcached-1.1.10-no-evict/scripts --------------060504020100020708060107-- From brad@danga.com Tue Feb 24 22:14:02 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Tue, 24 Feb 2004 14:14:02 -0800 (PST) Subject: PATCH - Add command line flag to not push items out of the cache In-Reply-To: <403BC894.7070701@postini.com> References: <403BC894.7070701@postini.com> Message-ID: Nice and simple... thanks! We'll be sure to include this in the next release. - Brad On Tue, 24 Feb 2004, Jason Titus wrote: > Adds a '-M' flag to turn off tossing items from the cache. This makes > it so that memcached will error on a set when memory is full rather than > tossing an old item. > > This is useful if you depend on items staying in the cache. > > Jason > > From brad@danga.com Tue Feb 24 23:42:18 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Tue, 24 Feb 2004 15:42:18 -0800 (PST) Subject: PATCH - Add command line flag to not push items out of the cache In-Reply-To: <403BC894.7070701@postini.com> References: <403BC894.7070701@postini.com> Message-ID: Now in CVS. On Tue, 24 Feb 2004, Jason Titus wrote: > Adds a '-M' flag to turn off tossing items from the cache. This makes > it so that memcached will error on a set when memory is full rather than > tossing an old item. > > This is useful if you depend on items staying in the cache. > > Jason > > From jtitus@postini.com Wed Feb 25 01:24:20 2004 From: jtitus@postini.com (Jason Titus) Date: Tue, 24 Feb 2004 17:24:20 -0800 Subject: PATCH - Add command line flag to not push items out of the cache Message-ID: Awesome. Hope folks find it useful. 
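A note on the -M behaviour above: with -M, a full cache refuses new stores instead of evicting an old item, so callers have to check the result of each set. A minimal sketch, assuming the Perl client's set method returns false when the server does not reply STORED; the server address and key are illustrative placeholders.

use strict;
use warnings;
use Cache::Memcached;

my $memc = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });

# With -M, a failed set can mean the cache is full rather than a network
# problem, so fall back to the canonical data source instead of assuming
# the value made it into the cache.
my $ok = $memc->set('some_key', 'some_value');
unless ($ok) {
    warn "memcached set failed: cache full (-M) or server unreachable\n";
}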
How about large memory support - any issues with that patch? Seems like a minor change for a significant benefit. We've been banging on it a bit and haven't come across any problems yet. Let me know if I need to make any changes or test anything in particular. Jason -----Original Message----- From: Brad Fitzpatrick [mailto:brad@danga.com] Sent: Tuesday, February 24, 2004 3:42 PM To: Jason Titus Cc: memcached@lists.danga.com Subject: Re: PATCH - Add command line flag to not push items out of the cache Now in CVS. On Tue, 24 Feb 2004, Jason Titus wrote: > Adds a '-M' flag to turn off tossing items from the cache. This makes > it so that memcached will error on a set when memory is full rather than > tossing an old item. > > This is useful if you depend on items staying in the cache. > > Jason > > From brad@danga.com Wed Feb 25 01:36:48 2004 From: brad@danga.com (Brad Fitzpatrick) Date: Tue, 24 Feb 2004 17:36:48 -0800 (PST) Subject: PATCH - Add command line flag to not push items out of the cache In-Reply-To: References: Message-ID: I just want to make sure I fully understand it before it's committed. That is, I want to know what "int", "long", "long long", "size_t" all mean and do on different platforms. I'll make sure I research it and commit it before the next release. - Brad On Tue, 24 Feb 2004, Jason Titus wrote: > Awesome. Hope folks find it useful. > > How about large memory support - any issues with that patch? Seems like > a minor change for a significant benefit. We've been banging on it a > bit and haven't come across any problems yet. > > Let me know if I need to make any changes or test anything in particular. > > Jason > > -----Original Message----- > From: Brad Fitzpatrick [mailto:brad@danga.com] > Sent: Tuesday, February 24, 2004 3:42 PM > To: Jason Titus > Cc: memcached@lists.danga.com > Subject: Re: PATCH - Add command line flag to not push items out of the > cache > > > Now in CVS. > > > On Tue, 24 Feb 2004, Jason Titus wrote: > > > Adds a '-M' flag to turn off tossing items from the cache. This makes > > it so that memcached will error on a set when memory is full rather than > > tossing an old item. > > > > This is useful if you depend on items staying in the cache. > > > > Jason > > > > > > From mellon@pobox.com Wed Feb 25 14:28:03 2004 From: mellon@pobox.com (Anatoly Vorobey) Date: Wed, 25 Feb 2004 16:28:03 +0200 Subject: Large memory support In-Reply-To: <40365E22.7060108@postini.com> References: <40365E22.7060108@postini.com> Message-ID: <20040225142803.GA22657@pobox.com> On Fri, Feb 20, 2004 at 11:21:06AM -0800, Jason Titus wrote: > - pos += sprintf(pos, "STAT limit_maxbytes %u\r\n", settings.maxbytes); > + pos += sprintf(pos, "STAT limit_maxbytes %llu\r\n", settings.maxbytes); This is wrong, because %llu will always expect a 64bit argument, but on 32bit architectures your settings.maxbytes, being a size_t, will be a 32bit argument. There're two ways to make it right: a) use the 'z' length modifier with sprintf here (and any other place where we might need to output/input a size_t field) and retain the size_t type. However, I don't know how standard 'z' is and would appreciate information about it. b) use 'unsigned long' instead of size_t, and give %lu to sprintf. "unsigned long" works great because it's 32bit on 32bit architectures and 64bit on 64bit architectures. I prefer b) myself, but I may be underappreciating size_t, I'm not sure.
> - settings.maxbytes = atoi(optarg)*1024*1024; > + settings.maxbytes = (size_t) atoi(optarg)* (size_t) 1024* (size_t) 1024; Most or all of these explicit typecasts shouldn't be necessary. -- avva From jtitus@postini.com Thu Feb 26 21:02:49 2004 From: jtitus@postini.com (Jason Titus) Date: Thu, 26 Feb 2004 13:02:49 -0800 Subject: memcached digest, Vol 1 #120 - 1 msg In-Reply-To: <20040226142506.32049.37684.Mailman@danga.com> References: <20040226142506.32049.37684.Mailman@danga.com> Message-ID: <403E5EF9.20708@postini.com> >Date: Wed, 25 Feb 2004 16:28:03 +0200 >From: Anatoly Vorobey >To: memcached@lists.danga.com >Subject: Re: Large memory support > >On Fri, Feb 20, 2004 at 11:21:06AM -0800, Jason Titus wrote: > > >>- pos += sprintf(pos, "STAT limit_maxbytes %u\r\n", settings.maxbytes); >>+ pos += sprintf(pos, "STAT limit_maxbytes %llu\r\n", settings.maxbytes); >> >> > >This is wrong, because %llu will always expect a 64bit argument, but on >32bit architectures your settings.maxbytes, being a size_t, will be a >32bit argument. > >There're two ways to make it right: > >a) use the 'z' length modifier with sprintf here (and any other place >where we might need to output/input a size_t field) and retain the >size_t type. However, I don't know how standard 'z' is and would >appreciate information about it. > >b) use 'unsigned long' instead of size_t, and give %lu to sprintf. >"unsigned long" works great because it's 32bit on 32bit architectures >and 64bit on 64bit architectures. > >I prefer b) myself, but I may be underappreciating size_t, I'm not sure. > > > How about just casting it as an unsigned long long? sprintf(pos, "STAT limit_maxbytes %llu\r\n", (unsigned long long)settings.maxbytes); As for the whole size_t thing, it seems like there is some debate on whether it makes sense to use it everywhere. Some platforms seem to do odd things with it (I gather that HPUX is one example), and you need to have C99 support. On Linux size_t is always an unsigned int on 32 bit platforms and always an unsigned long on 64 bit platforms (at least for the 18 processor types supported as of 4.2.40). Unsigned longs should work in most UNIXs, but are 32 bits on 64 bit Windows. Not sure if anyone cares now, but I'd bet that Win64 will be a popular platform in a year or two. >>- settings.maxbytes = atoi(optarg)*1024*1024; >>+ settings.maxbytes = (size_t) atoi(optarg)* (size_t) 1024* (size_t) 1024; >> >> > >Most or all of these explicit typecasts shouldn't be necessary. > > > Yeah. Ditch those. Jason From jtitus@postini.com Thu Feb 26 23:45:57 2004 From: jtitus@postini.com (Jason Titus) Date: Thu, 26 Feb 2004 15:45:57 -0800 Subject: Perl error checking Message-ID: I know this was discussed before, but what do folks currently do to determine whether a memcached server is running or not? It seems like the Cache::Memcached->new method should recognize that it is unable to connect to one or more servers. Perhaps it should issue a 'stats' command and verify a response? I am doing that in a script now, but it causes uninitialized value warnings from trying to total non-existent hash entries. I can work on a patch but want to understand what people think makes the most sense. Jason
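A minimal sketch of the probe Jason describes: connect, send a raw stats command, and treat a terminating END line as proof the server is alive. This is an illustration rather than part of Cache::Memcached; the host, port, and timeout are placeholder values.

use strict;
use warnings;
use IO::Socket::INET;

# Returns 1 if a memcached instance answers the stats command, 0 otherwise.
sub memcached_alive {
    my ($host, $port) = @_;
    my $sock = IO::Socket::INET->new(
        PeerAddr => $host,
        PeerPort => $port,
        Proto    => 'tcp',
        Timeout  => 2,
    ) or return 0;                          # connection refused or timed out

    print $sock "stats\r\n";
    while (my $line = <$sock>) {
        return 1 if $line =~ /^END/;        # full stats reply received
        return 0 unless $line =~ /^STAT /;  # anything else is unexpected
    }
    return 0;                               # server closed before sending END
}

print memcached_alive('127.0.0.1', 11211) ? "server up\n" : "server down\n";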