From christopher at baus.net Fri Mar 4 00:46:31 2005
From: christopher at baus.net (christopher@baus.net)
Date: Fri Mar 4 00:46:32 2005
Subject: determining server connect failures with C client
Message-ID: <53490.127.0.0.1.1109925991.squirrel@mail.baus.net>
I must be missing something, but I don't see how to detect a failure to
connect to the server(s) from the libmemcached C client. It looks like
the client attempts to connect just in time when making a request, but
if the connect fails it just returns void.
Christopher
http://baus.net/
From dwilde at sandia.gov Fri Mar 4 07:05:18 2005
From: dwilde at sandia.gov (Wilde, Donald)
Date: Fri Mar 4 07:05:31 2005
Subject: Embperl+Apache::Session::Memcached
Message-ID: <040DF00BF960A24897B5B3EFBE63FE8A8EB7BB@ES20SNLNT.srn.sandia.gov>
Greetings! I've successfully gotten memcached to function from raw perl,
but I'm not succeeding with the Apache::Session::Memcached variant when
called from embperl includes.
Here's the beginning of my index.epl:
===================
[*
use Apache;
use Apache::Session::Memcached;
my %session;
tie %session, 'Apache::Session::Memcached', undef, {
    'Servers'           => ['127.0.0.1:20000'],
    'NoRehash'          => 1,
    'Readonly'          => 0,
    'Debug'             => 1,
    'CompressThreshold' => 10_000,
};
*]
...
=================
I've tried several variants; this one (which works in raw perl called
from the command line) has the Servers value enclosed in an anonymous array
and all keys quoted, as per the Cache::Memcached module docs. My embperl
setup appears to be functioning properly; I can get
Apache::Session::File to work using the same tie structure.
The resulting failure is an Apache Internal Server Error:
====================
[728]ERR: 24: Error in Perl code: Can't locate object method "TIEHASH"
via package "Apache::Session::Memcached" at /var/web/root/index.epl
line 10.
[728]ERR: 24: index.epl(1): Error in Perl code:
Apache/1.3.33 (Unix) mod_perl/1.29 PHP/5.0.0a6-alexdupre HTML::Embperl
1.3.6 [Thu Mar 3 06:49:48 2005]
====================
Thanks in advance!
--
Don Wilde
Org 1737, MS1076, 844-1126
dwilde@sandia.gov
From christopher at baus.net Fri Mar 4 11:58:10 2005
From: christopher at baus.net (christopher@baus.net)
Date: Fri Mar 4 11:58:13 2005
Subject: Adding AREYOUTHERE to memcached
In-Reply-To: <53490.127.0.0.1.1109925991.squirrel@mail.baus.net>
References: <53490.127.0.0.1.1109925991.squirrel@mail.baus.net>
Message-ID: <47724.127.0.0.1.1109966290.squirrel@mail.baus.net>
After looking around at the C client last night, I was wondering if it
might be useful to add an AREYOUTHERE command to the protocol. I want to
be able to check the status of the servers when my app starts up.
What I am thinking of is something like:
if (mc_server_available("127.0.0.1", "11211")) {
    mc_server_add(mc, "127.0.0.1", "11211");
}
Where mc_server_available sends the AREYOUTHERE command if it is able to
connect to the server.
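For what it's worth, the probe half of this wouldn't strictly need a new
verb: a plain TCP connect already tells you the server is reachable, and
the protocol's existing "version" command could serve as the ping once
connected. A rough sketch of the proposed mc_server_available (the
function is hypothetical, like AREYOUTHERE itself):

#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical helper: returns 1 if a TCP connection to host:port
 * succeeds, 0 otherwise. */
int mc_server_available(const char *host, const char *port)
{
    struct addrinfo hints, *res;
    int fd, ok = 0;

    memset(&hints, 0, sizeof(hints));
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return 0;
    fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0)
        ok = 1;    /* server is up and accepting connections */
    if (fd >= 0)
        close(fd);
    freeaddrinfo(res);
    return ok;
}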
Again, it could be that I'm just missing something in the API. Also, maybe
a similar API could be added without revving the protocol.
Thoughts?
From gavin at orgasm.com Fri Mar 4 16:19:31 2005
From: gavin at orgasm.com (Gavin Dunne)
Date: Fri Mar 4 16:18:55 2005
Subject: Using libmemcache PHP extension on OS X
Message-ID:
Hey everyone, I'm attempting to use the PHP libmemcache extension on:
Mac OS X 10.3.8
Apache 1.3.33
PHP 5.0.3
libmemcache 1.2.3
mcache 1.1.2
Configuration and compilation seem to go fine; however, on restarting
Apache something goes wrong. Following is a script of what I've done;
any help in resolving this would be much appreciated.
Thanks, Gavin.
Script started on Fri Mar 4 15:48:39 2005
Chasey:~/Src/mcache gdunne$
Chasey:~/Src/mcache gdunne$ ./configure
--with-php-config=/usr/local/bin/php-config
--with-mcache=../libmemcache-1.2.3
checking build system type... powerpc-apple-darwin7.7.0
checking host system type... powerpc-apple-darwin7.7.0
checking for gcc... gcc
checking for C compiler default output... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ANSI C... none needed
checking whether gcc and cc understand -c and -o together... yes
checking if compiler supports -R... no
checking if compiler supports -Wl,-rpath,... no
checking for PHP prefix... /usr/local
checking for PHP includes... -I/usr/local/include/php
-I/usr/local/include/php/main -I/usr/local/include/php/Zend
-I/usr/local/include/php/TSRM
checking for PHP extension directory...
/usr/local/lib/php/extensions/no-debug-non-zts-20041030
checking for re2c... exit 0;
checking for gawk... no
checking for mawk... no
checking for nawk... no
checking for awk... awk
checking for mcache support... yes, shared
checking for a sed that does not truncate output... /usr/bin/sed
checking for egrep... grep -E
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... no
checking for /usr/bin/ld option to reload object files... -r
checking for BSD-compatible nm... /usr/bin/nm -p
checking whether ln -s works... yes
checking how to recognise dependent libraries... pass_all
checking how to run the C preprocessor... gcc -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking dlfcn.h usability... yes
checking dlfcn.h presence... yes
checking for dlfcn.h... yes
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking how to run the C++ preprocessor... g++ -E
checking for g77... no
checking for f77... no
checking for xlf... no
checking for frt... no
checking for pgf77... no
checking for fl32... no
checking for af77... no
checking for fort77... no
checking for f90... no
checking for xlf90... no
checking for pgf90... no
checking for epcf90... no
checking for f95... no
checking for fort... no
checking for xlf95... no
checking for lf95... no
checking for g95... no
checking whether we are using the GNU Fortran 77 compiler... no
checking whether accepts -g... no
checking the maximum length of command line arguments... 65536
checking command to parse /usr/bin/nm -p output from gcc object... ok
checking for objdir... .libs
checking for ar... ar
checking for ranlib... ranlib
checking for strip... strip
checking if gcc static flag works... yes
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fno-common
checking if gcc PIC flag -fno-common works... yes
checking if gcc supports -c -o file.o... yes
checking whether the gcc linker (/usr/bin/ld) supports shared
libraries... yes
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... no
checking dynamic linker characteristics... darwin7.7.0 dyld
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... no
configure: creating libtool
appending configuration tag "CXX" to libtool
checking for ld used by g++... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... no
checking whether the g++ linker (/usr/bin/ld) supports shared
libraries... yes
checking for g++ option to produce PIC... -fno-common
checking if g++ PIC flag -fno-common works... yes
checking if g++ supports -c -o file.o... yes
checking whether the g++ linker (/usr/bin/ld) supports shared
libraries... yes
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... no
checking dynamic linker characteristics... darwin7.7.0 dyld
appending configuration tag "F77" to libtool
configure: creating ./config.status
config.status: creating config.h
config.status: config.h is unchanged
Chasey:~/Src/mcache gdunne$
Chasey:~/Src/mcache gdunne$
Chasey:~/Src/mcache gdunne$ make
/bin/sh /Users/gdunne/Src/mcache/libtool --mode=compile gcc -I.
-I/Users/gdunne/Src/mcache -DPHP_ATOM_INC
-I/Users/gdunne/Src/mcache/include -I/Users/gdunne/Src/mcache/main
-I/Users/gdunne/Src/mcache -I/usr/local/include/php
-I/usr/local/include/php/main -I/usr/local/include/php/Zend
-I/usr/local/include/php/TSRM -I/Users/gdunne/Src/libmemcache-1.2.3
-DHAVE_CONFIG_H -g -O2 -prefer-pic -c
/Users/gdunne/Src/mcache/mcache.c -o mcache.lo
mkdir .libs
gcc -I. -I/Users/gdunne/Src/mcache -DPHP_ATOM_INC
-I/Users/gdunne/Src/mcache/include -I/Users/gdunne/Src/mcache/main
-I/Users/gdunne/Src/mcache -I/usr/local/include/php
-I/usr/local/include/php/main -I/usr/local/include/php/Zend
-I/usr/local/include/php/TSRM -I/Users/gdunne/Src/libmemcache-1.2.3
-DHAVE_CONFIG_H -g -O2 -c /Users/gdunne/Src/mcache/mcache.c
-fno-common -DPIC -o .libs/mcache.o
In file included from /Users/gdunne/Src/mcache/mcache.c:33:
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c: In function
`mcm_atomic_cmd':
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:498: warning: assignment
discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:500: warning: assignment
discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:502: warning: assignment
discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:512: warning: assignment
discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c: In function
`mcm_delete':
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:584: warning: assignment
discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:586: warning: assignment
discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:588: warning: assignment
discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:598: warning: assignment
discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c: In function
`mcm_fetch_cmd':
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:642: warning: assignment
discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:652: warning: assignment
discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:656: warning: assignment
discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:666: warning: assignment
discards qualifiers from pointer target type
In file included from /Users/gdunne/Src/mcache/mcache.c:33:
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c: In function
`mcm_res_free':
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:1298: warning: passing
arg 1 of pointer to function discards qualifiers from pointer target
type
In file included from /Users/gdunne/Src/mcache/mcache.c:33:
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c: In function
`mcm_storage_cmd':
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:2159: warning:
assignment discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:2161: warning:
assignment discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:2163: warning:
assignment discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:2205: warning:
assignment discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:2212: warning:
assignment discards qualifiers from pointer target type
/Users/gdunne/Src/libmemcache-1.2.3/memcache.c:2216: warning:
assignment discards qualifiers from pointer target type
/bin/sh /Users/gdunne/Src/mcache/libtool --mode=link gcc -DPHP_ATOM_INC
-I/Users/gdunne/Src/mcache/include -I/Users/gdunne/Src/mcache/main
-I/Users/gdunne/Src/mcache -I/usr/local/include/php
-I/usr/local/include/php/main -I/usr/local/include/php/Zend
-I/usr/local/include/php/TSRM -I/Users/gdunne/Src/libmemcache-1.2.3
-DHAVE_CONFIG_H -g -O2 -o mcache.la -export-dynamic -avoid-version
-prefer-pic -module -rpath /Users/gdunne/Src/mcache/modules mcache.lo
gcc -bundle -flat_namespace -undefined suppress -o .libs/mcache.so
.libs/mcache.o
creating mcache.la
(cd .libs && rm -f mcache.la && ln -s ../mcache.la mcache.la)
/bin/sh /Users/gdunne/Src/mcache/libtool --mode=install cp ./mcache.la
/Users/gdunne/Src/mcache/modules
cp ./.libs/mcache.so /Users/gdunne/Src/mcache/modules/mcache.so
cp ./.libs/mcache.lai /Users/gdunne/Src/mcache/modules/mcache.la
----------------------------------------------------------------------
Libraries have been installed in:
/Users/gdunne/Src/mcache/modules
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
- add LIBDIR to the `DYLD_LIBRARY_PATH' environment variable
during execution
See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
Build complete.
(It is safe to ignore warnings about tempnam and tmpnam).
Chasey:~/Src/mcache gdunne$
Chasey:~/Src/mcache gdunne$
Chasey:~/Src/mcache gdunne$ sudo make install
Installing shared extensions:
/usr/local/lib/php/extensions/no-debug-non-zts-20041030/
Chasey:~/Src/mcache gdunne$
Chasey:~/Src/mcache gdunne$
Chasey:~/Src/mcache gdunne$ sudo apachectl configtest
Processing config directory: /etc/httpd/*.include
[Fri Mar 4 15:50:10 2005] [alert] httpd: Could not determine the
server's fully qualified domain name, using 127.0.0.1 for ServerName
[Fri Mar 4 15:50:10 2005] [error] Failed to resolve server name for
10.0.1.202 (check DNS) -- or specify an explicit ServerName
Syntax OK
dyld: /usr/sbin/httpd Undefined symbols:
__array_init
__convert_to_string
__efree
__emalloc
__erealloc
__estrndup
__object_init_ex
__zend_hash_add_or_update
__zend_hash_init
__zend_list_find
_add_assoc_double_ex
_add_assoc_long_ex
_add_assoc_string_ex
_add_assoc_zval_ex
_add_property_resource_ex
_ap_php_snprintf
_convert_to_double
_convert_to_long
_executor_globals
_php_info_print_table_end
_php_info_print_table_header
_php_info_print_table_row
_php_info_print_table_start
_php_var_serialize
_php_var_unserialize
_var_destroy
_zend_error
_zend_hash_del_key_or_index
_zend_hash_destroy
_zend_hash_find
_zend_hash_get_current_data_ex
_zend_hash_get_current_key_ex
_zend_hash_internal_pointer_reset_ex
_zend_hash_move_forward_ex
_zend_hash_num_elements
_zend_list_insert
_zend_parse_parameters
_zend_register_internal_class
_zend_register_list_destructors_ex
_zend_wrong_param_count
/usr/sbin/apachectl: line 193: 28743 Trace/BPT trap $HTTPD -t
Chasey:~/Src/mcache gdunne$ exit
exit
Script done on Fri Mar 4 15:50:14 2005
From camster at citeulike.org Sat Mar 5 12:20:42 2005
From: camster at citeulike.org (Richard Cameron)
Date: Sat Mar 5 12:20:48 2005
Subject: TCP_NOPUSH and Mac OS X
Message-ID: <67631abe832c214700870b65b14ed2c9@citeulike.org>
There was some discussion on this list last year about some fairly
serious performance problems on Mac OS X. I was seeing these too, and I
think I've isolated the problem to the TCP_NOPUSH option, and there's a
one-line hack which seems to solve it.
On OS X 10.3.8, running memcached locally and connecting to it on
localhost, the symptoms were that there was a latency of about 0.2
seconds between sending a command down the socket to the server and
getting a reply. Doing a tcpdump showed that the delay was *exactly*
200ms on every request; however, running a kdump showed that memcached
was actually writing its response to the socket pretty much
instantaneously.
The relevant hack which seemed to get things working again was to
simply comment out the line in memcached.c which set TCP_NOPUSH:
#ifdef TCP_NOPUSH
// setsockopt(c->sfd, IPPROTO_TCP, TCP_NOPUSH, &val, sizeof(val));
#endif
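For contrast, my understanding of the conventional BSD pattern is that
TCP_NOPUSH is set around a batch of small writes and then cleared to
flush the tail -- a sketch of that usage, not the memcached code, with
fd assumed to be an already-connected socket:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

static void write_batch(int fd, const void *buf, size_t len)
{
    int on = 1, off = 0;
    /* hold small writes so the kernel can coalesce them */
    setsockopt(fd, IPPROTO_TCP, TCP_NOPUSH, &on, sizeof(on));
    write(fd, buf, len);
    /* clearing the option is what should push the buffered tail
     * out; the fixed 200ms stall suggests 10.3 defers that flush
     * until some timer fires instead */
    setsockopt(fd, IPPROTO_TCP, TCP_NOPUSH, &off, sizeof(off));
}

memcached appears to set the option once at connection setup and never
clear it, which looks like exactly the case that tickles the bug.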
It doesn't seem to be well known (at least, Google doesn't know) that
TCP_NOPUSH is simply broken on OS X, and there was some evidence on the
list that some people managed to get memcached running "out of the box"
without this sort of latency. I'd be interested to know if that's still
the case as it might shed a little more light on the problem.
However, I'm quite willing to conclude there is some underlying problem
with the operating system, as things continue to get even stranger:
As I couldn't use TCP_NOPUSH, I put a "#undef TCP_NOPOSH" at the top of
the file, which has the effect of making the code set TCP_NODELAY on
the socket. This is exactly what I wanted:
#if !defined(TCP_NOPUSH)
setsockopt(sfd, IPPROTO_TCP, TCP_NODELAY, &flags, sizeof(flags));
#endif
This worked quite nicely (about a factor of 3 speedup over the lo
interface), but when I load tested it for an extended period (about 5
minutes) it seemed to fairly reliably cause a kernel panic (stack trace
attached for interest below). Dropping the TCP_NODELAY option again
seemed to "fix" things, but I've got no idea whether this isn't simply
because it conspires to slow things down enough such that whatever race
condition in the kernel is causing the panic doesn't happen any more.
Does anyone else see this, or is it just a (rather annoying) quirk of
my machine?
Richard
*********
Sat Mar 5 19:33:12 2005
Unresolved kernel trap(cpu 0): 0x300 - Data access
DAR=0x0000000000000014 PC=0x000000000020C8F4
Latest crash info for cpu 0:
Exception state (sv=0x31747C80)
PC=0x0020C8F4; MSR=0x00009030; DAR=0x00000014; DSISR=0x40000000;
LR=0x0020C800; R1=0x12213C20; XCP=0x0000000C (0x300 - Data access)
Backtrace:
0x40471D84 0x0020C330 0x002463E4 0x00094160 0x01C465A0
Proceeding back via exception chain:
Exception state (sv=0x31747C80)
previously dumped as "Latest" state. skipping...
Exception state (sv=0x28307000)
PC=0x9002E1CC; MSR=0x0000F030; DAR=0x1C3EB004; DSISR=0x40000000;
LR=0x00007B38; R1=0xBFFFF910; XCP=0x00000030 (0xC00 - System call)
Kernel version:
Darwin Kernel Version 7.8.0:
Wed Dec 22 14:26:17 PST 2004; root:xnu/xnu-517.11.1.obj~1/RELEASE_PPC
panic(cpu 0): 0x300 - Data access
Latest stack backtrace for cpu 0:
Backtrace:
0x000835F8 0x00083ADC 0x0001EDA4 0x00090BD8 0x00093FCC
Proceeding back via exception chain:
Exception state (sv=0x31747C80)
PC=0x0020C8F4; MSR=0x00009030; DAR=0x00000014; DSISR=0x40000000;
LR=0x0020C800; R1=0x12213C20; XCP=0x0000000C (0x300 - Data access)
Backtrace:
0x40471D84 0x0020C330 0x002463E4 0x00094160 0x01C465A0
Exception state (sv=0x28307000)
PC=0x9002E1CC; MSR=0x0000F030; DAR=0x1C3EB004; DSISR=0x40000000;
LR=0x00007B38; R1=0xBFFFF910; XCP=0x00000030 (0xC00 - System call)
Kernel version:
Darwin Kernel Version 7.8.0:
Wed Dec 22 14:26:17 PST 2004; root:xnu/xnu-517.11.1.obj~1/RELEASE_PPC
*********
From camster at citeulike.org Sat Mar 5 12:26:49 2005
From: camster at citeulike.org (Richard Cameron)
Date: Sat Mar 5 12:26:53 2005
Subject: TCP_NOPUSH and Mac OS X
In-Reply-To: <67631abe832c214700870b65b14ed2c9@citeulike.org>
References: <67631abe832c214700870b65b14ed2c9@citeulike.org>
Message-ID: <9b9162a9f8cb3cb4fe5a2c0224bb1cde@citeulike.org>
On 5 Mar 2005, at 20:20, Richard Cameron wrote:
> As I couldn't use TCP_NOPUSH, I put a "#undef TCP_NOPOSH"
This is, of course, a typo rather than any sort of left-wing political
imperative. "#undef TCP_NOPUSH" in memcached.c is what you need to
cause yourself a kernel panic.
Richard.
From johnm at klir.com Sat Mar 5 12:28:52 2005
From: johnm at klir.com (John McCaskey)
Date: Sat Mar 5 12:27:45 2005
Subject: Using libmemcache PHP extension on OS X
In-Reply-To:
References:
Message-ID: <1110054532.9992.7.camel@localhost>
Hey Gavin,
I'm not sure what to make of this, unfortunately I don't have any OS X
boxes to test on. See my notes in line towards the end of your email...
On Fri, 2005-03-04 at 16:19 -0800, Gavin Dunne wrote:
> Hey everyone, I'm attempting to use the PHP libmemcache extension on:
>
> Mac OS X 10.3.8
> Apache 1.3.33
> PHP 5.0.3
> libmemcache 1.2.3
> mcache 1.1.2
>
> Configuration and compilation seems to go fine, however on restarting
> apache something goes wrong. Following is a script of what I've done,
> any help in resolving this would be much appreciated.
>
> Thanks, Gavin.
>
>
[snip]
> ---------------------------------
> Libraries have been installed in:
> /Users/gdunne/Src/mcache/modules
>
> If you ever happen to want to link against installed libraries
> in a given directory, LIBDIR, you must either use libtool, and
> specify the full pathname of the library, or use the `-LLIBDIR'
> flag during linking and do at least one of the following:
> - add LIBDIR to the `DYLD_LIBRARY_PATH' environment variable
> during execution
>
> See any operating system documentation about shared libraries for
> more information, such as the ld(1) and ld.so(8) manual pages.
> ----------------------------------------------------------------------
>
> Build complete.
> (It is safe to ignore warnings about tempnam and tmpnam).
>
> Chasey:~/Src/mcache gdunne$
> Chasey:~/Src/mcache gdunne$
> Chasey:~/Src/mcache gdunne$ sudo make install
> Installing shared extensions:
> /usr/local/lib/php/extensions/no-debug-non-zts-20041030/
> Chasey:~/Src/mcache gdunne$
> Chasey:~/Src/mcache gdunne$
> Chasey:~/Src/mcache gdunne$ sudo apachectl configtest
> Processing config directory: /etc/httpd/*.include
> [Fri Mar 4 15:50:10 2005] [alert] httpd: Could not determine the
> server's fully qualified domain name, using 127.0.0.1 for ServerName
> [Fri Mar 4 15:50:10 2005] [error] Failed to resolve server name for
> 10.0.1.202 (check DNS) -- or specify an explicit ServerName
> Syntax OK
> dyld: /usr/sbin/httpd Undefined symbols:
> __array_init
> __convert_to_string
> __efree
> __emalloc
> __erealloc
> __estrndup
> __object_init_ex
> __zend_hash_add_or_update
> __zend_hash_init
> __zend_list_find
> _add_assoc_double_ex
> _add_assoc_long_ex
> _add_assoc_string_ex
> _add_assoc_zval_ex
> _add_property_resource_ex
> _ap_php_snprintf
> _convert_to_double
> _convert_to_long
> _executor_globals
> _php_info_print_table_end
> _php_info_print_table_header
> _php_info_print_table_row
> _php_info_print_table_start
> _php_var_serialize
> _php_var_unserialize
> _var_destroy
> _zend_error
> _zend_hash_del_key_or_index
> _zend_hash_destroy
> _zend_hash_find
> _zend_hash_get_current_data_ex
> _zend_hash_get_current_key_ex
> _zend_hash_internal_pointer_reset_ex
> _zend_hash_move_forward_ex
> _zend_hash_num_elements
> _zend_list_insert
> _zend_parse_parameters
> _zend_register_internal_class
> _zend_register_list_destructors_ex
> _zend_wrong_param_count
> /usr/sbin/apachectl: line 193: 28743 Trace/BPT trap $HTTPD -t
> Chasey:~/Src/mcache gdunne$ exit
> exit
Hmmm... Seems like some sort of PHP problem, where either a) the
phpize command is not properly configuring the autoconf build stuff for
the module, or b) PHP's runtime loadable module support is broken?
Are you running any other modules loaded at run time? You could try
compiling the mcache module directly into php rather than as a module.
I'd suggest hitting up the php mailing list as this definitely appears
to be some sort of php/autoconf build configuration issue specific to
your setup.
>
> Script done on Fri Mar 4 15:50:14 2005
>
John McCaskey
From gblock at ctoforaday.com Mon Mar 7 01:13:03 2005
From: gblock at ctoforaday.com (Gregory Block)
Date: Mon Mar 7 01:14:03 2005
Subject: TCP_NOPUSH and Mac OS X
In-Reply-To: <67631abe832c214700870b65b14ed2c9@citeulike.org>
References: <67631abe832c214700870b65b14ed2c9@citeulike.org>
Message-ID:
I had a bug open on Radar regarding memcached causing core dumps when
used with poll(), kevent(), or anything other than select(). I've been
informed by the team that there's a fix in place for Tiger, and that
they've tested under Tiger with poll() without seeing the kernel panics.
So...
Until that fix ships, it's memcached + select() on Mac OS X, or
watch your kernel go tits up.
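If memory serves, libevent (which memcached uses for its event loop)
checks EVENT_NO* environment variables when picking a backend at
event_init() time, so you can pin it to select() without patching --
worth verifying against your libevent version's event.c:

#include <stdlib.h>

/* Hedged sketch: veto the risky backends before event_init() runs,
 * leaving libevent to fall back to select(). The variable names are
 * from memory of libevent's event.c. */
setenv("EVENT_NOKQUEUE", "1", 1);
setenv("EVENT_NOPOLL", "1", 1);

Setting the same variables in the environment that launches memcached
accomplishes the same thing without touching the source at all.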
On 5 Mar 2005, at 20:20, Richard Cameron wrote:
>
> There was some discussion on this list last year about some fairly
> serious performance problems on Mac OS X. I was seeing these too, and
> I think I've isolated the problem to the TCP_NOPUSH option, and
> there's a one line hack which seems to solve it.
>
> On OS X 10.3.8, running memcached locally and connecting to it on
> localhost, the symptoms were that there was a latency of about 0.2
> seconds between sending a command down the socket to the server and
> getting a reply. Doing a tcpdump showed that the delay was *exactly*
> 200ms on every request, however running a kdump showed that memcached
> was actually writing its response to the socket pretty much
> instantaneously.
>
> The relevant hack which seemed to get things working again was to
> simply comment out the line in memcached.c which set TCP_NOPUSH:
>
> #ifdef TCP_NOPUSH
> // setsockopt(c->sfd, IPPROTO_TCP, TCP_NOPUSH, &val, sizeof(val));
> #endif
>
> It doesn't seem to be well known (at least, Google doesn't know) that
> TCP_NOPUSH is simply broken on OS X, and there was some evidence on
> the list that some people managed to get memcached running "out of the
> box" without this sort of latency. I'd be interested to know if that's
> still the case as it might shed a little more light on the problem.
>
> However, I'm quite willing to conclude there is some underlying
> problem with the operating system, as things continue to get even
> stranger:
>
> As I couldn't use TCP_NOPUSH, I put a "#undef TCP_NOPOSH" at the top
> of the file, which has the effect of making the code set TCP_NODELAY
> on the socket. This is exactly what I wanted:
>
> #if !defined(TCP_NOPUSH)
> setsockopt(sfd, IPPROTO_TCP, TCP_NODELAY, &flags, sizeof(flags));
> #endif
>
> This worked quite nicely (about a factor of 3 speedup over the lo
> interface), but when I load tested it for an extended period (about 5
> minutes) it seemed to fairly reliably cause a kernel panic (stack
> trace attached for interest below). Dropping the TCP_NODELAY option
> again seemed to "fix" things, but I've got no idea whether this isn't
> simply because it conspires to slow things down enough such that
> whatever race condition in the kernel is causing the panic doesn't
> happen any more. Does anyone else see this, or is it just a (rather
> annoying) quirk of my machine?
>
> Richard
>
>
>
> *********
>
> Sat Mar 5 19:33:12 2005
>
>
> Unresolved kernel trap(cpu 0): 0x300 - Data access
> DAR=0x0000000000000014 PC=0x000000000020C8F4
> Latest crash info for cpu 0:
> Exception state (sv=0x31747C80)
> PC=0x0020C8F4; MSR=0x00009030; DAR=0x00000014; DSISR=0x40000000;
> LR=0x0020C800; R1=0x12213C20; XCP=0x0000000C (0x300 - Data access)
> Backtrace:
> 0x40471D84 0x0020C330 0x002463E4 0x00094160 0x01C465A0
> Proceeding back via exception chain:
> Exception state (sv=0x31747C80)
> previously dumped as "Latest" state. skipping...
> Exception state (sv=0x28307000)
> PC=0x9002E1CC; MSR=0x0000F030; DAR=0x1C3EB004; DSISR=0x40000000;
> LR=0x00007B38; R1=0xBFFFF910; XCP=0x00000030 (0xC00 - System call)
>
> Kernel version:
> Darwin Kernel Version 7.8.0:
> Wed Dec 22 14:26:17 PST 2004; root:xnu/xnu-517.11.1.obj~1/RELEASE_PPC
>
>
> panic(cpu 0): 0x300 - Data access
> Latest stack backtrace for cpu 0:
> Backtrace:
> 0x000835F8 0x00083ADC 0x0001EDA4 0x00090BD8 0x00093FCC
> Proceeding back via exception chain:
> Exception state (sv=0x31747C80)
> PC=0x0020C8F4; MSR=0x00009030; DAR=0x00000014; DSISR=0x40000000;
> LR=0x0020C800; R1=0x12213C20; XCP=0x0000000C (0x300 - Data access)
> Backtrace:
> 0x40471D84 0x0020C330 0x002463E4 0x00094160 0x01C465A0
> Exception state (sv=0x28307000)
> PC=0x9002E1CC; MSR=0x0000F030; DAR=0x1C3EB004; DSISR=0x40000000;
> LR=0x00007B38; R1=0xBFFFF910; XCP=0x00000030 (0xC00 - System call)
>
> Kernel version:
> Darwin Kernel Version 7.8.0:
> Wed Dec 22 14:26:17 PST 2004; root:xnu/xnu-517.11.1.obj~1/RELEASE_PPC
>
>
> *********
>
From jasper at bookings.nl Tue Mar 8 03:50:17 2005
From: jasper at bookings.nl (Jasper Cramwinckel)
Date: Tue Mar 8 03:51:01 2005
Subject: no stats with perl Cache::Memcached
Message-ID: <422D9179.7090402@bookings.nl>
Hi,
I have a problem with getting stats from the memcached. I try the
following command line statement:
[jasper@peony85 jasper]$ perl -e 'use Data::Dumper; use Cache::Memcached;
my $memd = new Cache::Memcached { servers =>
["192.168.1.60:11211","192.168.1.61:11211" ] };
print Dumper $memd->set("key", "value");
print Dumper $memd->get("key");
print Dumper $memd->stats("maps")'
$VAR1 = '1';
$VAR1 = 'value';
$VAR1 = {
'total' => {
'get_misses' => '0',
'total_connections' => '0',
'get_hits' => '0',
'curr_items' => '0',
'connection_structures' => '0',
'total_items' => '0',
'bytes' => '0',
'bytes_read' => '0',
'cmd_get' => '0',
'cmd_set' => '0',
'bytes_written' => '0'
},
'hosts' => {
'192.168.1.60:11211' => {
'misc' => {},
'maps' => 'SERVER_ERROR
cannot open the maps file'
},
'192.168.1.61:11211' => {
'misc' => {},
'maps' => 'SERVER_ERROR
cannot open the maps file'
}
}
};
Storing and retrieving work fine, but I do not get any results in my
stats.
Any idea how to solve this?
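(From a quick look at the sources, if I'm reading them right: the
all-zero 'total' block seems to be Cache::Memcached summing the 'misc'
counters, which are only populated when you ask for the default stats
rather than just "maps"; and the "maps" stat itself reads the Linux-only
/proc/self/maps. Roughly what memcached does server-side, going by
memory of memcached.c:

FILE *fp = fopen("/proc/self/maps", "r");
if (fp == NULL) {
    /* no /proc (non-Linux), or the open was blocked */
    out_string(c, "SERVER_ERROR cannot open the maps file");
    return;
}

So a first thing to check is whether the memcached servers can actually
read their own /proc/self/maps.)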
Thanks, Jasper
From johnm at klir.com Thu Mar 10 09:32:38 2005
From: johnm at klir.com (John McCaskey)
Date: Thu Mar 10 09:32:48 2005
Subject: mcache PHP extension for Windows
In-Reply-To: <896.1110466334@www38.gmx.net>
References: <896.1110466334@www38.gmx.net>
Message-ID: <1110475958.7103.60.camel@dev01>
Has anyone compiled mcache under Windows who could share the DLL? It
might be nice if I could put it up for download on the homepage. I
don't really have a good setup to compile the DLL under Windows at the
moment.
If someone would like to submit new versions of the DLL with each update,
I'd happily put it up and give them credit and/or link to their own site
hosting the DLLs.
On Thu, 2005-03-10 at 15:52 +0100, muellmails@gmx.de wrote:
> Hello,
>
> I just wanted to ask if there is a Windows .dll of mcache for download
> somewhere?
>
> Thanks,
> Mark
>
--
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
johnm@klir.com
206.902.2027
From sergiosalvatore at yahoo.com Tue Mar 15 14:00:06 2005
From: sergiosalvatore at yahoo.com (Sergio Salvatore)
Date: Tue Mar 15 14:00:20 2005
Subject: memcached and epoll vs. select
Message-ID: <20050315220006.41192.qmail@web41203.mail.yahoo.com>
Dear All,
I've been testing memcached on a cluster of RedHat ES3
boxes for a few weeks now without incident (PHP mcache
C API). ES3's kernel (2.4.21) apparently contains
some backported stuff from the 2.6 series, but not
epoll, which, according to the documentation shipped with
memcached, seems practically necessary.
My question is: is epoll actually "necessary" or is
it simply an optimization? The benefits of epoll are
clear, however, I'm not particularly psyched to
rebuild the OS on all my web servers and standard
select() seems to be working OK in my tests. My
application is not seeing LiveJournal-type loads, so
would I be safe (as in no crashes) by skipping epoll?
Has anybody tried this in production?
Any help you can provide would be greatly appreciated.
Thanks in advance!
/sergio
From dhcom at sundial.com Tue Mar 15 20:01:44 2005
From: dhcom at sundial.com (Damon Hart)
Date: Tue Mar 15 20:01:47 2005
Subject: memcached Python API development status
Message-ID: <4237AFA8.3020803@sundial.com>
Hi all -
I have been using the memcached Python API (caching Python objects used
by multiple machines constructed from data in a PostgreSQL database.) I
have noticed both the request for maintainers of this code on the API
web page and an external web page
(http://www.tummy.com/journals/entries/jafo_20041212_192853) suggesting
that there was at least one other person interested in supporting this code.
I have some observations about the Python code, and I would like to see
some features that are not yet exposed by the Python API made accessible
from Python.
Are patches posted to the memcached list appropriate in the absence of a
designated maintainer?
What are the pros and cons of maintaining the current Python API in
comparison to a Python wrapper around libmemcache? The effort to update
and add features to the all-Python API might be seen as superfluous in
the presence of a libmemcache wrapper. While I can suggest changes to
the current API, I have no experience in creating a wrapper. Is anybody
contemplating/working on one?
thanks
Damon
From pavel.francirek at firma.seznam.cz Wed Mar 16 07:00:15 2005
From: pavel.francirek at firma.seznam.cz (Pavel Francirek)
Date: Wed Mar 16 00:59:48 2005
Subject: memcached Python API development status
In-Reply-To: <4237AFA8.3020803@sundial.com>
References: <4237AFA8.3020803@sundial.com>
Message-ID: <1110985215.1728.95.camel@franci>
Hi,
We already made a wrapper but haven't had time to run comparison tests
yet :-(
With the Python API we hit serious problems when using multiple servers
(high processor load).
Pavel
On Tue, 2005-03-15 at 23:01 -0500, Damon Hart wrote:
> Hi all -
>
> I have been using the memcached Python API (caching Python objects used
> by multiple machines constructed from data in a PostgreSQL database.) I
> have noticed both the request for maintainers of this code on the API
> web page and an external web page
> (http://www.tummy.com/journals/entries/jafo_20041212_192853) suggesting
> that there was at least one other person interested in supporting this code.
>
> I have some observations about the Python code and would like to see
> some features made accessible in Python implemented which are not
> present in the Python API.
>
> Are patches posted to the memcached list appropriate in the absence of a
> designated maintainer?
>
> What are the pros and cons of maintaining the current Python API in
> comparison to a Python wrapper around libmemcache? The effort to update
> and add features to the all-Python API might be seen as superfluous in
> the presence of a libmemcache wrapper. While I can suggest changes to
> the current API, I have no experience in creating a wrapper. Is anybody
> contemplating/working on one?
From gblock at ctoforaday.com Wed Mar 16 01:39:48 2005
From: gblock at ctoforaday.com (Gregory Block)
Date: Wed Mar 16 01:39:44 2005
Subject: memcached and epoll vs. select
In-Reply-To: <20050315220006.41192.qmail@web41203.mail.yahoo.com>
References: <20050315220006.41192.qmail@web41203.mail.yahoo.com>
Message-ID: <136d63f50fe01aa224a44390972d0630@ctoforaday.com>
Due to a bug in the Mac OS X kernel in 10.3, the only safe polling
function to use is select(). I do so with no difficulty whatsoever,
other than the performance penalties associated with this.
Basically, they're all perfectly functional - whether or not the
performance is acceptable is entirely a problem of the application
domain, the number of parallel connections being processed, the
activity of your application, etc.
So, in short, "suck it and see" is probably the right answer. :)
On 15 Mar 2005, at 22:00, Sergio Salvatore wrote:
> Dear All,
>
> I've been testing memcached on a cluster of RedHat ES3
> boxes for a few weeks now without incident (PHP mcache
> C API). ES3's kernel (2.4.21) apparently contains
> some backported stuff from the 2.6 series, but not
> epoll, which according to the documentation with
> memcached, seems practically necessary.
>
> My question is: is epoll actually "necessary" or is
> it simply an optimization? The benefits of epoll are
> clear, however, I'm not particularly psyched to
> rebuild the OS on all my web servers and standard
> select() seems to be working OK in my tests. My
> application is not seeing LiveJournal-type loads, so
> would I be safe (as in no crashes) by skipping epoll?
> Has anybody tried this in production?
>
> Any help you can provide would be greatly appreciated.
>
> Thanks in advance!
>
> /sergio
From varteaga at tecnobe.com Wed Mar 16 01:44:13 2005
From: varteaga at tecnobe.com (Vicente Arteaga)
Date: Wed Mar 16 01:44:27 2005
Subject: memcached Python API development status
In-Reply-To: <4237AFA8.3020803@sundial.com>
References: <4237AFA8.3020803@sundial.com>
Message-ID: <4237FFED.4020405@tecnobe.com>
You can take a look at SWIG (swig.org), which may help a lot! (BTW, I've
never used it, although I will use it on the next project I can.)
Regards!
Damon Hart wrote:
> Hi all -
>
> I have been using the memcached Python API (caching Python objects
> used by multiple machines constructed from data in a PostgreSQL
> database.) I have noticed both the request for maintainers of this
> code on the API web page and an external web page
> (http://www.tummy.com/journals/entries/jafo_20041212_192853)
> suggesting that there was at least one other person interested in
> supporting this code.
>
> I have some observations about the Python code and would like to see
> some features made accessible in Python implemented which are not
> present in the Python API.
>
> Are patches posted to the memcached list appropriate in the absence of
> a designated maintainer?
>
> What are the pros and cons of maintaining the current Python API in
> comparison to a Python wrapper around libmemcache? The effort to
> update and add features to the all-Python API might be seen as
> superfluous in the presence of a libmemcache wrapper. While I can
> suggest changes to the current API, I have no experience in creating a
> wrapper. Is anybody contemplating/working on one?
>
> thanks
>
> Damon
>
--
Vicente Arteaga
Tecnobe Tecnología, S.L.
C/ Diagonal, 34A 3o1a
08290 Cerdanyola del Valles
93 580 98 95
From greg at corga.com Wed Mar 16 07:55:28 2005
From: greg at corga.com (Greg Grothaus)
Date: Wed Mar 16 07:44:32 2005
Subject: PHP Client Hangs Apache: Any Ideas?
Message-ID: <423856F0.2080803@corga.com>
I tried using memcached across two servers using the PHP client. After
running the servers for a few hours, the number of apache processes had
grown to the maximum number on the server and requests were being
dropped. The apache processes simply weren't terminating. I can't
"prove" that it was memcached as the issue, but I've repeated this
experiment a number of times with and without memcached. With
memcached, apache processes grow and never complete; without memcached
there is no problem.
Any idea as to what might be happening here?
-Greg
From johnm at klir.com Wed Mar 16 08:28:30 2005
From: johnm at klir.com (John McCaskey)
Date: Wed Mar 16 08:28:44 2005
Subject: PHP Client Hangs Apache: Any Ideas?
In-Reply-To: <423856F0.2080803@corga.com>
References: <423856F0.2080803@corga.com>
Message-ID: <1110990510.8972.19.camel@dev01>
On Wed, 2005-03-16 at 10:55 -0500, Greg Grothaus wrote:
> I tried using memcached across two servers using the PHP client. After
> running the servers for a few hours, the number of apache processes had
> grown to the maximum number on the server and requests were being
> dropped. The apache processes simply weren't terminating. I can't
> "prove" that it was memcached as the issue, but I've repeated this
> experiment a number of times with and without memcached. With
> memcached, apache processes grow and never complete, without memcached
> there is no problem.
>
> Any idea as to what might be happening here?
What PHP client are you using? There are several of varying quality;
until we know which, it's very hard to give any suggestions.
> -Greg
--
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
johnm@klir.com
206.902.2027
From greg at corga.com Wed Mar 16 09:39:16 2005
From: greg at corga.com (Greg Grothaus)
Date: Wed Mar 16 09:28:19 2005
Subject: PHP Client Hangs Apache: Any Ideas?
In-Reply-To: <1110990510.8972.19.camel@dev01>
References: <423856F0.2080803@corga.com> <1110990510.8972.19.camel@dev01>
Message-ID: <42386F44.90402@corga.com>
I initially tried the version on the memcached website that Brad put up
there. Noticing these errors, I switched and tried to use php-mcache,
but it had the same difficulties. I am currently trying to double check
that there are no references to Brad's PHP client that might be causing
the problem, but I don't think that there are.
-Greg
John McCaskey wrote:
>On Wed, 2005-03-16 at 10:55 -0500, Greg Grothaus wrote:
>
>
>>I tried using memcached across two servers using the PHP client. After
>>running the servers for a few hours, the number of apache processes had
>>grown to the maximum number on the server and requests were being
>>dropped. The apache processes simply weren't terminating. I can't
>>"prove" that it was memcached as the issue, but I've repeated this
>>experiment a number of times with and without memcached. With
>>memcached, apache processes grow and never complete, without memcached
>>there is no problem.
>>
>>Any idea as to what might be happening here?
>>
>>
>
>What PHP client are you using? There are several of varying quality;
>until we know which, it's very hard to give any suggestions.
>
>
>
>>-Greg
>>
>>
From johnm at klir.com Wed Mar 16 09:29:00 2005
From: johnm at klir.com (John McCaskey)
Date: Wed Mar 16 09:29:17 2005
Subject: libmemcache segfault / memory leak in 1.2.3 (patch to fix included)
In-Reply-To: <20050316033248.62910.qmail@web41211.mail.yahoo.com>
References: <20050316033248.62910.qmail@web41211.mail.yahoo.com>
Message-ID: <1110994140.8972.30.camel@dev01>
Sergio, Sean & everyone,
Sergio was experiencing an unusual segfault in my php extension, and
I've tracked it down into the internals of libmemcache. Since users are
hitting this issue in production, I thought it best to immediately
forward a potential fix to the entire list. Hope you don't mind, Sean!
The problem is that the internally allocated buffer used to live at the
memcache struct level, but was moved to the server struct level. Yet the
cleanup was still being performed in mcm_free() rather than in
mcm_server_free().
There were two errors with this approach. One, only the last server in
the list got its buffer freed; all the other buffers were leaked. Two,
there was no check to ensure that the list of servers was not empty, and
in that situation the free would occur on a null pointer and a segfault
would ensue.
I have patched this to perform the free inside mcm_server_free() on a
per-server basis, and this fixes both errors. See the attached patch,
which is against libmemcache-1.2.3.
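In sketch form, the shape of the fix (struct and field names here are
illustrative rather than copied from the 1.2.3 source; the attached
patch is authoritative):

#include <stdlib.h>

struct memcache_server {        /* illustrative, not the real layout */
    char *buf;                  /* per-server read buffer, may be NULL */
    /* ... */
};

void mcm_server_free(struct memcache_server *ms)
{
    if (ms->buf != NULL)        /* a server that never connected may */
        free(ms->buf);          /* not have allocated a buffer yet */
    free(ms);
}

/* mcm_free() now just walks the server list calling mcm_server_free()
 * on each entry; it no longer frees any buffer itself, so every
 * server's buffer is released and nothing is freed through a null
 * pointer when the list is empty. */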
On Tue, 2005-03-15 at 19:32 -0800, Sergio Salvatore wrote:
> John,
>
> Excellent. I'm glad it's making more sense now.
> Thanks so much for your hard work. Please don't
> hesitate to let me know if there's anything I can do
> to help test...
>
> /sergio
>
> --- John McCaskey wrote:
>
> > Sergio,
> >
> > Wow, I can reproduce it now! I think it may have
> > something to do with an invalid hostname vs a
> > connection refused. I'm about to leave the office
> > for the night, but I'll try to look at this some
> > more later tonight or at least tomorrow morning and
> > get a fix out. Hopefully it's not an issue in
> > libmemcache itself, but if it is I'll try to send a
> > patch out for that as well.
> >
> > On Tue, 2005-03-15 at 16:20 -0800, Sergio Salvatore wrote:
> > > John,
> > >
> > > Thanks for the prompt response. You're totally
> > > right about the error logs---I should have
> > > included that in my original message. Here is
> > > what's in apache's error_log:
> > >
> > > httpd: memcache: host memcachehost does not exist:
> > > Name or service not known. Not adding to server
> > > list.: Success
> > > httpd:
> > > /home/sergio/src/libmemcache-1.2.3/memcache.c:676
> > > Unable to find a valid server
> > > httpd:
> > > /home/sergio/src/libmemcache-1.2.3/memcache.c:2145
> > > Unable to find a valid server
> > >
> > > For the bug test I was just trying to connect to
> > > a single remote server. But it didn't seem to
> > > matter how many servers there were---as long as
> > > none of them were available.
> > >
> > > From what I can see in the debug output---it does
> > > look like libmemcache is complaining---but I
> > > wonder if it's reporting this in a sane way to
> > > mcache---but I'm sure you would know that better
> > > than I. :)
> > >
> > > Let me know if there's any way I can help.
> > >
> > > Thanks!
> > >
> > > /sergio
> > >
> > > --- John McCaskey wrote:
> > > > Sergio,
> > > >
> > > > First, thanks for the bug report, I'd certainly
> > > > like to look into it.
> > > >
> > > > Is there any error info getting logged? I can't
> > > > reproduce this just by shutting down my memcache
> > > > servers... It may be a bug in libmemcache
> > > > itself; libmemcache just does error logging to
> > > > stderr presently and can't be redirected, so you
> > > > should be able to look at your apache error log
> > > > file and see it. Can you cut and paste what you
> > > > see?
> > > >
> > > > Also can you provide some details on the server
> > > > setup? How many memcache instances do you try to
> > > > connect to? Are they local? Remote?
> > > >
> > > > Thanks.
> > > >
> > > > On Tue, 2005-03-15 at 15:50 -0800, Sergio Salvatore wrote:
> > > > > John,
> > > > >
> > > > > I hope this message finds you well. First,
> > > > > thanks for the great work on the mcache php
> > > > > extension! It's a great implementation.
> > > > >
> > > > > I'm running into one very reproducible
> > > > > problem. When testing, if all my memcached
> > > > > instances are down, apache segfaults and the
> > > > > error log shows that mcache doesn't like that
> > > > > it can't find any servers. Of course, all the
> > > > > memcache instances being down is very
> > > > > unlikely, but the segfaults are not desirable.
> > > > >
> > > > > I was thinking that the get() method could
> > > > > simply return false under this condition.
> > > > >
> > > > > FYI, I'm using mcache 1.1.2 (as a shared
> > > > > module) with libmemcache 1.2.3 and memcached
> > > > > 1.1.11 on RedHat ES 3. PHP version 4.3.10
> > > > > statically compiled into Apache 1.3.33.
> > > > >
> > > > > Any ideas on how to fix this?
> > > > >
> > > > > Thanks in advance for your help.
> > > > >
> > > > > Sincerely,
> > > > >
> > > > > Sergio Salvatore
> > > > --
> > > > John A. McCaskey
> > > > Software Development Engineer
> > > > Klir Technologies, Inc.
> > > > johnm@klir.com
> > > > 206.902.2027
> > --
> > John A. McCaskey
> > Software Development Engineer
> > Klir Technologies, Inc.
> > johnm@klir.com
> > 206.902.2027
--
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
johnm@klir.com
206.902.2027
-------------- next part --------------
A non-text attachment was scrubbed...
Name: libmemcache-1.2.3-memory_fixes.patch
Type: text/x-patch
Size: 1098 bytes
Desc: not available
Url : http://lists.danga.com/pipermail/memcached/attachments/20050316/043750a6/libmemcache-1.2.3-memory_fixes-0001.bin
From johnm at klir.com Wed Mar 16 09:31:18 2005
From: johnm at klir.com (John McCaskey)
Date: Wed Mar 16 09:31:32 2005
Subject: PHP Client Hangs Apache: Any Ideas?
In-Reply-To: <42386F44.90402@corga.com>
References: <423856F0.2080803@corga.com> <1110990510.8972.19.camel@dev01>
<42386F44.90402@corga.com>
Message-ID: <1110994278.8972.33.camel@dev01>
On Wed, 2005-03-16 at 12:39 -0500, Greg Grothaus wrote:
> I initially tried the version on the memcached website that Brad put up
> there. Noticing these errors, I switched and tried to use php-mcache,
> but it had the same difficulties. I am currently trying to double check
> that there are no references to Brad's PHP client that might be causing
> the problem, but I don't think that there are.
> -Greg
Well, I'm the author of the php mcache extension... so I can help most
with that. I haven't heard of anyone having these issues, and we use the
extension extensively in production systems. Could you provide details
on the following:
php version,
os, and kernel version,
libevent version,
memcached version,
polling method used by libevent (epoll, select, etc),
apache version (and, if 2.x, the worker model: pre-fork, per-thread, etc.)
Also, if you could describe the environment: how many memcached
instances? Are they on the same machine as the client? A separate
machine over the network? etc.
>
> John McCaskey wrote:
>
> >On Wed, 2005-03-16 at 10:55 -0500, Greg Grothaus wrote:
> >
> >
> >>I tried using memcached across two servers using the PHP client. After
> >>running the servers for a few hours, the number of apache processes had
> >>grown to the maximum number on the server and requests were being
> >>dropped. The apache processes simply weren't terminating. I can't
> >>"prove" that it was memcached as the issue, but I've repeated this
> >>experiment a number of times with and without memcached. With
> >>memcached, apache processes grow and never complete, without memcached
> >>there is no problem.
> >>
> >>Any idea as to what might be happening here?
> >>
> >>
> >
> >What PHP client are you using? There are several of varying quality;
> >until we know which, it's very hard to give any suggestions.
> >
> >
> >
> >>-Greg
> >>
> >>
>
--
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
johnm@klir.com
206.902.2027
From johnm at klir.com Wed Mar 16 09:49:48 2005
From: johnm at klir.com (John McCaskey)
Date: Wed Mar 16 09:50:01 2005
Subject: [announce] php: mcache extension 1.1.3
Message-ID: <1110995388.8972.41.camel@dev01>
For those who haven't been around in the past and are unfamiliar,
php-mcache is a php extension written in C as a wrapper around Sean
Chittenden's libmemcache C API. The extension is both faster and more
fully featured than the standard PECL extension.
This release fixes a minor memory allocation/free issue related to
serialization of complex php objects. Php's serialization requires the
use of a php_smart_str type, and I previously did not explicitly free the
smart str after using it. Because of php's garbage collection
facilities this would not truly result in a leak, but it was inefficient
and would sometimes result in php giving some warnings (I could never
reproduce this, but Sean was able to at one point). At any rate, if you
performed a large number of sets/gets of complex objects in a single
page hit, it would use a good amount of memory, and then it would all get
freed at the end of the request; now it gets freed earlier and the
memory usage should be lower.
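The pattern now looks roughly like this (a from-memory sketch of the
PHP 4/5-era serialization calls, so check the exact signatures against
your PHP headers):

#include "php.h"
#include "ext/standard/php_var.h"
#include "ext/standard/php_smart_str.h"

static void store_zval(zval *val TSRMLS_DC)
{
    smart_str buf = {0};
    php_serialize_data_t var_hash;

    PHP_VAR_SERIALIZE_INIT(var_hash);
    php_var_serialize(&buf, &val, &var_hash TSRMLS_CC);
    PHP_VAR_SERIALIZE_DESTROY(var_hash);

    /* ... hand buf.c / buf.len to libmemcache's set here ... */

    smart_str_free(&buf);   /* the previously-missing explicit free */
}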
The release is available at http://www.klir.com/~johnm/php-mcache/
--
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
johnm@klir.com
206.902.2027
From rg at tcslon.com Wed Mar 16 10:42:58 2005
From: rg at tcslon.com (Russ Garrett)
Date: Wed Mar 16 10:43:26 2005
Subject: PHP Client Hangs Apache: Any Ideas?
In-Reply-To: <423856F0.2080803@corga.com>
References: <423856F0.2080803@corga.com>
Message-ID: <1110998578.3210.10.camel@localhost.localdomain>
On Wed, 2005-03-16 at 10:55 -0500, Greg Grothaus wrote:
> I tried using memcached across two servers using the PHP client. After
> running the servers for a few hours, the number of apache processes had
> grown to the maximum number on the server and requests were being
> dropped. The apache processes simply weren't terminating.
This sounds remarkably similar to an elusive and ongoing bug that we've
been seeing for months. We've found no solution, changing memcached
clients doesn't help (we've done it 3 times), and the PHP development
team didn't have a clue.
Further debugging (try loading up gdb against a hanging apache backend)
indicated that it had to do with some fairly fundamental freeing of
constant strings (I seem to remember it was actually hanging in free()),
which obviously makes no sense.
It's not quite as pronounced now as it used to be - we have remedied the
situation by having a 5-minute cron job which kills backends over a
certain amount of memory usage, and a daily job which restarts apache on
all our servers.
We can't prove it's memcached because running the site with memcached
disabled is obviously untenable, and we're unable to reproduce it on a
single server under test loads.
The overall conclusion is that PHP is prone to doing annoying,
inexplicable things seemingly randomly. Several of our site updates
recently have caused random segfaults when put live for no good reason,
and the number of man-hours we spend debugging this (if you can call it
debugging - more like brute-forcing the answer) is insane.
If we were to re-code Audioscrobbler/Last.fm again from scratch we'd
undoubtedly use Java. Sorry this has turned into a rant against PHP,
heh.
--
Russ Garrett Last.fm Limited
russ@last.fm http://last.fm
From chris-lists at bolt.cx Wed Mar 16 12:30:40 2005
From: chris-lists at bolt.cx (Chris Bolt)
Date: Wed Mar 16 12:30:44 2005
Subject: PHP Client Hangs Apache: Any Ideas?
In-Reply-To: <423856F0.2080803@corga.com>
References: <423856F0.2080803@corga.com>
Message-ID: <42389770.8060004@bolt.cx>
> I tried using memcached across two servers using the PHP client. After
> running the servers for a few hours, the number of apache processes had
> grown to the maximum number on the server and requests were being
> dropped. The apache processes simply weren't terminating. I can't
> "prove" that it was memcached as the issue, but I've repeated this
> experiment a number of times with and without memcached. With
> memcached, apache processes grow and never complete, without memcached
> there is no problem.
Are you using persistent connections? From what I can tell, when using
persistent connections with the PHP client, they can sometimes get stuck
in a state that causes them to hang indefinitely.
From johnm at klir.com Wed Mar 16 12:42:24 2005
From: johnm at klir.com (John McCaskey)
Date: Wed Mar 16 12:42:37 2005
Subject: PHP Client Hangs Apache: Any Ideas?
In-Reply-To: <42389770.8060004@bolt.cx>
References: <423856F0.2080803@corga.com> <42389770.8060004@bolt.cx>
Message-ID: <1111005744.8972.83.camel@dev01>
On Wed, 2005-03-16 at 13:30 -0700, Chris Bolt wrote:
> > I tried using memcached across two servers using the PHP client. After
> > running the servers for a few hours, the number of apache processes had
> > grown to the maximum number on the server and requests were being
> > dropped. The apache processes simply weren't terminating. I can't
> > "prove" that it was memcached as the issue, but I've repeated this
> > experiment a number of times with and without memcached. With
> > memcached, apache processes grow and never complete, without memcached
> > there is no problem.
>
> Are you using persistent connections? From what I can tell, when using
> persistent connections with the PHP client, they can sometimes get stuck
> in a state that causes them to hang indefinitely.
I'd like more info on this; it has never once occurred for me, and we
use persistent connections in production. If you could provide a
scenario that helps reproduce this, I would certainly work to fix it.
As to Greg's issue, we had a couple additional emails off list. I think
part of the problem may be that he is running memcached on 2.4.x linux
kernels using select() as the polling method. We have found memcached
to be unreliable when using select() and to slow down immensely,
sometimes hanging, but to run perfectly with epoll under 2.6.x.
>
--
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
johnm@klir.com
206.902.2027
From johnm at klir.com Wed Mar 16 12:44:06 2005
From: johnm at klir.com (John McCaskey)
Date: Wed Mar 16 12:44:20 2005
Subject: [announce] php: mcache extension 1.1.4
Message-ID: <1111005846.8972.87.camel@dev01>
Ok, so yet another quick trivial bug fix release...
This one fixes an issue where calling $mc->stats() when no valid servers
have been added could result in a segfault. libmemcache returns a null
pointer for the version in this situation, and I was not checking for
that. If you aren't using the stats() call, this has no impact.
Updated build at http://www.klir.com/~johnm/php-mcache/
--
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
johnm@klir.com
206.902.2027
From chris-lists at bolt.cx Wed Mar 16 13:43:16 2005
From: chris-lists at bolt.cx (Chris Bolt)
Date: Wed Mar 16 13:42:54 2005
Subject: PHP Client Hangs Apache: Any Ideas?
In-Reply-To: <1111005744.8972.83.camel@dev01>
References: <423856F0.2080803@corga.com> <42389770.8060004@bolt.cx>
<1111005744.8972.83.camel@dev01>
Message-ID: <4238A874.2040702@bolt.cx>
> I'd like more info on this; it has never once occurred for me, and we
> use persistent connections in production. If you could provide a
> scenario that helps reproduce this, I would certainly work to fix it.
Well, to clarify, I'm using the pure PHP client. It's certainly not the
most reproducible bug. When I was experiencing it, it would only happen
on one of six web servers at a time. I think it's caused by a PHP script
not finishing correctly, leaving the persistent connection in an
undefined state, so that the next time the connection is used it blocks.
It's been months since I disabled persistent connections though, so this
is all from memory.
> As to Greg's issue, we had a couple of additional emails off-list. I
> think part of the problem may be that he is running memcached on 2.4.x
> Linux kernels using select() as the polling method. We have found
> memcached to be unreliable when using select(), slowing down immensely
> and sometimes hanging, but running perfectly with epoll under 2.6.x.
We're using epoll on 2.6.
From johnm at klir.com Wed Mar 16 13:45:53 2005
From: johnm at klir.com (John McCaskey)
Date: Wed Mar 16 13:46:09 2005
Subject: PHP Client Hangs Apache: Any Ideas?
In-Reply-To: <4238A874.2040702@bolt.cx>
References: <423856F0.2080803@corga.com> <42389770.8060004@bolt.cx>
<1111005744.8972.83.camel@dev01> <4238A874.2040702@bolt.cx>
Message-ID: <1111009553.8972.101.camel@dev01>
On Wed, 2005-03-16 at 14:43 -0700, Chris Bolt wrote:
> > I'd like more info on this; it has never once occurred for me, and we
> > use persistent connections in production. If you could provide a
> > scenario that helps reproduce this, I would certainly work to fix it.
>
> Well, to clarify, I'm using the pure PHP client. It's certainly not the
> most reproducible bug. When I was experiencing it, it would only happen
> on one of six web servers at a time. I think it's caused by a PHP script
> not finishing correctly, leaving the persistent connection in an
> undefined state, so that the next time the connection is used it blocks.
> It's been months since I disabled persistent connections though, so this
> is all from memory.
Ahh, well I can't comment as to that then... but I'd suggest giving my C
extension a try, as it should not suffer from the same issue :)
>
> > As to Greg's issue, we had a couple of additional emails off-list. I
> > think part of the problem may be that he is running memcached on 2.4.x
> > Linux kernels using select() as the polling method. We have found
> > memcached to be unreliable when using select(), slowing down immensely
> > and sometimes hanging, but running perfectly with epoll under 2.6.x.
>
> We're using epoll on 2.6.
--
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
johnm@klir.com
206.902.2027
From ecahill at corp.untd.com Wed Mar 16 15:29:04 2005
From: ecahill at corp.untd.com (Cahill, Earl)
Date: Wed Mar 16 15:29:17 2005
Subject: starting with memcached
Message-ID: <88DCF6AA199DF24C9F994C7D891F82CF150CDC@slcexs02.slc.corp.int.untd.com>
Well, starting, but in a big way. I want to cache some conf file lookups
for several million hits a day. It looks like the default for max
simultaneous connections is 1024, and I'm just wondering how well a
single box handles that many connections. Let's suppose I have 50 boxes
all making 20 connections each, for a thousand total connections; is
that a reasonable thing? Is there a way to limit the number of
connections per host?
We hope to add a second box shortly, but I am guessing the problem will
be similar: the reads will be spread over two boxes, but we will still
likely get 20 connections to each of the two boxes.
My implementation is in Perl, and I am trying to use Cache::Memcached to
get some stats about the server. I have tried each key listed under
stats, but have yet to find one that will just show me what is actually
in the cache. Does such a method exist? Even some command-line way would
be very helpful. Or some guidance, and maybe I could contribute such a
method.
Thanks,
Earl
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.danga.com/pipermail/memcached/attachments/20050316/e84ce951/attachment.html
From mellon at pobox.com Wed Mar 16 15:42:46 2005
From: mellon at pobox.com (Anatoly Vorobey)
Date: Wed Mar 16 15:42:51 2005
Subject: starting with memcached
In-Reply-To: <88DCF6AA199DF24C9F994C7D891F82CF150CDC@slcexs02.slc.corp.int.untd.com>
References: <88DCF6AA199DF24C9F994C7D891F82CF150CDC@slcexs02.slc.corp.int.untd.com>
Message-ID: <20050316234246.GA13148@pobox.com>
On Wed, Mar 16, 2005 at 04:29:04PM -0700, Cahill, Earl wrote:
> Well, starting, but in a big way. I want to cache some conf file lookups
> for several million hits a day. It looks like the default for max
> simultaneous connections is 1024, and I'm just wondering how well a
> single box handles that many connections. Let's suppose I have 50 boxes
> all making 20 connections each, for a thousand total connections; is
> that a reasonable thing?
Not at all, you'd be wasting boxes. Memcached is optimised for a lot of
connections, and (provided you use epoll on Linux or kqueue on
BSD) easily handles 300, 500 or 700 of them while still using
very little CPU. With memcached, the reason to add more boxes is to
increase the total amount of memory available to memcached rather
than to distribute connections.
> Is
> there a way to limit the number of connections per host?
Per running memcached instance -- with the -c command-line flag.
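For example, to cap an instance at 2048 simultaneous connections (a
sketch; the -m memory size is an arbitrary value shown for context):
$ memcached -d -m 1024 -c 2048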
> My implementation is in Perl, and I am trying to use Cache::Memcached to
> get some stats about the server. I have tried each key listed under
> stats, but have yet to find one that will just show me what is actually
> in the cache. Does such a method exist? Even some command-line way would
> be very helpful. Or some guidance, and maybe I could contribute such a
> method.
There is "stats cachedump ".
It's undocumented and you need to understand the internal memory
architecture of memcached (the slabs allocation method, there's an
explanation on the website or in the distribution somewhre) if you want
to use it. It's not a great idea to use it for anything other than
debugging the server. I don't think Cache::Memcached gives you access
to it, you need to actually telnet into the server, as if you were a
client, and type it in.
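If telnet is awkward, a raw socket from Perl does the same thing. A
minimal sketch (the slab id and item limit here are placeholders to
adapt):
use IO::Socket::INET;

# Speak the plain-text protocol directly, as a telnet session would.
my $sock = IO::Socket::INET->new(PeerAddr => '127.0.0.1:11211')
    or die "connect failed: $!";
print $sock "stats cachedump 1 100\r\n";  # slab id 1, at most 100 items
while (my $line = <$sock>) {
    last if $line =~ /^END/;              # the server ends the dump with END
    print $line;                          # lines look like "ITEM <key> ..."
}
close $sock;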
--
avva
"There's nothing simply good, nor ill alone" -- John Donne
From ecahill at corp.untd.com Wed Mar 16 17:34:53 2005
From: ecahill at corp.untd.com (Cahill, Earl)
Date: Wed Mar 16 17:34:59 2005
Subject: starting with memcached
Message-ID: <88DCF6AA199DF24C9F994C7D891F82CF150CDD@slcexs02.slc.corp.int.untd.com>
> Not at all, you'd be wasting boxes. Memcached is optimised for a lot of
> connections, and (provided you use epoll on Linux or kqueue on
> BSD) easily handles 300, 500 or 700 of them while still using
> very little CPU. With memcached, the reason to add more boxes is to
> increase the total amount of memory available to memcached rather
> than to distribute connections.
[Cahill, Earl]
Sorry, I mean I have fifty client boxes connecting to one host. Dumb
question, but I am on Linux; how do I tell if I am using epoll? Right
now I have a load of 0.07 and 640 connections.
We did a partial launch today and want to finish this part of our launch
tomorrow. Today's launch cached a small number (under a thousand) of things
that get hit a lot, and tomorrow's launch will cache a lot of things (maybe
a couple million) that get hit a lot. I'm a bit worried about tomorrow.
> > Is
> > there a way to limit the number of connections per host?
>
> Per a running memcached instance -- with a -c command line flag.
[Cahill, Earl]
Yeah, sorry again, I mean per client, like I want at most ten connections
per client.
> I don't think Cache::Memcached gives you access
> to it, you need to actually telnet into the server, as if you were a
> client, and type it in.
[Cahill, Earl]
Well, I wrote something that will dump all the keys by host and attached it.
Really, I just care what the keys are, not really even the values, but going
forward, if I am interested in the values, I guess I could just do gets to
get them.
Thanks,
Earl
-------------- next part --------------
sub dump_all {
    my $self  = shift;
    my $hosts = shift || 'all';    # not used yet
    my $ref   = $self->stats("slabs");
    my $slabs = {};
    # First pass: collect the slab ids in use on each host.
    foreach my $host (keys %{$ref->{hosts}}) {
        $slabs->{$host} ||= {};
        while ($ref->{hosts}{$host}{slabs} =~ /^(.+)$/mg) {
            my $line = $1;
            next unless ($line =~ /^STAT\s+(\d+)/);
            $slabs->{$host}{$1} = 1;
        }
    }
    # Second pass: cachedump each slab and print its keys.
    my $count = 0;
    foreach my $host (keys %{$slabs}) {
        my $_self = Memcache->new({ servers => [$host] });
        print "$host\n" . ('-' x length($host)) . "\n";
        foreach my $slab (keys %{$slabs->{$host}}) {
            my $command = "cachedump $slab";
            my $ref = $_self->stats($command);
            while ($ref->{hosts}{$host}{$command} =~ /^ITEM\s+(\S+)/mg) {
                print "$1\n";
                $count++;
            }
        }
    }
    print "$count\n";
}
From camster at citeulike.org Thu Mar 17 00:57:00 2005
From: camster at citeulike.org (Richard Cameron)
Date: Thu Mar 17 00:57:10 2005
Subject: starting with memcached
In-Reply-To: <88DCF6AA199DF24C9F994C7D891F82CF150CDD@slcexs02.slc.corp.int.untd.com>
References: <88DCF6AA199DF24C9F994C7D891F82CF150CDD@slcexs02.slc.corp.int.untd.com>
Message-ID:
On 17 Mar 2005, at 01:34, Cahill, Earl wrote:
> Dumb question, but I am on Linux; how do I tell if I am using epoll?
1) Build libevent from source (or, at least, do a ./configure)
2) grep HAVE_EPOLL config.h
If you've got epoll support in your kernel, HAVE_EPOLL will be #defined
to 1; otherwise it will be #undef'd.
Richard.
From mellon at pobox.com Thu Mar 17 01:07:17 2005
From: mellon at pobox.com (Anatoly Vorobey)
Date: Thu Mar 17 01:07:24 2005
Subject: starting with memcached
In-Reply-To:
References: <88DCF6AA199DF24C9F994C7D891F82CF150CDD@slcexs02.slc.corp.int.untd.com>
Message-ID: <20050317090717.GA14660@pobox.com>
On Thu, Mar 17, 2005 at 08:57:00AM +0000, Richard Cameron wrote:
>
> On 17 Mar 2005, at 01:34, Cahill, Earl wrote:
>
> >Dumb question, but I am on Linux; how do I tell if I am using epoll?
>
> 1) Build libevent from source (or, at least, do a ./configure)
> 2) grep HAVE_EPOLL config.h
This shows that epoll was available at build time and was compiled into
the executable. But if you later run the executable on a different
machine, or upgrade/downgrade the machine, or if it has the epoll
system header but no actual support in the kernel due to
misconfiguration, libevent will silently pick a different method for
memcached at runtime.
A better way is to run the built executable thus:
$ EVENT_SHOW_METHOD=1 ./memcached
It will print, at startup, the method chosen by libevent, which will be
epoll if both memcached and the machine it's running on support it.
--
avva
"There's nothing simply good, nor ill alone" -- John Donne
From mellon at pobox.com Thu Mar 17 01:21:30 2005
From: mellon at pobox.com (Anatoly Vorobey)
Date: Thu Mar 17 01:21:34 2005
Subject: starting with memcached
In-Reply-To: <88DCF6AA199DF24C9F994C7D891F82CF150CDD@slcexs02.slc.corp.int.untd.com>
References: <88DCF6AA199DF24C9F994C7D891F82CF150CDD@slcexs02.slc.corp.int.untd.com>
Message-ID: <20050317092130.GB14660@pobox.com>
On Wed, Mar 16, 2005 at 06:34:53PM -0700, Cahill, Earl wrote:
> > Not at all, you'd be wasting boxes. Memcached is optimised for a lot of
> > connections, and (provided you use epoll on Linux or kqueue on
> > BSD) easily handles 300, 500 or 700 of them while still using
> > very little CPU. With memcached, the reason to add more boxes is to
> > increase the total amount of memory available to memcached rather
> > than to distribute connections.
>
> [Cahill, Earl]
>
> Sorry, I mean I have fifty client boxes connecting to one host. Dumb
> question, but I am on Linux; how do I tell if I am using epoll?
See my previous message to the list.
I don't see a problem with any number of client boxes connecting to the
memcached server; what matters to the server is the number of
connections, not the number of different machines they're coming from.
If you have really really huge traffic, memcached traffic may rise
high enough to saturate your LAN, but that seems unlikely in your
case (millions of hits per day).
> We did a partial launch today and want to finish this part of our launch
> tomorrow. Today's launch cached a small number (under a thousand) of things
> that get hit a lot, and tomorrow's launch will cache a lot of things (maybe
> a couple million) that get hit a lot. I'm a bit worried about tomorrow.
Good luck!
> > Per a running memcached instance -- with a -c command line flag.
>
> [Cahill, Earl]
>
> Yeah, sorry again, I mean per client, like I want at most ten connections
> per client.
I guess I don't understand you, or you're not asking the right question.
Cache::Memcached has an OO interface. Every object you create will
connect, on demand, to every memcached server you gave it in the list
of servers when you created it (or changed later with the set_servers
call). If you only have one server, that will be one connection per
object. It will not create more than one connection to the same server
(per object). If you have 8 separate processes each using one
Cache::Memcached object (for instance, 8 Apache subprocesses handling 8
requests simultaneously, running Perl via mod_perl or CGI and creating
one Cache::Memcached object each), you will have a total of
8 * number-of-servers connections from clients on that box to memcached
servers.
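In code, roughly (a sketch using the standard Cache::Memcached API; the
server address is made up):
use Cache::Memcached;

# One object per process; it opens at most one connection per listed
# server, and only when a request first needs it.
my $memd = Cache::Memcached->new({
    servers => ['10.0.0.1:11211'],
});
$memd->set('key', 'value');   # the connection is made here, on demand
# 8 Apache children x 1 server in the list = 8 client connections total.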
Does that answer your question?
> Well, I wrote something that will dump all the keys by host and attached it.
> Really, I just care what the keys are, not really even the values, but going
> forward, if I am interested in the values, I guess I could just do gets to
> get them.
Yeah. But note that if you don't specify a maximum number of items to
dump, it will *not* dump all the items for that slab, but rather all the
items until the buffer that holds all that text reaches a predefined
limit (2MB, currently hardcoded into the source). It's meant to be a
debugging interface, not a reliable way to get all keys.
--
avva
"There's nothing simply good, nor ill alone" -- John Donne
From camster at citeulike.org Thu Mar 17 08:40:47 2005
From: camster at citeulike.org (Richard Cameron)
Date: Thu Mar 17 08:40:54 2005
Subject: libmemcached "protocol error" and "Server sent data for key not in
request"
Message-ID: <5d29ab34f99ac21d141dcc51d437cb03@citeulike.org>
Does anyone else see these two errors with libmemcached:
regress: memcache.c:1096 protocol error: Unknown error: 0
regress: memcache.c:1031 Server sent data for key not in request.
I see them on Mac OS X but not on Linux. It's a reasonably slow
machine, and it's got the disadvantage of not having a working
TCP_NOPUSH, so the data appears to be coming back from the server in
dribs and drabs.
It appears that there are some particularly unfortunate lengths of
dribs which fool libmemcached into thinking it's getting mangled data
back from the server. While this is much more likely to happen on OS
X, I presume it's theoretically possible on Linux too.
I'm using libmemcached version 1.2.3 and here's my patch which seems to
calm things down:
Index: memcache.c
===================================================================
--- memcache.c (revision 2201)
+++ memcache.c (working copy)
@@ -1011,7 +1011,7 @@
      * If this fails we will scan the whole list. */
     if (res != NULL && res->entries.tqe_next != NULL) {
       for (res = res->entries.tqe_next; res != NULL; res = res->entries.tqe_next) {
-        if ((size_t)(rb - (cp - ms->cur)) > res->len) {
+        if ((size_t)(ms->read_cur - cp) >= res->len) {
           if (memcmp(cp, res->key, res->len) == 0) {
             break;
           }
@@ -1019,7 +1019,7 @@
       }
     } else {
       for (res = req->query.tqh_first; res != NULL; res = res->entries.tqe_next) {
-        if((size_t)(rb - (cp - ms->cur)) > res->len) {
+        if ((size_t)(ms->read_cur - cp) >= res->len) {
           if(memcmp(cp, res->key, res->len) == 0) {
             break;
           }
@@ -1067,8 +1067,8 @@
   bytes_read = ms->read_cur - cp;
-  /* Check if we have read all the data the plus a \r\n */
-  if (bytes_read >= len + 2) {
+  /* Check if we have read all the data the plus a \r\n (plus partial next item or END) */
+  if (bytes_read > len + 2) {
     res->_flags |= MCM_RES_FOUND;
     if (res->size == 0) {
       res->val = ctxt->mcMallocAtomic(res->bytes);
From camster at citeulike.org Fri Mar 18 05:55:47 2005
From: camster at citeulike.org (Richard Cameron)
Date: Fri Mar 18 05:56:00 2005
Subject: Atomic replace and append operations
Message-ID: <3322939d36b4cd05c547947653eec4a3@citeulike.org>
I'm interested in implementing two extra storage commands "areplace"
and "append" for memcached. The former would behave exactly like
"replace", but would return the object's value before it performs the
replace operation; the latter would simply string append the new value
onto the old one.
The motivation is not purely to add extra bloat to memcached but rather
to provide the minimal extra operations I'd need to be able to keep
track of dependency trees in the cached data.
For instance, if I want to cache objects A and B (the results of two
SQL queries, say) which depend on some "thing" in the database (like a
particular user U's account), I'd like to be able to keep track of my
dependencies like this:
U -> A,B
Memcached itself seems like a good choice to store this list. So, when
user U does something to his account, I'll want to delete the newly
defunct objects from the cache. I think the best I can do at the moment
is:
my_deps := [get U]
foreach dep $my_deps:
[delete $dep]
[set U ""]
Also, if I perform another cacheable operation C which depends on U,
I'd like to be able to add this dependency to my list. The existing
protocol lets me say:
old_deps := "get U"
new_deps := append(old_deps, C)
[set U $new_deps]
The trouble, of course, is that there are race conditions in both cases
where multi-threaded code could execute in a way which would corrupt
the dependency list.
On a single machine, I can simply guard the critical section in the
client code with a mutex, but this won't work on a distributed server
farm. It would be possible to use memcached's existing "incr" and
"decr" operations to produce a "global" mutex, but this is a) not
terribly efficient, and b) dangerous if one server incr's the semaphore
on entry to the critical section but crashes before it exits - this
would break things such that no machine could subsequently update the
dependency and we'd be stuck in that state for good.
So, my proposed solution is to implement an "areplace" (atomic replace)
operation which looks like this:
my_deps = [areplace U ""]
and an append operation which looks like this:
[append U " C"]
I could then keep most of the bloat in the client library, not the
server. This approach also has the advantage of being able to track
dependencies over multiple memcached instances.
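To make that concrete, the client side could look roughly like this in
Perl - purely a sketch, since areplace and append are proposed commands
that no released client exposes yet:
sub invalidate_deps {
    my ($memd, $u) = @_;
    # Atomically fetch the old dependency list while resetting it to "".
    my $deps = $memd->areplace($u, '');
    $memd->delete($_) for split ' ', ($deps || '');
}

sub add_dep {
    my ($memd, $u, $c) = @_;
    # Atomically extend the list; no read-modify-write race.
    $memd->append($u, " $c");
}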
This is probably bending memcached's intended use slightly, but I think
it still adheres to the basic principle that it's a cache: all the data
can be regenerated from the database if required. The advantage is that
I can immediately tell when data should be marked as being invalid,
which means I can use much longer cache times which, in turn, means
that the cached items in the "long tail" (objects which get accessed
infrequently but are expensive to compute) can have much higher hit
rates.
Seem at all sensible? I'd be interested in any comments before I go
away and code this up.
Richard.
From gblock at ctoforaday.com Fri Mar 18 06:40:54 2005
From: gblock at ctoforaday.com (Gregory Block)
Date: Fri Mar 18 06:40:39 2005
Subject: Atomic replace and append operations
In-Reply-To: <3322939d36b4cd05c547947653eec4a3@citeulike.org>
References: <3322939d36b4cd05c547947653eec4a3@citeulike.org>
Message-ID: <5b5f9b2ea9ee77dd9468cd0dd50040fe@ctoforaday.com>
I'd second these additions - I've been waiting with a similar kind of
need for some time, and would be very happy to see this added.
On 18 Mar 2005, at 13:55, Richard Cameron wrote:
>
> I'm interested in implementing two extra storage commands "areplace"
> and "append" for memcached. The former would behave exactly like
> "replace", but would return the object's value before it performs the
> replace operation; the latter would simply string append the new value
> onto the old one.
>
> The motivation is not purely to add extra bloat to memcached but
> rather to provide the minimal extra operations I'd need to be able to
> keep track of dependency trees in the cached data.
>
> For instance, if I want to cache objects A and B (the results of two
> SQL queries, say) which depend on some "thing" in the database (like a
> particular user U's account), I'd like to be able to keep track of my
> dependencies like this:
>
> U -> A,B
>
> Memcached itself seems like a good choice to store this list. So, when
> user U does something to his account, I'll want to delete the newly
> defunct objects from the cache. I think the best I can do at the
> moment is:
>
> my_deps := [get U]
> foreach dep $my_deps:
> [delete $dep]
> [set U ""]
>
> Also, if I perform another cacheable operation C which depends on U,
> I'd like to be able to add this dependency to my list. The existing
> protocol lets me say:
>
> old_deps := "get U"
> new_deps := append(old_deps, C)
> [set U $new_deps]
>
> The trouble, of course, is that there are race conditions in both
> cases where multi-threaded code could execute in a way which would
> corrupt the dependency list.
>
> On a single machine, I can simply guard the critical section in the
> client code with a mutex, but this won't work on a distributed server
> farm. It would be possible to use memcached's existing "incr" and
> "decr" operations to produce a "global" mutex, but this is a) not
> terribly efficient, and b) dangerous if one server incr's the
> semaphore on entry to the critical section but crashes before it
> exits - this would break things such that no machine could
> subsequently update the dependency and we'd be stuck in that state for
> good.
>
> So, my proposed solution is to implement an "areplace" (atomic
> replace) operation which looks like this:
>
> my_deps = [areplace U ""]
>
> and an append operation which looks like this:
>
> [append U " C"]
>
> I could then keep most of the bloat in the client library, not the
> server. This approach also has the advantage of being able to track
> dependencies over multiple memcached instances.
>
> This is probably bending memcached's intended use slightly, but I
> think it still adheres to the basic principle that it's a cache: all
> the data can be regenerated from the database if required. The
> advantage is that I can immediately tell when data should be marked as
> being invalid, which means I can use much longer cache times which, in
> turn, means that the cached items in the "long tail" (objects which get
> accessed infrequently but are expensive to compute) can have much
> higher hit rates.
>
> Seem at all sensible? I'd be interested in any comments before I go
> away and code this up.
>
> Richard.
>
From ecahill at corp.untd.com Fri Mar 18 08:40:45 2005
From: ecahill at corp.untd.com (Cahill, Earl)
Date: Fri Mar 18 08:40:48 2005
Subject: initial incr call
Message-ID: <88DCF6AA199DF24C9F994C7D891F82CF150CDF@slcexs02.slc.corp.int.untd.com>
Probably been over this before, but the initial incr call seems kind of
strange to me. Let's say I have a new key called counter, and I want to
incr it. I call $memd->incr('counter'); and get undef back. So, I set it
to 0, and then if I call incr, happy day. The problem is that setting it
to 0 is a total race condition, so I have to do something like this:
sub incr {
    my $self = shift;
    my $key  = shift;
    unless ($self->get($key)) {
        require File::NFSLock;
        my $lock = File::NFSLock->new("$nfs_path/memcache.incr.$key",
                                      "BLOCKING", 15, 10);
        if ($lock) {
            $self->set($key, 0);
        }
    }
    $self->SUPER::incr($key);
}
where you lock however you like. Maybe it's too late now, but it sure seems
like it would be nice if the server would do something similar. I would
like to just call incr, regardless of whether the key has existed before or
not, and get back either 1 for new values, or the old value plus one. Is
there a reason incr behaves as it does?
Earl
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.danga.com/pipermail/memcached/attachments/20050318/bdc9f627/attachment.htm
From ecahill at corp.untd.com Fri Mar 18 12:14:26 2005
From: ecahill at corp.untd.com (Cahill, Earl)
Date: Fri Mar 18 12:14:36 2005
Subject: initial incr call
Message-ID: <88DCF6AA199DF24C9F994C7D891F82CF150CE0@slcexs02.slc.corp.int.untd.com>
Looks like this works, without the need for the lock:
sub incr {
    my $self  = shift;
    my $key   = shift;
    my $value = shift || 1;
    my $new = $self->SUPER::incr($key, $value);
    unless (defined $new) {
        # The key didn't exist yet. add() is atomic, so only one client
        # can create it; a concurrent add() simply fails, harmlessly.
        $self->add($key, 0);
        $new = $self->SUPER::incr($key, $value);
    }
    return $new;
}
Thanks to Patrick for pointing me to ->add.
Earl
_____
From: Cahill, Earl
Sent: Friday, March 18, 2005 9:41 AM
To: memcached@lists.danga.com
Subject: initial incr call
Probably been over this before, but the initial incr call seems kind of
strange to me. Let's say I have a new key called counter, and I want to
incr it. I call $memd->incr('counter'); and get undef back. So, I set it
to 0, and then if I call incr, happy day. The problem is that setting it
to 0 is a total race condition, so I have to do something like this:
sub incr {
    my $self = shift;
    my $key  = shift;
    unless ($self->get($key)) {
        require File::NFSLock;
        my $lock = File::NFSLock->new("$nfs_path/memcache.incr.$key",
                                      "BLOCKING", 15, 10);
        if ($lock) {
            $self->set($key, 0);
        }
    }
    $self->SUPER::incr($key);
}
where you lock however you like. Maybe it's too late now, but it sure seems
like it would be nice if the server would do something similar. I would
like to just call incr, regardless of whether the key has existed before or
not, and get back either 1 for new values, or the old value plus one. Is
there a reason incr behaves as it does?
Earl
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.danga.com/pipermail/memcached/attachments/20050318/e15938bf/attachment.html
From timo at tzc.com Fri Mar 18 12:48:12 2005
From: timo at tzc.com (Timo Ewalds)
Date: Fri Mar 18 12:46:27 2005
Subject: Atomic replace and append operations
In-Reply-To: <5b5f9b2ea9ee77dd9468cd0dd50040fe@ctoforaday.com>
References: <3322939d36b4cd05c547947653eec4a3@citeulike.org>
<5b5f9b2ea9ee77dd9468cd0dd50040fe@ctoforaday.com>
Message-ID: <423B3E8C.20404@tzc.com>
I'll third that one. I've suggested it before as well.
Brad answered my question a while ago about how he grabs the list of
comments on LiveJournal: he keeps a list of comment ids in a memcache
entry, then grabs the individual comments one at a time based on the
contents of that first entry. When a new comment is added, it gets
added to the list of comment ids. I asked how to avoid the race
condition, but never got an answer to that. An append operation would
fix it.
Timo
Gregory Block wrote:
> I'd second these additions - I've been waiting with a similar kind of
> need for some time, and would be very happy to see this added.
>
> On 18 Mar 2005, at 13:55, Richard Cameron wrote:
>
>>
>> I'm interested in implementing two extra storage commands "areplace"
>> and "append" for memcached. The former would behave exactly like
>> "replace", but would return the object's value before it performs the
>> replace operation; the latter would simply string append the new
>> value onto the old one.
>>
>> The motivation is not purely to add extra bloat to memcached but
>> rather to provide the minimal extra operations I'd need to be able to
>> keep track of dependency trees in the cached data.
>>
>> For instance, if I want to cache objects A and B (the results of two
>> SQL queries, say) which depend on some "thing" in the database (like a
>> particular user U's account), I'd like to be able to keep track of my
>> dependencies like this:
>>
>> U -> A,B
>>
>> Memcached itself seems like a good choice to store this list. So,
>> when user U does something to his account, I'll want to delete the
>> newly defunct objects from the cache. I think the best I can do at
>> the moment is:
>>
>> my_deps := [get U]
>> foreach dep $my_deps:
>> [delete $dep]
>> [set U ""]
>>
>> Also, if I perform another cacheable operation C which depends on U,
>> I'd like to be able to add this dependency to my list. The existing
>> protocol lets me say:
>>
>> old_deps := "get U"
>> new_deps := append(old_deps, C)
>> [set U $new_deps]
>>
>> The trouble, of course, is that there are race conditions in both
>> cases where multi-threaded code could execute in a way which would
>> corrupt the dependency list.
>>
>> On a single machine, I can simply guard the critical section in the
>> client code with a mutex, but this won't work on a distributed server
>> farm. It would be possible to use memcached's existing "incr" and
>> "decr" operations to produce a "global" mutex, but this is a) not
>> terribly efficient, and b) dangerous if one server incr's the
>> semaphore on entry to the critical section but crashes before it
>> exits - this would break things such that no machine could
>> subsequently update the dependency and we'd be stuck in that state
>> for good.
>>
>> So, my proposed solution is to implement an "areplace" (atomic
>> replace) operation which looks like this:
>>
>> my_deps = [areplace U ""]
>>
>> and an append operation which looks like this:
>>
>> [append U " C"]
>>
>> I could then keep most of the bloat in the client library, not the
>> server. This approach also has the advantage of being able to track
>> dependencies over multiple memcached instances.
>>
>> This is probably bending memcached's intended use slightly, but I
>> think it still adheres to the basic principle that it's a cache: all
>> the data can be regenerated from the database if required. The
>> advantage is that I can immediately tell when data should be marked
>> as being invalid, which means I can use much longer cache times
>> which, in turn, means that the cached items in the "long tail"
>> (objects which get accessed infrequently but are expensive to compute)
>> can have much higher hit rates.
>>
>> Seem at all sensible? I'd be interested in any comments before I go
>> away and code this up.
>>
>> Richard.
>>
>
>
>
From brad at danga.com Fri Mar 18 13:54:00 2005
From: brad at danga.com (Brad Fitzpatrick)
Date: Fri Mar 18 13:54:03 2005
Subject: Atomic replace and append operations
In-Reply-To: <423B3E8C.20404@tzc.com>
References: <3322939d36b4cd05c547947653eec4a3@citeulike.org>
<5b5f9b2ea9ee77dd9468cd0dd50040fe@ctoforaday.com>
<423B3E8C.20404@tzc.com>
Message-ID:
We use the database for locking, since we have to have a handle on it
anyway.
I'm down for an atomic append operation, but I want to add virtual buckets
and the trackers first, as it's kinda a prereq. Apps coded to just append
all the time could get in trouble if there's a network blip. With the
trackers, it'd be detected and wiped.
- Brad
On Fri, 18 Mar 2005, Timo Ewalds wrote:
> I'll third that one. I've suggested it before as well.
>
> Brad answered my question a while ago with how he grabs the list of
> comments on livejournal. He answered that he has a list of comment ids
> in a memcache entry. He then grabs the individual ones one at a time
> based on the contents of the first. When a new comment is added, it gets
> added to the list of comment ids. I asked how to avoid the race
> condition, but never got an answer to that. An append operation would
> fix it.
>
> Timo
>
> Gregory Block wrote:
>
> > I'd second these additions - I've been waiting with a similar kind of
> > need for some time, and would be very happy to see this added.
> >
> > On 18 Mar 2005, at 13:55, Richard Cameron wrote:
> >
> >>
> >> I'm interested in implementing two extra storage commands "areplace"
> >> and "append" for memcached. The former would behave exactly like
> >> "replace", but would return the object's value before it performs the
> >> replace operation; the latter would simply string append the new
> >> value onto the old one.
> >>
> >> The motivation is not purely to add extra bloat to memcached but
> >> rather to provide the minimal extra operations I'd need to be able to
> >> keep track of dependency trees in the cached data.
> >>
> >> For instance, if I want to cache objects A and B (the results of two
> >> SQL queries, say) which depend on some "thing" in the database (like a
> >> particular user U's account), I'd like to be able to keep track of my
> >> dependencies like this:
> >>
> >> U -> A,B
> >>
> >> Memcached itself seems like a good choice to store this list. So,
> >> when user U does something to his account, I'll want to delete the
> >> newly defunct objects from the cache. I think the best I can do at
> >> the moment is:
> >>
> >> my_deps := [get U]
> >> foreach dep $my_deps:
> >> [delete $dep]
> >> [set U ""]
> >>
> >> Also, if I perform another cacheable operation C which depends on U,
> >> I'd like to be able to add this dependency to my list. The existing
> >> protocol lets me say:
> >>
> >> old_deps := "get U"
> >> new_deps := append(old_deps, C)
> >> [set U $new_deps]
> >>
> >> The trouble, of course, is that there are race conditions in both
> >> cases where multi-threaded code could execute in a way which would
> >> corrupt the dependency list.
> >>
> >> On a single machine, I can simply guard the critical section in the
> >> client code with a mutex, but this won't work on a distributed server
> >> farm. It would be possible to use memcached's existing "incr" and
> >> "decr" operations to produce a "global" mutex, but this is a) not
> >> terribly efficient, and b) dangerous if one server incr's the
> >> semaphore on entry to the critical section but crashes before it
> >> exits - this would break things such that no machine could
> >> subsequently update the dependency and we'd be stuck in that state
> >> for good.
> >>
> >> So, my proposed solution is to implement an "areplace" (atomic
> >> replace) operation which looks like this:
> >>
> >> my_deps = [areplace U ""]
> >>
> >> and an append operation which looks like this:
> >>
> >> [append U " C"]
> >>
> >> I could then keep most of the bloat in the client library, not the
> >> server. This approach also has the advantage of being able to track
> >> dependencies over multiple memcached instances.
> >>
> >> This is probably bending memcached's intended use slightly, but I
> >> think it still adheres to the basic principle that it's a cache: all
> >> the data can be regenerated from the database if required. The
> >> advantage is that I can immediately tell when data should be marked
> >> as being invalid, which means I can use much longer cache times
> >> which, in turn, means that the cached items in the "long tail"
> >> (objects which get accessed infrequently but are expensive to compute)
> >> can have much higher hit rates.
> >>
> >> Seem at all sensible? I'd be interested in any comments before I go
> >> away and code this up.
> >>
> >> Richard.
> >>
> >
> >
> >
>
>
From camster at citeulike.org Sat Mar 19 15:28:47 2005
From: camster at citeulike.org (Richard Cameron)
Date: Sat Mar 19 15:28:56 2005
Subject: Atomic replace and append operations
In-Reply-To: <5b5f9b2ea9ee77dd9468cd0dd50040fe@ctoforaday.com>
References: <3322939d36b4cd05c547947653eec4a3@citeulike.org>
<5b5f9b2ea9ee77dd9468cd0dd50040fe@ctoforaday.com>
Message-ID:
I'm making some good progress on this. I've added support for "append"
which was fairly straightforward and seems to work, as well as partial
support for "areplace" which still needs some more effort. I've also
patched libmemcache to let it speak these two commands.
Aside from a few minor bits and pieces I need to do before releasing a
patch, the biggest issue I've got at the moment is that I can take a
clean, unhacked copy of memcached, run it up, and say:
[set x "hello"]
[delete x]
very quickly followed by
[add x "world"]
I get a "NOT_STORED" error back. This is not what I'd expect to happen.
On the other hand, if I give it a few seconds between the "delete" and
the "add", it seems quite happy. There's obviously some issue with
objects living in a zombie "deleted" state but not being freed, which
causes confusion. There are a few odd bits of code which look like they
might be covering up a more fundamental problem, for example:
if (old_it && (old_it->it_flags & ITEM_DELETED) &&
    (comm == NREAD_REPLACE || comm == NREAD_ADD)) {
    out_string(c, "NOT_STORED");
    break;
}
Unfortunately doing the semi-obvious thing with that doesn't seem to
solve the problem. Can anyone give me any pointers on what I need to do
here?
Richard.
On 18 Mar 2005, at 14:40, Gregory Block wrote:
> I'd second these additions - I've been waiting with a similar kind of
> need for some time, and would be very happy to see this added.
>
> On 18 Mar 2005, at 13:55, Richard Cameron wrote:
>
>>
>> I'm interested in implementing two extra storage commands "areplace"
>> and "append" for memcached. The former would behave exactly like
>> "replace", but would return the object's value before it performs the
>> replace operation; the latter would simply string append the new
>> value onto the old one.
>>
>> The motivation is not purely to add extra bloat to memcached but
>> rather to provide the minimal extra operations I'd need to be able to
>> keep track of dependency trees in the cached data.
>>
>> For instance, if I want to cache objects A and B (the results of two
>> SQL queries, say) which depend on some "thing" in the database (like a
>> particular user U's account), I'd like to be able to keep track of my
>> dependencies like this:
>>
>> U -> A,B
>>
>> Memcached itself seems like a good choice to store this list. So,
>> when user U does something to his account, I'll want to delete the
>> newly defunct objects from the cache. I think the best I can do at
>> the moment is:
>>
>> my_deps := [get U]
>> foreach dep $my_deps:
>> [delete $dep]
>> [set U ""]
>>
>> Also, if I perform another cacheable operation C which depends on U,
>> I'd like to be able to add this dependency to my list. The existing
>> protocol lets me say:
>>
>> old_deps := "get U"
>> new_deps := append(old_deps, C)
>> [set U $new_deps]
>>
>> The trouble, of course, is that there are race conditions in both
>> cases where multi-threaded code could execute in a way which would
>> corrupt the dependency list.
>>
>> On a single machine, I can simply guard the critical section in the
>> client code with a mutex, but this won't work on a distributed server
>> farm. It would be possible to use memcached's existing "incr" and
>> "decr" operations to produce a "global" mutex, but this is a) not
>> terribly efficient, and b) dangerous if one server incr's the
>> semaphore on entry to the critical section but crashes before it
>> exits - this would break things such that no machine could
>> subsequently update the dependency and we'd be stuck in that state
>> for good.
>>
>> So, my proposed solution is to implement an "areplace" (atomic
>> replace) operation which looks like this:
>>
>> my_deps = [areplace U ""]
>>
>> and an append operation which looks like this:
>>
>> [append U " C"]
>>
>> I could then keep most of the bloat in the client library, not the
>> server. This approach also has the advantage of being able to track
>> dependencies over multiple memcached instances.
>>
>> This is probably bending memcached's intended use slightly, but I
>> think it still adheres to the basic principle that it's a cache: all
>> the data can be regenerated from the database if required. The
>> advantage is that I can immediately tell when data should be marked
>> as being invalid, which means I can use much longer cache times
>> which, in turn, means that the cached items in the "long tail"
>> (objects which get accessed infrequently but are expensive to compute)
>> can have much higher hit rates.
>>
>> Seem at all sensible? I'd be interested in any comments before I go
>> away and code this up.
>>
>> Richard.
>>
>
From camster at citeulike.org Sun Mar 20 03:01:21 2005
From: camster at citeulike.org (Richard Cameron)
Date: Sun Mar 20 03:01:30 2005
Subject: Atomic replace and append operations
In-Reply-To:
References: <3322939d36b4cd05c547947653eec4a3@citeulike.org>
<5b5f9b2ea9ee77dd9468cd0dd50040fe@ctoforaday.com>
Message-ID: <32ccbc509836cca08a60a1b1805f9490@citeulike.org>
> [T]he biggest issue I've got at the moment is that I can take a clean,
> unhacked copy of memcached, run it up, and say:
>
> [set x "hello"]
> [delete x]
>
> very quickly followed by
>
> [add x "world"]
>
> I get a "NOT_STORED" error back.
I think I've tracked this down. What seems to be happening is that an
immediate delete operation wouldn't take effect until the next delete
event, simply because its exptime was being set to "never expire"
rather than "in the past". This meant that there was a (maximum) five
second period where the data had technically been deleted, but it was
still lingering on in the system.
The following patch appears to sort it out. It seems fine, but I'd be
grateful if anyone who knows this code a little better can confirm that
it won't have any adverse effects on the item memory management system.
Index: memcached.c
===================================================================
--- memcached.c (revision 2217)
+++ memcached.c (working copy)
@@ -705,9 +705,17 @@
out_string(c, "NOT_FOUND");
return;
}
-
- exptime = realtime(exptime);
-
+
+ if (exptime==0) {
+ /* If we want to expire immediately then don't have
+ realtime() set exptime to 0, as this won't be picked up
+ by the expiry test later. We'll use 1 as a suitable
+ time in the past. */
+ exptime = 1;
+ } else {
+ exptime = realtime(exptime);
+ }
+
it->refcount++;
/* use its expiration time as its deletion time now */
it->exptime = exptime;
From mellon at pobox.com Sun Mar 20 03:25:11 2005
From: mellon at pobox.com (Anatoly Vorobey)
Date: Sun Mar 20 03:25:18 2005
Subject: Atomic replace and append operations
In-Reply-To:
References: <3322939d36b4cd05c547947653eec4a3@citeulike.org>
<5b5f9b2ea9ee77dd9468cd0dd50040fe@ctoforaday.com>
Message-ID: <20050320112511.GA28275@pobox.com>
On Sat, Mar 19, 2005 at 11:28:47PM +0000, Richard Cameron wrote:
>
> I'm making some good progress on this. I've added support for "append"
> which was fairly straightforward and seems to work, as well as partial
> support for "areplace" which still needs some more effort. I've also
> patched libmemcache to let it speak these two commands.
>
> Aside from a few minor bits and pieces I need to do before releasing a
> patch, the biggest issue I've got at the moment is that I can take a
> clean, unhacked copy of memcached, run it up, and say:
>
> [set x "hello"]
> [delete x]
>
> very quickly followed by
>
> [add x "world"]
>
> I get a "NOT_STORED" error back. This is not what I'd expect to happen.
This was fixed in trunk some time ago. Please see
http://cvs.livejournal.org/browse.cgi/wcmtools/memcached/memcached.c?cvsroot=Danga
and note the log message on rev. 1.48.
Sorry you had to work on it and produce a patch (that works
differently); it's only after reading your next message and trying to
apply your patch that I realised you weren't working with the trunk and
that this is the same problem we fixed back in July. I thought it might
have been something new.
> There are a few odd bits of code which look like they might
> be covering up a more fundamental problem, for example:
>
> if (old_it && (old_it->it_flags & ITEM_DELETED) &&
>     (comm == NREAD_REPLACE || comm == NREAD_ADD)) {
>     out_string(c, "NOT_STORED");
>     break;
> }
This code is correct; it's a feature for ADD/REPLACE to fail if the
item is in the DELETED state but is still linked (meaning it can still
be found by its key), which happens when you specify a non-null
expiration time to the DELETE command.
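For example, with Cache::Memcached, whose delete() takes an optional
delay as its second argument ($memd is assumed to be an existing client
object):
# The key stays linked in the DELETED state for the grace period, so
# add() reports NOT_STORED (returns false) during that window.
$memd->set('x', 'hello');
$memd->delete('x', 10);              # delete, but block re-adds for 10s
my $ok = $memd->add('x', 'world');   # false while x is still linked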
--
avva
"There's nothing simply good, nor ill alone" -- John Donne
From camster at citeulike.org Sun Mar 20 05:11:18 2005
From: camster at citeulike.org (Richard Cameron)
Date: Sun Mar 20 05:11:25 2005
Subject: Atomic replace and append operations
In-Reply-To: <20050320112511.GA28275@pobox.com>
References: <3322939d36b4cd05c547947653eec4a3@citeulike.org>
<5b5f9b2ea9ee77dd9468cd0dd50040fe@ctoforaday.com>
<20050320112511.GA28275@pobox.com>
Message-ID:
> This has been fixed in trunk some time ago. Please see
> http://cvs.livejournal.org/browse.cgi/wcmtools/memcached/memcached.c?
> cvsroot=Danga
> , note the log message on rev. 1.48.
That was pretty stupid. Sorry about that.
The more up-to-date version in CVS does help though. I've now got a
proof-of-concept working, and you can download the following:
* Patch for the memcached to support both append and areplace
operations
* Patch for libmemcache to deal with these new commands
along with:
* A Tcl client API for memcached (using libmemcache)
at
http://www.citeulike.org/opensource/memcached.adp
Richard.
From greg at corga.com Sun Mar 20 17:43:50 2005
From: greg at corga.com (Greg Grothaus)
Date: Sun Mar 20 17:42:12 2005
Subject: Slabs and Flush
In-Reply-To: <20050318200007.081503BC0EC@danga.com>
References: <20050318200007.081503BC0EC@danga.com>
Message-ID: <423E26D6.2080307@corga.com>
I am aware that there is an issue in memcached whereby sets can fail
because there are no slabs of the correct size and there is not enough
free memory available to memcached. My question is whether or not the
flush_all command provided by many of the APIs will reset the slab
allocation, or just erase the data?
Thanks,
-Greg
From mellon at pobox.com Sun Mar 20 22:52:04 2005
From: mellon at pobox.com (Anatoly Vorobey)
Date: Sun Mar 20 22:52:10 2005
Subject: Slabs and Flush
In-Reply-To: <423E26D6.2080307@corga.com>
References: <20050318200007.081503BC0EC@danga.com> <423E26D6.2080307@corga.com>
Message-ID: <20050321065204.GA31824@pobox.com>
On Sun, Mar 20, 2005 at 08:43:50PM -0500, Greg Grothaus wrote:
> I am aware that there is an issue in memcached whereby sets can fail
> because there are no slabs of the correct size and there is not enough
> free memory available to memcached. My question is whether or not the
> flush_all command provided by many of the APIs will reset the slab
> allocation, or just erase the data?
The latter.
More precisely, flush_all's effect is: pretend, for the purpose of
"get"s, that we no longer have any data. It doesn't really erase any
data, or change its distribution in any way, instead relying on
"set"s pushing out old data as new items come in.
--
avva
"There's nothing simply good, nor ill alone" -- John Donne
From matthew at nocturnal.org Tue Mar 22 10:45:51 2005
From: matthew at nocturnal.org (Matthew Lenz)
Date: Tue Mar 22 10:45:57 2005
Subject: memcached + apache::session::memcached
Message-ID: <1111517151.13640.33.camel@mlenzdesktop>
I'm curious if anyone has used memcached to implement a Java-style
session system with Perl. We have an application that due to some
mis-design requires a separate dbi::mysql connection per request (this
is in addition to the one connection it opens to communicate with the
db). Also, due to deep nested structures we are automatically
incrementing a top level key => value so that the session is always
stored back to the db if it is opened. None of this would be required
normally, but we are in a load balanced environment and have experienced
race conditions. We have three machines that handle this application
and it's entirely possible that one HTTP application request from a
customer can arrive at a different machine than the previous request.
I'm looking for a solution similar to Java's sessions (servlet/JSP) where
the given session and its data are migrated to and from different servers
automatically. It almost seems like memcached +
Apache::Session::Memcached was designed for this purpose, but I'm not
sure if I'm on the right track. Sorry if I missed some glaring
documentation, but hopefully someone can offer some advice.
Does memcached migrate its data between a cluster of servers? I'm
looking for a simple drop-in replacement where the local webserver runs
a memcached server which receives update/read session requests and then
migrates the data to the other webservers' memcached servers.
-Matt (memcache noob be gentle)
From russor at msoe.edu Wed Mar 23 20:40:58 2005
From: russor at msoe.edu (Richard 'toast' Russo)
Date: Wed Mar 23 20:41:54 2005
Subject: memcached + apache::session::memcached
In-Reply-To: <1111517151.13640.33.camel@mlenzdesktop>
References: <1111517151.13640.33.camel@mlenzdesktop>
Message-ID:
On Tue, 22 Mar 2005, Matthew Lenz wrote:
> I'm curious if anyone has used memcached to implement a Java-style
> session system with Perl. We have an application that due to some
> mis-design requires a separate dbi::mysql connection per request (this
> is in addition to the one connection it opens to communicate with the
> db).
Wow, that is pretty messed up... using the database for access control,
I'm guessing? (not that it's relevant)
> Also, due to deep nested structures we are automatically
> incrementing a top level key => value so that the session is always
> stored back to the db if it is opened. None of this would be required
> normally, but we are in a load balanced environment and have experienced
> race conditions. We have three machines that handle this application
> and it's entirely possible that one HTTP application request from a
> customer can arrive at a different machine than the previous request.
>
> I'm looking for a solution similar to Java's sessions (servlet/JSP) where
> the given session and its data are migrated to and from different servers
> automatically. It almost seems like memcached +
> Apache::Session::Memcached was designed for this purpose, but I'm not
> sure if I'm on the right track. Sorry if I missed some glaring
> documentation, but hopefully someone can offer some advice.
>
It seems like memcached will be a good fit. I'm not familiar with
Apache::Session::Memcached though.
> Does memcached migrate its data between a cluster of servers? I'm
> looking for a simple drop-in replacement where the local webserver runs
> a memcached server which receives update/read session requests and then
> migrates the data to the other webservers' memcached servers.
>
>
memcached segments the information over the cluster. Assuming all
memcached daemons are running and all the clients have the same config,
all the clients will use a given server for the same key. It is important
to remember that the memcached cluster is just a cache, not a backing
store, and may not be coherent while daemons are transitioning from down
to up. (Since each client individually keeps track of server failures,
and individually tries to reconnect, there are times when clients may
disagree about which server a key is stored on).
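Roughly, each client maps keys to servers like this (a sketch modeled
on Cache::Memcached's CRC32-based hashing; real clients also handle
weights and rehashing after failures):
use String::CRC32 qw(crc32);

my @servers = ('web1:11211', 'web2:11211', 'web3:11211');

# Every client configured with the same list picks the same server for
# a given key; no data migration between daemons is involved.
sub server_for_key {
    my ($key) = @_;
    return $servers[ ((crc32($key) >> 16) & 0x7fff) % @servers ];
}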
In your environment, you probably would want to run the memcached daemon
on all three webservers, so you can have a larger total cache.
toast
From al_raetz at yahoo.com Thu Mar 24 15:45:00 2005
From: al_raetz at yahoo.com (Alan Raetz)
Date: Thu Mar 24 15:45:03 2005
Subject: three basic questions
Message-ID: <20050324234501.76366.qmail@web30301.mail.mud.yahoo.com>
Hi,
I just discovered this, and it may be the answer to my problems:
I am trying to run a Perl app that needs to handle 10GB+ of data
in a DB hash.
Three questions:
1) So if I need, say, 20 GB of storage, I would run ten machines
each with a 2GB memory cache?
2) But ~2-4 GB is the virtual memory limit per machine (limited
by the operating system)?
3) Is it possible to build the daemon on Windows machines, or
is this a Unix-only thing?
Thanks!
-Alan
From alermo at bk.ru Sun Mar 27 02:41:05 2005
From: alermo at bk.ru (Al Ermolaew)
Date: Sun Mar 27 02:41:08 2005
Subject: freeBSD 5.3 and memcached
Message-ID:
Hi All!
I apologize in advance for my English :)
I have a very strange problem with memcached on FreeBSD. I have two
servers with FreeBSD 5.3, and the problem occurs on both. I have a
local machine with Linux, and on it the problem does not occur.....
Short example - data between 1400 and 2800 bytes is transferred very
slowly:
su-2.05b# ./memd_test 1248
memd_test: mc_aget result: <0.0001540>
su-2.05b# ./memd_test 1440
memd_test: mc_aget result: <0.0998720>
su-2.05b# ./memd_test 10440
memd_test: mc_aget result: <0.0005540>
1440 bytes are transferred from memcached in 0.09 seconds,
but 10440 bytes in 0.0005......
I upgraded the em driver to version 2.05, but it has not
helped...
On the lo device (127.0.0.1) all is OK.
I tested it with libevent 0.9, 1.0, and 1.0b...
I tested memcached 1.1.11, 1.1.12cr1, and 1.1.9-snapshot.
I am using libmemcache for the C test and Cache::Memcached for
Perl...
Probably the problem is in the FreeBSD kernel or the em driver... I
don't know :(
Any ideas?
Regards,
Alermo
------ simple memd_test.c based on regress.c from
libmemcached ------
/*
memcached test on freeBSD 5.3
data between 1400 and 2800 bytes would be transferred very
slowly - 0.1 sec
device em (Intel(R) PRO/1000 Gigabit Ethernet driver)
on device lo all OK.... Very strange.
*/
#include
#include
#include
#include
#include
#include
#include
#include
#include
int
main(int argc, char *argv[]) {
    struct memcache *mc = NULL;
    u_int32_t long_string_size = 0;
    char *long_string;
    u_int32_t i;

    if (argc > 1)
        long_string_size = strtol(argv[1], NULL, 10);
    if (long_string_size == 0)
        long_string_size = 2750;

    mc = mc_new();
    if (mc == NULL)
        err(EX_OSERR, "Unable to allocate a new memcache object");

    mc_server_add(mc, "192.168.1.1", "11211");

    long_string = malloc(long_string_size);
    for (i = 0; i < long_string_size; ++i)
        long_string[i] = '0';

    mc_set(mc, "long_poisoned_string",
           MCM_CSTRLEN("long_poisoned_string"), long_string,
           long_string_size, 0, 0);

    struct timeval t0, t1;
    int res0, res1;
    void *val;

    res0 = gettimeofday(&t0, NULL);
    if (res0 < 0)
        warnx("Error found: %s", strerror(errno));

    // for (i = 0; i < 100; ++i)
    val = mc_aget(mc, "long_poisoned_string",
                  strlen("long_poisoned_string"));
    // warnx("res: %s", val);

    res1 = gettimeofday(&t1, NULL);
    if (res1 < 0)
        warnx("Error found: %s", strerror(errno));

    double tdif, sdif;
    sdif = t1.tv_usec - t0.tv_usec;
    sdif = sdif / 1000000;
    tdif = t1.tv_sec - t0.tv_sec;
    tdif = tdif + sdif;

    warnx("mc_aget result: <%.7f>", tdif);

    mc_free(mc);
    return EX_OK;
}
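Incidentally, the manual seconds/microseconds arithmetic at the end of
memd_test.c can be done with the timersub() macro from <sys/time.h>.
It is a BSD extension rather than POSIX, but it exists on FreeBSD and
in glibc; a minimal sketch:
int
main(void) {
    struct timeval t0, t1, td;

    gettimeofday(&t0, NULL);
    /* ... the call being timed, e.g. mc_aget(), goes here ... */
    gettimeofday(&t1, NULL);

    timersub(&t1, &t0, &td);   /* td = t1 - t0, with tv_usec normalized */
    printf("elapsed: <%.7f>\n", td.tv_sec + td.tv_usec / 1e6);
    return 0;
}
(This needs <sys/time.h> and <stdio.h>; timersub also avoids the
negative-tv_usec case the hand-rolled version silently relies on
doubles to absorb.)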
From gaal at forum2.org Tue Mar 29 03:55:50 2005
From: gaal at forum2.org (Gaal Yahas)
Date: Tue Mar 29 03:56:23 2005
Subject: [patch] fix timeout==0
Message-ID: <20050329115550.GP18134@sike.forum2.org>
_connect_sock never works in blocking mode because of a bug in setting
the default timeout. Here's the fix.
==snip==
--- /home/roo/.cpan/build/Cache-Memcached-1.14/Memcached.pm-o 2005-03-29 13:48:39.917957408 +0200
+++ /home/roo/.cpan/build/Cache-Memcached-1.14/Memcached.pm 2005-03-29 13:49:26.747838184 +0200
@@ -157,7 +157,7 @@
 sub _connect_sock { # sock, sin, timeout
     my ($sock, $sin, $timeout) = @_;
-    $timeout ||= 0.25;
+    $timeout = 0.25 if not defined $timeout;
     # make the socket non-blocking from now on,
     # except if someone wants 0 timeout, meaning
==snip==
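For readers not fluent in Perl: `$timeout ||= 0.25` assigns the default
whenever $timeout is false, and 0 is false in Perl, so a caller asking
for a zero (blocking) timeout silently got 0.25 instead. The same
pitfall sketched in C (the helper name and the -1 sentinel are invented
for illustration, not part of any client):
#include <stdio.h>

#define DEFAULT_TIMEOUT 0.25   /* seconds */
#define TIMEOUT_UNSET  -1.0    /* sentinel: caller gave no timeout */

/* Hypothetical helper: 0 means "block forever", so it must not be
 * treated the same as "no value given". */
static double effective_timeout(double timeout) {
    /* Wrong: `if (!timeout)` would clobber an intentional 0,
     * which is exactly what ||= did in the Perl code. */
    if (timeout == TIMEOUT_UNSET)
        return DEFAULT_TIMEOUT;
    return timeout;
}

int main(void) {
    printf("%.2f\n", effective_timeout(TIMEOUT_UNSET)); /* 0.25 */
    printf("%.2f\n", effective_timeout(0.0));           /* 0.00: blocking */
    return 0;
}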
--
Gaal Yahas
http://gaal.livejournal.com/
From gaal at forum2.org Tue Mar 29 14:29:21 2005
From: gaal at forum2.org (Gaal Yahas)
Date: Tue Mar 29 14:29:45 2005
Subject: Perl6 port available
Message-ID: <20050329222921.GQ18134@sike.forum2.org>
Hi,
I've ported Cache::Memcached to Perl6. You can take a look at the
code here:
http://svn.perl.org/perl6/pugs/trunk/modules/Cache-Memcached/lib/Cache/Memcached.pm
There's nothing that can actually run it yet, but pugs
is getting closer by the day.
--
Gaal Yahas
http://gaal.livejournal.com/
From tschundler at gmail.com Tue Mar 29 16:03:07 2005
From: tschundler at gmail.com (Ted Schundler)
Date: Tue Mar 29 16:03:11 2005
Subject: freeBSD 5.3 and memcached
In-Reply-To:
References:
Message-ID:
Interesting. It *might* be a FreeBSD bug, since it seems to be tied to
the interface's MTU: a packet can be at most 1500 bytes before it has
to be split up at the Ethernet frame level, and it seems the second
packet isn't being sent immediately - maybe a weird side effect of
some packet-scheduling code.
More interesting: it core dumps at 1407-1409-byte packets and also at
2855-2856 bytes, which are basically at the borders of the size ranges
where there is a speed issue, and at similar values when decreasing
the NIC's MTU.
But it doesn't core dump in any socket function - it core dumps when
copying the incoming content...
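One way to test the packet-scheduling theory from the client side is to
disable Nagle's algorithm and see whether the ~0.1 s stalls disappear.
A bare-sockets sketch that bypasses libmemcache entirely, reusing
Alermo's address and port purely for illustration:
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <arpa/inet.h>
#include <err.h>
#include <string.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        err(1, "socket");

    /* Turn off Nagle: small writes go out immediately instead of
     * waiting to be coalesced (and possibly colliding with the
     * peer's delayed ACK). */
    int one = 1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
        err(1, "setsockopt(TCP_NODELAY)");

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(11211);
    sin.sin_addr.s_addr = inet_addr("192.168.1.1");
    if (connect(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0)
        err(1, "connect");
    /* ... issue a get by hand and time the reply ... */
    return 0;
}
If the stalls vanish with TCP_NODELAY set, the delay is a send-side
coalescing effect rather than a driver bug.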
Ted
On Sun, 27 Mar 2005 14:41:05 +0400, Al Ermolaew wrote:
> Hi All!
>
> I had very strange problem with work of the memcached on
> freeBSD. [...] Short example - data between 1400 and 2800
> bytes would be transferred very slowly
>
> [rest of message and memd_test.c snipped; see the original message above]
From brad at danga.com Tue Mar 29 16:11:44 2005
From: brad at danga.com (Brad Fitzpatrick)
Date: Tue Mar 29 16:11:45 2005
Subject: Perl6 port available
In-Reply-To: <20050329222921.GQ18134@sike.forum2.org>
References: <20050329222921.GQ18134@sike.forum2.org>
Message-ID:
Sick, dude. Sick.
(but yeah --- go Pugs!)
On Wed, 30 Mar 2005, Gaal Yahas wrote:
> Hi,
>
> I've ported Cache::Memcached to Perl6. You can take a look at the
> code here:
>
> http://svn.perl.org/perl6/pugs/trunk/modules/Cache-Memcached/lib/Cache/Memcached.pm
>
> There's nothing that can actually run it yet, but pugs
> is getting closer by the day.
>
> --
> Gaal Yahas
> http://gaal.livejournal.com/
>
>
From tschundler at gmail.com Tue Mar 29 16:38:08 2005
From: tschundler at gmail.com (Ted Schundler)
Date: Tue Mar 29 16:38:13 2005
Subject: libmemcached "protocol error" and "Server sent data for key not
in request"
In-Reply-To: <5d29ab34f99ac21d141dcc51d437cb03@citeulike.org>
References: <5d29ab34f99ac21d141dcc51d437cb03@citeulike.org>
Message-ID:
Ah, this is the same issue as Alermo's problem in a different thread.
And I think I have a solution, at least to the delay issue:
in memcached's set_cork(), change it to:
void set_cork(conn *c, int val) {
    if (c->is_corked == val) return;
    c->is_corked = val;
#ifdef TCP_NOPUSH
    setsockopt(c->sfd, IPPROTO_TCP, TCP_NOPUSH, &val, sizeof(val));
    val!=val;
    setsockopt(c->sfd, IPPROTO_TCP, TCP_NODELAY, &val, sizeof(val));
#endif
}
So, at the very least, it's guaranteed that when the cork is released,
the packets go out. Then there are no delay problems with
inconveniently sized packets. And I don't think it will break
compatibility with Linux.
Ted
On Thu, 17 Mar 2005 16:40:47 +0000, Richard Cameron
wrote:
>
> Does anyone else see these two errors with libmemcached:
>
> regress: memcache.c:1096 protocol error: Unknown error: 0
> regress: memcache.c:1031 Server sent data for key not in request.
>
> I see them on Mac OS X but not on Linux. It's a reasonably slow
> machine, and it's got the disadvantage of not having a working
> TCP_NOPUSH, so what appears to be happening is the data seems to be
> coming back from the server in dribs and drabs.
>
> It appears that there are some particularly unfortunate lengths of
> dribs which fool libmemcached into thinking it's getting mangled data
> back from the server. While this is much more likely to happen on OS
> X, I presume that it's theoretically possible that it might happen on
> Linux too.
>
> I'm using libmemcached version 1.2.3 and here's my patch which seems to
> calm things down:
>
> Index: memcache.c
> ===================================================================
> --- memcache.c (revision 2201)
> +++ memcache.c (working copy)
> @@ -1011,7 +1011,7 @@
> * If this fails we will scan the whole list. */
> if (res != NULL && res->entries.tqe_next != NULL) {
> for (res = res->entries.tqe_next; res != NULL; res =
> res->entries.tqe_next) {
> - if ((size_t)(rb - (cp - ms->cur)) > res->len) {
> + if ((size_t)(ms->read_cur - cp) >= res->len) {
> if (memcmp(cp, res->key, res->len) == 0) {
> break;
> }
> @@ -1019,7 +1019,7 @@
> }
> } else {
> for (res = req->query.tqh_first; res != NULL; res =
> res->entries.tqe_next) {
> - if((size_t)(rb - (cp - ms->cur)) > res->len) {
> + if ((size_t)(ms->read_cur - cp) >= res->len) {
> if(memcmp(cp, res->key, res->len) == 0) {
> break;
> }
> @@ -1067,8 +1067,8 @@
>
> bytes_read = ms->read_cur - cp;
>
> - /* Check if we have read all the data the plus a \r\n */
> - if (bytes_read >= len + 2) {
> + /* Check if we have read all the data the plus a \r\n (plus partial
> next item or END)*/
> + if (bytes_read > len + 2) {
> res->_flags |= MCM_RES_FOUND;
> if (res->size == 0) {
> res->val = ctxt->mcMallocAtomic(res->bytes);
>
>
From christopher at baus.net Tue Mar 29 22:06:16 2005
From: christopher at baus.net (christopher@baus.net)
Date: Tue Mar 29 22:06:15 2005
Subject: libmemcached "protocol error" and "Server sent data for key
not in request"
In-Reply-To:
References: <5d29ab34f99ac21d141dcc51d437cb03@citeulike.org>
Message-ID: <46793.127.0.0.1.1112162776.squirrel@mail.baus.net>
> Ah, this is the same issue as Alermo's problem in a different
> thread.
> And I think I have a solution, at least to the delay issue:
>
> in memcached's set_cork(), change it to:
> void set_cork(conn *c, int val) {
>     if (c->is_corked == val) return;
>     c->is_corked = val;
> #ifdef TCP_NOPUSH
>     setsockopt(c->sfd, IPPROTO_TCP, TCP_NOPUSH, &val, sizeof(val));
>     val!=val;
>     setsockopt(c->sfd, IPPROTO_TCP, TCP_NODELAY, &val, sizeof(val));
> #endif
> }
>
I think you mean
val = !val;
> And I don't think it will break compatibility with Linux.
Not sure. The manual says TCP_CORK is not compatible with TCP_NODELAY:
http://www.rt.com/man/tcp.4.html
I think it would be easier to read if TCP_NOPUSH wasn't redefined.
Corking is abstracted here anyway, so we might as well put all the
platform-specific stuff in this function.
For instance:
void set_cork(conn *c, int val) {
    if (c->is_corked == val) return;
    c->is_corked = val;
#if defined(TCP_NOPUSH)
    setsockopt(c->sfd, IPPROTO_TCP, TCP_NOPUSH, &val, sizeof(val));
    val = !val;
    setsockopt(c->sfd, IPPROTO_TCP, TCP_NODELAY, &val, sizeof(val));
#endif
#if defined(TCP_CORK) && !defined(TCP_NOPUSH)
    setsockopt(c->sfd, IPPROTO_TCP, TCP_CORK, &val, sizeof(val));
#endif
}
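For what it's worth, here is a self-contained sketch of the same
cork/uncork pattern on a bare file descriptor (the conn struct above is
memcached's; cork_fd here is just a stand-in name to show the option
juggling in isolation):
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Batch small writes into full packets while val=1; try to flush
 * whatever is pending when val=0. */
static void cork_fd(int fd, int val) {
#if defined(TCP_NOPUSH)
    /* BSD: NOPUSH holds partial packets; toggling NODELAY to the
     * inverse value is the trick from the patch above, intended to
     * push pending data out on uncork. */
    setsockopt(fd, IPPROTO_TCP, TCP_NOPUSH, &val, sizeof(val));
    val = !val;
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &val, sizeof(val));
#elif defined(TCP_CORK)
    /* Linux: clearing TCP_CORK transmits the partial frame. */
    setsockopt(fd, IPPROTO_TCP, TCP_CORK, &val, sizeof(val));
#endif
}

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    /* ... connect(fd, ...) ... */
    cork_fd(fd, 1);  /* start batching: header + body become one frame */
    /* write(fd, header, hlen); write(fd, body, blen); */
    cork_fd(fd, 0);  /* uncork: any runt packet should go out now */
    return 0;
}
The NODELAY toggle exists because, as noted earlier in this thread,
clearing TCP_NOPUSH alone does not reliably flush pending data on every
BSD (OS X in particular).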
From pafei at citrin.ch Thu Mar 31 00:05:01 2005
From: pafei at citrin.ch (Patrick Feisthammel)
Date: Thu Mar 31 00:05:10 2005
Subject: flushing items by prefix
Message-ID: <424BAF2D.6000205@citrin.ch>
Hi!
We started using memcached and needed a way to delete entries from
the cache by a given prefix of the key, for example all entries
with a key starting with 'namespace1'.
I wrote a patch for memcached to support a new command,
flush_prefix <prefix>
where <prefix> is the prefix of the keys to be deleted.
The command just iterates through all items and compares the stored
key with the given <prefix>. If it matches, the item is deleted.
I do not check any flags of the items, and I do not check timestamps.
If someone with more knowledge about the internals has suggestions
for improvement, I am happy to implement them.
The code passed our test cases, and I would be happy if it were
included in the current version.
The patch is against today's CVS version.
The patch can be downloaded from:
http://work.citrin.ch/patch_memcache_flush_prefix
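The core of the matching logic is just a bounded prefix comparison
against each stored key. A self-contained sketch of that comparison
(the item type and list walk here are simplified stand-ins, not
memcached's actual internals; see the patch above for those):
#include <stdio.h>
#include <string.h>

struct item {
    const char *key;
    struct item *next;
};

/* Delete-by-prefix reduced to its core test: does the stored key
 * start with the given prefix? */
static int key_has_prefix(const char *key, const char *prefix) {
    return strncmp(key, prefix, strlen(prefix)) == 0;
}

int main(void) {
    struct item c = { "other:1", NULL };
    struct item b = { "namespace1:b", &c };
    struct item a = { "namespace1:a", &b };

    for (struct item *it = &a; it != NULL; it = it->next)
        if (key_has_prefix(it->key, "namespace1"))
            printf("would delete %s\n", it->key);
    return 0;
}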
Cheers,
Patrick
--
Citrin, Feisthammel und Partner, Phone: +41 44 940 6161
Steigstrasse 55, CH-8610 Uster, Switzerland Fax: +41 43 399 0506
http://www.citrin.ch/ email: citrin@citrin.ch