memcache API happiness for C and Ruby users...

John McCaskey johnm at klir.com
Wed Jan 5 14:45:00 PST 2005


On Wed, 2005-01-05 at 11:42 -0800, Sean Chittenden wrote:
> > The only bit that I have outstanding is with multi-get requests in 
> > that I need to add a scatter-gather algo to have the gets for the 
> > various keys distributed across the appropriate servers then have the 
> > various keys reassembled and returned in the correct order.  I'll 
> > likely tackle that in a 1.3.0 release, however and not in the 1.2.0 
> > cycle.  Please test and let me know if anyone has any probs w/ this 
> > version of memcache(3).
> 
> A little bit more on this.  Here's an example symptom (using the Ruby 
> API; however, the problem is with memcache(3), not the Ruby API.  I'd 
> wager that other APIs have this problem too).
> 
> > m = Memcache.new()
> > m.add_server('127.0.0.1:11211')
> > m.add_server('127.0.0.1:11212')
> > m.add_server('127.0.0.1:11213')
> > m['key1'] = 'val1'
> > m['key2'] = 'val2'
> > m['key3'] = 'val4'
> > m['key4'] = {'foo' => 'val5', 'bar' => 'val6'}
> > p m['key1']
> > p m['key2']
> > p m['key3']
> > p m['key4']
> > p m.get_a('key1', 'key2', 'key3', 'key4')
> > p m.get_h('key1', 'key2', 'key3', 'key4')
> 
> The above has the following output:
> 
> > "val1"
> > "val2"
> > "val4"
> > {"foo"=>"val5", "bar"=>"val6"}
> > ["val1", nil, "val4", nil]
> > {"key1"=>"val1", "key2"=>nil, "key3"=>"val4", "key4"=>nil}
> 
> If you comment out all but one of the #add_server() lines above, 
> however, you get the correct output:
> 
> > "val1"
> > "val2"
> > "val4"
> > {"foo"=>"val5", "bar"=>"val6"}
> > ["val1", "val2", "val4", {"foo"=>"val5", "bar"=>"val6"}]
> > {"key1"=>"val1", "key2"=>"val2", "key3"=>"val4", 
> > "key4"=>{"foo"=>"val5", "bar"=>"val6"}}
> 
> The problem is that the multi-gets send all of the keys to the 
> server calculated for 'key1'; the client doesn't correctly send get 
> requests to each of the servers, collate the results, and return the 
> result set back to the caller.  This is what I meant by needing to add 
> scatter/gather support to memcache(3).
> 
> If anyone has any neat tricks for doing this in parallel, drop me a 
> line offline.  I refuse to do this sequentially/serially, so any ideas 
> are going to require either async IO or non-blocking IO.  Ideally I'd 
> use pthreads, but that's less than portable (despite the p in 
> pthreads).  Who knows, maybe I'll start abstracting IO handling and can 
> sneak in the use of kqueue(2) or libevent(3).  Efficient reassembly 

libevent only recently became safe to use in multi-threaded
applications.  Even though libmemcache isn't multi-threaded, it's
currently thread safe (at least when used appropriately), and that's how
I use it, so I'd like to see it stay that way.  I've had serious
problems using libevent even for trivial tasks in a multi-threaded app
(even when using it in only one of the threads).  I'll admit I haven't
tried the latest release, which fixes some issues (it's only two days
old!).
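For what it's worth, the parallel-read shape Sean describes doesn't
strictly need libevent or threads; Ruby's IO.select can multiplex
several descriptors the way a multi-get would wait on several server
sockets at once.  A minimal sketch, with pipes standing in for the
server sockets (nothing here is tied to libmemcache's actual
internals):

```ruby
# Two pipes stand in for two memcached server sockets.
server_a, client_a = IO.pipe
server_b, client_b = IO.pipe

# Pretend both servers answer at around the same time.
client_a.write('val1'); client_a.close
client_b.write('val2'); client_b.close

# Multiplex: wait on all sockets at once instead of reading serially.
replies = {}
pending = [server_a, server_b]
until pending.empty?
  ready, = IO.select(pending)          # blocks until something is readable
  ready.each do |io|
    replies[io] = io.read              # EOF after the writer closed
    pending.delete(io)
  end
end
```

The same structure maps onto select(2)/kqueue(2) in C: one fd per
server, loop until every pending reply has been drained.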

> that's not O(N^2) is my main concern, however.  Anything that's more 
> expensive than O(2N) isn't gonna cut it.  Anyway, like I said, I'll 
> probably get to the scatter/gather portion of memcache(3) soonish.  In 
> the meantime, enjoy the ruby bindings.  I'll do some benchmarking later 
> to justify to myself the need to add Memcache::Request and 
> Memcache::Response classes (hint: need to prevent excessive object 
> creation in tight loops).  -sc
> 
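To make the missing scatter/gather step concrete, here's a minimal Ruby
sketch of the shape Sean describes.  The servers are stubbed as plain
Hashes, and server_for is a crude byte-sum hash standing in for the real
key-to-server mapping — multi_get, server_for, and the stub data are all
hypothetical names for illustration, not the memcache(3) API:

```ruby
# Stand-in for the real key-to-server hash: crude, but deterministic.
def server_for(key, servers)
  servers[key.sum % servers.size]
end

def multi_get(keys, servers)
  # Scatter: bucket the keys by the server each one hashes to.
  buckets = Hash.new { |h, k| h[k] = [] }
  keys.each { |key| buckets[server_for(key, servers)] << key }

  # Gather: one multi-get per server (done serially here; the real fix
  # would issue these requests with non-blocking IO).
  results = {}
  buckets.each do |server, server_keys|
    server_keys.each { |key| results[key] = server[key] }
  end

  # Reassemble the values in the caller's original key order -- this is
  # the O(N) collation step, no nested scan required.
  keys.map { |key| results[key] }
end

servers = [{ 'key1' => 'val1', 'key4' => 'val5' },
           { 'key2' => 'val2' },
           { 'key3' => 'val4' }]
multi_get(%w[key1 key2 key3 key4], servers)
# => ["val1", "val2", "val4", "val5"]
```

Bucketing and collation are each a single pass over the keys, so the
reassembly stays linear even once the per-server requests go parallel.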
-- 
John A. McCaskey
Software Development Engineer
Klir Technologies, Inc.
johnm at klir.com
206.902.2027


More information about the memcached mailing list