libmemcache 1.0.1

Sean Chittenden sean at chittenden.org
Tue Dec 7 16:45:21 PST 2004


> Ok, so I believe I've fixed the issue.

Fix incorporated into 1.0.2.

http://people.FreeBSD.org/~seanc/libmemcache/

This release also fixes some miscellanea for FreeBSD's bento cluster, 
but no one here should care too much.

> The problem looked to be that the only check that would cause a goto to
> get_more_bits was if there was no \n in the retrieved buffer, yet the
> mc_fetch_cmd code was assuming not just that the first line was
> complete, but that all the data had been completely read as well.
>
> As such I made mc_get_line check if the amount of data read on the
> socket == the max allowed, and if so try to read some more, so that it
> will always read in the whole response at once.

Ah, yeah.  Good catch.  The worst-case scenario is a response that's
exactly the same size as the read buffer.  I don't think there's any
way around this for now.  Text protocols suck (*nudges Brad to send
feedback on the binary protocol spec* [-: ).
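
For anyone following along, the pattern is roughly the following.  This
is only a sketch: the struct, field, and function names are invented
here and it is not the actual mc_get_line() code.

#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

struct buf {
    char   *data;       /* assumed allocated with a nonzero initial size */
    size_t  size;       /* allocated bytes */
    size_t  used;       /* bytes filled so far */
};

static ssize_t
read_whole_response(int fd, struct buf *b)
{
    char *p;
    ssize_t n;

    for (;;) {
        /* Buffer is full: grow 2x before reading any further. */
        if (b->used == b->size) {
            if ((p = realloc(b->data, b->size * 2)) == NULL)
                return -1;
            b->data = p;
            b->size *= 2;
        }

        n = read(fd, b->data + b->used, b->size - b->used);
        if (n < 0)
            return -1;                  /* error, see errno */
        if (n == 0)
            return (ssize_t)b->used;    /* EOF */
        b->used += (size_t)n;

        /*
         * Short read: nothing more was buffered on the socket, so
         * treat the response as complete.  The worst case mentioned
         * above is a response that exactly fills the buffer: we grow
         * and issue one extra read(2) only to learn nothing is left.
         */
        if (b->used < b->size)
            return (ssize_t)b->used;
    }
}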

> I also noticed some issues with the addresses not all getting set after
> the ReAlloc call, which would work fine if your implementation happened
> to not change the address, but would fail otherwise (it failed for me
> under Linux 2.4 and 2.6).  So I fixed that, and finally the code for
> determining the location of the newline had to be updated to account
> for the fact that the newline might not be in mc->read_cur, but earlier
> in mc->cur, since I read everything at once.

Ah, very nice.  I was doing this in mc_reset_buf(), but spaced adding 
it elsewhere.
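
The gotcha, for the archives: every cached pointer into the buffer has
to be recomputed after realloc(3), since the block is allowed to move.
Something along these lines, with illustrative field names rather than
the real struct memcache layout:

#include <stdlib.h>

struct rbuf {
    char   *start;      /* base of the allocation */
    char   *cur;        /* current parse position */
    char   *read_cur;   /* where the next read(2) lands */
    size_t  size;
};

static int
rbuf_grow(struct rbuf *b, size_t newsize)
{
    char *p;
    size_t cur_off, read_off;

    /* Save cursor positions as offsets before the block can move. */
    cur_off = (size_t)(b->cur - b->start);
    read_off = (size_t)(b->read_cur - b->start);

    if ((p = realloc(b->start, newsize)) == NULL)
        return -1;

    /*
     * Re-derive every cached pointer from the (possibly new) base.
     * Skipping this only "works" when realloc(3) happens to hand back
     * the same address, which is exactly the failure mode described
     * above.
     */
    b->start = p;
    b->cur = p + cur_off;
    b->read_cur = p + read_off;
    b->size = newsize;
    return 0;
}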

> I don't think this has any negative performance implications, as the
> whole response is going to have to be read before much can be done
> anyway.  It probably could be optimized by going back to how it was
> before, but modifying mc_fetch_cmd to call mc_get_line again before
> attempting to set res->val, where it now falsely assumes that the
> second line was already complete.

The only case where I can see any performance impact would be if there
are exactly GET_INIT_BUF_SIZE (or buf->size) bytes on the wire, such
that the subsequent read(2) call reads zero bytes and the buffer grows
2x.  The flip side is that on subsequent calls this wouldn't be a
problem, since the buffer will have already grown.

> See attached patch file.

Thanks, I've applied it to my tree and am rolling version 1.0.2 to
address this.  I'm in the midst of adding multiple memory contexts so
that this can easily be embedded in PHP and Apache (PHP uses Zend's
allocation routines and Apache uses APR for memory management... which
is ugly).  There won't be any API changes to the single-memory-context
API; however, I am having to add a slew of interfaces that take a
memory context.  Instead of calling mc_get(...), under the
multi-memory-context API you'd call mcm_get(ctxt, ...).  Internally,
mc_get() calls mcm_get() with a global, default context, so consumers
of the single-memory-context API never need to grok the multi-context
foo.
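
The layering is roughly the following.  Sketch only: the struct members
are invented and the signatures are simplified, so don't take them as
the real headers.

#include <stdlib.h>

struct memcache;                    /* opaque; details omitted */

struct memcache_ctxt {
    void *(*mc_malloc)(size_t);
    void  (*mc_free)(void *);
    /* ...allocator hooks so PHP (Zend) and Apache (APR) can plug in
     * their own memory management without touching callers... */
};

/* One global context backed by the plain system allocator. */
static struct memcache_ctxt mcGlobalCtxt = { malloc, free };

/* Multi-context entry point: all the real work lives here (stubbed). */
static void *
mcm_get(struct memcache_ctxt *ctxt, struct memcache *mc,
        const char *key, size_t keylen, size_t *vallen)
{
    (void)ctxt; (void)mc; (void)key; (void)keylen;
    *vallen = 0;
    return NULL;    /* real code: issue the get, allocate via ctxt */
}

/*
 * The single-context call is unchanged: it forwards to the
 * multi-context version with the global default context, so existing
 * callers never see a context at all.
 */
void *
mc_get(struct memcache *mc, const char *key, size_t keylen, size_t *vallen)
{
    return mcm_get(&mcGlobalCtxt, mc, key, keylen, vallen);
}

That way the PHP and Apache bindings can hand in a context wired to
their own allocators, while standalone callers keep using mc_get()
untouched.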

With a binary protocol coming down the pipe, having language interfaces
standardize on a single library is kinda important, so I see this as
rather important work.  More on this later, though; I'll probably have
1.1rc1 done by tomorrow, which will incorporate these changes.  -sc

-- 
Sean Chittenden


