"Set" and then "get" don't retrieve the stored value

Don MacAskill don at smugmug.com
Mon Nov 20 17:07:11 UTC 2006


Oh, interesting.  That might explain some strange behavior we were 
seeing.  I guess I'll try adding a sleep(1) after a flush_all, prior to 
enabling the server again, and see if that fixes our problem.
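
For anyone else bitten by this, the workaround amounts to something like
the following (sketched with the python-memcached client for
illustration; our actual code differs, and the re-enable step is just a
placeholder for however you mark the server usable again):

    import time
    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    mc.flush_all()     # everything up to this second is invalidated
    time.sleep(1)      # wait out the one-second flush granularity
    # ... re-enable the server / resume normal traffic here ...
    mc.set('some-key', 'some-value')
    assert mc.get('some-key') == 'some-value'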

Wish I had a clever way of solving the base problem though.  Alas, I 
don't.  :)

Don


Brad Fitzpatrick wrote:
> This is another instance of the flush_all-is-only-second-granularity
> bug that has existed for a while.
> 
> Because you flush_all and then do sets within the rest of that second,
> those sets are also considered non-existent.
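> 
> To make the window concrete, the failing sequence is just this (sketched
> with the python-memcached client for illustration; any client should
> show the same thing):
> 
>     import memcache
> 
>     mc = memcache.Client(['127.0.0.1:11211'])
>     mc.flush_all()       # flush is timestamped with second T
>     mc.set('k', 'v')     # server replies STORED...
>     print(mc.get('k'))   # ...but this prints None while still in second T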
> 
> We've wanted to fix this by adding a world generation number (storing
> each item's world generation in the item struct), or by making the
> timestamps higher resolution, but both increase the size of *item for
> little gain.
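> 
> For reference, the generation-number idea in toy form (a Python sketch
> of the concept only, not the actual C; the per-item "gen" field is
> exactly the extra storage we'd be paying for):
> 
>     class Cache:
>         def __init__(self):
>             self.world_gen = 0     # bumped on every flush_all
>             self.items = {}        # key -> (gen, value)
> 
>         def flush_all(self):
>             self.world_gen += 1    # O(1), no timestamps involved
> 
>         def set(self, key, value):
>             self.items[key] = (self.world_gen, value)
> 
>         def get(self, key):
>             entry = self.items.get(key)
>             if entry is None or entry[0] < self.world_gen:
>                 return None        # written before the latest flush
>             return entry[1]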
> 
> Anybody have clever ideas on how to fix this?
> 
> (btw, there's already a failing TODO test for this in the test suite, in
> the flush-all.t test....)
> 
> 
> 
> On Tue, 14 Nov 2006, Kristian Hellquist wrote:
> 
>> Here's the dump. The key to look for is
>> '65b05a7489ea704c5afb80fee7b029497a80beac'. As you can see, I do a
>> "set" with the response "STORED", followed by a delete which returns
>> "NOT FOUND". After that follow various calls that I think you can
>> ignore.
>>
>> Thanks for your help,
>> Kristian
>>
>>
>> 2006/11/14, Brad Fitzpatrick <brad at danga.com>:
>>> I'm coming into this late.
>>>
>>> Why would a "get" right after a "set" not work?  This isn't a known issue,
>>> at least not to me.
>>>
>>> If you can reproduce this easily, send me a tcpdump pcap file of the
>>> traffic to your memcached.
>>>
>>> If you're using the loopback to a 127.0.0.1 memcached on port 11211, run
>>> this:
>>>
>>> # tcpdump -w forbrad.pcap -s 0 -i lo port 11211
>>>
>>> Then do your (minimal!) failing test, control-C the tcpdump, and email me
>>> the forbrad.pcap file.
>>>
>>>
>>> On Tue, 14 Nov 2006, Kristian Hellquist wrote:
>>>
>>>> The problem is that the small window unfortunately seems to be
>>>> quite large for me. Do you have a measurement of how small the
>>>> window is?
>>>>
>>>> Basically, I have some unit tests for a web application. The web
>>>> application relies heavily on memcached. Everything with memcached
>>>> has worked like a charm for us so far. We cache sql-finders in
>>>> memcached and invalidate them when the model changes.
>>>>
>>>> In a specific use-case we don't want to update the state of the db
>>>> immediately. This data changes *a lot*. So when the state of the
>>>> model changes, we cache the sql-finder in memcache directly and
>>>> later on flush the change to the db. It is in this use-case that I
>>>> have experienced the "set" and "get" problem.
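>>>>
>>>> Roughly, the flow looks like this (a hypothetical Python sketch; our
>>>> real code is Rails, and write_to_db stands in for the deferred
>>>> flush):
>>>>
>>>>     def update_model(mc, key, new_value, dirty_queue):
>>>>         mc.set(key, new_value)                # cache the fresh result now
>>>>         dirty_queue.append((key, new_value))  # persist it later
>>>>
>>>>     def flush_dirty(dirty_queue, write_to_db):
>>>>         while dirty_queue:                    # later: drain to the database
>>>>             key, value = dirty_queue.pop(0)
>>>>             write_to_db(key, value)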
>>>>
>>>> My unit tests ran successfully when I added a sleep call of 0.3 s
>>>> between each memcache call :/ That is a lot. Is there a way to
>>>> configure memcached so that I get better performance?
>>>>
>>>> I know that the test environment doesn't mimic the user behavior of
>>>> the web application in real use. We use Rails, and the unit tests
>>>> don't involve the webserver at all. So in real life more delay will
>>>> be inserted between each "set" and "get" call. But can I *trust*
>>>> memcache to have stored the data when I receive a "STORED" response?
>>>> Is it just a matter of time before the value can be retrieved again?
>>>> Or is it bad usage of memcache to cache the sql-finders directly and
>>>> only update the db later?
>>>>
>>>> Thanks for your reply,
>>>> Kristian Hellquist
>>>>
>>>>
>>>> 2006/11/14, Jehiah Czebotar <jehiah at gmail.com>:
>>>>>> Basically, yes.  There is a small window where this can happen.
>>>>>> There have been numerous discussions about it on the list.  Most
>>>>>> people notice it with a set and then a delete.
>>>>>>
>>>>> If my memory serves me, the correct order is a delete, then a set,
>>>>> then a get, all for the same key.
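>>>>>
>>>>> i.e. something like this (sketched with the python-memcached client
>>>>> for illustration; within the same second the final get may miss):
>>>>>
>>>>>     import memcache
>>>>>
>>>>>     mc = memcache.Client(['127.0.0.1:11211'])
>>>>>     mc.set('k', 'v1')
>>>>>     mc.delete('k')       # delete is timestamped with second T
>>>>>     mc.set('k', 'v2')    # server replies STORED...
>>>>>     print(mc.get('k'))   # ...but may print None while still in second T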
>>>>>
>>>>
> 

