memcached-1.2.1 efficiency

Andreas Vierengel avierengel at gatrixx.com
Thu Jan 11 16:11:22 UTC 2007


- The client is self-written in C and does asynchronous sets (10 set 
  commands batched into one send()); a rough sketch of the batching is 
  shown below this list.

- The set/get ratio is roughly 13:1 on this bunch of caches, i.e. a total 
  of 26,000 sets/s and 2,000 gets/s distributed over 2 servers (single CPU).

- That level is typical, because we are actively filling the caches with 
  data.

- We have about 200 clients which fill the caches.
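
The batching on the client side is nothing fancy, roughly like the sketch 
below (simplified; keys, values, buffer sizes and function names are 
invented, and error handling is left out):

/*
 * Simplified sketch of how the client batches sets: build several
 * complete "set" commands in one buffer and push them out with a
 * single send().  Keys, values and sizes are invented for
 * illustration; error handling is omitted.
 */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

#define BATCH 10

static ssize_t send_batched_sets(int fd, const char *keys[BATCH],
                                 const char *vals[BATCH])
{
    char buf[16384];
    size_t used = 0;

    for (int i = 0; i < BATCH; i++) {
        /* "set <key> <flags> <exptime> <bytes>\r\n<data>\r\n" */
        used += snprintf(buf + used, sizeof(buf) - used,
                         "set %s 0 0 %zu\r\n%s\r\n",
                         keys[i], strlen(vals[i]), vals[i]);
    }

    /* one syscall for all 10 commands; the 10 "STORED\r\n" replies
     * are collected later, asynchronously */
    return send(fd, buf, used, 0);
}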

PS: Due to your "faster hash algo", the sys load is 2 times higher than 
the user load in my environment; that was the reason I straced :) 
With 1.2.0, sys/user was 1:1.

We do also use a bunch of caches in the "traditional" way, with 
synchronous sets :)

--Andy

Steven Grimm wrote:
> Requests are processed one at a time internally, so this result is 
> completely expected. For multi-key "get" requests, we do attempt to 
> minimize(*) the number of writes we use for the response, but it's still 
> at the level of completing an entire request before moving on to the 
> next one.
> 
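
Just to make sure we mean the same thing by "minimize the number of 
writes": I read that as scatter-gather output, i.e. one iovec entry per 
response fragment and a single writev()/sendmsg() for the whole multi-key 
reply. A rough illustration of that idea follows; it is not the memcached 
source, and the struct and the limits are invented:

/*
 * Rough illustration (not memcached source): coalescing a multi-key
 * "get" response with scatter-gather I/O -- one iovec per response
 * fragment, a single writev() for the whole reply.
 */
#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>

struct frag { const char *header; const char *data; size_t ndata; };

static ssize_t write_get_response(int fd, const struct frag *items, int nitems)
{
    struct iovec iov[64];
    int n = 0;

    for (int i = 0; i < nitems && n + 4 <= 64; i++) {
        /* "VALUE <key> <flags> <bytes>\r\n" */
        iov[n].iov_base = (void *)items[i].header;
        iov[n++].iov_len = strlen(items[i].header);
        iov[n].iov_base = (void *)items[i].data;
        iov[n++].iov_len = items[i].ndata;
        iov[n].iov_base = (void *)"\r\n";
        iov[n++].iov_len = 2;
    }
    iov[n].iov_base = (void *)"END\r\n";
    iov[n++].iov_len = 5;

    return writev(fd, iov, n);      /* the whole reply in one syscall */
}
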
> It would be possible to do some kind of output buffering scheme where we 
> delay writing until we've fully processed the input buffer, but IMO it'd 
> be more complicated than it's worth. However, I'm coming from an 
> environment where each request from a given client is synchronous, so we 
> never have multiple outstanding requests on the same connection, and such 
> buffering would of course be irrelevant to me. In a highly asynchronous 
> environment where you have lots of pending requests per connection, it 
> might make some sense.
> 
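
That buffering scheme is pretty much what I had in mind. Something along 
these lines, purely a sketch of the idea rather than a patch; the 
connection struct and field names are invented, and partial sends are not 
handled:

/*
 * Hypothetical sketch of the output-buffering idea (not a patch):
 * append each "STORED\r\n" to a per-connection buffer while parsed
 * commands are still pending in the input, and flush with a single
 * send() once the input buffer is drained.
 */
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

struct conn {
    int    fd;
    char   obuf[8192];
    size_t olen;        /* bytes queued in obuf            */
    size_t pending;     /* input bytes not yet processed   */
};

static void queue_reply(struct conn *c, const char *reply)
{
    size_t len = strlen(reply);

    /* flush early if this reply would not fit */
    if (c->olen + len > sizeof(c->obuf)) {
        send(c->fd, c->obuf, c->olen, 0);
        c->olen = 0;
    }

    memcpy(c->obuf + c->olen, reply, len);
    c->olen += len;

    /* once the input buffer is drained, all pipelined replies
     * (e.g. 20 x "STORED\r\n") go out in one syscall */
    if (c->pending == 0) {
        send(c->fd, c->obuf, c->olen, 0);
        c->olen = 0;
    }
}
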
> Out of curiosity, what client are you using such that you're stacking up 
> that many requests asynchronously on a single connection? Is that level 
> of request parallelism typical for your application? Do you have 
> hundreds of connections each of which has a bunch of stacked-up 
> requests? If you have just one or a small number of clients, you might 
> get a slight performance boost by splitting the requests across multiple 
> connections, especially if you're running the server in multithreaded 
> mode; it won't reduce the number of system calls but it'll at least 
> spread the work across processors.
> 
> What's your rate of get requests, if you're doing 10K sets a second? Is 
> your traffic more heavily weighted to writes than reads?
> 
> -Steve
> 
> (*) Actually it's usually one more write than strictly required; the 
> extra write is to work around a problem with Solaris' TCP stack, which 
> is tuned for throughput rather than latency and gives significantly 
> worse response time without the extra write. But we usually write a 
> multi-key "get" response, even a big one, in exactly two sendmsg() calls.
> 
> 
> Andreas Vierengel wrote:
>> Hi,
>>
>> I just straced version 1.2.1 and I have a little question:
>> 20 set commands were read in two read() calls, and afterwards the 20 
>> responses were sent in 20 separate sendmsg() calls. Do you think it would 
>> perform better if only one sendmsg() were used, or would the added 
>> complexity in userland outweigh the benefit?
>>
>> Our load is currently about 10,000 set commands/s distributed over 100 
>> connections.
>>
>> --Andy
>>
>> # 20 set commands in 2 read()
>> 09:28:12.515198 read(11, "set foo 0 0 284"..., 8192) = 6471 <0.000019>
>> 09:28:12.515250 read(11, 0x81681c7, 1721) = -1 EAGAIN (Resource temporarily unavailable) <0.000008>
>>
>> # 20 responses in 20 sendmsg()
>> 09:28:12.515294 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000012>
>> 09:28:12.515367 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000011>
>> 09:28:12.515448 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000011>
>> 09:28:12.515526 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000010>
>> 09:28:12.515597 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000023>
>> 09:28:12.515680 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000012>
>> 09:28:12.515813 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000011>
>> 09:28:12.515884 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000010>
>> 09:28:12.515968 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000010>
>> 09:28:12.516048 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000010>
>> 09:28:12.516118 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000010>
>> 09:28:12.516194 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000011>
>> 09:28:12.516278 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000008>
>> 09:28:12.516347 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000008>
>> 09:28:12.516427 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000009>
>> 09:28:12.516495 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000008>
>> 09:28:12.516564 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000008>
>> 09:28:12.516632 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000009>
>> 09:28:12.516700 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000009>
>> 09:28:12.516768 sendmsg(11, {msg_name(16)={sa_family=AF_UNSPEC, sa_data="\0\0\0\0\0\0\0\0\0\0\0\0\0\0"}, msg_iov(1)=[{"STORED\r\n", 8}], msg_controllen=0, msg_flags=0}, 0) = 8 <0.000009>
> 
> 


