Steve,

I'm curious, which OSes did you end up choosing then? All Linux with different kernel configs, or did you switch to a *BSD or something else? I find it interesting that the network stack implementation made a noticeable difference in your app response time.

--Cal

On 5/10/07, Steve Grimm <sgrimm@facebook.com> wrote:

<font face="Verdana, Helvetica, Arial"><span style="font-size: 12px;">We tried to go the proxy route at one point and ended up not using it (at least not as a generic "send everything through it" proxy as originally planned) because even without any batching of requests, the added response latency of passing everything through another user process made our application measurably slower. A big percentage of our page generation time is spent waiting for memcached requests to come back, so anything that systematically increases memcached round-trip times is generally a huge no-no for us. We've actually selected the operating systems on some of our servers based largely on the latency variance in their network stacks, no joke.
<br>
<br>
However, in an environment where you are not so latency-sensitive — and I guess yours qualifies, if I'm correct in thinking your client is doing Nagle-style "wait a little while to see if another request happens so we can batch them together" -- that may not matter so much and a proxy may be a reasonable approach.
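To make the batching idea concrete, here is a minimal Java sketch of that "wait a little while" approach. The class, the 2 ms window, and the in-memory map standing in for a real memcached connection are all invented for illustration; this is not taken from any real client:

    import java.util.*;
    import java.util.concurrent.*;

    // Sketch of Nagle-style client-side batching: individual get() calls
    // are parked briefly so concurrent requests can be coalesced into a
    // single multi-get round trip. All names here are illustrative.
    public class BatchingGets {
        static final long WINDOW_MS = 2; // how long to wait for stragglers

        // Stand-in for a real memcached connection.
        static final Map<String, String> CACHE =
                new ConcurrentHashMap<>(Map.of("a", "1", "b", "2"));

        record Request(String key, CompletableFuture<String> result) {}

        static final BlockingQueue<Request> QUEUE = new LinkedBlockingQueue<>();

        static CompletableFuture<String> get(String key) {
            CompletableFuture<String> f = new CompletableFuture<>();
            QUEUE.add(new Request(key, f));
            return f;
        }

        // One flusher thread issues one "multi-get" per batch instead of
        // one round trip per key.
        static void flusher() {
            try {
                while (true) {
                    List<Request> batch = new ArrayList<>();
                    batch.add(QUEUE.take());  // block for the first request
                    Thread.sleep(WINDOW_MS);  // the Nagle-style wait
                    QUEUE.drainTo(batch);     // grab whatever else arrived
                    // A real client would send one multi-get here.
                    for (Request r : batch) {
                        r.result().complete(CACHE.get(r.key()));
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        public static void main(String[] args) throws Exception {
            Thread t = new Thread(BatchingGets::flusher);
            t.setDaemon(true);
            t.start();
            System.out.println(get("a").get() + " " + get("b").get());
        }
    }

The tradeoff Steve describes is visible right in the sleep: every get() pays up to WINDOW_MS of extra latency in exchange for fewer round trips.
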
<br><span class="sg">
<br>
-Steve </span><div><span class="e" id="q_112771f342c3d8dc_2"><br>
<br>
<br>
On 5/10/07 10:35 AM, "Dustin Sallings" <<a href="mailto:dustin@spy.net" target="_blank" onclick="return top.js.OpenExtLink(window,event,this)">dustin@spy.net</a>> wrote:<br>
<br>
On May 10, 2007, at 10:19, Les Mikesell wrote:

How graceful is the system about making these changes while in production? If you add servers, do you have to stop the clients to reconfigure to use them, and is there any problem other than less-than-optimal caching while some clients run with the old setup?

The memcached nodes don't care. They don't know about each other.

The clients are where the issue is. For example, where I'm using my Java client, I initialize it at application startup time and inject it where it's needed. This effectively leaves me with no reconfiguration facility.

Alternatively, I could access my client through a more dynamic wrapper with a means of pushing a new config into it, and the users of the client wouldn't care at all.

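To make that concrete, here is a minimal Java sketch of the idea. The Client interface and the pool names are invented stand-ins for a real memcached client type, not an existing API:

    import java.util.concurrent.atomic.AtomicReference;

    // Sketch: callers hold one wrapper object, which delegates to whatever
    // client instance is current. Pushing a new config just swaps the
    // delegate; the users of the client never notice.
    public class ReconfigDemo {
        interface Client { // stand-in for a real memcached client type
            String get(String key);
        }

        static class ReconfigurableClient implements Client {
            private final AtomicReference<Client> current;

            ReconfigurableClient(Client initial) {
                current = new AtomicReference<>(initial);
            }

            // Swap in a client built from a new server list. In-flight
            // calls finish against the old instance; a real client would
            // also shut the old one down once it drains.
            void reconfigure(Client replacement) {
                current.getAndSet(replacement);
            }

            public String get(String key) {
                return current.get().get(key);
            }
        }

        public static void main(String[] args) {
            ReconfigurableClient c = new ReconfigurableClient(k -> "old pool");
            System.out.println(c.get("x")); // old pool
            c.reconfigure(k -> "new pool"); // push a new config in
            System.out.println(c.get("x")); // new pool
        }
    }

That is essentially the same dependency injection with one extra level of indirection: the injected object stays stable while the config behind it changes.
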
I've mentioned a memcached proxy that I think would be an ideal solution to this problem, as well as providing a performance benefit for multi-process applications. I haven't written any of it yet, though.

--
Dustin Sallings

--
Cal Heldenbrand
FBS Data Systems
E-mail: cal@fbsdata.com