On May 4, 2007, at 1:45, Just Marc wrote:

>> Regarding large installs, has anyone considered a memcached proxy? It seems a lot could be gained by having a local proxy on your frontend servers that maintains the backend connections and configuration and performs the optimizations my Java client performs (converting individual gets into a single get, and optimizing out duplicate gets, without otherwise processing requests out of order), even across multi-process clients.
>
> Something like that would be a single point of failure and a bottleneck, bound by your favorite operating system's efficiency at handling connections. I think you would scale better if you left the decision making to the clients.

	I don't know how you figure it'd be a single point of failure or a bottleneck. What I described wouldn't be any more of a single point of failure than the processor(s) in your frontend servers.

	Barring bugs, you could almost guarantee an efficiency increase similar to what I observed when I wrote my Java client. For example, my client will take n consecutive gets and send them as a single request (after deduplicating them). It will also take a get and a set issued by two different requestors and send them in the same packet (at least, as closely as they'll fit). There's a rough sketch of the coalescing at the end of this message.

	Additionally, memcached cluster state can be pushed into such a proxy without forcing you to reconfigure every client on every platform. This is the main reason I brought it up. The client-facing side speaks the memcached protocol, and could support a few special keys, like __server_list__ and __hash_type__, that allow dynamic control over destinations (also sketched below). Except for a brief pause while in-flight requests complete during a refresh, dynamically reconfiguring your cluster via your monitoring system should have no impact on your applications.

-- 
Dustin Sallings
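
	A minimal sketch of the get coalescing, assuming a hypothetical GetCoalescer class; the names are illustrative, not my client's actual code. Pending keys are drained from a queue, deduplicated, and written as one multi-key "get" line per the memcached text protocol:

import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of the coalescing described above: pending get
// requests are drained from a queue, deduplicated, and written to the
// backend as one multi-key "get" line.
public class GetCoalescer {
    private final BlockingQueue<String> pending =
        new LinkedBlockingQueue<String>();

    // Callers enqueue keys; the I/O thread drains and batches them.
    public void requestGet(String key) {
        pending.add(key);
    }

    // Drain whatever has accumulated, dedupe, and build a single
    // request, or return null if nothing is pending.
    public String buildBatchRequest() {
        List<String> drained = new ArrayList<String>();
        pending.drainTo(drained);
        if (drained.isEmpty()) {
            return null;
        }
        // A LinkedHashSet drops duplicate keys while preserving the
        // order in which they were first requested.
        LinkedHashSet<String> unique = new LinkedHashSet<String>(drained);
        StringBuilder req = new StringBuilder("get");
        for (String key : unique) {
            req.append(' ').append(key);
        }
        req.append("\r\n");
        return req.toString();
    }
}

	In practice the I/O thread would call buildBatchRequest() whenever the socket is writable, so however many gets have piled up since the last flush go out as a single request.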
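
	And a sketch of the special-key handling; the key names are from the proposal above, while the dispatch logic is just an assumption about how a proxy might implement it:

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch of the special-key handling described above. The proxy speaks
// plain memcached to its clients, but a set against one of the reserved
// keys updates the proxy's own cluster state instead of being forwarded.
// The key names are from this proposal; everything else is illustrative.
public class ProxyConfig {
    // Replaced wholesale on reconfiguration so readers always see a
    // complete server list, never a half-updated one.
    private volatile List<String> servers = Collections.emptyList();
    private volatile String hashType = "standard";

    // Returns true if the set was consumed as a configuration change,
    // false if it should be forwarded to a backend like any other key.
    public boolean handleSet(String key, String value) {
        if ("__server_list__".equals(key)) {
            // e.g. value = "10.0.0.1:11211 10.0.0.2:11211"
            servers = Arrays.asList(value.split("\\s+"));
            return true;
        } else if ("__hash_type__".equals(key)) {
            hashType = value;
            return true;
        }
        return false;
    }

    public List<String> currentServers() { return servers; }
    public String currentHashType() { return hashType; }
}

	A set that handleSet() consumes never reaches a backend, so to the monitoring system driving it, reconfiguration looks like any other memcached write.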