<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
We've seen that behavior, though not nearly on that scale (but then, we
have literally hundreds of millions of objects in our cache, so that
tends to even out any inequalities).<br>
<br>
Before going to the trouble of building a system that keeps track of
which individual keys need to get redirected where -- and bear in mind
that since server selection is a client-side operation, you have to
keep that information in sync among all your clients -- I'd try just
tweaking the hash function on the client side. Maybe shift off the low
bit of the hash value or something like that. Unless the traffic
difference is due to a single key getting hammered, that's probably all
you need to do to spread things around.<br>
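Something along these lines, as a rough Python sketch (the hash
function, server list, and function name here are made up for
illustration -- your client library has its own versions of all of
this, so treat it as a picture of the idea rather than a patch):<br>
<pre>
import zlib

# Hypothetical server pool; the addresses are examples only.
servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

def pick_server(key):
    # The usual client-side selection is hash(key) % number_of_servers.
    # Shifting off the low bit of the hash before taking the modulus
    # lands keys on different servers without changing the pool itself.
    h = zlib.crc32(key.encode("utf-8"))  # unsigned 32-bit in Python 3
    h = h >> 1                           # shift off the low bit
    return servers[h % len(servers)]
</pre>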
<br>
-Steve<br>
<br>
<br>
Darryl Kuhn wrote:
<blockquote cite="mid6466306EAC5530489A1DE0C45E8A3B1122C941@ex01.gwe.ad"
type="cite">
<meta http-equiv="Content-Type" content="text/html; ">
<meta content="MSHTML 6.00.2900.2995" name="GENERATOR">
<div><font face="Arial" size="2"><span class="765220502-02122006">We've
been running memcached in our production environment for several weeks
now quite successfully but have noticed that volume of internal traffic
on our servers is quite unevenly distributed, varying anywhere from
2-3Mbps up to 30Mbps. After looking into it a bit it was clear to us
that this was a result memcached gets. What we're finding is that since
a key is only stored on one server, keys with values that are large, or
more often the case values that are accessed frequently cause one
server to be accessed quite a bit more than others.</span></font></div>
<div><font face="Arial" size="2"><span class="765220502-02122006"></span></font> </div>
<div><font face="Arial" size="2"><span class="765220502-02122006">We
can't put a key in more than one spot (nor do we want to), but the more
I've thought about it the more I believe that there might be some value
in a mechanism that self adjusts key locations by moving heavily
trafficked keys to less heavily trafficked servers. </span></font></div>
<div><font face="Arial" size="2"><span class="765220502-02122006"></span></font> </div>
<div><font face="Arial" size="2"><span class="765220502-02122006">Say
for example that 10 keys are accessed every page request and the
hashing mechanism puts all 10 keys on a single server (say out of a
pool of 10 memcached servers). Over time (based on access rates or some
other suitable metric) this mechanism would ensure that those 10 keys
are evenly distributed across all 10 memcached instances leveling off
network traffic.</span></font></div>
<div><font face="Arial" size="2"><span class="765220502-02122006"></span></font> </div>
<div><font face="Arial" size="2"><span class="765220502-02122006">It's
a thought at any rate - has anyone else dealt with this kind of issue?</span></font></div>
<div><font face="Arial" size="2"><span class="765220502-02122006"></span></font> </div>
<div><font face="Arial" size="2"><span class="765220502-02122006">Cheers,</span></font></div>
<div> </div>
<div align="left"><font color="#808080" face="Arial Rounded MT Bold"
size="1">Darryl Kuhn<br>
Chief Technology Officer<br>
Skinit.com</font></div>
</blockquote>
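For what it's worth, here's a rough Python sketch of what the
self-adjusting idea above might look like on the client side: a
hot-key override table consulted before the normal hash-based
selection. Everything here (names, addresses, keys) is made up for
illustration, and the hard part -- keeping that table identical
across every client -- is exactly the piece I'd rather avoid
building:<br>
<pre>
import zlib

# Hypothetical server pool; the addresses are examples only.
servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

# Hypothetical override table: hot keys that have been explicitly
# moved off the server they hash to.  Every client would need an
# identical copy of this for lookups to keep hitting the right box.
hot_key_overrides = {
    "homepage:header": "10.0.0.2:11211",
    "homepage:footer": "10.0.0.3:11211",
}

def pick_server(key):
    # An explicit remap wins; otherwise fall back to normal hashing.
    override = hot_key_overrides.get(key)
    if override is not None:
        return override
    h = zlib.crc32(key.encode("utf-8"))
    return servers[h % len(servers)]
</pre>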
<br>
</body>
</html>