memcached persistent load balancing with an f5 big-ip

Matthew Kent mkent at magoazul.com
Thu Jun 8 16:56:19 UTC 2006


Like I said, this hasn't been tested in production - it's just something
I've been toying with. Presently we have some HTTP iRules running with
little noticeable CPU impact, but if the box can't keep up then yes, I
suppose this isn't going to work.

As for persistence - you're correct, and I didn't have it quite right.
After a server died, its connections would be passed to another server
and would continue to be, even once the original was brought back
online, until the hash entry hit the idle timeout - which could be never
on a busy memcached server.
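
One thing that might soften it (untested, and assuming the persist
command accepts an optional timeout in seconds the way the uie mode
does - worth verifying against the v9 docs) would be giving the hash
record a short explicit lifetime, so a revived server starts seeing its
keys again within a few minutes:

when CLIENT_DATA {
  set key [string trim [getfield [TCP::payload] " " 2]]
  # assumption: the trailing argument is a per-record timeout in seconds
  persist hash $key 300
  TCP::release
  TCP::collect
}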

Could always do the hashing yourself in the iRule (see attached), but
that's still not as graceful as a client API at handling a down server.

Guess this is a bit of a dead end :)

On Wed, 2006-07-06 at 23:31 +0200, Andrew Miehs wrote:
> Hi Matthew,
> 
> I don't know which F5s you are running, but you seem to have a lot
> of free cpu time if you can afford to run iRules.
> 
> You'll probably find that this EATS CPU time - especially if you have a
> lot of requests to the memcached servers. Those packets can suddenly
> no longer be switched at layer 4 and have to go through the CPU.
> 
> Your next problem will be with the 'persistence'.
> 
> What happens when one of your memcached servers goes down?
> The new requests get handed out to those remaining - based on the hash.
> When your broken server comes back up, the remaining machines still get
> the traffic due to the connection being 'persistent', and your fixed
> server gets no traffic.
> 
> With us it's HTTP, but the same principle applies....
> 
> What we did (we only have '2' servers)... roughly from memory
> 
> pool cache1 {
>      min active members 1
>      member 192.168.0.1:8000 priority 5
>      member 192.168.0.2:8000
> }
> 
> pool cache2 {
>      min active members 1
>      member 192.168.0.2:8000 priority 5
>      member 192.168.0.1:8000
> }
> 
> 
> virtual vs_10.0.0.1 {
>      pool cache1
> }
> 
> virtual vs_10.0.0.2 {
>      pool cache2
> }
> 
> 
> ---
> We then configured the servers to use cache1 or cache2 based on whether
> they were odd or even...
> 
> Cheers
> 
> Andrew
> 
> 
> 
> Matthew Kent wrote:
> > Thought I'd share something handy I cooked up based on examples:
> > 
> > If you have an F5 BIG-IP load balancer in your network (running some of
> > the more recent software) you can use an iRule to distribute data to a
> > pool of memcached servers based on a crc32 of the incoming key. The
> > payload will be re-examined for each get/set request, allowing you to
> > leave the connection open indefinitely.
> > 
> > Although most (all?) of the client APIs seem to handle multiple
> > memcached servers and their being up/down, this seemed like a more
> > graceful approach: it provides one unified IP for my configs and solves
> > some issues with different APIs (apr-util, ruby, php) hashing keys
> > across a 3-server group differently, which makes the cache annoying to
> > share.
> > 
> > Not in production yet but seems fine in testing. 
> > 
> > Oh, and it does assume your memcached servers have an equal amount of
> > RAM. I haven't looked into how to implement weighting as I don't need
> > it presently.
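> > 
> > A rough, untested sketch of how weighting might look - modulo by a
> > total weight instead of the active member count and give the bigger
> > box more slots (addresses and weights here are made up):
> > 
> > when CLIENT_DATA {
> >   set key [crc32 [string trim [getfield [TCP::payload] " " 2]]]
> >   # total weight 4 - the larger box gets two of the four slots
> >   switch [expr {$key % 4}] {
> >     0 { pool demo_pool member 172.16.10.1:10000 }
> >     1 { pool demo_pool member 172.16.10.1:10000 }
> >     2 { pool demo_pool member 172.16.10.1:10001 }
> >     3 { pool demo_pool member 172.16.10.1:10002 }
> >   }
> >   TCP::release
> >   TCP::collect
> > }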
> > 
> > 
> > ------------------------------------------------------------------------
> > 
> > # memcached persistent load balancing based on keys passed
> > # how fun is this? :)
> > when CLIENT_ACCEPTED {
> >   # debug
> > #  log local0. "memcached pool: client accepted"
> >   TCP::collect 
> > }
> > when CLIENT_DATA {
> >   # memcached protocol is nice and simple
> >   # <command name> <key> <flags> <exptime> <bytes>\r\n
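> >   # (for gets it's just "get <key>\r\n" - the key is field 2 either way)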
> >   set key [string trim [getfield [TCP::payload] " " 2]]
> >   # debug
> > #  log local0. "memcached pool: raw client data: '[TCP::payload]' key: '$key'"
> >   # f5 says they crc32 the key passed
> >   persist hash $key
> >   TCP::release
> >   TCP::collect
> > }
> > when LB_SELECTED {
> >   # debug
> > #  log local0. "memcached pool: connecting to server [IP::client_addr]:[TCP::client_port] --> [LB::server addr]:[LB::server port]"
> > } 
> > when SERVER_CONNECTED {
> >   TCP::collect
> >   # debug
> > #  log local0. "memcached pool: server connected"
> > }
> > when SERVER_DATA {
> >   # debug
> > #  log local0. "memcached pool: server data invoked"
> >   TCP::release
> >   LB::detach
> > }
> 
-- 
Matthew Kent <mkent at magoazul.com>
http://magoazul.com
-------------- next part --------------
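# memcached load balancing - crc32 the incoming key and pick the pool
# member directly (no persistence records involved)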
when CLIENT_ACCEPTED {
  # debug
  log local0. "memcached pool: client accepted"
  TCP::collect 
}
when CLIENT_DATA {
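   # memcached protocol: <command> <key> <flags> <exptime> <bytes>\r\n
   # (for gets it's just "get <key>\r\n") - the key is the second field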
   # Create a hash value for the key based on crc32
   set key [crc32 [string trim [getfield [TCP::payload] " " 2]]]
   log local0. "memcached pool: raw client data: '[TCP::payload]' key: '$key'"
   # Modulo the hash by active members
   set key [expr {$key % [active_members demo_pool]}]
   # Route the request to the pool member based on the modulus
   # of the hash value.
   switch $key {
   0 { pool demo_pool member 172.16.10.1:10000 }
   1 { pool demo_pool member 172.16.10.1:10001 }
   2 { pool demo_pool member 172.16.10.1:10002 }
   }
   TCP::release
   TCP::collect
}
when LB_SELECTED {
  # debug
  log local0. "memcached pool: connecting to server [IP::client_addr]:[TCP::client_port] --> [LB::server addr]:[LB::server port]"
} 
when SERVER_CONNECTED {
  TCP::collect
  # debug
  log local0. "memcached pool: server connected"
}
when SERVER_DATA {
  # debug
  log local0. "memcached pool: server data invoked"
  TCP::release
  LB::detach
}
when LB_FAILED {
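  # selected member is down - clear any persistence record and pick
  # another member from the pool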
  persist none    
  LB::reselect pool demo_pool
  log local0. "memcached pool: lb failed [LB::status]."
}

