fair load balancing?

Mark Smith smitty at gmail.com
Mon Jun 23 21:50:20 UTC 2008

> As an observer, this thread has been helpful for my thinking, but there
> are still a few details I'm unclear about.  Particularly, assuming I
> have persistent connections to both the client and the backend, what
> happens at the end of a request?
> Do the particular client and particular backend remain associated for
> multiple requests, or does the backend go back into the pool?  Is the
> Options request sent once per client, once per backend, or once per
> request?

Perlbal does not do 'sticky sessions', which is basically what you're
asking about here.  When a client and backend have finished with a
request, the backend goes back into the pool to serve the next request
in the queue.

Persistence is maintained at both ends, though, so the client and the
backend don't disconnect.  But they don't stay attached.  :)
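
A rough sketch of that pool behavior (in Python rather than Perlbal's
actual Perl internals -- the class and method names here are purely
illustrative):

```python
from collections import deque

# Illustrative model of a non-sticky backend pool: a backend connection
# serves one request, then goes back into the shared pool, where it can
# be handed to whichever queued client is next.  Hypothetical names,
# not Perlbal's real internals.
class BackendPool:
    def __init__(self, backends):
        self.idle = deque(backends)  # connections stay open (persistent)

    def checkout(self):
        # Next free backend, first-in first-out; no client affinity.
        return self.idle.popleft()

    def checkin(self, backend):
        # Request finished: the backend returns to the pool for anyone.
        self.idle.append(backend)
```

So a client can issue many requests over one persistent connection, but
each request may be served by a different backend from the pool.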

> The use case I'm considering involves two levels of Perlbal for a
> distributed search engine.  An application makes a request to a
> front end search server, which in turn makes parallel requests via a
> custom plugin to a dozen or so backend search servers.   The backend
> servers act as standard reverse proxies to a small fixed number of
> local processes.

Interesting.  So something like:

{internet clients} -> [1+ frontend proxy] -> [1 proxy, 1+ web serving processes]

I'm not TOO sure why you have multiple processes that need to be
proxied to on the individual machine.  Can't you just do that from the
initial frontend proxy?  What's the purpose of the two layers?

> I'd like to be able to run two front end search servers for
> redundancy, each with multiple persistent connections to each of the
> back end servers.  I'm worried that if one front end server
> temporarily goes down, the other might end up hogging all the
> processes.  I think that if the backends go back into the pool after
> each request I'm safe, but that if the clients remain associated I'll
> have problems.   How does it actually work?

Well, this problem does exist, yes.  When we had a pool of Perlbals at
LJ/6A, if one went down it would often come back up and have some
difficulty getting connections to the backend webservers again.  This
would cause users who ended up on that cluster to get a bit of a delay
while it ramped up.

We solved this (well, mitigated it) by having the hardware load
balancer out front use "least connections" as the balancing algorithm.
This ramped the new Perlbal up gradually: since it was going slower,
the balancer would send proportionately less traffic at it.
Eventually, as it sped up (got more backends), it would start getting
more and more requests from the hardware balancer.
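
As a rough illustration, "least connections" selection can be sketched
like this (a minimal Python model under my own made-up names, not the
hardware balancer's actual logic):

```python
# Minimal sketch of "least connections" balancing: each new request goes
# to whichever backend currently has the fewest active connections, so a
# slow, freshly restarted Perlbal naturally draws less traffic until it
# catches up.  Hypothetical names; not real load-balancer code.
class LeastConnBalancer:
    def __init__(self, backends):
        self.active = {b: 0 for b in backends}  # active connection counts

    def pick(self):
        # Choose the backend with the fewest connections in flight.
        backend = min(self.active, key=self.active.get)
        self.active[backend] += 1
        return backend

    def release(self, backend):
        # A request finished; this backend now has one fewer in flight.
        self.active[backend] -= 1
```

The key property is the feedback loop: a backend that finishes requests
slowly accumulates active connections and so gets picked less often.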

So yes, those users who end up on the new Perlbal wait a little bit,
but overall it's only going to be as many requests as you normally
have in the queues.  It's not terrible.  :)

> (sorry for the dup, Mark.  Someday I'll remember this list is reply-to-sender!)

No problem, happens fairly frequently.  I always send people to the
list if they do it.  :)

Mark Smith / xb95
smitty at gmail.com
