fair load balancing?

Drew Wilson amw at apple.com
Fri Jun 20 16:08:58 UTC 2008


On Jun 20, 2008, at 8:58 AM, Ryan Woodrum wrote:

> On Thursday 19 June 2008 10:22:31 pm Drew Wilson wrote:
>> On Jun 19, 2008, at 7:08 PM, Mark Smith wrote:
>>>> I'll admit I'm completely new to Perlbal, but I don't understand
>>>> how verify_backend will work here.
>>>>
>>>> I just set Perlbal up last week, proxying HTTP requests to
>>>> Mongrel app servers.
>>>> I had "SET verify_backend = on" in my config file.
>>>>
>>>> I still saw long requests blocking subsequent requests.
>>>>
>>>> Am I misunderstanding something?
>>>
>>> verify_backend makes the process of connecting to a backend into  
>>> this:
>>>
>>> 1) start TCP connection to backend
>>> 2) on accept, send an "OPTIONS *" request
>>> 3) on 200 OK response, put backend in the pool
>>>
>>> The critical step is #2, and that's what makes this work.  Let's say
>>> you have Apache set up with a MaxClients of 20; this is what will happen:
>>>
>>> * the first 20 requests come in, are assigned backends, and Perlbal
>>> uses them because they responded to OPTIONS *
>>> * the 21st connection Perlbal makes sits idle - because there is no
>>> Apache process to serve it
>>> * when one of the first 20 dies off, Apache will respond to the 21st,
>>> and Perlbal opens a new 21st connection
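
(For anyone following along: the Perlbal side of this is just a pool with
verify_backend turned on - roughly like the config below.  The pool name,
addresses, and ports here are placeholders, not anyone's real setup.)

  CREATE POOL backends
    POOL backends ADD 10.0.0.1:8000
    POOL backends ADD 10.0.0.1:8001

  CREATE SERVICE balancer
    SET role            = reverse_proxy
    SET listen          = 0.0.0.0:80
    SET pool            = backends
    SET persist_backend = on
    # send "OPTIONS *" and wait for a 200 before trusting a backend
    SET verify_backend  = on
  ENABLE balancer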
>>
>> Thanks for the detailed explanation. Makes sense for Apache and other
>> app servers that let child processes handle the requests.
>>
>>> What ends up happening here is that Perlbal will use up exactly as
>>> many backend connections as MaxClients allows - and no more.  Since
>>> (I assume) you have persistent connections to the backend, it works
>>> out perfectly.  You can adjust the load on a server by adjusting
>>> MaxClients.  Of course, this does assume that, on average, your
>>> requests are roughly equal in how much processing power they take.
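
(On the Apache side that's just the usual prefork tuning in httpd.conf -
something like the lines below; the numbers are purely illustrative.)

  <IfModule prefork.c>
      MaxClients  20
  </IfModule>
  KeepAlive On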
>>>
>>> I assume that Mongrel will let you do the same thing - specify how
>>> many processes to serve requests with.  If it behaves the same way,
>>> then this approach will work out just as well for you as it does for
>>> Apache-based systems.
>>
>> Unfortunately, Mongrel doesn't spawn multiple processes to handle
>> requests: it queues up requests and dispatches them to Rails one at a
>> time (since Rails is not threaded). Mongrel also doesn't support the
>> OPTIONS method.
>>
>> So instead of one Mongrel server managing several processes, we start
>> up N Mongrel app instances on each app server and expect the balancer
>> to handle the efficient routing. Sounds like that is a bad expectation.
>>
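
(Concretely, each app server runs a handful of commands along these lines -
ports and environment are illustrative - and the Perlbal pool lists every
host:port.  mongrel_cluster wraps the same thing up in one YAML file.)

  mongrel_rails start -e production -p 8000 -d
  mongrel_rails start -e production -p 8001 -d
  mongrel_rails start -e production -p 8002 -d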
>
> I may be missing something, but at the last company I worked for
> (avvo.com), we did this exact thing.  As you burrow into the layers of
> proxying, we had n mongrel instances on the front-ends sitting behind
> perlbal.  We added a trivial OPTIONS handler to the rails application
> so that we could use verify_backend.  It has worked like a charm for
> quite some time, with munin charts showing very even load balancing
> between all mongrel instances.
D'oh! Didn't think of that - thanks for the suggestion.
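
I'm guessing the "trivial handler" amounts to something like the sketch
below (Rails 2-era syntax, names invented, untested - and whether Mongrel
actually hands the bare "OPTIONS *" probe through to the Rails dispatcher
is exactly the part I'd want to verify first):

  # app/controllers/ping_controller.rb
  # Inherits from ActionController::Base directly so app-wide filters
  # (authentication, CSRF protection, etc.) can't get in the probe's way.
  class PingController < ActionController::Base
    # Perlbal's verify_backend only needs a 2xx before it will send this
    # backend real traffic; anything else that falls through to the
    # catch-all route gets a plain 404.
    def index
      if request.env['REQUEST_METHOD'] == 'OPTIONS'
        head :ok
      else
        head :not_found
      end
    end
  end

  # config/routes.rb - kept as the *last* route so it only catches
  # requests nothing else claimed (such as the health-check probe).
  ActionController::Routing::Routes.draw do |map|
    # ... real application routes above ...
    map.connect '*path', :controller => 'ping', :action => 'index'
  end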

Drew

>
>
>> Now that you've explained this, it makes sense that there aren't many
>> balancers that try to balance each individual request: they expect the
>> back-end to handle concurrent load.
>>
>> I think our problem is with Mongrel. I will investigate using
>> mod_rails, which will scale the way you describe.
>>
>> Thanks again,
>>
>> Drew
>
> -ryan woodrum
> rwoodrum at google.com


