dormando at rydia.net
Sat Apr 12 05:50:18 UTC 2008
I think you need to give your tests more FD's... Can you give more
details on the actual test you're running? It's a little suspect that
you're running out of FD's during the test. Add more, try again.
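Assuming the FD exhaustion is on the client machine running the benchmark, a minimal sketch of raising the per-process limit before the test (the value is illustrative, and raising the hard limit may need root or a limits.conf entry):

```shell
# Raise the open-file limit for this shell and its children before
# launching the load tester; many distros default to 1024, which a
# high-concurrency benchmark can exhaust quickly.
ulimit -n 65536 || echo "could not raise limit (may need root)"
ulimit -n   # print the limit now in effect
```

Run the benchmark from the same shell so it inherits the raised limit.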
Perlbal sucks a bit with benchmarks against a single backend.
When a request comes in and a backend is necessary, perlbal will create
at most _one_ new outbound request to that backend. No other connections
will be opened until that one finishes.
With: verify_backend = on
... perlbal must open a connection, run an OPTIONS request, parse the
response, then send the queued request down the pipeline.
I'd recommend benching a few things across your different tests:
verify_backend = on
verify_backend = off
connect_ahead = something, I dunno. 100?
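As a sketch, these knobs can be flipped between runs from the management console without restarting perlbal (service name taken from the quoted config below; the connect_ahead value is a guess, as above):

```
# Management-console commands to toggle between benchmark runs
SET http_static_balancer verify_backend = off
SET http_static_balancer connect_ahead = 100
```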
Next, try benching _connections_/sec, then _requests_/sec with the same
mix. Benchmarking tools often require a separate option to enable client
keepalives so requests reuse connections; perlbal saves a lot of
resources by reusing existing connections.
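With httperf that distinction is the `--num-calls` flag. A hypothetical pair of invocations against the frontend (host, URI, and rates are placeholders; adjust to match your setup):

```shell
# One call per connection: measures connections/sec.
httperf --server 126.96.36.199 --port 80 --uri /style.css \
        --rate 300 --num-conns 3000 --num-calls 1

# Ten calls per keepalive connection: measures requests/sec while
# letting perlbal reuse the client connection across requests.
httperf --server 126.96.36.199 --port 80 --uri /style.css \
        --rate 300 --num-conns 3000 --num-calls 10
```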
Next, try with and without keepalives enabled on your actual backend
server. Double check that it's actually enabled and in use too.
Next, while running benchmarks, periodically check 'queues' and 'show
service whatever' to see if your requests are stacking up. You can also
check 'show backends' before/after tests. Blah blah blah.
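One way to poke at those commands while a benchmark is running, assuming the management service from the quoted config and netcat on the box:

```shell
# Query perlbal's management service (port 60000 per the config below)
# for queue depth, per-service stats, and backend connection state.
printf 'queues\nshow service http_static_balancer\nshow backends\n' \
  | nc 127.0.0.1 60000
```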
Then use google about optimizing the kernel :)
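A few /proc knobs that commonly come up for proxy load testing, as a starting sketch rather than a tuned recipe (values are illustrative; needs root):

```shell
sysctl -w fs.file-max=200000                          # system-wide FD ceiling
sysctl -w net.ipv4.ip_local_port_range="1024 65535"   # more ephemeral ports
sysctl -w net.core.somaxconn=1024                     # larger listen backlog
sysctl -w net.ipv4.tcp_tw_reuse=1                     # reuse TIME_WAIT sockets
```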
Oi, I should spend some time this weekend beating on wiki's.
Elliott A. Johnson wrote:
> I've been running load tests on my perlbal 1.70 instance and I was wondering if others have been getting similar results as me or if there are some tweaks that I should be doing?
> Currently I have a perlbal instance (with XS headers activated) connected to a single backend node running thttpd. Both machines have 2 dual-core, 3.6GHz Xeons with 8 gigs of RAM (Dell blades) and are running Linux with a 2.6.24 kernel.
> I've used httperf to test direct connections to the backend server. Testing from 3 similar hosts on the same subnet as the webserver it can sustain several thousand connections per second and each connection fetches a static 12k css file.
> I'm performing the same tests on the perlbal frontend from same subnet it's on and here are my results:
> conn/sec  cpu usage*  tx traffic  avg replies/sec  concurrent conns
> 10        0-20%       280k/sec    10.0             2
> 100       0-70%       2820k/sec   100.0            3
> 200       0-80%       5630k/sec   200.0            6
> 300       20-100%     8440k/sec   300.0            11
> 400       80-100%     11M/sec     271.4            1022
> * cpu usage is the % usage on a single core out of 4.
> All tests finish successfully except the 400 connections per second test, which always has several hundred fd-unavailable errors. There is also a huge number of concurrent connections, so it seems that connections are formed but never finish, and they pile up until the test machine runs out of available file descriptors. Correct me if I'm wrong here.
> I was wondering if the 400 connections-per-second limit I'm seeing lines up with what others have experienced or if thttpd and perlbal aren't playing well together. Here is an example of my config:
> LOAD stats
> xs enable headers
> SERVER aio_mode = ioaio
> SERVER aio_threads = 10
>
> CREATE POOL http_static_servers
> POOL http_static_servers ADD 10.10.10.10:80
>
> CREATE SERVICE http_static_balancer
>   SET listen = 126.96.36.199:80
>   SET role = reverse_proxy
>   SET pool = http_static_servers
>   SET plugins = stats
>   SET persist_client = on
>   SET persist_backend = on
>   SET verify_backend = on
>   SET verify_backend_path = /ping
> ENABLE http_static_balancer
>
> CREATE SERVICE mgmt
>   SET role = management
>   SET listen = 127.0.0.1:60000
> ENABLE mgmt
> Are there any options I should think about including in my linux kernel config or anything under /proc that I should consider tweaking to help perlbal use less cpu and serve more connections per second?