switch HA-proxy tests to h1load client #69
Conversation
Force-pushed from e118d97 to 67ed709.
nhorman left a comment:
This looks fine to me, but I'm a bit confused as to how h1load gets set up with this test. Is it meant to be run by hand independently?
It's run by the test script; the part up to collecting results is mostly done. I'm still working on gnuplot scripts to post-process the data. I will include them in a separate PR.
Sasha, before you spend too much time on gnuplot scripts, I'll share some hints with you to ease your job (e.g. using -ll instead of -l to get raw numbers instead of human-friendly ones). I'm also finalizing a few small changes that make it easier to select the relevant lines if you want to compute averages. I'll ping you soon about this.
Walter, thanks a lot. I would also be happy for tips on how to set the h1load arguments to run benchmark tests.
Agreed, that's what I want to show you because it's not that hard. With -d you can set the duration of the test, with -s you can configure a slow ramp-up period (absolutely necessary to avoid measurement errors), and I'll also show you how to pick the relevant values to provide a meaningful measure. On the haproxy side, using taskset is an easy and convenient way to select the number of threads you want. I'll try to dedicate some time to you next week to work on this; today I'm busy chasing a few bugs.
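A minimal sketch of how the hints above might fit together, assuming the -d (duration) and -s (ramp-up) options as described in the comment; the config path, certificate-less URL, port, CPU range and thread count below are illustrative assumptions, not part of this PR:

# Pin haproxy to 4 CPUs so it effectively runs with 4 threads (illustrative CPU range and config path)
taskset -c 0-3 haproxy -f /etc/haproxy/haproxy.cfg
# 30 s test with a 10 s slow ramp-up, 500 concurrent connections, 4 client threads (illustrative values)
h1load -d 30 -s 10 -c 500 -t 4 https://127.0.0.1:8443/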
Force-pushed from 90d2df9 to be2f9de.
This change simplifies the current HA-proxy test setup.
Testing no longer requires an apache/nginx server as backend.
Instead of using siege as a client, the test uses
h1load [1].
The pull request also installs the httpterm [2] HTTP/1.1 server;
it is currently unused.
The HA-proxy configuration for testing matches the configuration
used in the 'State of SSL stacks' write-up [3].
The h1load client currently runs with the following options:
h1load \
  -l \                  # long results, output format expected by the h1load shell script
  -P \                  # also report percentiles for the gathered data
  -d ${TEST_TIME} \     # test duration, TEST_TIME is 10 seconds
  -c 500 \              # 500 concurrent connections
  -t ${THREAD_COUNT} \  # gather data for 1, 2, 4, 8, 16, 32, 64 threads
  -u \                  # use runtime instead of system time
  ${BASE_URL}${PORT}    # URL to connect to
The options above are just an initial version.
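For context, a minimal sketch of how this invocation could be swept over the thread counts listed above; the variable values and output file names are illustrative assumptions and the actual test script may differ:

TEST_TIME=10
BASE_URL=https://127.0.0.1:   # illustrative; the real value comes from the test setup
PORT=8443                     # illustrative
for THREAD_COUNT in 1 2 4 8 16 32 64; do
    h1load -l -P -d ${TEST_TIME} -c 500 -t ${THREAD_COUNT} -u \
        ${BASE_URL}${PORT} > h1load-${THREAD_COUNT}t.txt
done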
[1] https://github.com/wtarreau/h1load
[2] https://github.com/wtarreau/httpterm
[3] https://www.haproxy.com/blog/state-of-ssl-stacks
Force-pushed from be2f9de to 7d10408.
- fix configuration for siege tests