Benchmark

xing

LiteSpeed Staff
#23
Ah. Thanks. For some reason, my previous googling did not turn up anything. My ability to mistype must be too good. =)

I see that hssTVS is:

1) Static content only
2) Fast (based on their benchmarks)
3) Uses epoll

We will definitely include it in our next round of benchmarks.

They did include LiteSpeed 2.0 Standard Edition in their benchmark, and it did very well, better than comparable (apples-to-apples) products such as Lighttpd, despite the fact that they had only "poll" enabled for LiteSpeed while epoll was enabled for their own product and for Lighttpd. LiteSpeed has supported epoll for ages; strange that they didn't turn it on. No conspiracy theory, but it was an honest mistake at LiteSpeed's expense nonetheless.
 
#24
Being the programmer of hssTVS ....

Well, I didn't enable epoll for LiteSpeed because you explicitly state in your benchmarks that poll would give better results ... At least I did disable anything else that could degrade performance, like logfiles or .htaccess.

I'd be happy to change any config you desire for the next round of benchmarks I do. I'll also be very happy to assist in configuring my hssTVS for your next benchmarks.

What I really missed in your benchmarks was testing conditional GET and HEAD request speed, as those methods are usually heavily used in real life too.
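For example, with ApacheBench something along these lines should exercise both cases (the URL and the If-Modified-Since date are just placeholders; as far as I remember, -i switches ab to HEAD requests and -H adds an arbitrary header, but check your ab version):

ab -k -n 100000 -c 100 -H "If-Modified-Since: Sat, 05 Mar 2005 10:00:00 GMT" http://<host>/<numbytes>.html
ab -k -n 100000 -c 100 -i http://<host>/<numbytes>.html

If the file hasn't changed since that date, the first run measures how fast the server can churn out 304 responses.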

That being said, the TVS server supporting dynamic content will be started after hssTVS reaches beta (my definition of beta being feature-complete and just waiting for any bugs to show up before declaring it final).
 

xing

LiteSpeed Staff
#25
On second thought, having epoll enabled for LSWS Std in their benchmark probably would not have made much difference in the scalability tests, as LiteSpeed Standard Edition has a 300 concurrent connection cap. epoll only shows its true value at very high concurrency rates.
 

xing

LiteSpeed Staff
#26
Nope, for the record, the post above was posted before I saw your reply. =)

I do have to say I should've read the complete thread more carefully, as mistwang did mention the pros and cons of poll vs epoll and how they apply at different concurrency rates. So you are right that he was hinting that poll could be faster than epoll depending on the concurrency rate.

Though I would still have liked epoll enabled, even if it's to LiteSpeed's disadvantage, simply because it would put everyone on the same event dispatcher; when every product that supports the same event API is tested with it, nobody gains an unfair edge from the dispatcher choice.

Now I know who to contact for hssTVS setup; you will definitely hear from us when it comes to the next round of benchmarking.

As for conditional GETs and HEAD, I would say conditional GETs are much more important than HEAD, and they would make a good benchmark metric.
 
#27
xing said:
Nope, for the record, the post above was posted before I saw your reply. =)

I do have to say I should've read the complete thread more carefully, as mistwang did mention the pros and cons of poll vs epoll and how they apply at different concurrency rates. So you are right that he was hinting that poll could be faster than epoll depending on the concurrency rate.

Though I would still have liked epoll enabled, even if it's to LiteSpeed's disadvantage, simply because it would put everyone on the same event dispatcher; when every product that supports the same event API is tested with it, nobody gains an unfair edge from the dispatcher choice.
Well, I admit I didn't read the complete manuals/guides for all the servers I tested, though I did usually look through the comments on the pages/forums. I also disabled everything that came to mind which could hinder performance, especially features hssTVS didn't support, so as not to give it an unfair advantage. Honestly, I did the benchmarks mostly for my own benefit, wanting to know where my code stood compared to others.

xing said:
Now I know who to contact for hssTVS setup, you will definitely hear from us when it comes to the next round of benchmarking.
Next time I'll ask you too. ;)

xing said:
As for conditional GETs and HEAD, I would say conditional GETs are much more important than HEAD and it would be a good benchmark metric.
Well, a lot of search engine crawlers still use HEAD, as do a couple of HTTP/1.0 proxies out there. But you are right, use of HEAD is definitely decreasing. What I additionally like about HEAD is that it shows the raw speed of the internal parsing without network/disk/RAM speed being much of an issue. Of course a 304 is almost as lean, just adding the date parsing on top.
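To illustrate what I mean by lean, a conditional GET exchange boils down to roughly this (host, file, and dates are made up):

GET /somefile.html HTTP/1.1
Host: www.example.com
If-Modified-Since: Sat, 05 Mar 2005 10:00:00 GMT

HTTP/1.1 304 Not Modified
Date: Sat, 05 Mar 2005 12:00:00 GMT

No body is sent with the 304, so the server only has to parse the request, compare the dates, and write a handful of header lines.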
 
#29
I thought about that a while ago, but opted to wait until I get the features finished and at least into the final beta stage. As it is now, I would have problems providing any support to speak of to more than a handful of users while also hunting for bugs.
My work as a proxy admin/operator/developer is currently stressful, as we are in a state of transition between systems. That's on a bigger farm collection providing access to over 50,000 users. So such exposure has to wait.

Are you sure YOUR forum is the right place to speak about MY server? -lol-
Besides, it's getting way off topic now.
 

xing

LiteSpeed Staff
#30
Competition makes everyone better, and I also don't mind mentioning Lighttpd, Zeus, Apache, BillGates 6.0, kryptonite 8.5, or any other server in the LiteSpeed Benchmark thread. =)

We stand by our product and will take on all challengers.
 
#31
Well, ok then.

Have you had the chance to take a closer look at my baby, then? A couple of features might be overkill for a mere static content server, I guess, but it's to be the core of the "real" thing later on.
 
#32
OK... Just for you stat freaks out there, I "tried" to run a benchmark of LiteSpeed/2.1.15 Standard vs hssTVS for static content. My results are below.

Notice to LiteSpeed Tech: Your server would not let me finish the benchmark; it would only get to "Completed 10000 requests" in ApacheBench and then stop. From what I can tell, the log files didn't give me any info, but I think that even though I had all the security/throttle/banning options turned off, it was still banning the IP address from any further connections and wouldn't allow anything else from that IP until I restarted the server. Accessing the page from an alternate IP was successful.

Here are the results:

----
ApacheBench 2.0.40-dev
ab -k -n 100000 -c 100 http://127.0.0.1:65000/<numbytes>.html
Notice: Due to the lack of a private network,
these results are LOW because AB was running on the same machine as the web servers (127.0.0.1 <-> 127.0.0.1), so CPU usage was VERY affected. These results should be higher if run across a network using 2 separate PCs.
----

^^^^^
LiteSpeed/2.1.15 Standard
- 127.0.0.1 - No Logging - Follow Symbolic - Disable Script - Not Restrained - epoll()
- Max KeepAlive: 500 - Smart KeepAlive - No Security - No .htaccess - No Expires

Results:
100 Concurrent: Decided to stop responding to requests even though security was disabled?
Unable to test...
^^^^^^

^^^^^^
hssTVS 0.218d
- 127.0.0.1 - No Logging - 100 concurrent keepalive

100 byte static file: Requests per second: 9889.81
1000 byte static file: Requests per second: 9834.88
4000 byte static file: Requests per second: 8854.65
- Transfer rate: 36434.78 [Kbytes/sec]
16000 byte static file: Requests per second: 5404.62
- Transfer rate: 85601.56 [Kbytes/sec]
^^^^^^

I would be more than happy to run more benchmarks once I get LiteSpeed to work without banning the IP for what seems like a LOT of connections at a time (even though it is set to off). I would also like to run the test between 2 separate PCs on a LAN so I can get real results.

Aaron
 
#33
Well, you should always include more information about the system used. Processor/RAM (amount + speed)/HDD (type/size/speed) are the minimum, plus the OS including distro and exact kernel version, the runlevel, and perhaps other daemons/programs that may influence the results. If you use a real network, it's often of interest which cards/chipsets/switches you've used.
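On a typical Linux box, something like the following collects most of that (the disk device is a placeholder, and hdparm needs root):

uname -a                # kernel version and architecture
cat /proc/cpuinfo       # processor model and speed
free -m                 # amount of RAM
hdparm -tT /dev/hda     # rough disk read speed, replace with your device
/sbin/runlevel          # current runlevel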

Besides that, ab in version 2.0.x is not very good. Better would be the older 1.3.x or, for some tests, httperf (also free and, together with autobench, really convenient). The local loopback is also not so good, but as long as all contestants run on the same machine it can at least give a clue about relative effectiveness. Sadly, it ignores network-specific things which can have a huge influence on the real-world speed of a server program. If you want to test huge files over a network, you might have to start tracking CPU usage as well, since the connection will then most likely be the bottleneck.
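If it helps, a basic httperf run and an autobench sweep look roughly like this (host, URI, and rates are placeholders, and I'm quoting the option names from memory, so check the man pages):

httperf --server 192.168.0.2 --port 80 --uri /4000.html --num-conns 5000 --num-calls 10 --rate 300
autobench --single_host --host1 192.168.0.2 --uri1 /4000.html --low_rate 100 --high_rate 1000 --rate_step 100 --num_call 10 --num_conn 5000 --file results.tsv

autobench simply re-runs httperf at increasing request rates and writes the results to a file you can plot.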

If ab stops dead after a few thousand requests, check the number of not-completely-closed sockets. It might be that LiteSpeed uses something like a linger setting, which can be bad in such a local loopback test scenario. In that case you'd have to run multiple tests with only 10000 requests (or whatever still works) and compare the averages for the contestants.
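A quick way to check, assuming a Linux box with net-tools installed, is to look at the socket states while ab is stalled:

netstat -ant | awk '{print $6}' | sort | uniq -c    # count sockets per state
netstat -ant | grep -c TIME_WAIT                    # just the TIME_WAIT ones

A few thousand sockets stuck in TIME_WAIT or FIN_WAIT on the loopback is a good hint that you're simply running out of usable local ports between runs.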
 

mistwang

LiteSpeed Staff
#35
Interesting product. :)
I benchmarked it a little bit; LiteSpeed Enterprise is about 15%-30% faster than Rock on a simple small static file test, for both non-keepalive and keepalive.
 