Litespeed test.php benchmarks - Part 2 - Apache + Varnish vs Litespeed Cache

eva2000

Well-Known Member
#1
Correction: the tests below were done incorrectly in that they weren't actually static test.txt file tests (the continuation of the Part 2 test.txt tests at http://www.litespeedtech.com/support/forum/showthread.php?t=4603) but in fact tested the PHP file test.php.

For Part 2, I'm revisiting test.php for some Apache + Varnish vs Litespeed Cache tests. Part 1 tests were run at a low apachebench request count of 1,000, where Litespeed Cache came out on top with nearly 2x the performance of Apache + Varnish. So for this round I'll be bumping that up to 100,000 and 1 million requests with concurrency levels of 200 to 1,000 and 5,000.
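For reference, the runs were driven with apachebench; the command lines were along these lines (the server URL and the keep-alive flag below are illustrative, not the exact invocations used):

Code:
# 100,000 requests at 1,000 concurrency
ab -k -n 100000 -c 1000 http://192.168.56.101/test.php

# 1 million requests at 5,000 concurrency
ab -k -n 1000000 -c 5000 http://192.168.56.101/test.php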

Both Apache 2.2.3 and Litespeed 4.0.19 are at stock, out-of-the-box settings. The test.php file used can be found here. ERROR: I made a mistake, the tests below aren't static test.txt file tests but test.php tests, as explained here.

test.php contents:

PHP:
<?php
header('CurrentTime: '.gmdate('D, d M Y H:i:s', time()).' GMT',true);

echo "time()=" . time() . "<br>";
echo "date()=" . date(DATE_RFC822) ."<br>";
?>
The only items that have changed are:

  1. I have tuned the TCP settings to better handle a larger number of concurrent connections. The settings added to /etc/sysctl.conf can be seen here (a rough sketch is also shown after this list).
  2. Varnish has been upgraded from 2.1.4 to 2.1.5 and default.vcl has been slightly tuned. The default.vcl configuration can be seen here (for the non-vB tests) and here (for the vB tests); a sketch of the general idea also follows this list.
  3. The Litespeed Cache timeout has been raised from 45 seconds to 300 seconds:
    Code:
    RewriteEngine on
    RewriteRule test.php - [E=Cache-Control:max-age=300]
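
To give an idea of the kind of TCP tuning referred to in item 1, the /etc/sysctl.conf additions were roughly along these lines (the exact values are in the linked file; the ones below are just representative):

Code:
# widen the ephemeral port range and recycle TIME_WAIT sockets faster
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_tw_reuse = 1
# bigger accept/SYN backlogs for the high concurrency apachebench runs
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 8192
net.core.netdev_max_backlog = 4096
The settings take effect after running sysctl -p.

Likewise, the actual default.vcl tweaks for item 2 are in the linked configs; the general idea (Varnish 2.1 VCL, sketch only, not the exact VCL used) is to pin the TTL for test.php so it matches the 300 second Litespeed Cache timeout:

Code:
sub vcl_fetch {
    # cache test.php responses for 5 minutes, matching the Litespeed Cache TTL
    if (req.url ~ "test\.php") {
        set beresp.ttl = 300s;
    }
}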

Findings:
  1. For the basic test.php apachebench tests, both Apache+Varnish and Litespeed Cache had similar CPU usage once Litespeed Cache was moved off its default settings for max connections and max keep-alive requests. The Litespeed 4.0.19 Cache memory leak was fixed in the updated 4.0.19 downloads, so there was no more excessive disk swapping. Litespeed 4.0.19 Cache still had slightly higher memory usage than Apache+Varnish, but Litespeed Cache ended up 35% faster than the Apache+Varnish combo at 1 million requests and 5,000 concurrency, while at 100K requests and 1,000 concurrency Apache+Varnish was 2.9% faster than Litespeed Cache.
  2. At 1 million requests and 5,000 concurrency, Litespeed Cache performance initially dropped severely compared to the Apache+Varnish combo due to disk swapping: it ate up a lot of memory, swapping to disk by as much as 1.57GB in the 1GB system tests and by 682MB in the 2GB system tests. Litespeed Cache would have needed at least 2.8-3GB of system memory for the 1 million requests at 5,000 concurrency to maintain around 20K requests/s. This was a confirmed Litespeed 4.0.19 Cache memory leak bug, which was fixed and re-tested below, eliminating the disk swapping with a slight improvement in request rate. However, Litespeed Cache was still bottlenecked by the default max connections/max keep-alive requests settings and the lsphp5 process limits. Raising those limits pushed Litespeed Cache performance well ahead of Apache+Varnish.



Update: Jan 28
Updated table. I ended up with these Litespeed settings, which pushed out >28K requests per second! Raising the max connections limit from 2,000 to 5,000 and the max keep-alive requests limit from 1,000 to 5,000 in the Litespeed admin console gave the biggest boost, from 13,587 requests/s to 27,976 requests/s average. Tuning the lsphp5 process limit and children values helped as well, topping out at 28,093 requests/s average. Notice that the CPU loads were closer to Apache+Varnish's with these tuned changes.
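
For anyone wanting to replicate this, the changes were made in the Litespeed WebAdmin console; roughly the following (the menu paths below are from memory and approximate, the before/after values are the ones quoted above):

Code:
Server > Tuning > Max Connections:           2000 -> 5000
Server > Tuning > Max Keep-Alive Requests:   1000 -> 5000
Server > External App > lsphp5:              Process Limit and children
                                             (PHP_LSAPI_CHILDREN) raised to
                                             match the higher concurrency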



Update: Jan 30
Added the Litespeed + Varnish test results to the above table.
 

eva2000

Well-Known Member
#2
Doh, I made a mistake: I didn't actually run the above tests against the static test.txt file but against the test.php file, which contained:

Code:
<?php
header('CurrentTime: '.gmdate('D, d M Y H:i:s', time()).' GMT',true);

echo "time()=" . time() . "<br>";
echo "date()=" . date(DATE_RFC822) ."<br>";
?>
so the above wasn't a static file test but a PHP test :eek:
 

eva2000

Well-Known Member
#3
The Litespeed folks confirmed it was a memory leak in 4.0.19. They have patched the leak, and new 4.0.19 downloads will be memory-leak free. Redoing the above tests with the fixed 4.0.19 version results in some improvements. I've updated the table above with the re-run on the fixed 4.0.19 version.

Looks like the lower performance is due to running out of connections in Litespeed:

2011-01-29 06:54:44.040 WARN [192.168.56.101:41679-0#APVH_Default] Running short of concurrent connections.
default of 2,000 max connections (left) vs raised max connections (right)



After raising the Litespeed max connections and max keep-alive requests limits, Litespeed Cache really jumped into the lead with nearly 28K requests/s!

Updated table above.
 

eva2000

Well-Known Member
#4
Guess I'm still running up against limits in CentOS/the server and need more tuning:

2011-01-29 08:14:22.992 NOTICE The maximum number of file descriptor limit is set to 30000.
2011-01-29 08:14:22.992 NOTICE [config:server:epsr:lsphp5]'Process Limit' probably is too low, adjust the limit to: 310.
2011-01-29 08:14:22.992 NOTICE [config:server:epsr:lsphp4]'Process Limit' probably is too low, adjust the limit to: 310.
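One OS-side knob related to the file descriptor notice above is the per-process/system open file limits; on CentOS those are typically raised along these lines (illustrative values, not necessarily the ones I'll end up using):

Code:
# /etc/security/limits.conf - per-process open file limit
*    soft    nofile    65535
*    hard    nofile    65535

# /etc/sysctl.conf - system-wide file handle cap
fs.file-max = 300000
The lsphp5/lsphp4 'Process Limit' notices are addressed in the Litespeed admin console external app settings, e.g. raising Process Limit to at least the suggested 310.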
defaults (left) vs raised process limits (right)



will have to re-test soon.
 
#6
Thanks! @eva2000, your posts have been some of the most interesting and informative I've found in the forums so far... researching LSWS/LS Cache for my WordPress Multisite servers... thanks for the benchmarking & config discussions =)
 