Configuring and testing LiteSpeed Cache Part 2

Discussion in 'LiteSpeed Cache' started by eva2000, Jan 21, 2011.

  1. eva2000

    eva2000 Member

    Part 1 tests were flawed in that I tested Apache 2.2.3 with keepalives OFF. So this is round 2, with Apache 2.2.3 keepalives ON added to the comparison against LiteSpeed 4.0.18's inbuilt caching feature. The LiteSpeed 4.0.18 cache feature might have a bug where it doesn't observe the mod_rewrite rules meant to cache only specified files; upgrading to 4.0.19 may fix that. I'll update to LiteSpeed 4.0.19 and rerun the tests as well.

    Update: Upgraded from LiteSpeed 4.0.18 to 4.0.19, which fixed the cache feature bug - see http://www.litespeedtech.com/support/forum/showpost.php?p=23024&postcount=11

    Configurations tested
    • Apache 2.2.3 keepalives off
    • Apache 2.2.3 keepalives on
    • Apache 2.2.3 keepalives on + Varnish 2.1.4
    • Litespeed 4.0.18 no cache
    • Litespeed 4.0.18 no cache + Varnish 2.1.4
    • Litespeed 4.0.18 + inbuilt cache

    Results

    Part 1 tests painted a slightly different picture to these updated tests. The end result was still the same though: LiteSpeed + inbuilt cache was up to 2x faster than Apache + Varnish for simple test.php ApacheBench runs.

    1. Apache stand-alone with keepalives enabled just edged out the LiteSpeed no-cache results below the 600 concurrency level. Past 600 concurrency, the LiteSpeed no-cache average and minimum requests per second were much better. Note: Apache CPU load was much higher, hitting double digits by the 5th run at >600 concurrency, while LiteSpeed CPU load stayed under 1.5 all the way up to 1000 concurrency. Update: the ApacheBench test configuration can play a big part - the tests below used 1000 requests per run; if you increase that to 10000 requests per test (see the sketch after this list), LiteSpeed starts to really shine in both the stand-alone no-cache and cache tests, favouring LiteSpeed even more for scalability as traffic increases. Will retest later.
    2. Apache + Varnish caching with default.vcl kept the same trend, just edging out LiteSpeed + Varnish below the 600 concurrency level. At 800 concurrency, LiteSpeed + Varnish took the lead over Apache + Varnish. But at 1000 concurrency something odd happened: LiteSpeed + Varnish took a nose dive for the first 3 runs, at half the rps of Apache + Varnish, lowering its average to 6,163.51 rps versus 8,643.01 rps for Apache + Varnish.
    3. A note on the Varnish caching tests for part 2: part 2 used tuned Varnish configuration settings, which consistently average around 500 rps more than the untuned, out-of-the-box Varnish configuration.
    4. LiteSpeed using the inbuilt cache feature was the overall winner, with a comfortable lead. LiteSpeed + inbuilt cache was 4x faster than LiteSpeed with no cache at the 200 concurrency level and 4.5x faster at 1000 concurrency, and 1.6x to 1.9x faster than Apache + Varnish at concurrency levels from 200 all the way up to 1000.
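
    For reference, the higher-request variant mentioned in point 1 is just the same ApacheBench run with a larger -n value, something along these lines (a sketch, not the exact command line used for the retest):

    Code:
    ab -k -n 10000 -c 200 192.168.56.101/test.php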

    [​IMG]

    [​IMG]

    [​IMG]

    The test.php file used contains

    Code:
    <?php
    header('CurrentTime: '.gmdate('D, d M Y H:i:s', time()).' GMT',true);
    
    echo "time()=" . time() . "<br>";
    echo "date()=" . date(DATE_RFC822) ."<br>";
    ?> 
    Apache httpd.conf contains

    Code:
    #disk cache
    <IfModule mod_cache.c>
    <IfModule mod_disk_cache.c>
    CacheRoot /lscache/   
    #CacheEnable disk /  
    </IfModule>
    </IfModule>
    Litespeed 4.0.18 cache policy is set to

    Code:
    Enable Cache: Not set
    Cache Expire Time (seconds): 120
    Cache Request with Query String: Not set
    Cache Request with Cookie: Not set
    Cache Response with Cookie: Not set
    Ignore Request Cache-Control: Not set
    Ignore Response Cache-Control: Not set
    .htaccess used in /var/www/html doc root for Litespeed 4.0.18 cache tests

    Code:
    RewriteEngine on
    RewriteRule test.php - [E=Cache-Control:max-age=45]
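    The same environment trick can cover more scripts if needed; a minimal sketch (other.php and its TTL are hypothetical, only test.php was used in these tests):

    Code:
    RewriteEngine on
    # cache test.php responses for 45 seconds via the LiteSpeed cache
    RewriteRule test.php - [E=Cache-Control:max-age=45]
    # hypothetical second cacheable script with its own TTL
    RewriteRule other.php - [E=Cache-Control:max-age=120]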
    ApacheBench commands, run back to back 5 times

    For non-Varnish tests:
    Code:
    ab -k -n 1000 -c 200 192.168.56.101/test.php
    For Varnish tests on port 8888:
    Code:
    ab -k -n 1000 -c 200 192.168.56.101:8888/test.php
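    A small shell loop along these lines could script the 5 back-to-back runs and pull out just the mean requests-per-second figure from each run (a sketch; the grep pattern matches the standard ab output shown further down):

    Code:
    #!/bin/bash
    # run ab 5 times back to back and print the mean requests/sec from each run
    URL=192.168.56.101/test.php   # use 192.168.56.101:8888/test.php for the Varnish tests
    for i in 1 2 3 4 5; do
        ab -k -n 1000 -c 200 "$URL" 2>/dev/null | grep 'Requests per second'
    done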
    Varnish 2.1.4 configuration settings used
    Code:
    /etc/sysconfig/varnish
    
    # My Advanced configuration
    # Main configuration file. You probably want to change it :)
    VARNISH_VCL_CONF=/etc/varnish/default.vcl
    
    # Default address and port to bind to
    # Blank address means all IPv4 and IPv6 interfaces, otherwise specify
    # a host name, an IPv4 dotted quad, or an IPv6 address in brackets.
    VARNISH_LISTEN_ADDRESS=
    VARNISH_LISTEN_PORT=8888
    
    # Telnet admin interface listen address and port
    VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
    VARNISH_ADMIN_LISTEN_PORT=2222
    
    # The minimum number of worker threads to start
    VARNISH_MIN_THREADS=1
    
    # The Maximum number of worker threads to start
    VARNISH_MAX_THREADS=1000
    
    # Idle timeout for worker threads
    VARNISH_THREAD_TIMEOUT=120
    
    # Cache file location
    VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
    
    # Cache file size: in bytes, optionally using k / M / G / T suffix,      
    # or in percentage of available disk space using the % suffix.
    VARNISH_STORAGE_SIZE=64M
    
    # Backend storage specification             
    VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
    
    # Default TTL used when the backend does not specify one
    VARNISH_TTL=120                  
                
    # DAEMON_OPTS is used by the init script.  If you add or remove options, make
    # sure you update this section, too.
    DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
                 -f ${VARNISH_VCL_CONF} \
                 -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
                 -t ${VARNISH_TTL} \
                 -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
                 -u varnish -g varnish \
                 -s ${VARNISH_STORAGE} \
                 -p thread_pool_min=300 \
                 -p thread_pool_max=2000 \
                 -p thread_pools=2 \
                 -p listen_depth=4096 \
                 -p session_linger=25/100/150 \
                 -p lru_interval=2 \
                 -p thread_pool_add_delay=2 \
                 -p cli_timeout=10"
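    To sanity-check that Varnish is actually serving from cache, the response headers can be inspected; a rough check along these lines (Varnish adds an Age header and an X-Varnish header, and Age > 0 with two IDs in X-Varnish normally indicates a cache hit):

    Code:
    # hit the Varnish port twice and compare headers; the second response
    # should show Age > 0 and two request IDs in X-Varnish on a cache hit
    curl -sI http://192.168.56.101:8888/test.php | egrep -i '^(age|x-varnish):'
    curl -sI http://192.168.56.101:8888/test.php | egrep -i '^(age|x-varnish):'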
    Server Configuration:
    • VirtualBox Guest - CentOS 5.5 64bit
    • Xeon W3540 @3408Mhz - assigned 2 cpu cores
    • 1GB allocated system memory + 64MB GPU memory @ DDR3-1550MHz 9-9-9-24
    • 20GB allocated (640GB Samsung SATAII OS Disk)
    • Apache 2.2.3 Prefork, PHP 5.3.5 (mod_php), MariaDB 5.2.4, Memcached 1.4.5, Varnish 2.1.4
    • Litespeed 4.0.18, PHP 5.3.4 (LSAPI v5.5), MariaDB 5.2.4, Memcached 1.4.5, Varnish 2.1.4
    • 2.6.18-194.32.1.el5 #1 SMP
    • Disk partitions set to noatime
    • Memcached 1.4.5 = 2x 16MB instances
    • Varnish 2.1.4 = 64MB size

    Last edited: Jan 24, 2011
  2. eva2000

    eva2000 Member

    mistwang and NiteWave, just a question: in the above tests, LiteSpeed 4.0.18 had Smart Keep-Alive enabled. Looking up that feature, the documentation doesn't specifically mention how it treats PHP files?


    Update: Looks like LiteSpeed with Smart Keep-Alive OFF (which is the default) performs better than with it ON.

    [​IMG]

    [​IMG]

    [​IMG]

    [​IMG]

    And the interesting results:


    Apache + Varnish
    vs
    LiteSpeed Cache with Smart Keep-Alive On vs Off


    [​IMG]

    [​IMG]
    Last edited: Jan 21, 2011
  3. mistwang

    mistwang LiteSpeed Staff

    Smart Keep-Alive will close the connection after serving a response with MIME type text/*, regardless of whether it comes from PHP or a static file. So "Smart Keep-Alive" should be turned off for these benchmark tests.
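
    A rough way to see this from the command line is to check the Connection header sent back for test.php; a sketch (this is only a HEAD request via curl, so behaviour may differ slightly from a full browser request, and whether the server advertises the close in the header is an assumption):

    Code:
    # with Smart Keep-Alive on, a text/* response is expected to come back with "Connection: close"
    # even though the client asked for keep-alive; with it off, "Keep-Alive" is expected
    curl -sI http://192.168.56.101/test.php | grep -i '^connection'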
  4. mistwang

    mistwang LiteSpeed Staff

    Adding the environment variables
    LSAPI_ACCEPT_NOTIFY=1
    LSAPI_AVOID_FORK=1

    to the lsphp5 external app should boost the LiteSpeed no-cache results.
  5. mistwang

    mistwang LiteSpeed Staff

    I wonder whether you use mod_php or suPHP with Apache?
    If you use mod_php, make sure "PHP suEXEC" is off to match Apache's security model.
  6. eva2000

    eva2000 Member

    Hmmm, you should enable the multi-quote feature in vB so I can quote you properly :)

    I see - yup, the 2nd post above has the benchmarks redone with Smart Keep-Alive off, with a nice boost especially at the higher concurrency levels.

    Ooooh, additional tweaks to try - will do :)

    Yeah, Apache is mod_php with PHP suEXEC disabled, and it's also disabled when loading the Apache conf via LiteSpeed :)
  7. eva2000

    eva2000 Member

    Tried these 2 suggested values but got slower performance compared to the defaults as per http://www.litespeedtech.com/php-litespeed-sapi.html

    Only benched the no-cache config.

    [​IMG]


    Code:
    LSAPI_ACCEPT_NOTIFY=1
    LSAPI_AVOID_FORK=1
    
    LSAPI PHP5 additional settings

    [​IMG]
  8. eva2000

    eva2000 Member

    Last edited: Jan 21, 2011
  9. mistwang

    mistwang LiteSpeed Staff

    It is about the same. You can also change "Max connections" and "PHP_LSAPI_CHILDREN" to 10 and 100, and see which one does better.
    PHP_LSAPI_MAX_REQUESTS can be increased to a larger number.
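
    For reference, those tunables live in the lsphp5 external application settings: "Max connections" is a field in the external app config, while the LSAPI/PHP values go in its Environment box, along the lines of the sketch below (the numbers are placeholders to illustrate, not recommendations):

    Code:
    PHP_LSAPI_CHILDREN=10
    PHP_LSAPI_MAX_REQUESTS=1000
    LSAPI_ACCEPT_NOTIFY=1
    LSAPI_AVOID_FORK=1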

    How many total requests were served for each run? I think it should be >1000.

    There is likely some factor affecting the consecutive runs of the benchmark; there should not be huge differences between them. Maybe the time you wait between each run has an impact, as the kernel takes some time to recover from the massive number of sockets created by the previous run?

    I may try to reproduce this in our lab to figure out why if I get a chance.
  10. eva2000

    eva2000 Member

    Re: how many total requests were served for each run - I have the raw ab numbers for the latest runs here,

    e.g. for LiteSpeed cached + Smart Keep-Alive off:

    Code:
    ab -k -n 1000 -c 200 192.168.56.101/test.php
    This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Copyright 2006 The Apache Software Foundation, http://www.apache.org/
    
    Benchmarking 192.168.56.101 (be patient)
    Completed 100 requests
    Completed 200 requests
    Completed 300 requests
    Completed 400 requests
    Completed 500 requests
    Completed 600 requests
    Completed 700 requests
    Completed 800 requests
    Completed 900 requests
    Finished 1000 requests
    
    
    Server Software:        LiteSpeed
    Server Hostname:        192.168.56.101
    Server Port:            80
    
    Document Path:          /test.php
    Document Length:        63 bytes
    
    Concurrency Level:      200
    Time taken for tests:   0.64157 seconds
    Complete requests:      1000
    Failed requests:        0
    Write errors:           0
    Keep-Alive requests:    993
    Total transferred:      322741 bytes
    HTML transferred:       63000 bytes
    Requests per second:    15586.76 [#/sec] (mean)
    Time per request:       12.831 [ms] (mean)
    Time per request:       0.064 [ms] (mean, across all concurrent requests)
    Transfer rate:          4909.83 [Kbytes/sec] received
    
    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   0.0      0       0
    Processing:     0    0   3.5      0      55
    Waiting:        0    0   3.4      0      55
    Total:          0    0   3.5      0      55
    
    Percentage of the requests served within a certain time (ms)
      50%      0
      66%      0
      75%      0
      80%      0
      90%      0
      95%      0
      98%      1
      99%      5
     100%     55 (longest request)
    
    Yeah, I'm running the ab runs back to back without much wait time. Could it also be virtualised server overhead?

    Will try the other PHP settings once I can fix the PHP 5.3.5 build errors http://www.litespeedtech.com/support/forum/showthread.php?p=23014#post23014 - phpize not where it's supposed to be, heh.
  11. eva2000

    eva2000 Member

    Good news: LiteSpeed 4.0.19 fixed the cache settings issue. Now test.php gets cached while testnocache.php doesn't, with the 4.0.19 cache policy set as per the settings below.

    test.php header

    Code:
    Content-Encoding	gzip
    Vary	Accept-Encoding
    Date	Fri, 21 Jan 2011 20:54:52 GMT
    Server	LiteSpeed
    Connection	Keep-Alive
    Keep-Alive	timeout=5, max=100
    X-LiteSpeed-Cache	hit
    Content-Length	79
    X-Powered-By	PHP/5.3.5
    CurrentTime	Fri, 21 Jan 2011 20:54:34 GMT
    Content-Type	text/html; charset=UTF-8
    testnocache.php header

    Code:
    Content-Encoding	gzip
    Vary	Accept-Encoding
    Date	Fri, 21 Jan 2011 20:54:59 GMT
    Server	LiteSpeed
    Connection	Keep-Alive
    Keep-Alive	timeout=5, max=100
    X-Powered-By	PHP/5.3.5
    CurrentTime	Fri, 21 Jan 2011 20:54:59 GMT
    Content-Type	text/html; charset=UTF-8
    Content-Length	79
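    The headers above were grabbed in the browser; a quick command-line spot check might look like this (a sketch - whether a plain curl HEAD request hits the same cached entry as a browser GET may vary, but the presence or absence of the X-LiteSpeed-Cache header is the thing to look for, and it is absent for testnocache.php as in the dump above):

    Code:
    curl -sI http://192.168.56.101/test.php | grep -i 'x-litespeed-cache'
    curl -sI http://192.168.56.101/testnocache.php | grep -i 'x-litespeed-cache'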
    test.php

    Code:
     ab -k -n 1000 -c 200 192.168.56.101/test.php
    This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Copyright 2006 The Apache Software Foundation, http://www.apache.org/
    
    Benchmarking 192.168.56.101 (be patient)
    Completed 100 requests
    Completed 200 requests
    Completed 300 requests
    Completed 400 requests
    Completed 500 requests
    Completed 600 requests
    Completed 700 requests
    Completed 800 requests
    Completed 900 requests
    Finished 1000 requests
    
    
    Server Software:        LiteSpeed
    Server Hostname:        192.168.56.101
    Server Port:            80
    
    Document Path:          /test.php
    Document Length:        63 bytes
    
    Concurrency Level:      200
    Time taken for tests:   0.42640 seconds
    Complete requests:      1000
    Failed requests:        0
    Write errors:           0
    Keep-Alive requests:    1000
    Total transferred:      346952 bytes
    HTML transferred:       63000 bytes
    Requests per second:    23452.16 [#/sec] (mean)
    Time per request:       8.528 [ms] (mean)
    Time per request:       0.043 [ms] (mean, across all concurrent requests)
    Transfer rate:          7926.83 [Kbytes/sec] received
    
    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   1.6      0       6
    Processing:     0    2   1.0      3       5
    Waiting:        0    2   1.0      3       5
    Total:          0    3   1.5      3      11
    
    Percentage of the requests served within a certain time (ms)
      50%      3
      66%      3
      75%      3
      80%      3
      90%      6
      95%      7
      98%      8
      99%      8
     100%     11 (longest request)
    testnocache.php

    Code:
    ab -k -n 1000 -c 200 192.168.56.101/testnocache.php
    This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Copyright 2006 The Apache Software Foundation, http://www.apache.org/
    
    Benchmarking 192.168.56.101 (be patient)
    Completed 100 requests
    Completed 200 requests
    Completed 300 requests
    Completed 400 requests
    Completed 500 requests
    Completed 600 requests
    Completed 700 requests
    Completed 800 requests
    Completed 900 requests
    Finished 1000 requests
    
    
    Server Software:        LiteSpeed
    Server Hostname:        192.168.56.101
    Server Port:            80
    
    Document Path:          /testnocache.php
    Document Length:        63 bytes
    
    Concurrency Level:      200
    Time taken for tests:   0.184258 seconds
    Complete requests:      1000
    Failed requests:        0
    Write errors:           0
    Keep-Alive requests:    1000
    Total transferred:      323000 bytes
    HTML transferred:       63000 bytes
    Requests per second:    5427.17 [#/sec] (mean)
    Time per request:       36.852 [ms] (mean)
    Time per request:       0.184 [ms] (mean, across all concurrent requests)
    Transfer rate:          1709.56 [Kbytes/sec] received
    
    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   1.5      0       6
    Processing:    20   28  14.6     21      89
    Waiting:       20   28  14.4     21      88
    Total:         20   29  15.7     21      89
    
    Percentage of the requests served within a certain time (ms)
      50%     21
      66%     26
      75%     27
      80%     28
      90%     59
      95%     67
      98%     72
      99%     83
     100%     89 (longest request)
    Both the test.php and testnocache.php files used contain

    Code:
    <?php
    header('CurrentTime: '.gmdate('D, d M Y H:i:s', time()).' GMT',true);
    
    echo "time()=" . time() . "<br>";
    echo "date()=" . date(DATE_RFC822) ."<br>";
    ?> 
    Apache httpd.conf contains

    Code:
    #disk cache
    <IfModule mod_cache.c>
    <IfModule mod_disk_cache.c>
    CacheRoot /lscache/   
    #CacheEnable disk /  
    </IfModule>
    </IfModule>
    LiteSpeed 4.0.19 cache policy is set to

    Code:
    Enable Cache: No
    Cache Request with Query String: Yes
    Cache Request with Cookie: Yes
    Cache Response with Cookie: Yes
    Ignore Request Cache-Control: Yes
    Ignore Response Cache-Control: Yes
  12. mistwang

    mistwang LiteSpeed Staff

    It may have something to do with how "ab" works: it sends more than 1000 requests during the test and closes all connections once the number of responses reaches 1000.
    It is better to do one test run of

    ab -k -n 5000 -c 200 ...

    than to run ab 5 times with "-n 1000".

    In theory the results should be the same, but they are not, due to the way "ab" works.
  13. eva2000

    eva2000 Member

    I see, maybe httperf would be better?

    All the above results are without any PHP opcode caching, so maybe also try with xcache and eAccelerator? Part 3 :D
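
    For the httperf idea, a hypothetical invocation roughly in line with the ab runs above might look like this (parameters are illustrative only, not something that was actually run):

    Code:
    # 200 new connections per second, 1000 connections total, 1 request per connection
    httperf --server 192.168.56.101 --port 80 --uri /test.php \
            --rate 200 --num-conns 1000 --num-calls 1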
  14. mistwang

    mistwang LiteSpeed Staff

    Yes, httperf is better in this regard. However, httperf does rate limiting, unlike "ab" which sends requests as fast as possible, so httperf may report lower req/s than "ab" if the rate limit cannot push the server to 100%.

    An opcode cache will help all web servers.

    There are other aspects that need to be taken into consideration besides raw req/s, like server load, memory usage, CPU utilization, etc. :)
  15. eva2000

    eva2000 Member

    Yeah, with Apache at >600 concurrency levels the CPU utilisation is in double digits by the end of the 5th ab run, while LiteSpeed hasn't budged much CPU load wise :D

    Maybe you need to come up with your own benchmark script called litespeedbench :D It could output CPU utilisation/load + memory usage as well as rps figures.
    Last edited: Jan 21, 2011
  16. eva2000

    eva2000 Member

    Not sure if this is directly related, but I also installed nginx 0.8.54 / php-fpm 5.3.5 for comparison, and it's even worse with nginx: on a test.php ApacheBench run at >=200 concurrency, the 2nd run already drops from 5500 rps to a consistent 330 rps.

    Looked in my messages log and found heaps of entries like:

    Code:
    tail -500 /var/log/messages
    
    Jan 23 02:18:22 localhost kernel: printk: 206 messages suppressed.
    Jan 23 02:18:22 localhost kernel: ip_conntrack: table full, dropping packet.
    Code:
    cat /proc/sys/net/ipv4/ip_conntrack_max
    32760
    so I raised the value in /etc/sysctl.conf:

    Code:
    #net.ipv4.ip_conntrack_max=32760
    net.ipv4.ip_conntrack_max=262144
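    To apply the new limit without a reboot (assuming the line above is already in /etc/sysctl.conf):

    Code:
    sysctl -p
    # or set it directly for the running kernel:
    sysctl -w net.ipv4.ip_conntrack_max=262144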
    No more ip_conntrack messages, but subsequent back-to-back ApacheBench runs still end with lower requests per second on the test.php test: nginx is worst (can't handle >=200 concurrency), then Apache (past 600 concurrency), while LiteSpeed handles it better (up to 1000 concurrency).

    Maybe it needs some tuning http://timanovsky.wordpress.com/2009/04/10/tuning-linux-firewall-connection-tracker-ip_conntrack/

    To confirm the issue I ran some static test.txt file tests: nginx was fine, while Apache still exhibited the problem at higher concurrency levels http://www.litespeedtech.com/support/forum/showthread.php?t=4617

    Edit: solved the nginx PHP issue by changing php-fpm from TCP to a unix socket :)
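
    Roughly what that change looks like (a sketch; the socket path and config locations are assumptions, and socket permissions may need adjusting):

    Code:
    ; php-fpm pool config - listen on a unix socket instead of TCP
    listen = /var/run/php-fpm.sock

    # nginx vhost - point fastcgi_pass at the socket instead of 127.0.0.1:9000
    fastcgi_pass unix:/var/run/php-fpm.sock;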
    Last edited: Jan 24, 2011
  17. mistwang

    mistwang LiteSpeed Staff

    One of the factors is what "ab" does at the end of the benchmark test; it can affect one web server more than the others. To reduce the impact of this, just combine multiple runs into one large run.
    Another factor is the way the web server works: prefork, threaded or event-driven. Prefork has poor scalability, for sure.
  18. eva2000

    eva2000 Member

    When you say multiple runs, do you mean

    2 runs:
    ab & ab

    4 runs:
    ab & ab & ab & ab

    ?

    Curious about the performance of my VirtualBox guest server, so I ran UnixBench 5.1.2 as per http://www.webhostingtalk.com/showthread.php?t=924581 to see how it compares.

    • VirtualBox CentOS 5.5 64bit Guest
    • Xeon W3540 @3408Mhz (Assigned 2 cores only) - linux reports @3375Mhz
    • 64MB allocated GPU memory
    • 1GB out of 6GB DDR3-1550Mhz
    • 20GB out of 640GB Samsung SATAII

    Code:
    ========================================================================
       BYTE UNIX Benchmarks (Version 5.1.2)
    
       System: localhost.localdomain: GNU/Linux
       OS: GNU/Linux -- 2.6.18-194.32.1.el5 -- #1 SMP Wed Jan 5 17:52:25 EST 2011
       Machine: x86_64 (x86_64)
       Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
       CPU 0: Intel(R) Xeon(R) CPU W3540 @ 2.93GHz (6750.1 bogomips)
              Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
       CPU 1: Intel(R) Xeon(R) CPU W3540 @ 2.93GHz (6733.8 bogomips)
              Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
       21:45:27 up  1:11,  2 users,  load average: 0.00, 0.01, 0.00; runlevel 5
    
    ------------------------------------------------------------------------
    Benchmark Run: Sun Jan 23 2011 21:45:27 - 22:09:32
    2 CPUs in system; running 1 parallel copy of tests
    
    Dhrystone 2 using register variables       19101079.5 lps   (10.0 s, 7 samples)
    Double-Precision Whetstone                     3771.0 MWIPS (10.0 s, 7 samples)
    Execl Throughput                               4394.2 lps   (30.0 s, 2 samples)
    Pipe Throughput                             1457951.0 lps   (10.0 s, 7 samples)
    Pipe-based Context Switching                  29373.4 lps   (10.0 s, 7 samples)
    Process Creation                              13113.2 lps   (30.0 s, 2 samples)
    Shell Scripts (1 concurrent)                   7997.6 lpm   (60.0 s, 2 samples)
    Shell Scripts (16 concurrent)                   901.5 lpm   (60.0 s, 2 samples)
    Shell Scripts (8 concurrent)                   1756.9 lpm   (60.0 s, 2 samples)
    System Call Overhead                        1211587.2 lps   (10.0 s, 7 samples)
    
    System Benchmarks Partial Index              BASELINE       RESULT    INDEX
    Dhrystone 2 using register variables         116700.0   19101079.5   1636.8
    Double-Precision Whetstone                       55.0       3771.0    685.6
    Execl Throughput                                 43.0       4394.2   1021.9
    Pipe Throughput                               12440.0    1457951.0   1172.0
    Pipe-based Context Switching                   4000.0      29373.4     73.4
    Process Creation                                126.0      13113.2   1040.7
    Shell Scripts (1 concurrent)                     42.4       7997.6   1886.2
    Shell Scripts (16 concurrent)                     ---        901.5      ---
    Shell Scripts (8 concurrent)                      6.0       1756.9   2928.1
    System Call Overhead                          15000.0    1211587.2    807.7
                                                                       ========
    System Benchmarks Index Score (Partial Only)                          916.9
    
    ------------------------------------------------------------------------
    Benchmark Run: Sun Jan 23 2011 22:09:32 - 22:33:38
    2 CPUs in system; running 2 parallel copies of tests
    
    Dhrystone 2 using register variables       37996013.0 lps   (10.0 s, 7 samples)
    Double-Precision Whetstone                     7678.6 MWIPS (9.7 s, 7 samples)
    Execl Throughput                               8655.7 lps   (29.9 s, 2 samples)
    Pipe Throughput                             2863704.1 lps   (10.0 s, 7 samples)
    Pipe-based Context Switching                 739539.5 lps   (10.0 s, 7 samples)
    Process Creation                              25690.5 lps   (30.0 s, 2 samples)
    Shell Scripts (1 concurrent)                  14271.1 lpm   (60.0 s, 2 samples)
    Shell Scripts (16 concurrent)                  1001.8 lpm   (60.1 s, 2 samples)
    Shell Scripts (8 concurrent)                   1990.0 lpm   (60.0 s, 2 samples)
    System Call Overhead                        2279039.9 lps   (10.0 s, 7 samples)
    
    System Benchmarks Partial Index              BASELINE       RESULT    INDEX
    Dhrystone 2 using register variables         116700.0   37996013.0   3255.9
    Double-Precision Whetstone                       55.0       7678.6   1396.1
    Execl Throughput                                 43.0       8655.7   2012.9
    Pipe Throughput                               12440.0    2863704.1   2302.0
    Pipe-based Context Switching                   4000.0     739539.5   1848.8
    Process Creation                                126.0      25690.5   2038.9
    Shell Scripts (1 concurrent)                     42.4      14271.1   3365.8
    Shell Scripts (16 concurrent)                     ---       1001.8      ---
    Shell Scripts (8 concurrent)                      6.0       1990.0   3316.7
    System Call Overhead                          15000.0    2279039.9   1519.4
                                                                       ========
    System Benchmarks Index Score (Partial Only)                         2226.9
    
    top stats straight after unixbench run
    Code:
    top - 22:34:03 up  2:00,  2 users,  load average: 6.78, 7.13, 4.75
    Tasks: 124 total,   1 running, 123 sleeping,   0 stopped,   0 zombie
    Cpu(s): 14.4%us, 13.9%sy,  0.0%ni, 70.8%id,  0.4%wa,  0.0%hi,  0.4%si,  0.0%st
    Mem:   1026824k total,   688652k used,   338172k free,    31276k buffers
    Swap:  2064376k total,        0k used,  2064376k free,   457076k cached
    Code:
    processor       : 0
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 26
    model name      : Intel(R) Xeon(R) CPU           W3540  @ 2.93GHz
    stepping        : 5
    cpu MHz         : 3375.078
    cache size      : 6144 KB
    physical id     : 0
    siblings        : 2
    core id         : 0
    cpu cores       : 2
    apicid          : 0
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 5
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc pni ssse3 lahf_lm
    bogomips        : 6750.15
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 36 bits physical, 48 bits virtual
    power management:
    
    processor       : 1
    vendor_id       : GenuineIntel
    cpu family      : 6
    model           : 26
    model name      : Intel(R) Xeon(R) CPU           W3540  @ 2.93GHz
    stepping        : 5
    cpu MHz         : 3375.078
    cache size      : 6144 KB
    physical id     : 0
    siblings        : 2
    core id         : 0
    cpu cores       : 2
    apicid          : 0
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 5
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc pni ssse3 lahf_lm
    bogomips        : 6733.77
    clflush size    : 64
    cache_alignment : 64
    address sizes   : 36 bits physical, 48 bits virtual
    power management:
    Last edited: Jan 23, 2011
  19. mistwang

    mistwang LiteSpeed Staff

    I mean that instead of running "ab" 5 times with -n 1000, you will get a more accurate result if you run "ab" once with -n 5000.
    "ab" will send maybe around 2000 requests when you do "-n 1000 -c 1000"; after it receives the first 1000 responses it just closes all connections. However, on the web server side those extra requests are still being processed, which is why the first "ab" run gets a better result.
  20. eva2000

    eva2000 Member

    I see. Will test with a higher request count in future. I'll install a test vB 3.8.6 PL1 forum now for more real-life testing :)
