Django benchmarks with Apache/mod_python and LiteSpeed/FastCGI.


I've always been a big fan of LiteSpeed Web Server and last weekend I did a bit of benchmarking to see how well it works with my web development framework of choice: Django. I'm not one for small talk so let's get right down to business.

The Testing Environment

Tests were performed on a server with dual 2.4 GHz Xeon processors and 4 GB of memory. The following software was either installed for or used by both the Apache/mod_python stack and the LiteSpeed/FastCGI stack:

  • ApacheBench 2.0.41
  • Django 0.96
  • PostgreSQL 8.1.4
  • psycopg 2.0.2
  • Python 2.4.3

The Apache/mod_python Stack

The Apache/mod_python stack consisted of Apache 2.0.52 and mod_python 3.2.8. Apache was compiled with the prefork MPM and the following modules:

  • core.c
  • http_core.c
  • mod_so.c
  • prefork.c

And the following modules were loaded dynamically:

  • mod_env
  • mod_log_config
  • mod_mime
  • mod_python

I did not change the value of the ServerLimit directive, so Apache was run with 1 control process and 2 child processes. Surely someone will ask to see my httpd.conf, so here it is:

Listen 2964
ServerLimit 2
ServerRoot "/home2/iaihmb/webapps/django_apache/apache2"
LoadModule env_module modules/
LoadModule log_config_module modules/
LoadModule mime_module modules/
LoadModule python_module modules/
User iaihmb
Group iaihmb
LogFormat "%h %l %u %t \"%r\" %>s %b" CLF
CustomLog logs/access.log CLF
ErrorLog logs/error.log
<VirtualHost *:2964>
    PythonDebug Off
    PythonHandler django.core.handlers.modpython
    PythonPath "['/home2/iaihmb/webapps/django_apache/'] + sys.path"
    SetEnv DJANGO_SETTINGS_MODULE myproject.settings
    SetHandler python-program
</VirtualHost>

The LiteSpeed/FastCGI Stack

In addition to the standard edition of LiteSpeed Web Server 3.1.1, the LiteSpeed/FastCGI stack consisted of revision 2349 of flup. Even though the standard edition of LiteSpeed Web Server is limited to 150 concurrent connections, the real bottleneck was flup, which, according to one of the developers of LiteSpeed, is limited to 50 concurrent connections. With that said, I still think that you'll be impressed.

LiteSpeed was configured to create the Unix domain socket (UDS), keep the number of FastCGI processes at 3 or fewer, and start the FastCGI processes with this script:


from os import environ

from django.core.servers.fastcgi import runfastcgi

environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'

runfastcgi(daemonize='false', minspare=0, maxchildren=3, maxspare=2)

Before and between each test run, the server was restarted, its logs were rotated, and a single request was made so that any initial overhead wouldn't affect the test results. Each test was run 5 times, and each test run consisted of a total of 500 requests from ApacheBench with 1, 125, 250, 375, or 500 concurrent requests.
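The post doesn't show the exact ApacheBench invocations, but given the request counts and concurrency levels above they presumably looked something like this sketch (the hostname and URL path are guesses; the port comes from the Listen directive):

```python
# Hypothetical reconstruction of the benchmark commands for one test view.
# 500 total requests per run, at each of the five concurrency levels.
concurrency_levels = [1, 125, 250, 375, 500]
url = "http://localhost:2964/test1/"  # path is an assumption, not from the post

commands = ["ab -n 500 -c %d %s" % (c, url) for c in concurrency_levels]

for cmd in commands:
    print(cmd)
```

Each of these commands would then be repeated for all 5 runs of a test, with the server restarted and warmed up in between.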

Test 1 - A dynamic request.

This one is pretty straightforward:

from time import ctime

from django.http import HttpResponse

def test1(request):
    return HttpResponse(ctime())

Test 2 - A dynamic request which retrieves data from the database.

Ten 2,500-character strings were stored in the database and retrieved at random:

from random import choice

from django.http import HttpResponse
from myproject.tests.models import Test

def test2(request):
    id = choice(range(1, 11))
    # Field name is assumed; the post doesn't show the Test model.
    return HttpResponse(Test.objects.get(pk=id).content)

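For completeness, here's a sketch of the URLconf wiring these two views would need, using Django 0.96's django.conf.urls.defaults; the URL patterns themselves are guesses, not taken from the post:

```python
from django.conf.urls.defaults import *

# Hypothetical urls.py for myproject; only the view module path
# (myproject.tests, per the import in test2) is from the post.
urlpatterns = patterns('myproject.tests.views',
    (r'^test1/$', 'test1'),
    (r'^test2/$', 'test2'),
)
```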

Even though I have limited experience with LiteSpeed Web Server, the results were pretty much as I expected. I can't wait to deploy one of my Django projects with LiteSpeed Web Server to see how well it does in an actual production environment with a more typical Django project.

Update: Shortly after I finished the benchmarks, one of the developers of LiteSpeed Web Server let me in on a little secret: LiteSpeed/AJP should outperform LiteSpeed/FastCGI if I use Allan Saddi's AJP implementation in C. Great, just great. :) The next time I've got a couple of hours to spare, I'll follow up with some LiteSpeed/AJP benchmarks.