specify an alternate conf directory

#1
Hi All!

Is it possible to specify an alternate conf directory or httpd_config.xml? I would like to keep the config files on NFS and the LiteSpeed installation on the local disk.

Just like:

lshttpd -f /exports/conf/lsws/httpd_config.xml

or something like:

lshttpd -c /exports/conf/lsws


thx
ah
 
#3
I don't really need the whole conf directory on NFS, but it is important for me that the vhTemplateList and virtualHostList live on NFS, because I want to have dataless nodes in my web cluster.

Is there a way to put the vhTemplateList and virtualHostList into an external file, put that on an NFS share, and include it into httpd_config.xml?

I would need something like the Apache Include statement.
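In Apache this would look something like the following (the path is just an example):

Include /exports/conf/lsws/vhosts/*.conf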

BTW: Are there any options to consider when putting the DocRoot of LiteSpeed
on NFS (mmap, sendfile, ...)?

Greets from Austria
Alex
 

mistwang

LiteSpeed Staff
#4
The location of each individual vhost template and vhost configuration file is configurable; you don't have to put them under lsws/conf, you can put them anywhere.
The list itself is stored in lsws/conf/httpd_config.xml and cannot be moved.
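For example, an entry in the virtualHostList can point its configFile at an NFS path. A sketch from memory (the vhost name and paths are made up; check the element names against your own httpd_config.xml):

<virtualHostList>
  <virtualHost>
    <name>example</name>
    <vhRoot>/exports/vhosts/example/</vhRoot>
    <configFile>/exports/conf/vhosts/example.xml</configFile>
  </virtualHost>
</virtualHostList>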

There is a performance issue with serving files from an NFS partition with single-threaded, event-driven web servers, due to the high latency of blocking I/O operations. It is not a LiteSpeed-specific issue, though. AIO for NFS is not supported by Linux, and the only way to deal with it is to use an N-CPU license to compensate; I think a 4-CPU license and above can bring good performance out of this kind of setup.
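To illustrate the blocking part, here is a minimal Perl sketch (the /mnt/nfs path is made up): on Linux, O_NONBLOCK is silently ignored for regular files, so a read from a slow NFS-backed file still puts the process to sleep, and in an event-driven server every other connection waits with it.

#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(O_RDONLY O_NONBLOCK);
use Time::HiRes qw(time);

# O_NONBLOCK has no effect on regular files; it only matters for
# pipes, sockets, and other non-regular files.
sysopen(my $fh, '/mnt/nfs/testfile1', O_RDONLY | O_NONBLOCK)
    or die "open: $!";

my $t0 = time();
# Despite O_NONBLOCK, this sysread() blocks until the NFS server
# answers; there is no EAGAIN for regular files.
my $n = sysread($fh, my $buf, 1024 * 1024);
defined $n or die "read: $!";
printf "read %d bytes in %.3f s\n", $n, time() - $t0;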
 
#5
Is this a general problem with the NFS client or the server on Linux? We
would like to use LiteSpeed with Debian etch and our NetApp filer.

Maybe it works better in combination with FreeBSD/kqueue?
 

mistwang

LiteSpeed Staff
#6
It is with the NFS client implementation on Linux. I am not sure whether FreeBSD will do better or not; I think most NFS clients do not implement non-blocking I/O. You can give it a try. If you would like to try out a 4-CPU or 8-CPU license, just let us know.
 
#7
I have just looked at the kernel source package linux-source-2.6.22.

See fs/nfs/file.c:

/*
* Expire cache on write to a file by Wai S Kok (Oct 1994).
*
* Total rewrite of read side for new NFS buffer cache.. Linus.
*
* nfs regular file handling functions
*/
#include <linux/time.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/fcntl.h>
#include <linux/stat.h>
#include <linux/nfs_fs.h>
#include <linux/nfs_mount.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/pagemap.h>
#include <linux/smp_lock.h>
#include <linux/aio.h>

They include the linux/aio.h header.
Maybe Linux supports AIO for NFS?

Is there a way to detect if the lshttpd process runs with AIO on NFS?
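One thing I could try is tracing the server for the Linux AIO syscalls, something like:

strace -f -e trace=io_submit,io_getevents -p <lshttpd pid>

If those calls never show up while files are being served, the reads are plain blocking I/O.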

Thanks for your help!

Greets from Austria
Alex
 
#9
I made a simple test in my lab:

I installed LiteSpeed 3.3.3 with a trial.key on my Debian etch server.
The DocumentRoot of the vhost is located on a NetApp NFS share.

Then I created six random files, each 256 MB in size (testfile1 - testfile6).
After that I started to download the six files from three different clients with wget.
On client 4 I started to test LiteSpeed with ab -c 100 -n 100 http://server/test.html
(test.html is just a simple HTML page). The test took 0.7 seconds.
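(The random files were created with something along the lines of dd if=/dev/urandom of=testfile1 bs=1M count=256, repeated for testfile2 through testfile6.)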

If I understood the AIO-on-NFS problem right, the 256 MB testfile downloads should
block the two lshttpd processes, so that no other requests would be possible?

Thank you for your help!

Alex
 

mistwang

LiteSpeed Staff
#10
It will slow lshttpd down a little bit, but not block it completely.
What is the I/O wait value on the test server during the test? If it is not very high, it won't affect much.

How long does the "ab" test take when there are no large downloads?
 
#11
Sorry, I forgot to mention: without the large file downloads the ab run took the same time to complete (0.7 seconds).

I will check the iowait on Monday.
 
#12
I did the same test again and watched the mpstat output.

During my test the iowait value was not higher than 2%.

We are currently testing some web performance software to simulate real-world
traffic for a shared hosting environment; maybe this will give more accurate results.

Is there a way to see the iowait of a specific process?
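Maybe something like pidstat -d -p $(pidof lshttpd) 1 (from a newer sysstat) or iotop could show per-process I/O, assuming the kernel is recent enough?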
 
#14
Yesterday we did another test of the LiteSpeed/NFS combination.
We wrote a little Perl script with threads.

It works like this:

There is a big array with 20,000 URLs, pointing at real-world static HTML/CSS/image/JS files which are located on the NetApp filer and served by LiteSpeed.
Then we start 60 threads, each with a while loop.
Within the loop we fetch a random URL from the array with the Perl LWP lib.
After that the thread sleeps for a random time between 0.1 and 1 second.
So we get a lot of parallel requests to different files to test the behaviour of LiteSpeed on NFS; a sketch of the script follows below.
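Roughly, it looks like this (a sketch; the urls.txt file name is made up, the thread count and sleep window are the real values):

#!/usr/bin/perl
use strict;
use warnings;
use threads;
use LWP::UserAgent;
use Time::HiRes qw(sleep);

# One URL per line, pointing at static files served by LiteSpeed.
open(my $fh, '<', 'urls.txt') or die "urls.txt: $!";
chomp(my @urls = <$fh>);
close $fh;

sub worker {
    my $ua = LWP::UserAgent->new(timeout => 30);
    while (1) {
        my $url = $urls[ int rand @urls ];   # pick a random URL
        $ua->get($url);                      # fetch it
        sleep(0.1 + rand(0.9));              # pause 0.1 - 1.0 s
    }
}

# 60 parallel workers hammering the server until interrupted.
my @threads = map { threads->create(\&worker) } 1 .. 60;
$_->join for @threads;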

The average request rate for LiteSpeed was 600 requests/s and the throughput was
between 60 and 100 Mbit/s.

We ran this test with the files on NFS and, for comparison, on the local disk with ext3.
We did not really measure the performance, but we watched the iowait value.

Here are the average results:

NFS: 15% iowait
ext3 local disk: 75% iowait

greets
alex
 

mistwang

LiteSpeed Staff
#15
The result with NFS looks pretty good. I think it should be good enough for a lot of people; 100 Mbit/s is a lot of traffic.
What kind of LiteSpeed license are you using? The 2-CPU trial? If you would like to test LiteSpeed with a higher-level CPU license, please contact sales @ ...
Also, does the throughput increase if you increase the number of threads in the Perl client?

As far as I know, NetApp's NFS server implementation is the best.
 
#16
I have used the trial version with 2 lshttpd processes, and yes, I think this test showed us that LiteSpeed really suits our needs!

We are currently building our web cluster infrastructure, and we will probably start with smaller servers and the 2-CPU license.

I would like to run the test again with more threads, but our NetApp is currently in another location.

Thank you for your help in this long thread :)

Alex
 