lsphp and high i/o

felosi

Well-Known Member
#1
I know I posted about this before. At the time I thought it might have been a bad drive in the server I was on, but the more LiteSpeed servers I set up, the more I see this. For example, when I'm doing a system backup and disk I/O is a little high, lsphp will lag badly and sometimes won't even start until the disk-intensive task finishes or slows down.

Likewise, on cPanel servers, lsphp will lag when quotaon runs during account creation or modification. Watching top from inside the box, as soon as the disk-intensive task settles down, tons of lsphp processes start back up; I guess they were stalled out and only get going again once they finally get some room.

The load doesn't necessarily have to be high for this to happen. I've seen it with as little as 40% I/O wait and a load as low as 3.

Setting the priority to -5 helps some, but lsphp still lags during backups and the like.

Has anyone else seen this? And what is the ideal priority setting to run LiteSpeed at to keep it from happening?
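
For reference, this is roughly how I check and adjust the priority on a running box (just a quick sketch; the PID is a placeholder and negative nice values need root):

# show the current nice value of the LiteSpeed processes
ps -C lshttpd,lsphp -o pid,ni,comm

# bump one of them to -5 at runtime
renice -n -5 -p <pid>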
 

felosi

Well-Known Member
#2
OK, a few minutes after I wrote this thread I went to transfer some accounts to my server. It got to restoring files on one of them and just completely lagged out. lsphp would not serve any sites.

Here is a look at top right as it was happening:

top - 16:06:30 up 6 days, 17:42, 1 user, load average: 5.69, 5.94, 2.87
Tasks: 200 total, 1 running, 199 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.5%us, 0.5%sy, 0.9%ni, 96.9%id, 1.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 6111300k total, 5376548k used, 734752k free, 37624k buffers
Swap: 1052248k total, 1380k used, 1050868k free, 2620896k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
24764 optix 9 -6 291m 28m 14m S 2 0.5 0:01.35 lsphp
2475 mysql 15 0 784m 597m 3760 S 1 10.0 264:18.56 mysqld
4027 root 34 19 62228 19m 2044 S 1 0.3 4:49.82 cpanellogd
24748 nobody 9 -6 49608 32m 1284 S 1 0.5 0:00.92 lshttpd
24754 topsite 9 -6 285m 15m 7476 S 1 0.3 0:01.26 lsphp
24685 root 9 -6 49432 32m 1644 S 0 0.6 0:06.37 lshttpd
24694 root 15 0 12728 1340 912 R 0 0.0 0:00.85 top
24747 nobody 9 -6 49544 32m 1272 S 0 0.5 0:00.66 lshttpd
1 root 15 0 10352 724 596 S 0 0.0 0:05.73 init
2 root RT 0 0 0 0 S 0 0.0 0:01.18 migration/0
3 root 34 19 0 0 0 S 0 0.0 0:00.05 ksoftirqd/0
4 root RT 0 0 0 0 S 0 0.0 0:00.40 migration/1
5 root 34 19 0 0 0 S 0 0.0 0:00.20 ksoftirqd/1
6 root RT 0 0 0 0 S 0 0.0 0:00.73 migration/2
7 root 34 19 0 0 0 S 0 0.0 0:00.70 ksoftirqd/2
8 root RT 0 0 0 0 S 0 0.0 0:00.70 migration/3
9 root 34 19 0 0 0 S 0 0.0 0:00.67 ksoftirqd/3

I don't understand why it was doing that, because I/O wait was barely high. Whatever is happening, lsphp will not start during these periods of disk usage, even relatively light usage.
Priority was set at -6 for lshttpd and lsphp during this time.
 

mistwang

LiteSpeed Staff
#3
Unfortunately, the I/O scheduler in Linux does not follow process priority. An I/O-intensive task will definitely affect other processes that need to perform disk I/O, even if only a little. So it is better to assign dedicated disks to I/O-intensive tasks.

Have you mounted the partition with "noatime"? It should help.
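
A rough sketch of what that looks like (the device, mount point, and filesystem type here are only examples, adjust to your setup):

# remount an existing partition without access-time updates
mount -o remount,noatime /home

# make it permanent in /etc/fstab
/dev/sda3  /home  ext3  defaults,noatime  0 2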

You can also experiment with where the PHP opcode cache is stored; moving it to a RAM disk might help.
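
Something along these lines, assuming a disk-backed opcode cache such as eAccelerator (the path and size are only examples; a cache like APC lives in shared memory and would not benefit):

# mount a small tmpfs (RAM-backed) filesystem for the cache
mkdir -p /var/cache/eaccelerator
mount -t tmpfs -o size=128m tmpfs /var/cache/eaccelerator

# then point the cache there in php.ini
eaccelerator.cache_dir = "/var/cache/eaccelerator"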
 

felosi

Well-Known Member
#5
Update: mounting with noatime and setting the priority to 5 helps.

In a few months, once my users are over the last server move, I will migrate this server to SCSI. It seems that no matter how much processor and RAM you have, it does no good if the server can't read from the disk fast enough.
 

xing

LiteSpeed Staff
#6
Put this at the end of your GRUB kernel line.

"elevator=deadline"

It should help a bit and make the I/O scheduler more predictable. Otherwise, there is not much you can do: a slow disk subsystem is a slow disk subsystem, and it affects everything above it.
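
For example, the kernel line in /boot/grub/grub.conf would end up looking something like this (the kernel image and root device are only placeholders), and you can also check or switch the scheduler on a running kernel through sysfs, with sda as an example device:

kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 elevator=deadline

# check the active scheduler (the one in brackets)
cat /sys/block/sda/queue/scheduler

# switch it at runtime, no reboot needed
echo deadline > /sys/block/sda/queue/scheduler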
 