Process Soft Limit/Process Hard Limit in External App

bobykus

Well-Known Member
#1
Hello

I run PHP as a suEXEC daemon and would really like to limit the number of processes per user.
So what I have:

Process Soft Limit: 28
Process Hard Limit: 30

Unfortunately, for some reason there is a user who managed to launch

ps axufwww | grep 242258 | wc -l
80

like this:


242258 10867 1.3 0.0 536076 9872 ? RN 11:55 0:00 | \_ lsphp56:cal/home/some.user/some.site/index.php

How is this possible?
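For what it's worth, grep on the full ps output also counts the grep process itself and any other line that happens to contain 242258, so a more direct per-UID count (assuming 242258 is the numeric UID) would be something like:

ps --no-headers -u 242258 | wc -l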
 

NiteWave

Administrator
#6
The user can have 2 x 30 = 60 processes.
Please don't set
Process Soft Limit: 28
Process Hard Limit: 30
It's too low. These mean the limit on the number of lsphp5 processes for all users; if the value is too low, your settings may be ignored.

Why the limit is 60 but the user can launch 80 processes needs further investigation.
For example, if index.php forks many processes internally, like
system("echo aaa | echo bbb | echo ccc | sleep 1");
(I haven't tried it), it may end up with more than 60 processes.
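A minimal sketch of the kind of script I mean (purely hypothetical, not taken from your site, and assuming exec() is not disabled for that account):

<?php
// Hypothetical fork-happy index.php: each exec() detaches a process
// (here just a "sleep") that keeps running after the request finishes,
// so it no longer maps 1:1 to a web server connection.
for ($i = 0; $i < 20; $i++) {
    exec('sleep 60 > /dev/null 2>&1 &');
}
echo "spawned 20 background processes";

Each of those sleeps still runs under the site user's UID, so they show up in your ps count even though they are presumably no longer tracked as connections by the server.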
 

bobykus

Well-Known Member
#7
Dear NiteWave,

The manual says

Limits the total number of processes that can be created on behalf of a user. All existing processes will be counted against this limit, not just new processes to be started. If the limit is set to 10, and there are more than 10 processes running under one user, then no new process can be started by the web server for that user (through suEXEC).

Why do you say it is for all users, not per user?
Also

The main purpose of this limit is to prevent "fork bomb" attacks or excessive usage, not to impose a limit on normal usage (and this setting will be ignored by the server if it is set below certain levels).

And forking is possible with no limits?

I really need to limit them in terms of the number of processes. /etc/security/limits.conf does not work, and neither do cgroups.
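For reference, the kind of nproc entry I mean in /etc/security/limits.conf (username and values here are just an example):

some.user  soft  nproc  28
some.user  hard  nproc  30

If the lsphp children are forked directly by the suEXEC daemon rather than through a PAM session, pam_limits would never get a chance to apply these values, which might be why it has no visible effect (just a guess).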
 

NiteWave

Administrator
#8
>Why do you say it is for all users, not per user?
I think you're right, it should be per user.

Here's an accurate answer to your question:
First of all, you should not use the process limit for that purpose; it will break a lot of things.

The number of PHP processes each user can have is limited by "PHP suEXEC Max Conn": the server will not make more than that number of concurrent connections for one user. However, when "noabort" is set to allow processes to keep running in the background, the server's count of concurrent connections becomes inaccurate. The server may close a connection while the associated PHP process is still running, and that process no longer counts towards the limit. I think this is most likely how it happened on your server: a buggy PHP script with an infinite loop, running in "noabort" mode, could end up like that if the user keeps hitting refresh to reload a hanging page. If "noabort" is not set, the server would kill that process when the connection is closed, but it is difficult to decide when to use "noabort" and when not to.
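To make that scenario concrete, the kind of hanging script I mean could be as simple as the sketch below (hypothetical code; ignore_user_abort() and set_time_limit() just keep the PHP side alive, while the "noabort" behaviour itself comes from the server configuration, not from the script):

<?php
// Hypothetical hanging script: with "noabort" in effect the server does not
// kill it when the browser disconnects, so every refresh can leave another
// lsphp process running that is no longer counted as a connection.
ignore_user_abort(true);  // keep running after the client goes away
set_time_limit(0);        // disable PHP's execution time limit
while (true) {
    sleep(1);             // never finishes on its own
}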

PHP suEXEC daemon mode makes it more difficult, as the PHP process running as root cannot use a very low process limit.

You can consider CloudLinux LVE if you have to do that.
>And forking is possible with no limits?
>I really need to limit them in terms of the number of processes. /etc/security/limits.conf does not work, and neither do cgroups.
"PHP suEXEC Max Conn" should work well in most normal environments to throttle a "fork bomb", but not in all cases, since it is application-level throttling anyway.

As suggested above, CloudLinux is used for this purpose: OS-level protection.
If CloudLinux is not installed, it looks like csf has such a feature as well (rough csf.conf settings sketched below the list):
http://configserver.com/cp/csf.html
  • Excessive user processes reporting
  • Excessive user process usage reporting and optional termination
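If I remember the csf.conf process-tracking settings correctly (worth double-checking against your csf version), the relevant knobs look roughly like this:

PT_USERPROC = "30"   # report when a single account exceeds this many processes
PT_USERKILL = "0"    # set to "1" to also terminate the offending processes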
 

bobykus

Well-Known Member
#9
>PHP suEXEC daemon mode makes it more difficult, as the PHP process running as root cannot use a very low process limit.

What do you mean by this? I see only one PHP process running as root for each External App.
What exactly is the difficulty? Only one process has to be root; the rest should run under user privileges.

All the tools to limit users are available in plain CentOS; for example, you have pam_limits,
and Apache with mod_fcgid can limit per vhost via FcgidMaxProcessesPerClass, AFAIK.
Why is it so complicated for LiteSpeed then?
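For comparison, the mod_fcgid side of that is roughly the following (the value is only an example):

<IfModule mod_fcgid.c>
    # cap the number of FastCGI processes per process class (roughly per vhost/wrapper)
    FcgidMaxProcessesPerClass 30
</IfModule>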

What does csf have to do with LiteSpeed and "the total number of processes that can be created on behalf of a user"?
We do not need any reports about usage or processes; all we need is for the option mentioned in the docs to work
as expected.

The manual clearly says "Limits the total number of processes that can be created on behalf of a user."
under the External App configuration. If it does not work as described, it is a bug, no matter whether it is in the code
or by design. In my case it is a pretty serious one. It is unacceptable to recommend that I use 3rd-party
tools to fix it. Just unfair... Maybe this was the very feature LiteSpeed was chosen for.
And now you say it does not work... Not good at all...
 

bobykus

Well-Known Member
#11
This is the help bubble from the LiteSpeed (4.2.18, 2-CPU license) settings under Configuration > Server > External Apps > Process Soft (and Hard) Limit:

Limits the total number of processes that can be created on behalf of a user. All existing processes will be counted against this limit, not just new processes to be started. If the limit is set to 10, and there are more than 10 processes running under one user, then no new process can be started by the web server for that user (through suEXEC).

The main purpose of this limit is to prevent "fork bomb" attacks or excessive usage, not to impose a limit on normal usage (and this setting will be ignored by the server if it is set below certain levels). Make sure to leave enough head room. This can be set at the server level or at an individual external application level. The server-level limit will be used if it is not set at an individual application level. The operating system's default setting will be used if this value is 0 or absent at both levels.




Yes, I need a way to prevent "fork bomb" attacks or excessive usage, no more, no less...
 