Which memory limit am I missing?

wanah

Well-Known Member
#1
Hello,

I've been running our LiteSpeed server in WorkerGroup mode since this morning with opcache disabled. There were no issues, so tonight I manually installed Zend OPcache with 128 MB per account and watched the error log.

I kept getting a few lines like this:

Code:
2013-11-16 18:53:47.793 [STDERR] fork() failed, please increase process limit: Cannot allocate memory
1) CloudLinux memory limits are disabled.
2) The PHP memory limit is currently set to 512M.
3) Zend OPcache has the following settings:

Code:
opcache.memory_consumption=128
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=4000
4) Total server memory consumption:

Code:
# free -g
             total       used       free     shared    buffers     cached
Mem:           251        196         55          0         33        110
-/+ buffers/cache:         52        199
5) LiteSpeed memory limits:

Security > CGI Settings:

Memory Soft Limit (bytes) : 150000M
Memory Hard Limit (bytes) : 180000M

External App :

Memory Soft Limit (bytes) : 122880M
Memory Hard Limit (bytes) : 163840M
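For reference, with the "M" suffix those limit values are megabytes, so they convert to gigabytes like this (a quick shell-arithmetic sketch using the four values just listed):

```shell
# Convert the LSWS memory limits above (M suffix = megabytes) to
# gigabytes, for direct comparison with the 'free -g' output.
for mb in 150000 180000 122880 163840; do
    echo "${mb}M = $((mb / 1024)) GB"
done
```

All four come out well above the roughly 52 GB the server was actually using.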

Do you have any idea which memory limit I'm hitting? Is it the 512 MB PHP limit? Is it a LiteSpeed limit? (All the LiteSpeed limits mentioned above are larger than the 52 GB that were in use at the time.) Or could it be a system limit?

Thanks.
 

NiteWave

Administrator
#2
>2013-11-16 18:53:47.793 [STDERR] fork() failed, please increase process limit: Cannot allocate memory

Please count the current number of lsphp5 processes:
# ps -ef | grep lsphp5 | wc

You may need to increase the following limits:
External App:
Process Soft Limit
Process Hard Limit
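One small caveat on that count: `ps -ef | grep lsphp5 | wc` includes the `grep` process itself in the total. A sketch that avoids this (the `[l]` bracket expression keeps grep's own command line from matching the pattern):

```shell
# Count lsphp5 processes without counting the grep command itself:
# grep's own argv contains "[l]sphp5", which the pattern does not match.
count=$(ps -ef | grep -c '[l]sphp5' || true)
echo "lsphp5 processes: $count"
```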
 

wanah

Well-Known Member
#4
Sunday morning, so the load is a bit lower...

Code:
# ps -ef|grep lsphp5|wc
    100     801    8458

So that's 100 processes...

My Process Soft and Hard Limits are high, though:

Process Soft Limit : 2500
Process Hard Limit : 3000

As for my system limits:

Code:
# ulimit -a
core file size          (blocks, -c) 1000000
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 2063089
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14335
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
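Since fork() is checked against the `max user processes` limit (and, under CloudLinux, the per-account LVE limits), one way to read that listing is to compare a user's live process count against the ceiling. A rough sketch, assuming GNU `ps`:

```shell
# Compare the current user's live process count against the
# 'max user processes' soft limit that governs fork().
limit=$(ulimit -u)
current=$(ps -u "$(id -un)" --no-headers 2>/dev/null | wc -l)
echo "processes: $current, limit: $limit"
```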
Is there something here that could be causing this error?

Could this be a PHP LSAPI limit for which I would need to add an environment variable?

I've currently got:

Code:
PHP_LSAPI_MAX_REQUESTS=1200
PHP_LSAPI_CHILDREN=1000
LSAPI_ALLOW_CORE_DUMP=0
LSAPI_MAX_PROCESS_TIME=600
Another thought I had was about where Zend OPcache stores its mmapped data. /tmp was only 2% used at the time, so it wasn't storing the extra 20 GB there... It was definitely using mmap, but maybe mmap has a mode where it stores a little data on disk?
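On that mmap question: OPcache's shared memory is anonymous mmapped memory, not files in /tmp, so nothing appears on disk. A back-of-the-envelope sketch of the worst case, under the assumption that every per-account parent process maps its own segment (the account count is taken loosely from the numbers in this thread):

```shell
# Rough worst case: each account's parent PHP process maps its own
# opcache.memory_consumption segment (figures assumed from this thread).
parents=100      # approximate number of per-account parent processes
per_seg_mb=128   # opcache.memory_consumption per account, in MB
echo "worst-case OPcache SHM: $((parents * per_seg_mb / 1024)) GB"
```

Even that worst case is around 12 GB of reserved (not necessarily resident) memory, so it would not account for 20 GB on its own.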

PS: Why does your forum cut off posts that contain any UTF-8 characters?

For example, the horizontal ellipsis (... as a single character, code point 8230). I use this character all the time, as Mac keyboards have an easy shortcut for it.
 
Last edited:

wanah

Well-Known Member
#5
OK, I just checked: I disabled opcache before posting here at 7 PM, then looked at my Apache logs and found three more fork-failed lines past 10 PM.

Code:
2013-11-16 18:42:27.211 [STDERR] fork() failed, please increase process limit: Cannot allocate memory
2013-11-16 18:42:27.211 [STDERR] fork() failed, please increase process limit: Cannot allocate memory
2013-11-16 18:42:27.212 [STDERR] fork() failed, please increase process limit: Cannot allocate memory
2013-11-16 18:53:47.792 [STDERR] fork() failed, please increase process limit: Cannot allocate memory
2013-11-16 18:53:47.792 [STDERR] fork() failed, please increase process limit: Cannot allocate memory
2013-11-16 18:53:47.793 [STDERR] fork() failed, please increase process limit: Cannot allocate memory
2013-11-16 22:49:32.879 [STDERR] fork() failed, please increase process limit: Cannot allocate memory
2013-11-16 22:49:32.879 [STDERR] fork() failed, please increase process limit: Cannot allocate memory
2013-11-16 22:49:32.880 [STDERR] fork() failed, please increase process limit: Cannot allocate memory
My guess is that the cache allowed more processes to be spawned, but I'm more than likely hitting a process-count limit and not a memory limit.
 

wanah

Well-Known Member
#7
It's 1000. Should I increase it to a higher value?

CloudLinux limits the number of processes an account can have, but no account hit its limit, as it's set to 100 processes per account.

I checked the CloudLinux limits while I was getting the errors with the cache enabled. All accounts had one entry process, and a few large accounts had 20 processes.

I've set the maximum workers to 100 per account to match the other settings.
 
Last edited:

wanah

Well-Known Member
#9
I don't see how this would help my current issue.

I want to allow each user up to 100 processes as each process is controlled by Cloudlinux.

The total across all users was around 100 processes, so it should not be causing the problem.

Before reducing limits, I would like to find out which limit is causing these errors.

It currently does match Max Connections; both are set to 1000.
 
Last edited:

wanah

Well-Known Member
#10
Here's something I don't understand:

Code:
# ps -ef|grep lsphp5|wc && pstree|grep -A3 litespeed
    102     817    8623
     |-litespeed-+-httpd
     |           |-litespeed-+-19*[lsphp5---lsphp5]
     |           |           |-2*[lsphp5---4*[lsphp5]]
     |           |           |-6*[lsphp5---2*[lsphp5]]
     |           |           |-lsphp5---6*[lsphp5]
--
     |           |           `-2*[{litespeed}]
     |           |-3*[litespeed-+-lsphp5---lsphp5]
     |           |              |-2*[splitlogs]]
     |           |              `-2*[{litespeed}]]
     |           |-2*[lsphp]
     |           `-lsphp5
     |-lvestats-server
102 processes, and yet pstree doesn't seem to show anything like that number. Is this normal?
 
#11
>It currenty does match Max Connections, both are set to 1000
That is too big; the default value is 35.

From the pstree output, your license is a 4-CPU license. Setting lsphp5 > Max Connections to 1000 means each account (or user) can fork up to 4000 lsphp5 processes, which should not be what you want.

Maybe the documentation is not clear and confused you.

Also, the lsphp5 process count from the pstree output is 80; "pstree -p" makes it easier to count.
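That multiplication can be sketched as shell arithmetic (4 CPUs and Max Connections = 1000 are the values from this thread; the 100-per-account target is the CloudLinux limit mentioned elsewhere in the thread):

```shell
# Per-account fork ceiling: each licensed litespeed worker applies
# Max Connections independently, so the ceilings multiply.
cpus=4
max_conn=1000
echo "per-account ceiling: $((cpus * max_conn)) lsphp5 processes"
# To target roughly 100 processes per account, divide by the worker count:
echo "Max Connections for ~100/account: $((100 / cpus))"
```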
 

wanah

Well-Known Member
#12
Hello,

I had no idea that the 4-CPU licence meant that all limits were multiplied by 4, nor did I understand this was per user; I understood it as a total amount.

So if I want each user to be able to run 100 PHP processes, would I set the limit to 25 (4 x 25 = 100)? Or do I still have to set it to 100, because we can't be sure which litespeed instance will handle a given request? To be sure a user gets 100, should the limit be 25 or 100?

CloudLinux doesn't allow a user to go over 100 processes, so the 4000 processes you mention could never be reached.

Is there any chance this is what was causing the problem? With only about 100 processes in use across the whole server, I don't see this being the cause.

I'm going to reduce the limit to 100 while waiting for your answer.

PS: I keep forgetting that your forum doesn't accept certain characters, and it's a real pain to have to edit each message afterwards... could you please ask someone to look into this?
 

wanah

Well-Known Member
#13
I was about to do this when I read the following:

Max Connections : Specifies the maximum number of concurrent connections that can be established between the web server and an external application.

This sounds like a per-server setting, not a per-user setting.

Please clarify about reducing PHP_LSAPI_CHILDREN and the Max Connections setting. I don't want to limit the server to 100 connections, nor to 400 connections divided between all users.

Our server has 32 threads; we are using a 4-CPU licence because we're not even hitting the 1-CPU licence restrictions yet. We will move to an 8-CPU licence when/if we start getting close to the 4-CPU licence limits.

Most CPU is currently used by PHP and MySQL.

I'm still unsure how this could be the cause of the errors, but if you can assure me that the Max Connections limit is not for the whole of lsphp, then I will give it a go.

I wasn't getting these errors in daemon mode; this is new to WorkerGroup mode, possibly because of the one entry process per user, which increases the total number of processes on the server.
 

wanah

Well-Known Member
#14
In WorkerGroup mode you set the number of workers. Isn't this enough?

Each user gets one entry process and x forked workers.

If this is enough, then what's the next step in finding which setting is causing the fork errors?
 

mistwang

LiteSpeed Staff
#15
Do not configure the LSWS PHP worker limit above the limit set in CloudLinux; set it lower and leave some headroom. The fork() error is likely due to the CL limit.
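That advice can be put into numbers using the per-account CloudLinux process limit of 100 mentioned earlier in this thread (the headroom figure below is an arbitrary illustration, not a LiteSpeed recommendation):

```shell
# Keep the LSWS per-account worker ceiling below the CloudLinux
# process limit, leaving headroom for the entry process, cron jobs, etc.
cl_nproc=100   # CloudLinux per-account process limit (from this thread)
headroom=20    # assumed safety margin
echo "suggested LSWS worker ceiling: $((cl_nproc - headroom))"
```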
 

Michael

Well-Known Member
Staff member
#18
Howdy all,

Our apologies if this causes confusion, but we have decided to change the name of the new WorkerGroup PHP setup. The new name will be ProcessGroup.

We realized this would be necessary while creating documentation to explain the different PHP setups. The default suEXEC setup is called suEXEC Worker. It spawns a brand-new worker process each time PHP is needed. ProcessGroup, on the other hand, has a constantly running parent process for each group (each user) which forks (not spawns) new processes when the user's sites need them. The two setups are very different, and we wanted to make sure that it did not seem that there was a relation between them.

While it is never ideal to change the name of a feature, we are glad that we were able to catch this so early (most have not even begun to use this feature) and hope that it will make things clearer going forward.

We have just released documentation for Worker mode and a comparison of the three suEXEC setups (Worker, Daemon, and ProcessGroup). Here is also the updated documentation for ProcessGroup.

We are in the midst of updating all mentions of ProcessGroup/WorkerGroup on the site. Please let us know if you see something confusing.

Cheers,

Michael
 