Two PHP crashes since our last update of LiteSpeed and PHP

Discussion in 'General' started by Monarobase, Jun 21, 2013.

  1. bobykus

    bobykus Well-Known Member

    As far as I know, CloudLinux does not limit processes with UID < 100. Who is the owner of the LSAPI daemon process in your case?
  2. Monarobase

    Monarobase Well-Known Member

    PHP processes are run with each account's user id.

    CloudLinux limits virtual memory per user, so if a user's processes exceed 2GB of virtual memory (and all LiteSpeed processes seem to share the same virtual memory), I guess they could be hitting CloudLinux's per-user memory limit.
  3. Monarobase

    Monarobase Well-Known Member

    Is this normal usage?

    237659 user1 20 0 1316m 111m 23m R 82.5 0.0 0:16.62 lsphp5
    237675 user1 20 0 1294m 124m 56m R 36.0 0.0 0:18.10 lsphp5
    237728 user2 20 0 1274m 102m 53m S 21.5 0.0 0:03.76 lsphp5
    237719 user3 20 0 1247m 45m 23m S 20.8 0.0 0:04.19 lsphp5
    237913 user2 20 0 1341m 49m 26m S 10.9 0.0 0:00.33 lsphp5

    xCache's memory is currently set to 500M. Is it normal for PHP's virtual memory with the LSAPI daemon to top 1GB?
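    To compare a listing like the one above against a per-user limit, the per-user totals can be summed from ps. A minimal sketch; the process name lsphp5 is taken from the listing above and may differ on your setup:

```shell
# Sum virtual (VSZ) and resident (RSS) memory per user for lsphp5
# processes. ps reports both columns in KiB, so divide by 1024 for MiB.
ps -C lsphp5 -o user=,vsz=,rss= | awk '
  { vsz[$1] += $2; rss[$1] += $3 }
  END {
    for (u in vsz)
      printf "%-12s VSZ: %6d MiB  RSS: %6d MiB\n", u, vsz[u]/1024, rss[u]/1024
  }'
```

    Note that VSZ overstates real usage when processes share large mappings (such as an opcode cache's shm segment), which matters if the limit is enforced on virtual rather than resident memory.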
  4. bobykus

    bobykus Well-Known Member

    In my case it is:

    root 28103 10.9 0.4 68588 44916 ? RN 12:35 8:21 litespeed (lshttpd)
    root 28104 0.0 0.0 1860 444 ? SN 12:35 0:00 \_ httpd (lscgid)
    root 28177 0.0 0.0 291280 8316 ? SN 12:35 0:00 \_ lsphp -c /usr/local/lsws/lsphp5lite/etc/php.ini
    jomar 19720 6.0 0.1 300104 16556 ? SN 13:49 0:06 | \_ lsphp ....svejk/index.php sws/lsphp5lite/etc/php.ini

    I suppose PID 28177 is what they call the daemon. It is not running with user privileges.
  5. Monarobase

    Monarobase Well-Known Member

    So I presume your virtual memory usage is around 300MB. I've got 500MB for xCache, but at 1300MB that still leaves me with 500MB more virtual memory usage than you.

    I wonder if I've got a memory leak or just more processes.
  6. bobykus

    bobykus Well-Known Member

    If CloudLinux is the cause, you should be able to see the limit being hit with the lveinfo command.
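    For reference, limit hits can usually be spotted in two places. A sketch, assuming CloudLinux's lveinfo tool is installed (flag syntax varies between CloudLinux versions) and that kernel messages land in /var/log/messages:

```shell
# Per-LVE usage and fault history for the last day
# (flag syntax may differ on your CloudLinux version):
lveinfo --period 1d

# Kernel-side traces of LVE limit hits often land in the system log too:
grep -i 'lve' /var/log/messages | tail -n 20
```

    A fault counter increasing around the crash timestamps would point at the limits; no faults at all would suggest looking elsewhere.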

    I have a feeling that the reason for the crash is the new feature which is supposed to clean up runaway PHP processes. Check your log for the sequence of incidents. Mine indicates that the cleanup happens first, then the suexec daemon is gone.

    Also, please note that I run XCache Cacher v3.0.1 with LiteSpeed API V6.1 without any issues.
    With LiteSpeed API V6.2 I got a daemon crash every 10-12 hours.
    Last edited: Jul 9, 2013
  7. Monarobase

    Monarobase Well-Known Member

    Interesting, so I'm not alone :)

    I disabled Cloudlinux's per account memory limits and I've just had another crash.

    I'm getting desperate to find out what is causing this.

    I've just disabled xCache to see if I get another crash. However, disabling the opcode cache is not a long-term solution. I didn't have any crashes on LSPHP 6.1 with xCache either.

    My last two entries before the PHP crash are:

    2013-07-10 23:07:33.620 [STDERR] Killing runaway process PID: 0 with SIGTERM
    2013-07-10 23:07:33.620 [STDERR] Children tracking is wrong: PID: 733570, Cur Childen: 0, count: 7, idle: 0, dying: 4
    I've already sent core dumps and log extracts, I'm going to email you the latest log extracts.

    How can I turn off the feature that automatically checks for and kills runaway processes? I'm beginning to believe that this is what is causing the problem.
  8. webizen

    webizen Well-Known Member

    Try setting 'External Application Abort' (Admin Console => Configuration => General) to 'No Abort'.
  9. Monarobase

    Monarobase Well-Known Member

    I've already tried that, and got crashes too. That setting doesn't have anything to do with runaway processes; it controls whether processes are killed when the client is no longer there.

    You have my core dumps and my log extracts, and I never got an answer from your developers about them.

    What did they find ?

    Please keep me informed when you get an answer.
    Last edited: Jul 10, 2013
  10. NiteWave

    NiteWave Administrator

    Hi Monarobase,

    from the developer, FYI:
  11. Monarobase

    Monarobase Well-Known Member


    I think I'm going to finish testing without xCache to make sure that it's not xCache-related. This should give your development team some time to make LS 4.2.4 more stable.

    I'm always a bit worried when I'm told to install a non-stable package and am told it's not the final package.
  12. Monarobase

    Monarobase Well-Known Member

    I'm going to give our server a month without opcode caching to see if xCache is the issue.

    Are there any known issues with eAccelerator and LiteSpeed in suexec mode? I'm now thinking about giving eAccelerator a try, as there is now a version on GitHub that is compatible with PHP 5.4, and we are currently running PHP 5.3.

    eAccelerator has been the opcode cache recommended by cPanel for some time now, and there must be a reason for that. As APC isn't secure and xCache doesn't seem to be compatible with LSAPI 6.2 and LiteSpeed 4.2.3, I'm thinking about giving it a try.
  13. Monarobase

    Monarobase Well-Known Member

    I've just been told that the current LS 4.2.4 should be more stable with PHP suexec than 4.2.3.

    I've updated LiteSpeed to 4.2.4 but am waiting at least a couple of weeks before trying to activate an opcode cache (xCache or eAccelerator) again.
  14. bobykus

    bobykus Well-Known Member

    Perhaps PHP has to be rebuilt against the new API, doesn't it? Or is just an LSWS upgrade enough?
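    For context, the LiteSpeed SAPI is compiled into the lsphp binary, so an LSAPI version change usually does mean a PHP rebuild. A hedged sketch of the usual steps; paths, the version number, and the prefix are examples only, not exact instructions:

```shell
# Example only: rebuild PHP with the bundled LiteSpeed SAPI.
# Version numbers and install paths here are illustrative.
cd php-5.3.27                      # unpacked PHP source tree
./configure --prefix=/usr/local/lsws/lsphp5 \
            --with-litespeed \
            --with-config-file-path=/usr/local/lsws/lsphp5/etc
make && make install
# The resulting sapi/litespeed binary is what LSWS then runs as lsphp5.
```

    If the source tree already matches the LSAPI version you want, only the rebuild is needed; otherwise the updated LSAPI sources have to be dropped into sapi/litespeed first.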
  15. Monarobase

    Monarobase Well-Known Member

    Good question. So far, since xCache has been disabled, I haven't had any problems.

    I am, however, thinking about a new strategy that doesn't involve xCache. I want to try eAccelerator with PHP 5.3, keep PHP 5.3 while it still gets security updates, and then update directly to PHP 5.5 with Zend OPcache.

    I'm under the impression that xCache is good for a few sites but not for hundreds of sites, whereas eAccelerator seems to have been tried and tested on large multi-site servers.

    I'll still have to test, but I'm thinking about allowing 10 to 15GB (and maybe more if it takes it) of shm and disabling the disk cache. If I can't get a high SHM size to work, I will create a 20GB tmpfs partition and set eAccelerator to use it.
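    For the tmpfs fallback, the mount itself is a one-liner. A sketch; the mount point and size are assumptions to adapt, and the eAccelerator ini key should be checked against your build's documentation:

```shell
# Create and mount a 20GB tmpfs for the eAccelerator disk cache
# (mount point and size are examples, not recommendations):
mkdir -p /var/cache/eaccelerator
mount -t tmpfs -o size=20g tmpfs /var/cache/eaccelerator

# Persist the mount across reboots:
echo 'tmpfs /var/cache/eaccelerator tmpfs size=20g 0 0' >> /etc/fstab
```

    eAccelerator would then be pointed at that directory via its cache-dir setting in php.ini; since tmpfs lives in RAM, the "disk" cache effectively becomes a second memory cache without the shm size ceiling.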
