Strange error & 503 Service Unavailable

Discussion in 'Install/Configuration' started by xinn, Aug 29, 2005.

  1. xinn

    xinn Member

    Well, I'm getting this email almost every 15 minutes:
    'At [27/Aug/2005:13:24:32 +0200], web server with pid=32328 received unexpected signal=11, no core file is created. A new instance of web server will be started automatically!' What's wrong?

    My second question is about the 503 error. I doubled the memory limit for FastCGI and I'm still getting it. My other site, which is also hosted under LSWS and has a similar number of visitors (it even eats more resources; it's an IPB forum), doesn't return the 503 error. What should I change in the configuration?
  2. mistwang

    mistwang LiteSpeed Staff

    Which OS are you using? Please try to generate a core file and send it to us.
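    For reference, a minimal sketch of enabling core dumps on Linux (the lswsctrl path is an assumption; adjust it to your install prefix):

```shell
# Sketch, assuming Linux + bash: most distros default the core file
# size limit to 0, which is why "no core file is created" on signal 11.
ulimit -c unlimited   # lift the core size limit for this shell
ulimit -c             # verify: should print "unlimited"
# Restart the web server from this same shell so its children inherit
# the limit, e.g.:  /opt/lsws/bin/lswsctrl restart
# After the next crash, look for core.* files under /tmp/lshttpd/.
```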

    When the 503 happens, what do you see in the real-time report in the web admin interface? I think it is a problem between PHP and MySQL: all PHP processes hanging on dead MySQL connections.

    You should turn off persistent connections in PHP's configuration, and adjust the MySQL connection timeout in MySQL's configuration.
    And you can also try our own PHP SAPI.
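    A sketch of the two settings involved; the timeout value is only an example:

```ini
; php.ini -- disable persistent connections so PHP cannot reuse a
; connection that MySQL has already dropped
mysql.allow_persistent = Off

; my.cnf, [mysqld] section -- close idle connections well before the
; 8-hour default
wait_timeout = 300
```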
  3. xinn

    xinn Member

    I'm using CentOS 4.1. On this site I use an external DB (it generates about 2 queries/sec, so maybe these outgoing connections cause the error?)

    I've been using LSAPI for 12 hours and it seems there is a huge memory leak. I got two extremely high load averages, 300 and 500, so I gave up.
  4. mistwang

    mistwang LiteSpeed Staff

    Yes, it is.
    You can use a cron job to monitor the server; if you get 503 errors, just kill all PHP processes.
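    A hypothetical sketch of such a watchdog; the URL, script path, and schedule are all assumptions:

```shell
#!/bin/sh
# Watchdog sketch: if the site answers 503, kill the stuck PHP
# children; lsws respawns them automatically (Auto Start is on).

http_status() {
    # print only the HTTP status code for the given URL
    curl -s -o /dev/null -w '%{http_code}' "$1"
}

if [ "$(http_status http://localhost/)" = "503" ]; then
    killall lsphp 2>/dev/null
fi

# crontab -e entry to run it every 5 minutes (path is an example):
# */5 * * * * /usr/local/sbin/kill-stuck-php.sh
```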

    Really!? That's a big problem. Do you know which process has the memory leak, lshttpd or PHP?

  5. xinn

    xinn Member

    Hello, about those extremely high load averages: I'm not sure they were caused by PHP LSAPI. When the first high load (300+) happened, lsws logged this:
    [18280] EACCELERATOR: PHP crashed on opline 33 of file() at /home/ranma/public_html/data/functions.php:883
    (eAccelerator version: 9.3 final)

    During this extreme load, graphs showed high HDD usage. Under Apache and mod_php5 everything worked fine. I've been using eAccelerator for 7 months and never had such problems.

    Anyway, I can't risk trying this configuration one more time. Last time I had to wait a couple of hours for the datacenter's support to reboot the server (the reboot command, like most system commands, wasn't working).
    Currently I'm using FastCGI + APC.

    I don't have a bak_core directory in /tmp/lshttpd.

    I have one more question: how can I restrict scripts from accessing directories other than /home/user/ (for example /home/user2/), something like open_basedir in Apache?
  6. mistwang

    mistwang LiteSpeed Staff

    Sorry about that. It looks like the high load was caused by the memory leak and the server swapping a lot.

    We had a beta user running LSAPI + PHP 5.0.4 + eAccelerator on his high-volume web site, and there was no problem at all.

    What are your LSAPI PHP and FCGI PHP configurations? Usually, "Max Connections" should not be over 50.

    There is a new lsws release with updates to LSAPI and PHP SAPI; if you don't mind, you can try it again. I don't want to risk your server, but there is not much risk if a process limit has been applied. A server-wide process limit can be set in /etc/security/limits.conf; you can add something like:

    *   hard  rss     100000
    *   hard  stack   10000
    No process is allowed to use more than 100000K memory.

    Unfortunately, per-virtual-host PHP configuration override is a new feature in LSAPI PHP, and is not available in FastCGI PHP.
    With FastCGI PHP, you have to set open_basedir in a php.ini per vhost; each vhost should have its own FastCGI PHP using its own php.ini.
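    A sketch of what that per-vhost setup could look like; all paths and the vhost name are hypothetical:

```
# /home/user1/conf/php_user1.ini -- one php.ini per vhost
open_basedir = /home/user1/:/tmp/

# External App for the "user1" vhost (other fields as in a normal
# FastCGI external app); php's -c flag points it at the per-vhost ini
Name     phpFcgi-user1
Address  uds://tmp/lshttpd/php_user1.sock
Command  $SERVER_ROOT/fcgi-bin/php -c /home/user1/conf/php_user1.ini
```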
  7. xinn

    xinn Member

    Thank you for the advice, you are great. This is my configuration:

    './configure' '--enable-fastcgi' '--with-xml' '--enable-bcmath' '--enable-calendar' '--enable-ftp' '--enable-magic-quotes' '--with-mysql=/usr' '--with-mysqli=/usr/bin/mysql_config' '--enable-discard-path' '--disable-path-info-check' '--enable-sockets' '--enable-track-vars' '--enable-versioning' '--with-zlib' '--with-gd' '--with-jpeg-dir=/usr/local' '--with-png-dir=/usr' '--with-xpm-dir=/usr/X11R6' '--with-ttf' '--with-freetype-dir=/usr' '--enable-gd-native-ttf'

    Name LSAPI
    Address uds://tmp/lshttpd/php.sock
    Max Connections 300
    Environment N/A
    Initial Request Timeout (secs) 30
    Retry Timeout (secs) 30
    Response Buffering No
    Auto Start Yes
    Command $SERVER_ROOT/fcgi-bin/lsphp
    Back Log 100
    Instances 1
    Priority 0

    Name phpFcgi
    Address uds://tmp/lshttpd/php.sock
    Max Connections 50
    Environment PHP_FCGI_MAX_REQUESTS=500
    Initial Request Timeout (secs) 30
    Retry Timeout (secs) 0
    Response Buffering No
    Auto Start Yes
    Command $SERVER_ROOT/fcgi-bin/php
    Back Log 100
    Instances 1
    Priority 0

    I'm using PHP 5.0.4 and LSWS 2.1 RC2 (I'll download RC3 as soon as possible). I want to thank you for this great support and, of course, the great web server. I've dropped load averages from 8-10 (and even 20-30+ at peak times) to 1-1.5. One more time: thank you.
  8. mistwang

    mistwang LiteSpeed Staff

    You are welcome. :)

    You should change the LSAPI configuration to something like below, if you want to try it. :)

    Name LSAPI 
    Address uds://tmp/lshttpd/lsphp.sock 
    Max Connections 50 
    Retry Timeout (secs) 0 or anything <10
    Back Log 100 
    Instances 50 
    Priority 0 
    You need to download the new LiteSpeed PHP patch as well.
  9. SyNeo

    SyNeo Well-Known Member


    I just started hitting this error after I upgraded to the latest 2.1 RC3. A restart of the server helps, but only for a while. I don't recall such behavior in the previous 2.1 RC2 version.

    I have now applied the suggested LSAPI configuration settings and installed the latest LSAPI, and I hope that will resolve the issue. I will post the results here.
  10. mistwang

    mistwang LiteSpeed Staff

    What is the output of the real-time statistics report when that happens?
    That's strange; there are no major changes in FastCGI support between 2.1 RC2 and 2.1 RC3.

    Is there any change in the PHP and MySQL setup, like moving MySQL to a standalone server?

    Please keep an eye on the memory usage of lshttpd and lsphp. The suggested process memory limit should be applied; you can also do that with 'ulimit' from the command line before starting the web server, just in case.
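    For example, applying the cap from the shell that starts lsws (a sketch; the 100 MB value and the lswsctrl path are assumptions, and the subshell is only for illustration):

```shell
# Cap per-process address space so a leaking lsphp is killed by the
# kernel instead of swapping the whole server.
# (Shown in a subshell; drop the parentheses in the real startup shell.)
(
  ulimit -S -v 102400   # soft virtual-memory limit in KB (~100 MB)
  ulimit -S -v          # verify: prints 102400
  # then start the server so children inherit the cap, e.g.:
  # /opt/lsws/bin/lswsctrl start
)
```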
  11. SyNeo

    SyNeo Well-Known Member

    It is an OS-wide configuration, which probably needs a reboot; I am not sure. Can you apply the limit from the command line anyway?

    Yes, it is still valid.
  12. xinn

    xinn Member

    I'm still getting these 503 errors. Should I raise any limits (e.g. Max Connections)? And is there a fixed duration for how long this error lasts?

  13. xing

    xing LiteSpeed Staff

    Raising limits will not help your situation. The most common reason for 503 errors is PHP processes crashing, or taking so long to respond that they might as well be considered zombies.

    You need to monitor stderr.log, error.log, or php.err for any PHP-related errors. In addition, we might be able to help you if you provide the actual code of the script causing the problem.

    Also check for any core.XXXX files within your php file directory or within /tmp/lshttpd/.
  14. mistwang

    mistwang LiteSpeed Staff

    Another not-so-good temporary solution: restart lsws automatically from a cron job once in a while. This way there is no service interruption at all.
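    A hypothetical crontab sketch of that idea (the schedule and the lswsctrl path are examples):

```
# root's crontab (crontab -e): gracefully restart lsws every 6 hours
# to reclaim leaked memory; in-flight requests are finished first
0 */6 * * * /opt/lsws/bin/lswsctrl restart >/dev/null 2>&1
```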

    Have you tried LSAPI again?

    We certainly need more information about your server, maybe a login to the admin interface, if you don't mind. :)
  15. SyNeo

    SyNeo Well-Known Member


    I think I discovered another issue, related to the memory leak. My site's application throws the following error now and then:

    Fatal Error
    [2] gzcompress() [function.gzcompress]: insufficient memory (@line 204 in file /var/myndweb/common/framework/TViewStateManager.php).

    Debug Backtrace
    #1 TViewStateManager.php:204 -- pradoErrorHandler(...)
    #2 TViewStateManager.php:204 -- gzcompress(...)
    #3 TPage.php:374 -- TViewStateManager->encode(...)
    #4 TPage.php:957 -- AllUsers->savePageStateToPersistenceMedium(...)
    #5 TApplication.php:481 -- AllUsers->execute()
    #6 index.php:27 -- TApplication->run()

    A restart of the server resolves the issue, but only temporarily. I'm using LSAPI.

    This behavior does not occur under Apache.

    The suggestion to restart the server seems interesting, but won't it destroy the server-side sessions and cause a mess in the applications?
  16. mistwang

    mistwang LiteSpeed Staff

    Maybe the memory limit for PHP processes is too low; you can increase the limit under "Server" -> "Security" -> "CGI Resource Control". It controls the memory limit for FCGI and LSAPI apps as well.

    Usually PHP stores session data on disk, so restarting lshttpd will not affect PHP sessions.
  17. SyNeo

    SyNeo Well-Known Member


    Any recommended values that I can try?

    Also, the "CGI Daemon Socket" is set to N/A - should it be set to some path?
  18. mistwang

    mistwang LiteSpeed Staff

    You can try the current value + 10M.
    The CGI Daemon Socket can be set at will; the default is $SERVER_ROOT/admin/conf/cgid.sock. We will make it optional.
  19. mistwang

    mistwang LiteSpeed Staff

    Please download the 2.1 RC3 package again; there are small bug fixes and one new feature for debugging. Debug logging can now be turned on and off dynamically through the web admin interface.

    So when a 503 error happens, please remove the current error.log, turn on debug logging for a little while before restarting the server, then email us the error log file; it will help a lot in analyzing the problem.
  20. SyNeo

    SyNeo Well-Known Member


    I wanted to let you know that I switched to FastCGI and haven't had any problems since, including with eAccelerator and its optimizer turned on. I would like to switch to LSAPI though; the question is whether this issue was resolved in the latest release version (2.1.1).
