[solved] Retry with new instance.

Discussion in 'General' started by IanD, Jan 13, 2011.

  1. IanD

    IanD Well-Known Member


    We have a PHP script that runs some MySQL queries; it is for reporting, and these queries can take up to a minute. That is not a problem for us.

    The problem is that after x seconds, LiteSpeed restarts the process with:

    [NOTICE] Retry with new instance.

    in the log file.

    If I'm watching 'mytop', I can see the first MySQL process running, then an identical query from the 2nd process once a new instance is started, and sometimes another from a 3rd retry.

    All these new instances make the original query take even longer.

    I'm sure this is pretty basic, but how can I stop this? I presume there is a timeout setting I can change? Or just turn off the retries?

    Many thanks,

    Last edited by a moderator: Feb 23, 2011
  2. NiteWave

    NiteWave Administrator

  3. IanD

    IanD Well-Known Member

    It is currently 30 seconds.

    The problem is, I want to keep this as low as possible because we get some DDoS problems. The server is getting over 1 million visits a day, so I wasn't keen to increase this for every connection.

    I can try increasing it to 60 though and see what happens.

    Is there any way to turn off the 'retrying' feature? Just have it fail if it hits 60 seconds?

    Restarting a long process just makes things worse, as the problem escalates with each restart.

    Many thanks,

  4. NiteWave

    NiteWave Administrator

    I'm not sure if setting "Retry Timeout (secs)" on the lsphp5 external app to a big number will help or not --- I've never tried it, just FYI.

    For a long-running PHP script, you could also try running it from the command line, i.e. lsphp5 abc.php. If so, the connection timeout between lsphp5 and lsws won't take effect, and abc.php can run as long as needed.
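    As a rough sketch of that command-line approach (the lsphp5 path matches the default cPanel install mentioned later in this thread; report.php and the output path are hypothetical stand-ins):

```
# Run the slow report outside the web server entirely, so no lsws
# connection timeout or retry logic applies. Paths are illustrative.
nohup /usr/local/lsws/fcgi-bin/lsphp5 /home/user/report.php > /tmp/report.out 2>&1 &
```

    nohup plus backgrounding lets the report keep running even if the invoking shell exits, which suits queries that take a minute or more.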
  5. IanD

    IanD Well-Known Member


    I'll give the Retry Timeout setting a go.
  6. IanD

    IanD Well-Known Member

    Unfortunately the settings don't seem to make any difference; this still happens after 60 seconds:

    2011-01-16 23:50:29.702 [NOTICE] [] Content len: 0, Request line: 'GET /test.php HTTP/1.1'
    2011-01-16 23:50:39.004 [NOTICE] [] No request delivery notification has been received from LSAPI process:27784, possible run away process.
    2011-01-16 23:50:39.004 [NOTICE] [] Retry with new instance.

    My settings are:

    Initial Request Timeout (secs) 600
    Retry Timeout (secs) 600

    Under External App - lsphp5

    Any more ideas? Should this work? Is it a bug?

  7. NiteWave

    NiteWave Administrator

    Can you try the other 2 suggestions in my previous posts? Please try them first if possible; this is to identify where the problem is. If one workaround resolves the issue, it'll help a lot with the final solution and tell us the correct direction.
  8. IanD

    IanD Well-Known Member

    I've increased it to 60 seconds. I really don't want to increase it more than that: it's a very busy server, and these problem requests are only ~200 out of 10M+ requests a day.

    Running the script on the command line would work as far as avoiding the 'retry with new instance' goes.

    But updating all the legacy systems that run so infrequently to integrate with a command-line PHP script is a lot of work I'd rather not do.

    Much rather LiteSpeed just didn't retry long running scripts :)
  9. NiteWave

    NiteWave Administrator

    So if LiteSpeed could increase the connection timeout for a specific PHP script, that would resolve your issue ideally.
  10. mistwang

    mistwang LiteSpeed Staff

    I think you may have a really old build of lsphp5. If so, try rebuilding PHP with the latest PHP LSAPI. lsphp5 should send back a request-received notification for each request, and LSWS will then wait until the request finishes instead of timing out and retrying.

    If you use the latest LSAPI and set the "LSAPI_ACCEPT_NOTIFY" environment variable for the lsphp external app, that should remove it.
    That is the root cause.
  11. IanD

    IanD Well-Known Member

    That would be ideal.

    I see now it's not so much the 'retrying' that is the problem, because even if LiteSpeed didn't retry the process, the process would still have stopped at the connection timeout.

    If Retry Timeout (secs) worked as well though, that would be great.

    At times MySQL is under too much pressure and normal queries back up fast. If LiteSpeed then retries these scripts (which run those queries), things just melt down quicker. This might be unique to me though, and yes, I need to upgrade / work on the server!

    I've been using LiteSpeed since October 2010 - would this be classed as old?

    I've got:

    Found LiteSpeed php binary at /usr/local/lsws/fcgi-bin/lsphp5. Version is 5.3.2

    Not 100% sure what you mean here, I've got:

    currently in my Environment. Do I add something like:

    Many thanks for your help.
  12. mistwang

    mistwang LiteSpeed Staff

    No, do not add LSAPI_ACCEPT_NOTIFY.

    Check whether your LSAPI version is the latest 5.5 release; if it is, the phpinfo() page will show it.
  13. IanD

    IanD Well-Known Member

    Server API LiteSpeed V5.4
  14. NiteWave

    NiteWave Administrator

    we just added a new feature in 4.0.19 to address this timeout issue:
    "added another special rewrite environment variable "noconntimeout", this will disable connection timeout, it will keep the connection open forever till the request complete."

    for example:

    RewriteRule abc.php - [E=noconntimeout:1]

    please test at your end, see if it 100% resolves your issue.
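    As a sketch of how the rule above might look in context (assumes LSWS 4.0.19+; "reports.php" is a hypothetical name for the slow reporting script, so only that one script loses the timeout while all other requests keep the short one):

```
RewriteEngine On
# Disable the lsws connection timeout for this one slow script only
RewriteRule ^reports\.php$ - [E=noconntimeout:1]
```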
  15. IanD

    IanD Well-Known Member

    That's great - I'll test.

    How do I upgrade to 4.0.19? I'm at the 'Version Management' but I don't see this option.
  16. NiteWave

    NiteWave Administrator

    Manually upgrade:

    download lsws...4.0.19..tar.gz
    tar zxvf lsws...4.0.19..tar.gz
    cd lsws-4.0.19

    Install to the same directory as your past installations and select U (upgrade).
    On cPanel, it's /usr/local/lsws.
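    The steps above can be sketched as a shell sequence (the exact tarball name is elided in the post, so a glob stands in for whichever 4.0.19 build you downloaded; the working directory is illustrative):

```
cd /usr/local/src               # wherever you downloaded the tarball
tar zxvf lsws-4.0.19-*.tar.gz   # extract the 4.0.19 build for your platform
cd lsws-4.0.19
./install.sh                    # choose U (upgrade); on cPanel the target is /usr/local/lsws
```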
  17. IanD

    IanD Well-Known Member

    The 'no connection timeout' is great for one problem, but I'm still looking for a solution to the

    retry with new instance

    problem. I don't want it to retry; I want it to hit the connection timeout and die. If it's hitting the connection timeout, it means there is a problem, and retrying only makes things worse..

    In my situation with long-running MySQL queries, anyway! I look at mytop and see 4 or 5 identical long queries running, because they are from the same user and their process just keeps being retried.

    Any ideas?
  18. NiteWave

    NiteWave Administrator

    I tried setting
    LSAPI_MAX_PROCESS_TIME (default value: 300 seconds)
    "In Self Managed Mode, LSAPI_MAX_PROCESS_TIME controls the maximum processing time allowed when processing a request. If a child process can not finish processing of a request in the given time period, it will be killed by the parent process. This option can help getting rid of dead or runaway child process."
    and it looks like it's working in my test environment.

    Is your PHP in suEXEC mode or not? If not, the above setting gives you an option.

    Update: I re-read your post. It may not be the result of a "retry"; rather, there may be 4 or 5 independent requests to the same page, and that page needs to access MySQL. For each PHP request, lsws has to launch an lsphp process to handle it. I think the root cause is the slow MySQL. If you use memcache to cache the MySQL query results (just for example), MySQL and the whole site's performance may improve a lot.
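    As a sketch, the LSAPI_MAX_PROCESS_TIME entry would be added under External App → lsphp5 → Environment, something like the fragment below (the 60-second value matches what was asked for earlier in this thread; per the quoted docs it only applies in Self Managed Mode, i.e. non-suEXEC):

```
LSAPI_MAX_PROCESS_TIME=60
```

    With this set, the parent lsphp process would kill any child still working on a request after 60 seconds, instead of letting it run on indefinitely.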
    Last edited: Feb 18, 2011
  19. mistwang

    mistwang LiteSpeed Staff

    I am not sure those MySQL queries are caused by "retry with new instance"; maybe the user keeps trying to load that page.

    There are many cases that will trigger "retry with new instance", mainly PHP crashing, or exiting after finishing "PHP_MAX_REQUESTS".

    "Retry with new instance" is there to reduce 503 errors as much as possible. We will see if anything can be done to avoid "retry with new instance" when possible.
  20. IanD

    IanD Well-Known Member

    No, it's not the user refreshing the page that causes it, because I can replicate it and I'm not refreshing the page.

    Yes, I agree the core problem is the slow MySQL query, but at the moment it is unavoidable. I just don't want LiteSpeed to keep retrying it.

    This is what I got from the log file at the time:

    2011-02-21 20:15:23.007 [NOTICE] [89.238.173.**:1399-0#APVH_] Content len: 0, Request line: 'GET /test.php HTTP/1.1'
    2011-02-21 20:15:33.006 [NOTICE] [94.250.17.**:21751-0#APVH_] No request delivery notification has been received from LSAPI process:8249, possible run away process.
    2011-02-21 20:15:33.006 [NOTICE] [94.250.17.**:21751-0#APVH_] Retry with new instance.
    2011-02-21 20:15:54.654 [NOTICE] [89.238.173.**:1409-0#APVH_] Content len: 0, Request line: 'GET /test.php HTTP/1.1'
    test.php is the script that contains my slow test MySQL query.

    89.238.173.** is my IP; I'm not sure where 94.250.17.** came from. Maybe it's a coincidence (there is other traffic on the server), but it certainly looks like when it says "Retry with new instance" it's talking about me and my script.

    My PHP is in suEXEC mode.

    I think something like 'LSAPI_MAX_PROCESS_TIME' is what I'm looking for: I want to set it to 60 seconds and then have the process die (along with the MySQL query).

    Thanks for all your help.
    Last edited: Feb 22, 2011
