Requests in Processing

#1
Hey,

The whole LS server was hanging today for a few hours (no sites responding). Restarting it would fix things for 5-10 minutes, but then the problem would start again. In the end I tracked it down to an external app that was hanging, which in turn was causing its virtual host to accumulate a large number of "requests in processing" (80 - not sure if that's actually high, but the others are all at 0 right now, albeit late at night).

Fixing the external app appears to have fixed the problem, but is there any way I can prevent one virtual host from taking down the rest? Is there a way of limiting the number of requests a host can process at one time (assuming that's the problem)? The External App settings (which seem like the most logical place to start) look like this:

Max Connections - 20
Keep Alive Timeout - 1000
Initial Request Timeout - 1
Retry Timeout - 0
Response Buffering - No
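
For context, the same knobs live in the external-app definition in LSWS's httpd_config.xml. The sketch below is illustrative only: the element names are from memory of the 3.x XML config and the app name and address are made up, so check your own config file for the exact spelling before copying anything.

```xml
<!-- Illustrative sketch of an external app entry (name/address made up) -->
<extProcessor>
  <type>proxy</type>               <!-- mongrel is typically proxied over HTTP -->
  <name>myapp_mongrel</name>
  <address>127.0.0.1:8000</address>
  <maxConns>20</maxConns>          <!-- "Max Connections" above -->
  <initTimeout>60</initTimeout>    <!-- "Initial Request Timeout"; 1 is too low -->
  <retryTimeout>0</retryTimeout>   <!-- "Retry Timeout" -->
  <respBuffer>0</respBuffer>       <!-- "Response Buffering - No" -->
</extProcessor>
```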

I thought it would be Max Connections, but I'm not sure how it got up to 83 requests on 20 connections?

best, Zach

(I'm still on LS 3.3)
 

mistwang

LiteSpeed Staff
#2
You might consider creating a separate external app for each vhost; for PHP, use PHP suEXEC.

Your Initial Request Timeout is too low; set it to 60, or to the longest time a request should take.

LSWS queues requests when it runs short of connections to an external app, so "WaitQ" will not be 0.
 
#3
>You might consider creating a separate external app for each vhost; for PHP, use PHP suEXEC.

Not sure what you mean here. Each vhost has 3 external apps linked to mongrels.

>Your Initial Request Timeout is too low; set it to 60, or to the longest time a request should take.

Agreed, but I thought this was the timeout for *connecting*, not a timeout on the response. We obviously have a lot of requests that take longer than 1 second, and they don't get cut off; if anything, our problem is that they weren't being cut off.

>LSWS queues requests when it runs short of connections to an external app, so "WaitQ" will not be 0.

Is this how the 20 connections could have run up to 83 requests (20 being processed plus 63 waiting in the queue)? I'm still not sure how this helps, though: the connections were never timing out, so raising the timeout (which I assume shouldn't be more than a few seconds) would only delay the inevitable.
What I'm really looking for, I think, is something more like PHP's set_time_limit, which would kill a connection if it took too long. As far as I can tell these connections are just getting opened and never closed (maybe I'm wrong about that; we had 83 triggered by a script that only runs every 5 minutes, so they must have built up over time).
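
For what it's worth, Ruby's standard library has a rough analogue of PHP's set_time_limit in the Timeout module. A minimal sketch of the idea (the wrapper name is made up for illustration; this is not mongrel's actual API, just the pattern of abandoning a request that runs too long):

```ruby
require 'timeout'

# Hypothetical per-request hard limit, loosely analogous to PHP's
# set_time_limit: run the block, but give up if it exceeds the limit.
def with_time_limit(seconds)
  Timeout.timeout(seconds) { yield }
rescue Timeout::Error
  # Abandon the slow request instead of holding the connection open forever.
  :timed_out
end

fast = with_time_limit(5) { :done }          # finishes in time
slow = with_time_limit(1) { sleep 2; :done } # exceeds the 1-second limit
```

In practice a hard kill like this can leave the app in an inconsistent state mid-request, so fixing the slow script itself (as was done here) is still the real solution.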
 

mistwang

LiteSpeed Staff
#4
If each vhost gets its own mongrel instances, one vhost should not affect the others.

You have to fix the script; changing the timeout may not help, as the mongrel may still be working on the previous request when LSWS sends the next request to it.

Why not use our dedicated Rails support via ruby-lsapi?
 
#5
Can't use lsapi - we've made customizations to mongrel, and they're running secondary servers.

Any other possibilities? One vhost was definitely taking down all the others (and it wasn't a CPU/memory issue; everything seemed fine - the only odd number I could find was that "80 requests in processing"). It still seems there should be a way of limiting the number of requests one vhost can do at once - there are throttles for everything else (client connections, keep-alive, etc.).
 
#6
(I should point out to anyone else reading this (prospective LS users etc.) that we've been using LS for 2 1/2 years and it's been incredibly solid; we've never had any problems. Fixing this problematic script, as should have been done in the first place, has eliminated the problem - this is more about protecting against my own future scripting mistakes than any real-world load issues with LS.)
 

mistwang

LiteSpeed Staff
#7
I think each mongrel instance can only handle one connection and process one request at a time, so you should use "Max Connections = 1" for all mongrel external apps.
 