SIGUSR1 killing running rails requests

Discussion in 'Ruby/Rails' started by dsmalley, Dec 12, 2006.

  1. dsmalley

    dsmalley New Member

    Since tweaking our setup to get Rails running using dispatch.lsapi, we are getting lots of SIGUSR1 error messages in our app's logs.

    They appear to be produced whenever a request results in a large dataset being returned.

    Can anyone point me in the direction of the configuration settings that can stop these errors occurring? My first thought was to raise the memory limits on our processes, but this doesn't seem to have stopped them.
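
    For anyone unfamiliar with the symptom: a minimal sketch in plain Ruby (no LiteSpeed required) of why SIGUSR1 shows up this way. By default SIGUSR1 terminates a Ruby process, so if the server sends it to an lsapi child that is still serving a request, the request dies with a SIGUSR1 error. The snippet below just demonstrates signal delivery and trapping; it is an illustration, not LiteSpeed's actual recycle mechanism.

    ```ruby
    # Demonstrate SIGUSR1 delivery to a Ruby process. Without a handler,
    # the default disposition for SIGUSR1 terminates the process; here we
    # install a handler so the signal is observable instead of fatal.
    received = false
    Signal.trap("USR1") { received = true }  # handler closes over the local
    Process.kill("USR1", Process.pid)        # deliver SIGUSR1 to ourselves
    sleep 0.1                                # give the handler a chance to run
    puts "handled SIGUSR1: #{received}"
    ```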

    Is there any other information I can supply to help diagnose this problem?
  2. mistwang

    mistwang LiteSpeed Staff

    Please check your "Max Idle Time" setting under the "Ruby" tab. Leave it unset, or increase it.
  3. dsmalley

    dsmalley New Member

    Max Idle Time had a value of "0" in it; I've now changed that to blank and applied the changes.

    In reading some other threads I noticed that ruby-lsapi has been upgraded a few times. When I run "gem list" on the production box, I get a version number of "ruby-lsapi (1.11)".

    Is there an upgrade that would help?
  4. mistwang

    mistwang LiteSpeed Staff

    1.11 is the latest.
  5. dsmalley

    dsmalley New Member

    Yes, I just took a look and realised that.

    What settings can I play around with here to try to resolve the problem? I changed Max Idle Time, but we're still getting SIGUSR1 errors.

    Soft/hard memory limits are set to 80MB/100MB respectively.
    Max connections is set to 10.
    Initial timeout is 120.
    Retry timeout is 60.
    Connection keep-alive timeout is 1000.
    Backlog is 100.
    Instances is 1.
    Process soft limit is 200.
    Process hard limit is 250.
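
    For reference, settings like the ones listed above correspond to fields of the external application definition in LiteSpeed's configuration. A hypothetical excerpt is sketched below; the tag names and units are assumptions based on the httpd_config.xml format and may not match your installed version exactly:

    ```
    <!-- hypothetical external app excerpt; tag names are assumed -->
    <memSoftLimit>80M</memSoftLimit>
    <memHardLimit>100M</memHardLimit>
    <maxConns>10</maxConns>
    <initTimeout>120</initTimeout>
    <retryTimeout>60</retryTimeout>
    <pcKeepAliveTimeout>1000</pcKeepAliveTimeout>
    <backlog>100</backlog>
    <instances>1</instances>
    <procSoftLimit>200</procSoftLimit>
    <procHardLimit>250</procHardLimit>
    ```

    If you edit the XML directly rather than through the admin console, a graceful restart is needed for the changes to take effect.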

  6. mistwang

    mistwang LiteSpeed Staff

    Why not use our easy Rails setup? Check out the wiki.
  7. xing

    xing LiteSpeed Staff

    Like mistwang said, we definitely recommend using our easy Rails setup outlined in our wiki. It will make your life a lot easier.

    From the looks of it, your soft/hard memory limits are set way too low if you expect your scripts to return large buffered DB data sets. If you are returning, say, 30K rows containing blobs (just a rough figure), you can easily exhaust this value. Remember, this is not a per-Ruby/Rails-process limit but the limit for all the Ruby/Rails processes spawned in total.
