odd error.log messages, lots of lsapi spawning

Discussion in 'Ruby/Rails' started by fantasydreaming, Mar 13, 2007.

  1. fantasydreaming

    fantasydreaming Well-Known Member

    Does ruby-lsapi 2.1 require that we're using the latest lsws RC2 beta? I keep getting these odd errors in the /opt/lsws/logs/error.log file:

    2007-03-13 14:50:03.927 [NOTICE] [idle] [LSAPI:STDERR]: SIGUSR1
    2007-03-13 14:50:03.927 [NOTICE] [idle] [LSAPI:STDERR]: (
    2007-03-13 14:50:03.927 [NOTICE] [idle] [LSAPI:STDERR]: SignalException
    2007-03-13 14:50:03.927 [NOTICE] [idle] [LSAPI:STDERR]: )
    2007-03-13 14:50:03.927 [NOTICE] [idle] [LSAPI:STDERR]: from /opt/lsws/fcgi-bin/RailsRunner.rb:11
    2007-03-13 14:50:23.719 [NOTICE] [idle] [LSAPI:STDERR]: /opt/lsws/fcgi-bin/RailsRunner.rb:11:in `accept'
    2007-03-13 14:50:23.719 [NOTICE] [idle] [LSAPI:STDERR]: :
    2007-03-13 14:50:23.719 [NOTICE] [idle] [LSAPI:STDERR]: SIGUSR1
    2007-03-13 14:50:23.719 [NOTICE] [idle] [LSAPI:STDERR]: (
    2007-03-13 14:50:23.719 [NOTICE] [idle] [LSAPI:STDERR]: SignalException
    2007-03-13 14:50:23.719 [NOTICE] [idle] [LSAPI:STDERR]: )
    2007-03-13 14:50:23.720 [NOTICE] [idle] [LSAPI:STDERR]: from /opt/lsws/fcgi-bin/RailsRunner.rb:11

    gem list shows:
    ruby-lsapi (2.1, 1.11)
    A ruby extension for fast communication with LiteSpeed Web Server.

    And there are always 3 or so lsapi processes that were spawned in the last minute whenever I do a process list. I have it set to start 4 processes, and it usually shows 6-9. All appear to be doing things when I strace them - none are stuck.

    I also have some of these, what causes this again?
    2007-03-13 14:49:58.660 [NOTICE] [] Premature end of response header.
  2. mistwang

    mistwang LiteSpeed Staff

    You can safely ignore those messages; they are produced when idle ruby child processes are killed.

    More child processes can be started by the parent process if no existing child can handle a new request in time. The real limit is 2 times the configured value.

    "Premature end of response header" is caused by an incomplete response header from the upstream server: the connection was closed before the terminating "\n\n" was received.
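    The "complete header" condition can be sketched as a tiny check (a hedged illustration only; `header_complete?` is a made-up helper name, not LiteSpeed's actual parser):

```ruby
# A CGI-style response header block ends at the first blank line
# ("\r\n\r\n" or "\n\n"). If the upstream connection closes before
# that blank line arrives, the header is "premature".
def header_complete?(buffer)
  !!(buffer =~ /\r?\n\r?\n/)
end
```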
  3. fantasydreaming

    fantasydreaming Well-Known Member

    So "premature end" is when the client closes their web browser or hits stop or whatever?

    Real limit = 2x the number of processes - ah-hah! Do these temporary processes obey the idle-timeout as well? I don't think they do.

    My site keeps getting bursts of bots trying to kill it. The temporary processes are created and then destroyed seemingly over and over again - they've never been around more than 2 minutes from what I can see (even though idle-timeout is 900 seconds).
  4. mistwang

    mistwang LiteSpeed Staff

    No, it's when a backend server closes the connection in the middle of a request, like when you proxy to a backend Apache.

    There is another environment variable that controls idle child processes, LSAPI_MAX_IDLE_CHILDREN; please check the README file.
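    For reference, these go in the external application's environment settings (a hedged example; the values are illustrative, and LSAPI_CHILDREN and LSAPI_MAX_IDLE_CHILDREN are the variables documented in the ruby-lsapi README):

```shell
# Illustrative environment for a LiteSpeed LSAPI external app;
# tune the numbers to your own setup.
LSAPI_CHILDREN=4            # number of child processes to prefork
LSAPI_MAX_IDLE_CHILDREN=4   # idle children beyond this are killed
```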
  5. andreas

    andreas Well-Known Member

    I have the same problem. I get a SIGUSR1 error log message about every five seconds, and I always have lots of new ruby processes in my process list, but I don't see any reason why. Each process uses about 40 MB of memory, so with my maximum of 5 processes there is no way it could hit the memory or process limit.

  6. mistwang

    mistwang LiteSpeed Staff

    Please try getting rid of

    and see if it helps or not.
  7. fantasydreaming

    fantasydreaming Well-Known Member

    Are you using the latest version of memcached? There is a bug with it that can cause it to hang when you have a fair amount of concurrency... I plan to submit a report today.

    Try running strace on a process and seeing where it ends/dies or hangs. With mine, memcache was hanging and the process was then getting killed promptly by litespeed (usually, anyway). Which is actually pretty good - when I had lighttpd, about 200 processes would build up and crash the box...
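    A generic strace invocation for this (PID is the ruby process id from your process list; the flags shown are standard strace options, not anything LiteSpeed-specific):

```shell
# Attach to a running ruby/lsapi child, follow forked children,
# and print timestamps; the last few syscalls before it blocks
# or exits show where it hangs or dies.
strace -f -tt -p PID
```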
  8. mistwang

    mistwang LiteSpeed Staff

    LiteSpeed has the most polished built-in process manager for sure. :)
  9. andreas

    andreas Well-Known Member

    No, same problem.

    2008-01-17 15:40:50.565 [NOTICE] [idle] [LSAPI:STDERR]: /opt/lsws/fcgi-bin/RailsRunner.rb:23:in `accept'
    2008-01-17 15:40:50.565 [NOTICE] [idle] [LSAPI:STDERR]: : 
    2008-01-17 15:40:50.565 [NOTICE] [idle] [LSAPI:STDERR]: SIGUSR1
    2008-01-17 15:40:50.565 [NOTICE] [idle] [LSAPI:STDERR]:  (
    2008-01-17 15:40:50.565 [NOTICE] [idle] [LSAPI:STDERR]: SignalException
    2008-01-17 15:40:50.565 [NOTICE] [idle] [LSAPI:STDERR]: )
    2008-01-17 15:40:50.565 [NOTICE] [idle] [LSAPI:STDERR]:         from /opt/lsws/fcgi-bin/RailsRunner.rb:23
    2008-01-17 15:40:50.574 [INFO] [idle] connection to [uds://tmp/lshttpd/xyz:_.sock] on request #1, error: Connection reset by peer!
  10. andreas

    andreas Well-Known Member

    There is still something wrong. I have tried setting everything so that all Rails processes are kept forever (no idle timeout, no max requests, etc.), but I have the impression that new processes are created much too often. When I look at the output of top there are always a few ruby processes with time << 1s.
  11. fantasydreaming

    fantasydreaming Well-Known Member

    Me too.

    I was told long ago that it was the "Dynamic" half of the processes - the "max" you set is always there, and an additional number equal to that is spawned as necessary.

    But in my experience they're almost always spawned, and they only stay around for about ~6 seconds. At least 2 "temporary" ones are present about 60% of the time when I check at roughly 1-second intervals. My server load is at .4, and there's no waiting in the queue when I check on the website - the 4 "base" processes should really be enough to handle it...

    This is annoying because even though the memory is copy-on-write shared from the parent process, Rails still writes a bunch of process-unique junk each time, so the memory gets dirtied pretty quickly. If you're at all short on memory this means thrashing and really low performance.
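    The copy-on-write erosion can be sketched roughly like this (a hedged illustration, not Rails internals; note that on Ruby 1.8's mark-and-sweep GC, even a GC pass writes mark bits into otherwise-shared pages):

```ruby
# After fork, parent and child share memory pages until one of them
# writes. Reads keep pages shared; writes force the kernel to copy
# the touched pages, growing the child's private memory.
preloaded = Array.new(1_000) { "framework data" } # built before fork

pid = fork do
  # Reading alone would keep these pages shared with the parent...
  raise unless preloaded.first == "framework data"
  # ...but mutating the strings dirties the pages they live on,
  # giving this child its own copies.
  preloaded.each { |s| s << " (dirtied)" }
  exit!(0)
end
Process.wait(pid)
```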

    I'd thought this was more fixed in the past, but perhaps not. Or perhaps the most recent patch reintroduced problems; it's hard to tell...
