Increasing PHP suEXEC Max Conn

Discussion in 'General' started by semprot, Mar 17, 2014.

  1. semprot

    semprot Well-Known Member

    I experienced a lot of 503 errors during peak hours, even though I still had plenty of free RAM.
    I increased PHP suEXEC Max Conn from 25 to 80; the 503 errors dropped by about 98% and the server became fast.

    However, I have some questions.
    • What are the consequences of increasing PHP suEXEC Max Conn, if any? For example, does it require more RAM?
    • What other config settings should I edit after increasing PHP suEXEC Max Conn, if any?

    Thank you.
  2. NiteWave

    NiteWave Administrator

    Yes, more RAM is required.

    If the server has only one user account and one website, you can use non-suEXEC mode; it performs better than suEXEC.
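A rough way to sanity-check the RAM cost: in the worst case, every suEXEC connection slot can be backed by its own lsphp5 process, so the extra memory is roughly (new Max Conn − old Max Conn) × the average resident size of one lsphp5 process. A minimal sketch, assuming a hypothetical 55 MB per process (measure your own, e.g. with `ps -C lsphp5 -o rss=`):

```python
# Worst-case RAM estimate for raising PHP suEXEC Max Conn.
# per_proc_mb is a made-up illustrative figure; measure your own, e.g.:
#   ps -C lsphp5 -o rss= | awk '{s+=$1; n++} END {print s/n/1024 " MB avg"}'

def extra_ram_mb(old_max_conn: int, new_max_conn: int, per_proc_mb: float) -> float:
    """Worst-case additional RAM if every new connection slot spawns a process."""
    return (new_max_conn - old_max_conn) * per_proc_mb

# Going from 25 to 80 connections at ~55 MB per lsphp5 process:
print(extra_ram_mb(25, 80, 55.0))  # 3025.0, i.e. about 3 GB extra, worst case
```

This is only an upper bound: as the thread shows later, the actual pool usually stays well below Max Conn unless traffic keeps all slots busy.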
  3. semprot

    semprot Well-Known Member

    How high is too high?
    If I see a lot of idle lsphp5 processes in "top -c", does that mean I have set too many suEXEC connections?

    Thank you.
  4. NiteWave

    NiteWave Administrator

    LSWS web admin -> Actions -> Real-Time Stats

    During peak hours, monitor the
    External Application -> lsphp5
    row, especially the "WaitQ" column. If it stays non-zero for a long time, increase "PHP suEXEC Max Conn".
    If not, there is no need to increase it, and you can even consider decreasing it.
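Besides the web admin page, LiteSpeed also writes these statistics to a plain-text report file (commonly /tmp/lshttpd/.rtreport). The path and field names below are assumptions based on that format; verify a line from your own server before relying on this. A small sketch that extracts the wait-queue depth from such a row:

```python
# Sketch: read the lsphp5 wait-queue depth from a LiteSpeed real-time
# stats row. SAMPLE mimics the .rtreport external-app line format;
# confirm the exact field names against your own /tmp/lshttpd/.rtreport.

import re

SAMPLE = ("EXTAPP [LSAPI] [] [lsphp5]: CMAXCONN: 80, EMAXCONN: 80, "
          "POOL_SIZE: 18, INUSE_CONN: 6, IDLE_CONN: 12, "
          "WAITQUE_DEPTH: 3, REQ_PER_SEC: 42.0, TOT_REQS: 10500")

def waitq_depth(report_line: str) -> int:
    """Pull the wait-queue depth out of one external-app stats row."""
    m = re.search(r"WAITQUE_DEPTH:\s*(\d+)", report_line)
    return int(m.group(1)) if m else 0

depth = waitq_depth(SAMPLE)
print(depth)  # 3
if depth > 0:
    print("requests are queueing; consider raising PHP suEXEC Max Conn")
```

Sampling this during peak hours automates the advice above: a WaitQ that stays non-zero means connections are the bottleneck.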
  5. semprot

    semprot Well-Known Member


    Do "In Use" and "Idle" mean "in-use suEXEC connections" and "idle suEXEC connections"?
    Is it necessary to keep "Max Conn" higher than "Req/Sec"?
  6. NiteWave

    NiteWave Administrator

    Yeah, "In Use" means "in-use suEXEC connections".
    You can see that although you defined a "Max Conn" of 80, only 18 connections (Pool) are established, of which 6 are in use and 12 are idle.

    It is not necessary to keep "Max Conn" higher than "Req/Sec".
    Please tie "Max Conn" to "WaitQ", as described before.
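The reason Max Conn need not exceed Req/Sec follows from Little's Law: average concurrent connections ≈ requests per second × average time each request holds a connection. A quick sketch with illustrative numbers (not measurements from this thread):

```python
# Little's Law: average concurrency = arrival rate x average service time.
# The rates and response times below are illustrative examples only.

def needed_conns(req_per_sec: float, avg_resp_sec: float) -> float:
    """Average number of simultaneously busy connections."""
    return req_per_sec * avg_resp_sec

# 100 req/s finishing in 0.25 s each keeps only ~25 connections busy,
# so a Max Conn of 80 is plenty even though Req/Sec (100) exceeds it.
print(needed_conns(100, 0.25))  # 25.0

# Slow 2 s requests at the same rate would need ~200 connections:
print(needed_conns(100, 2.0))   # 200.0
```

This is why WaitQ, not Req/Sec, is the right signal: queueing appears only when the product of rate and response time outgrows the pool.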
  7. semprot

    semprot Well-Known Member

    Okay, it is peak hours now and everything seems very fast. Timeout errors still happen but are very rare (about 3 in 100), which is a big improvement.

    Before this suEXEC Max Conn increase, timeouts happened quite often during peak hours (about 40 in 100), and page loads were slow.

    However, I checked the LiteSpeed log and I see quite a lot of lines like these:

    [] connection to [uds://tmp/lshttpd/lsphp5.sock.23445] on request #14, confirmed, 0, associated process: -1, running: 0, error: Connection reset by peer!

    [] Abort request processing by PID:29496, kill: 0, begin time: 10, sent time: 0, req processed: 0
    [] Abort request processing by PID:29350, kill: 0, begin time: 10, sent time: 10, req processed: 1
    [] No request delivery notification has been received from LSAPI process group [-1], possible run away process.
    [] Retry with new process group.

    What do those lines mean?

    Should I care about them even though my server has improved a lot?

    Thank you
    Last edited: Mar 18, 2014
  8. semprot

    semprot Well-Known Member

    Something strange just happened. I had set Max Conn to 480.

    Due to a temporary traffic spike, Pool reached Max Conn (480), and WaitQ was around 1000.

    So I set Max Conn to higher numbers such as 1024 and 600.
    But when I look at the real-time stats, Max Conn was back to 5.
    Is there a hardcoded limit on Max Conn?

    Thank you.
  9. NiteWave

    NiteWave Administrator

    It's not good to set Max Conn too high.
    When the WaitQ drains quickly (does not pile up), it's not a big problem.
    When the WaitQ keeps growing, increasing Max Conn alone may not resolve the problem;
    you should try to identify the bottleneck at that point. For example, you can check MySQL status with "mysqladmin processlist".

    If the bottleneck is MySQL, the WaitQ will keep growing no matter how many connections you allow.
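To act on the "mysqladmin processlist" advice, look for Query rows with a large Time value. A minimal sketch that flags long-running queries; the sample table mimics the usual mysqladmin output layout, but verify it against your own server's output:

```python
# Sketch: flag long-running MySQL queries in `mysqladmin processlist`
# output. SAMPLE imitates the standard table layout; the rows are
# invented for illustration.

SAMPLE = """\
+----+------+-----------+------+---------+------+--------------+------------------+
| Id | User | Host      | db   | Command | Time | State        | Info             |
+----+------+-----------+------+---------+------+--------------+------------------+
| 12 | web  | localhost | shop | Query   | 0    | Sending data | SELECT 1         |
| 13 | web  | localhost | shop | Query   | 45   | Sending data | SELECT SLEEP(60) |
| 14 | web  | localhost | shop | Sleep   | 120  |              |                  |
+----+------+-----------+------+---------+------+--------------+------------------+
"""

def slow_queries(processlist: str, threshold_sec: int = 10) -> list:
    """Return thread Ids of Query rows running longer than threshold_sec."""
    slow = []
    for line in processlist.splitlines():
        cols = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cols) == 8 and cols[4] == "Query" and cols[5].isdigit():
            if int(cols[5]) > threshold_sec:
                slow.append(int(cols[0]))
    return slow

print(slow_queries(SAMPLE))  # [13] -- thread 13 has run 45 s; a likely bottleneck
```

If such rows pile up during the same windows where WaitQ grows, the database, not Max Conn, is the thing to fix.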
  10. semprot

    semprot Well-Known Member

    Thank you. What are the negative effects of setting Max Conn too high?
  11. semprot

    semprot Well-Known Member

    Is there any solution to this error?
    connection to [uds://tmp/lshttpd/lsphp5.sock.5406] on request #43, confirmed, 0, associated process: -1, running: 0, error: Connection reset by peer
