Hello, we are running LiteSpeed 2.x (also tested with 3.x) with FastCGI. Our application uses a single FastCGI script that handles AJAX responses (it actually checks with the IMAP server, which can be slow from time to time). We have set the soft and hard process limits like this:

Code:
<extProcessor>
  <type>fcgi</type>
  <name>checkmail</name>
  <address>UDS://tmp/lshttpd/fcgi/checkmail</address>
  <maxConns>30</maxConns>
  <initTimeout>30</initTimeout>
  <retryTimeout>0</retryTimeout>
  <persistConn>0</persistConn>
  <pcKeepAliveTimeout></pcKeepAliveTimeout>
  <respBuffer>1</respBuffer>
  <autoStart>1</autoStart>
  <path>/www/checkmail.fcgi</path>
  <backlog>10</backlog>
  <instances>3</instances>
  <runOnStartUp>0</runOnStartUp>
  <extMaxIdleTime>120</extMaxIdleTime>
  <priority></priority>
  <memSoftLimit>1024000000</memSoftLimit>
  <memHardLimit>2048000000</memHardLimit>
  <procSoftLimit>20</procSoftLimit>
  <procHardLimit>40</procHardLimit>
</extProcessor>

The problem is that neither the soft nor the hard process limit is enforced at peak time. We don't need this script to be ultra-fast, but since it consumes a lot of memory (it's a Perl script), running 160 of these processes is a serious problem. How do I tell lsws to never, ever run more than procHardLimit processes?

I currently work around this by setting max-lwp for the lshttpd project, so that once LiteSpeed has spawned more than 130 processes, additional forks fail with "Resource temporarily unavailable". But since this LiteSpeed server also serves other scripts, that takes the whole site down at peak time (as opposed to dragging the whole machine into swapping and I/O, so it's still an improvement).

Please let me know if you need additional information.
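For anyone wanting to reproduce the count at peak time, here is a minimal sketch of how we tally the running workers. It assumes the process name reported by ps matches the basename of the configured <path> (checkmail.fcgi); adjust the pattern for your setup.

```python
import subprocess

def count_procs(name, ps_output=None):
    """Count processes whose command name contains `name`.

    If ps_output is None, query ps directly (POSIX options:
    -e lists every process, -o comm= prints just the command name).
    Passing ps_output explicitly makes the function testable offline.
    """
    if ps_output is None:
        ps_output = subprocess.run(
            ["ps", "-e", "-o", "comm="],
            capture_output=True, text=True,
        ).stdout
    return sum(1 for line in ps_output.splitlines() if name in line)

# Example against a captured process listing:
sample = "lshttpd\ncheckmail.fcgi\ncheckmail.fcgi\nsshd\n"
print(count_procs("checkmail.fcgi", sample))  # → 2
```

On a live box you would call count_procs("checkmail.fcgi") in a loop (or from cron) and log the result, which is how we confirmed the count climbing past procHardLimit to ~160.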