CGI vs suexec daemon for servers with a large number of sites?

wanah

Well-Known Member
#1
Hello,

I'm currently running PHP in suexec daemon mode, but I have been unable to use any opcode cache: none of the existing opcode caches seem to cope with a large cache, and they also don't handle the cache filling up well, as they empty it all at once.

Nearly a year ago we invested in a server with 256 GB of RAM that we are finding very difficult to use up, as no caching scheme seems happy with anything more than 256 MB.

I'm now looking into how I could use more RAM while accelerating customers' websites. I'm thinking about dropping daemon mode and going back to CGI mode in order to have a cache per user instead of a cache shared by all users.

Does an opcode cache work with LiteSpeed's lsphp in CGI mode? I know daemon mode uses less memory, but that's not what I'm after; I'm looking for a way to get the most speed out of PHP scripts.

In CGI mode, would all PHP instances belonging to a user share that user's cache, or would each instance get its own cache? If it's a cache per instance, would the same instance at least serve multiple requests and so make use of the cache, or would caching be pointless?

I would like to be able to reserve a 128 MB opcode cache per account (across 800 users that's a maximum of 100 GB of RAM if every account used its full allowance).
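Something like this in each account's php.ini is what I have in mind (the Zend OPcache directive names are from the PHP docs; the values are only an illustration):

    ; per-account php.ini - illustrative values
    zend_extension = opcache.so
    opcache.enable = 1
    opcache.memory_consumption = 128      ; 128 MB opcode cache reserved for this account
    opcache.max_accelerated_files = 10000 ; enough slots for a typical site
    opcache.validate_timestamps = 1       ; recheck scripts so customers' edits show up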

Is this possible?

If not, would a semi-daemon mode for lsphp be an interesting idea? That is, one parent instance per user instead of one instance shared by all users, with each user's parent forking PHP children for that user only.

Would I be the only one interested in this? It would make variable caching safe, and it would make LiteSpeed compatible with opcode caching on servers with a lot of different sites (so long as they have enough memory).
 

NiteWave

Administrator
#2
Have you tried eAccelerator? It can save the opcode cache on disk instead of in shared memory, and if you put that disk storage at /dev/shm it's actually still in memory. It looks like eAccelerator can't be set to 0 shared memory size; 1 MB is the minimum, per my past experience.
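Roughly like this in php.ini, if I remember the eAccelerator directives correctly (the cache directory must be created first and be writable by PHP; values are just an example):

    extension = eaccelerator.so
    eaccelerator.enable = "1"
    eaccelerator.shm_size = "1"                       ; minimum shared memory, about 1 MB
    eaccelerator.shm_only = "0"                       ; allow caching to cache_dir, not only shm
    eaccelerator.cache_dir = "/dev/shm/eaccelerator"  ; "disk" cache that really lives in RAM (tmpfs)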

APC / XCache / Zend OPcache all have their own SHM logic; while it may be very fast when there is no issue, it can cause the problems you described. I'm curious too why they don't use disk storage in /dev/shm, so they could drop the SHM logic entirely and just rely on the OS's file system. That SHM logic is essentially doing the same job as the memory management in the OS, and it is very complex; an application can't handle all the various cases as well as the OS does.
 

wanah

Well-Known Member
#3
Last time I tried eAccelerator everything crashed, and I had to comment out the eAccelerator lines in php.ini to get things back up and running again.

I haven't had time yet to find out why it crashed.

Thanks
 

wanah

Well-Known Member
#4
I believe they don't use /dev/shm because tmpfs is supposed to be slower than direct shared-memory access, since every read goes through the file-system layer. But I will definitely give eAccelerator another go some time soon with everything on /dev/shm, once I've worked out why it crashes PHP with the default settings.
 

wanah

Well-Known Member
#5
Sounds like 4.2.5 has just solved this whole issue with the new per-account suexec! Very exciting; I can't wait to give this a try!
 

mistwang

LiteSpeed Staff
#6
Just be careful with the shared memory allocated to each account: it could exhaust physical memory and bring the server to its knees once it starts using swap.
 

wanah

Well-Known Member
#7
I will, don't worry, but the server has 256 GB of memory, so I should be able to allocate quite a bit before swapping :)

I was thinking about allowing 128 MB per account, which over 800 accounts would be a maximum of 100 GB.

I'm also guessing that I will be able to avoid having any instances running for accounts that haven't had any visits in the past 30 minutes, for example.
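If I've understood the LSAPI tuning docs correctly, that should mostly be a matter of the idle-timeout environment variables on each account's external app, something along these lines (names from the LSAPI docs as I recall them; values illustrative):

    LSAPI_CHILDREN=10     ; max PHP children for this account (example value)
    LSAPI_MAX_IDLE=1800   ; an idle child exits after 30 minutes without a request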
 

wanah

Well-Known Member
#9
Thanks,

Is this only usable with an Apache httpd.conf? Could a user who doesn't have Apache on his server still use this?
 