eAccelerator Purges Automatically = Not Wanted

J.T.

Well-Known Member
#1
Hi,

I have eAccelerator working on PHP 5.2.13.

I can see in the control.php admin that it is caching scripts as intended.

Here are my settings:

Code:
extension="eaccelerator.so"
eaccelerator.shm_size="32"
eaccelerator.cache_dir="/tmp/eaccelerator"
eaccelerator.enable="1"
eaccelerator.optimizer="1"
eaccelerator.check_mtime="1"
eaccelerator.debug="0"
eaccelerator.log_file="/opt/lsws/eaccelerator.log"
eaccelerator.filter=""
eaccelerator.shm_max="0"
eaccelerator.shm_ttl="0"
eaccelerator.shm_prune_period="0"
eaccelerator.shm_only="0"
eaccelerator.compress="1"
eaccelerator.compress_level="9"
eaccelerator.allowed_admin_path="/path/to/control.php"
Unfortunately, after just a few minutes of inactivity, eAccelerator purges the cached scripts automatically, and it doesn't log anything. According to my settings, it shouldn't purge anything, certainly not when the cache isn't even full (which it isn't). Nothing is ever written to /tmp; /tmp/eaccelerator doesn't get created, so perhaps I need to create it myself.

I don't get why it would purge the cached scripts. Perhaps Linux is clearing the memory? But top shows plenty of spare memory left, so this seems unlikely.

Reading eAccelerator's settings documentation, the scripts should remain cached, not disappear after a few minutes (literally about a minute or two).

What could be causing this unwanted behaviour?
 

NiteWave

Administrator
#2
Your settings look fine.

Reading eAccelerator's settings documentation, the scripts should remain cached, not disappear after a few minutes (literally about a minute or two).
Please check whether the lsphp5 process is still there when you find the opcode cache has disappeared. The shared memory will be freed when the lsphp5 process ends.
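A quick way to run that check might look like this (a sketch; the process name `lsphp` and the use of SysV shared memory are assumptions based on this thread, so adjust to your setup):

```shell
# List any surviving lsphp processes with their elapsed running time.
# The [l] bracket trick keeps grep from matching its own process entry.
ps -eo pid,etime,comm | grep '[l]sphp' || echo "no lsphp processes running"

# If eAccelerator is using SysV shared memory, its segment should
# show up here while an lsphp process still holds it.
command -v ipcs >/dev/null && ipcs -m || true
```

If the lsphp5 processes are gone after the idle period, the SHM cache going with them is expected.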
 

J.T.

Well-Known Member
#3
Please check whether the lsphp5 process is still there when you find the opcode cache has disappeared. The shared memory will be freed when the lsphp5 process ends.
You mean the corresponding .sock.N file in /tmp/lshttpd?

Right now I'm setting up this server, so there's hardly any traffic. I guess if LSWS kills the socket/process after inactivity by design, then this purging is expected behaviour, which should go away once we get traffic here. Is that right?

How long does a socket/process hang around for? Is this a setting I can tweak?

Thanks for your reply.
 

J.T.

Well-Known Member
#6
Thanks for the responses. I'll look into that.

I didn't see much in the official docs about the cache dir. For now, I've created it with owner nobody:nobody (like lshttpd) and chmod 710.
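For reference, creating the cache directory by hand might look like this (a sketch using the path, owner, and mode from this thread; eAccelerator does not create the directory itself):

```shell
# Cache dir from the php.ini settings earlier in the thread.
CACHE_DIR=/tmp/eaccelerator
mkdir -p "$CACHE_DIR"
chmod 710 "$CACHE_DIR"
# Hand it to the web server user (nobody:nobody here, matching lshttpd).
# chown needs root, so don't fail the script if we aren't root.
chown nobody:nobody "$CACHE_DIR" 2>/dev/null || echo "re-run as root to chown"
```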

One more eAccelerator related question.

I have a bunch of separate VHs running off the same PHP version.

For example:

VH www.site1.com
VH www.siteB.net

When I put the control.php file in the docroot or a subfolder of www.site1.com, I don't see any cached scripts/files from www.siteB.net, and vice versa. It seems to recognise only scripts within the current VH. Is there a way to use control.php to get a good overview of everything that is cached? Sure, I could copy control.php to each site, but I'd like an overall view, like APC offered me on my previous Apache server.

I now also wonder: is the 32MB I allocated for the whole PHP instance or per VH, considering the reporting is per VH?

Thanks again for your insights.
 

NiteWave

Administrator
#7
It seems to recognise only those scripts within the current VH. Is there a way to use control.php to get a good overview of everything that is cached?
You may be running PHP in suEXEC mode, where each VH runs its own lsphp process.
What control.php reports is the cached PHP opcode in the SHM (shared memory) allocated to the current lsphp process. That is, in PHP suEXEC mode, each lsphp process has its own 32M (in your case) of shared memory.

If PHP suEXEC is not enabled, lsphp processes run as nobody/nobody (the global user/group), and you can see all cached opcode in one place. This applies to a VPS or 1-CPU license.

For a 2-CPU license, there are 2 groups of lsphp processes, and each group shares one segment of SHM to store opcode. The same applies to 4/8-CPU licenses.
 
#8
I am having a similar issue. I have set the Max Idle Time to -1.

This is a test server so we also have minimal traffic on it.

We are running LiteSpeed with Plesk. We need to be able to use the FastCGI option per vhost to keep users chrooted in their home directories.

When I set the vhost (in Plesk) to use the 'Apache module', eAccelerator (and APC) work fine; when I switch to FastCGI mode (in Plesk), it appears to dump the cache after about a minute of inactivity. That seems to negate the point of the caching. Is that correct? Are there any suggestions?
 

NiteWave

Administrator
#9
Only eAccelerator stores opcode in both SHM and on disk. APC and XCache store the opcode cache only in SHM (shared memory), and the cache in SHM will disappear when the associated PHP process terminates.

In PHP suEXEC mode (or the per-vhost FastCGI option in Plesk), there is another setting, "PHP suEXEC Max Conn", whose default is 5. Assuming all 5 lsphp processes are running for a vhost, there will be 5 separate shared-memory opcode caches, and each SHM segment can't be shared between the 5 lsphp processes, which is wasteful.

My recommendation for opcode caching in PHP suEXEC mode:
1. use eAccelerator
2. eaccelerator.shm_size="1"
3. eaccelerator.cache_dir="/dev/shm/eA"
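Put together as a php.ini fragment, that recommendation would look something like this (the `shm_only` line is implied rather than stated in the post, and the cache directory still has to be created and made writable separately):

```ini
; Tiny per-process SHM segment; the bulk of the cache lives on
; tmpfs via the cache_dir, so it survives lsphp restarts.
eaccelerator.shm_size="1"
eaccelerator.cache_dir="/dev/shm/eA"
; 0 = cache to both disk and shared memory (the default).
eaccelerator.shm_only="0"
```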
 
#10
How can we ensure that lsphp5 stays alive when we already have Max Idle Time set to -1?

Is there a proper setting (in LSWS + Plesk) to limit PHP suEXEC to 1 process per vhost?

Thanks
 
#11
This is somewhat related to the topic.
What is the proper way to accomplish the following requirements:

- LSWS Enterprise
- Plesk
- 400+ domains
- PHP suExec
- Opcode Cache

Thanks.
 
#12
What is (or if there's a way) the proper setting (in LSWS + Plesk) to limit 1 PHP suExec per Vhost?
1 PHP process? That may be too low; the default is 5.

In the LSWS web admin console:

Server --> Using Apache Configuration File -->
PHP suEXEC --> Yes
PHP suEXEC Max Conn --> 1

How can we ensure that lsphp5 stay alive when we already have Max Idle Time set to -1?
In your case there would be 400 lsphp5 processes kept running even with no access for a long time, so I think this setting is not effective in PHP suEXEC mode.
 
#13
Does that mean that any opcode cache would not be effective or practical given the number of domains/vhosts we have on the server? Since any opcode cache requires the process to stay alive, correct?

Thanks,

Tommy
 
#14
Since any opcode cache requires the process to stay alive, correct?
Yes, for opcode cache in SHM (shared memory). But eAccelerator stores the opcode cache on disk too, and the cache on disk stays on disk.

However, it's possible for a vhost to stay busy so that its associated lsphp processes keep running for a very long time. Only in that case will the vhost take advantage of SHM and feel fast.
 
#15
This is interesting. So if I have 500+ hosts with shm_size="1", that is 500+ MB of RAM used right off the bat?

I have eaccelerator.cache_dir="/dev/shm/eA" specified, but /dev/shm/eA is empty. When I place a control.php in a client account, it does indeed show the 1MB fully utilized. Does this mean it is all in memory and nothing is on disk?
 
#16
So if I have 500+ hosts with shm_size="1", that is 500+ MB of RAM used right off the bat?
Yes, though it's better than 32M x 500 = ... But these 500 vhosts may not all run at the same time. Check pstree or ps to see how many lsphp processes are running in suEXEC mode; each lsphp process consumes its own 1M for opcode cache.
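A sketch of that check (assuming the binary shows up as `lsphp` in the process table, as in this thread):

```shell
# Count running lsphp processes; with shm_size="1" each holds roughly
# 1 MB of opcode SHM, so the count approximates total cache memory in MB.
count=$(ps -eo comm= | grep -c 'lsphp' || true)
echo "lsphp processes running: $count (about ${count} MB of opcode SHM at shm_size=1)"
```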

I have eaccelerator.cache_dir="/dev/shm/eA" specified but /dev/shm/eA is empty. When I place a control.php in a client account, it does indeed show the 1mb fully utilized. Does this mean it is all in memory and nothing is on disk?
1. Check /dev/shm/eA's permissions; set them to 777.
2. Set eaccelerator.shm_only=0.

eaccelerator.shm_only
Enables or disables caching of compiled scripts on disk. It has no effect on session data and content caching. The default value is "0", which means: use both disk and shared memory for caching.
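The permission fix from point 1 as a sketch (777 is what the suggestion calls for so that every suEXEC user's lsphp process can write cache files; whether that is acceptable on a shared server is your call):

```shell
# Create the tmpfs-backed cache dir and open it up so each
# suExec user's lsphp process can write its cache files there.
mkdir -p /dev/shm/eA
chmod 777 /dev/shm/eA
ls -ld /dev/shm/eA
```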
 
#17
Hmm, I'm thinking that since most don't run at the same time, why not set it to 2MB, which would give better performance for bloated scripts like WordPress and Joomla?

Isn't setting it to 777 a security risk?

Yes, though it's better than 32M x 500 = ... But these 500 vhosts may not all run at the same time. Check pstree or ps to see how many lsphp processes are running in suEXEC mode; each lsphp process consumes its own 1M for opcode cache.


1. Check /dev/shm/eA's permissions; set them to 777.
2. Set eaccelerator.shm_only=0.
 