Capistrano Maintenance Page Not Being Displayed

#1
I'm having trouble getting LiteSpeed to properly display the maintenance page that Capistrano creates while performing an update. The 404.html page that is found in my RAILS_ROOT/public directory is always served unless I specifically make a request for /system/maintenance.html.

Here is what I have for Rewrite Rules for my Virtual Host:
Code:
RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f
RewriteCond %{SCRIPT_FILENAME} !maintenance.html
RewriteRule ^.*$ /system/maintenance.html [L]
Here is the logging info for a request to the Virtual Host:
Code:
[REWRITE] Rule: Match '/' with pattern '^.*$', result: 1
[REWRITE] Cond: test '/var/www/rails_project/releases/20060806174920/public/system/maintenance.html' with pattern '-f', result: 0
[REWRITE] Cond: Match '/var/www/rails_project/releases/20060806174920/public/' with pattern 'maintenance.html', result: -1
[REWRITE] Source URI: '/' => Result URI: '/system/maintenance.html'
[REWRITE] Last Rule, stop!
As you can see, things appear to be working correctly, but the maintenance.html page is still not displayed. I have verified that the Allow Override HTAccess option on my Virtual Host's General tab is set to "N/A". I have also commented out every line in the .htaccess file in RAILS_ROOT/public, thinking it might still be getting in the way.

Does anyone have any ideas what might be going on here? Thanks in advance!

--
DeLynn Berry
delynn@gmail.com
http://delynnberry.com
 

mistwang

LiteSpeed Staff
#2
That's strange.

Is "File not found" logged after the rewrite log entries in the error.log? You should be able to verify whether that file exists or not.

It does look like the rewrite rule has been executed properly. If you can get to the maintenance page with "/system/maintenance.html", then you should get that page when you access "/".

I think maybe Capistrano replaced the document root during the update, so "/system/maintenance.html" no longer exists. In a manually crafted test environment, however, it would work fine.
 
#3
mistwang said:
That's strange.
Strange indeed. I double-checked both the VHOST log file and the server error log file, and in both cases the last entry when making a request to "/" is:
Code:
[REWRITE] Last Rule, stop!
mistwang said:
I think maybe Capistrano replaced the document root during the update, so "/system/maintenance.html" no longer exists. In a manually crafted test environment, however, it would work fine.
I'm not aware of Capistrano doing anything to change the document root, but if I get some time tomorrow, I'll look into it. I know that "/system/maintenance.html" does exist (both physically and via that URI). To troubleshoot this issue, I've manually run the "disable_web" Capistrano task to ensure that the file stays in place until I manually run the "enable_web" task. What has me so puzzled is that the RAILS_ROOT/public/404.html file is being displayed instead of a more generic LiteSpeed 404 or 500 page.
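
For reference, those two tasks boil down to roughly the following (a sketch of the Capistrano 1.x behavior, not the library's actual code; the deploy recipe symlinks public/system to shared/system, which is why the file survives deploys):
Code:
desc "Put up the maintenance page"
task :disable_web, :roles => :web do
  # public/system is a symlink into shared/, so this file persists
  # across deploys until enable_web removes it. The real task renders
  # an ERB template; a plain HTML file is assumed here for simplicity.
  put File.read("maintenance.html"), "#{shared_path}/system/maintenance.html", :mode => 0644
end

desc "Take down the maintenance page"
task :enable_web, :roles => :web do
  run "rm -f #{shared_path}/system/maintenance.html"
end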

Just as an experiment, I renamed the RAILS_ROOT/public/404.html and 500.html files and now I'm getting the following:
Code:
Status: 500 Internal Server Error Content-Type: text/html
Application error (Rails)
Color me stumped.

--
DeLynn Berry
delynn@gmail.com
http://delynnberry.com
 

mistwang

LiteSpeed Staff
#4
delynn said:
Just as an experiment, I renamed the RAILS_ROOT/public/404.html and 500.html files and now I'm getting the following:
Code:
Status: 500 Internal Server Error Content-Type: text/html
Application error (Rails)

Color me stumped.
Looks like Rails did that, as LiteSpeed has no idea about using 404.html and will not produce a 500 response.

Are you using a rewrite rule or a 404 handler to dispatch requests to the Rails dispatcher?
I guess the rewritten URL "/system/maintenance.html" is being redirected to the Rails dispatcher instead of being served by the web server itself. You can turn on debug logging to find out.
Also, you can try changing the rewrite rule from
Code:
RewriteRule ^.*$ /system/maintenance.html [L]
to
Code:
RewriteRule ^.*$ /system/maintenance.html [L,R]
to perform an external redirect; maybe that will help.
 
#5
mistwang, thanks for all your help so far. I may be on to something here on my end though.

I pulled all the Rewrite Rules out of my LiteSpeed Virtual Host configuration, and since I'm still seeing the same results, I'm beginning to think that the problem must have something to do with my setup. I failed to mention last night that I'm proxying all my requests through a Load Balancer to a cluster of Mongrel processes. Might this be the cause of my problem?

As an aside, while looking through the LiteSpeed Wiki this morning, I noticed the article on LSAPI. Since I only just started playing around with LiteSpeed and have been happy with Mongrel, I made note of the article but decided to hold off implementing it. I'm beginning to wonder, though, whether ditching Mongrel and going with LSAPI might be a better solution. My first question with regard to LSAPI is: how easy is it to scale LSAPI up from one listener on one box to multiple listeners on one or more machines?

Thanks again for your help and input.

--
DeLynn Berry
delynn@gmail.com
http://delynnberry.com
 

mistwang

LiteSpeed Staff
#6
OK, that's clearer now. The request must be forwarded to Mongrel; when a request is forwarded through the proxy, the original request URL is used instead of the rewritten URL.

Using LSAPI will fix this problem for sure, and it is better than proxying to Mongrel as well. At least for a single-machine setup, there is no need to bother with a Mongrel cluster and load balancing, and performance should be better.

To scale beyond one server, I recommend installing LSWS + LSAPI on each cluster node and having an instance of LSWS load balance across those nodes. This configuration should be faster than a Mongrel cluster setup.
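
From Capistrano's side, such a cluster would just be a list of roles, something like this (a sketch; the hostnames are placeholders, not from this thread):
Code:
# deploy.rb -- illustrative cluster layout; hostnames are placeholders
role :web, "lb.example.com"                          # LSWS instance doing the load balancing
role :app, "node1.example.com", "node2.example.com"  # LSWS + Ruby LSAPI on each node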

In the future, we may come up with a better solution that integrates LSAPI with Capistrano, so the cluster can be easily managed via Capistrano.

We will release a dedicated load balancer soon, in case you need more than the stateless load balancer in LSWS.
 
#7
mistwang said:
OK, that's clearer now. The request must be forwarded to Mongrel; when a request is forwarded through the proxy, the original request URL is used instead of the rewritten URL.

Using LSAPI will fix this problem for sure, and it is better than proxying to Mongrel as well. At least for a single-machine setup, there is no need to bother with a Mongrel cluster and load balancing, and performance should be better.
Yeah, this makes a ton more sense. Sorry I didn't think of that last night, but it had been a long day dealing with Capistrano and I just wasn't thinking.

mistwang said:
In the future, we may come up with a better solution that integrates LSAPI with Capistrano, so the cluster can be easily managed via Capistrano.
Hmm, that answers my next question. Do you know if anyone has attempted using the Spinner/Spawner/Reaper scripts with the LSAPI dispatcher?

mistwang said:
We will release a dedicated load balancer soon, in case you need more than the stateless load balancer in LSWS.
Cool! I look forward to the release.

Thanks again for the help!

--
DeLynn Berry
delynn@gmail.com
http://delynnberry.com
 

mistwang

LiteSpeed Staff
#8
delynn said:
Hmm, that answers my next question. Do you know if anyone has attempted using the Spinner/Spawner/Reaper scripts with the LSAPI dispatcher?
Our knowledge of Capistrano is still pretty limited at the moment. If it can handle an FCGI deployment well, it can do the same thing with LSAPI Ruby, as LSAPI works in a similar way to FCGI; only the underlying protocol is different.

If LSWS + LSAPI is used on each node, you only need to find a way to let Capistrano run the command "lswsctrl restart" to apply the code changes, with NO DOWNTIME at all during the restart. :cool:
 
#9
mistwang said:
Our knowledge of Capistrano is still pretty limited at the moment. If it can handle an FCGI deployment well, it can do the same thing with LSAPI Ruby, as LSAPI works in a similar way to FCGI; only the underlying protocol is different.

If LSWS + LSAPI is used on each node, you only need to find a way to let Capistrano run the command "lswsctrl restart" to apply the code changes, with NO DOWNTIME at all during the restart. :cool:
Cool. I'll give it a shot tonight or tomorrow and report back here with my findings!
 
#11
Any chance to try it out? Any update?
Forgive me, but I totally forgot to post back here. I ended up briefly trying to get LSAPI and Capistrano to work nicely, then went on a trip last week, and have been busy trying to catch up at work so far this week.

I only played with this briefly, but what I did discover was that the Rails spawner script (which Capistrano ends up using) requires that a script to spawn new listeners be available. Apparently a standard lighttpd setup ships with a spawn-fcgi script that is capable of creating new FastCGI listeners.

When I discovered this, I meant to ask here whether or not something like this existed for LSAPI, but (as noted above) I got a little sidetracked. :)

Right now, I've got my configuration back to using the Mongrel cluster + proxy + load balancer, and that is working well. There are hooks for the Rails spawner script to specify a different dispatcher script to use, so if something like spawn-fcgi is available for LSAPI (or if spawn-fcgi could be coaxed into doing the job), I'd be more than happy to give it a shot.

--
DeLynn Berry
delynn@gmail.com
http://delynnberry.com
 

mistwang

LiteSpeed Staff
#12
No problem. :)

Actually, LSAPI can be started exactly the same way by using an FCGI spawner.

However, we are thinking about a better solution: installing LSWS + Ruby LSAPI on each node. Since the LSAPI processes are managed by LSWS directly, there is no need for the spawner and spinner anymore; just have them do nothing, or run "lswsctrl start" if LSWS can be started by a normal user. (I'm not sure whether Capistrano can execute commands as 'root' on a remote server?)

Only the Reaper needs to be changed to restart all the Ruby processes; I think the simple shell command "killall ruby" will do the trick (make sure Ruby LSAPI 1.6 is installed). Or just run "lswsctrl restart" if LSWS is started by a normal user.
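
In deploy.rb terms, those overrides might look something like this (a sketch; the task shapes are assumptions, only the commands come from this post):
Code:
# Hypothetical deploy.rb overrides for LSWS + LSAPI nodes.
task :spinner, :roles => :app do
  # Nothing to do: LSWS manages the LSAPI processes itself.
end

task :restart, :roles => :app do
  # Kill the LSAPI ruby processes; LSWS respawns them with the new code.
  # (Ruby LSAPI 1.6 is needed for this to work cleanly, per the note above.)
  sudo "killall ruby"
end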

We will release LSWS 2.2 soon, which features easy Rails configuration; it should be easier than Mongrel, at least for a single-server setup.

LSWS + Ruby LSAPI should give better performance on dynamic pages, as well as on pages cached in the file system, than running Mongrel on each cluster node, I believe. :)

Any comments? Concerns?
 
#13
mistwang said:
Actually, LSAPI can be started exactly the same way by using an FCGI spawner.
I was wondering whether that might be the case. I just didn't get a chance to find, install, and configure the spawn-fcgi script.

mistwang said:
However, we are thinking about a better solution: installing LSWS + Ruby LSAPI on each node. Since the LSAPI processes are managed by LSWS directly, there is no need for the spawner and spinner anymore; just have them do nothing, or run "lswsctrl start" if LSWS can be started by a normal user. (I'm not sure whether Capistrano can execute commands as 'root' on a remote server?)

Only the Reaper needs to be changed to restart all the Ruby processes; I think the simple shell command "killall ruby" will do the trick (make sure Ruby LSAPI 1.6 is installed). Or just run "lswsctrl restart" if LSWS is started by a normal user.

We will release LSWS 2.2 soon, which features easy Rails configuration; it should be easier than Mongrel, at least for a single-server setup.

LSWS + Ruby LSAPI should give better performance on dynamic pages, as well as on pages cached in the file system, than running Mongrel on each cluster node, I believe. :)

Any comments? Concerns?
Capistrano does have the capability to execute commands as a sudo user, so talking to lswsctrl shouldn't be a problem.

I'm excited to see the new features in 2.2. It sounds like there are going to be some good Rails-related additions to the platform (which I think is a very smart move for LSWS). That said, I'm pretty happy with the configuration I have right now, especially considering I'm still pretty much in development/testing mode. I think I'll keep things as they are for now, play around again when 2.2 is released, and re-evaluate then. From the sounds of it, though, I'll be moving to LSAPI soon!

Thanks for all your help and information!

--
DeLynn Berry
delynn@gmail.com
http://delynnberry.com
 
#14
Just wanted to let you know that I was able to get everything (including Capistrano and its maintenance page) working with the 2.2 release.

For reference, all I had to do was create my own restart task in deploy.rb and have it call the following:
Code:
sudo "/usr/local/lsws/bin/lswsctrl restart"
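In full, the task in deploy.rb is only a few lines (the task name and role below are illustrative):
Code:
# deploy.rb -- override Capistrano's restart to bounce LiteSpeed instead
# of the FCGI/Mongrel processes; lswsctrl restarts LSWS gracefully.
desc "Restart LiteSpeed so the new release is picked up"
task :restart, :roles => :app do
  sudo "/usr/local/lsws/bin/lswsctrl restart"
end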
Then in my Virtual Host I added the following rewrite rules to get the maintenance page to be served:
Code:
RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f
RewriteCond %{SCRIPT_FILENAME} !maintenance.html
RewriteRule ^.*$ /system/maintenance.html [L]
I've discovered an odd permissions problem when the application tries to create a cached page, but I figured I'd post the details in a new thread.

Keep up the great work guys!

--
DeLynn Berry
delynn@gmail.com
http://delynnberry.com
 
#15
delynn said:
I've discovered an odd permissions problem when the application tries to create a cached page, but I figured I'd post the details in a new thread.
Actually, the permissions issue had to do with me not correctly setting group write access on the public directory, so my application wasn't able to write out cache files. D'oh! :rolleyes:

Everything is working great!

--
DeLynn Berry
delynn@gmail.com
http://delynnberry.com
 