Is 512 MB RAM on a vserver enough?

#1
Hi,
I have a Rails application running on a vserver with the following configuration:
- Linux server running SUSE Linux Enterprise Server 10
- Shared 3.4 GHz Intel Xeon processor
- dedicated 512 MB RAM

Now I am experiencing some problems with my Rails app. On certain requests the app crashes my vserver. The server recovers after some time, around half an hour or so, I'm not sure.

I am going to try raising the memory limits; they are at the defaults now, which already looked pretty high to me. That leads to my question: does my server have enough resources to run a typical Rails application?
Thanks and Regards,
Onno
 
#3
It should be enough

Okay, I didn't know Rails apps were CPU- and memory-hungry, but thinking about it, this should be more than enough. I think something is going horribly wrong with LSWS on this server on one specific request.

I raised the memory limits:
- Server Memory I/O Buffer: 300M (was 120M)
- Ruby Rails memory soft limit: 350M (was 250M)
- Ruby Rails memory hard limit: 400M (was 300M)

The same result: the server dies, the web server doesn't respond, my PuTTY session stops responding, and I can no longer connect to the server.

The Rails app I'm deploying was previously hosted at DreamHost, where it easily handled 560 unique visitors and almost 20,000 hits per day, with 256 MB of dedicated memory.

On this new server I'm only testing, so with just one user and one single request I can crash the server. I'm pretty sure the root cause is not the resource limits. The RailsRunner.rb Ruby process runs out of control, and that causes the crash.

There is a lot I like about LSWS, but this is a showstopper. I have Apache2 on this machine; I stopped and disabled its service, but now it looks like I have to fall back to Apache with FCGI on this machine. I don't want to do this, as I think it is hard to configure correctly, at least for me.

Any other suggestions?

At one point I was able to strace the RailsRunner.rb Ruby process. It continuously repeats a select() timeout and a kill() that returns an error. It seems to be trying to do something that is not succeeding.
epf:/home/user # strace -p 17556
Process 17556 attached - interrupt to quit
select(1, [0], NULL, NULL, {0, 276000}) = 0 (Timeout)
kill(17555, SIG_0) = -1 EPERM (Operation not permitted)
select(1, [0], NULL, NULL, {1, 0}) = 0 (Timeout)
kill(17555, SIG_0) = -1 EPERM (Operation not permitted)
select(1, [0], NULL, NULL, {1, 0}) = 0 (Timeout)
kill(17555, SIG_0) = -1 EPERM (Operation not permitted)
select(1, [0], NULL, NULL, {1, 0}) = 0 (Timeout)
kill(17555, SIG_0) = -1 EPERM (Operation not permitted)
select(1, [0], NULL, NULL, {1, 0}) = 0 (Timeout)
kill(17555, SIG_0) = -1 EPERM (Operation not permitted)

Thanks and Regards,
Onno
 

mistwang

LiteSpeed Staff
#4
The strace output is normal when the parent ruby process is idle; it forks a child process to handle each new request.
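The kill(17555, SIG_0) calls in the trace above are a standard liveness probe; a minimal sketch of the same idea (the child here is just a stand-in process, not the actual Rails child):

```ruby
# kill() with signal 0 delivers nothing; it only checks whether the
# target pid still exists. (The EPERM in the trace just means the
# tracing user was not allowed to signal that pid.)
child = fork { sleep 0.2 }   # stand-in for a forked request handler

Process.kill(0, child)       # no exception raised: the child is alive
Process.wait(child)          # reap the child after it exits

begin
  Process.kill(0, child)     # the pid is gone now
rescue Errno::ESRCH
  puts "child gone"
end
```

So the select()/kill() loop simply means the parent is sleeping and periodically checking that its peer process is still there.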

The memory limit is probably not the cause of the problem; it only prevents a ruby process from using more memory than the limit allows.

If you lose the SSH session when it happens, it is likely that the VPS itself crashed for some reason. Have you checked the kernel log?

I could take a look if I can log on to that box. We have to find out what happened to RailsRunner.rb. Start strace with "strace -f -p <ruby_pid>" to follow all child processes it creates.


You don't have a problem with Mongrel serving that request, do you?
 
#5
Problem solved

This of course was not a problem with LiteSpeed :)

For this application I'm using the Ruby interface to HTML Tidy (tidy gem version 1.1.2). I discovered that using this interface on the new server (with SUSE Linux Enterprise Server 10) causes segmentation faults.

In my code the segmentation fault occurs when I call the clean operation, tidy.clean. Below I attached part of the log file.

I was unable to pinpoint the library that is causing the problem. My best guess is a conflict between the Tidy library and the readline library. In the end I decided to use the poor man's interface: the command line.
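The "poor man's interface" could look something like the sketch below: shelling out to the tidy command-line tool instead of loading the gem's C binding in-process. The binary name and flags here are assumptions, adjust them to your installation.

```ruby
# Hypothetical sketch: clean HTML by running the tidy CLI in a child
# process, so a crash in tidy cannot take down the Rails runner.
require 'open3'

TIDY_BIN = 'tidy'  # assumed path; use the full path on your server

def tidy_argv(bin = TIDY_BIN)
  # -q suppresses the info banner, -utf8 sets I/O encoding,
  # --force-output yes emits cleaned HTML even when tidy reports warnings
  [bin, '-q', '-utf8', '--force-output', 'yes']
end

def tidy_clean(html, bin = TIDY_BIN)
  out, _err, _status = Open3.capture3(*tidy_argv(bin), stdin_data: html)
  out
end
```

The nice side effect is isolation: if the external tidy process segfaults, only that child dies and the caller just sees empty output, instead of the whole Ruby process going down.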

BTW, LiteSpeed does recover from the segmentation fault, although it takes some time, around 15 minutes I think.

#0:/usr/lib/ruby/1.8/dl/struct.rb:56:DL::Importable::Internal::Memory:-: return @ptr
#0:/usr/lib/ruby/1.8/dl/struct.rb:57:DL::Importable::Internal::Memory:<: end
(eval):5: [BUG] Segmentation fault
ruby 1.8.6 (2007-03-13) [x86_64-linux]
 

mistwang

LiteSpeed Staff
#6
Does it happen with Mongrel as well? I think it should.

It still puzzles me how the segmentation fault can bring down the whole VPS; as you said, even SSH stops responding. It must have something to do with the VPS software.

As far as I know, there are a few bugs in Ruby related to fork(): a process forks and tries to execute a shell command by calling exec(), and if exec() fails, the child process is not shut down properly, which causes trouble.
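The failure mode described above can be sketched in a few lines; the command path is deliberately nonexistent to force the exec() failure:

```ruby
# If exec() fails inside a forked child, the child must exit explicitly;
# otherwise a second copy of the parent process keeps running.
pid = fork do
  begin
    exec('/no/such/command')  # raises Errno::ENOENT: binary is missing
  rescue SystemCallError
    exit!(127)                # exit! skips at_exit hooks in the child
  end
end
Process.wait(pid)
puts $?.exitstatus            # 127
```

Without the rescue/exit! pair, the child would fall out of the block and continue as a clone of the parent, which matches the "child process was not properly shut down" symptom.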
 
#7
Although I did not test it with Mongrel, I'm sure the result would be the same. I tested it from the Ruby console, and there I also see the segmentation fault.

On closer inspection, the problem is that the HTML file I'm trying to clean grows until the VPS crashes. There is no HTML in the file, just lines of garbage repeated over and over again. I can read the package name glibc-2.4 in this garbage, so maybe there is a conflict between the Ruby Tidy interface and glibc. The file is 22 MB!
 