Best performance - which technique?
We have a mid-size script that does several DB queries and some computations.
We are trying to achieve the highest performance (requests/sec) possible.
Which technique should we choose? Can you recommend something?
- PHP with Turck MMCache?
- some kind of PHP script compilation / opcode caching technique?
- PHP via FastCGI?
- Perl, Python, or Java?
- OR C/C++ FastCGI? (which we think is eBay's platform)
Any feedback greatly appreciated!
I think C/C++ FastCGI always gives the best performance. How big a difference it makes depends on the speed of your DB queries and computations. But C/C++ FastCGI is much more difficult to code.
So it depends on what your first priority is. :-)
eBay is very big on J2EE, not FastCGI.
It depends on your hardware budget: you would have to invest more in hardware (HPC or a cluster) to achieve the same level of performance that a FastCGI application can provide on a budget server.
Again, it also depends on your application: if your DB queries take a long time to execute, then it will not make a big difference no matter what is used.
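The point above can be put in numbers with a back-of-the-envelope model (a sketch only; the 20 ms / 5 ms / 0.5 ms figures are made-up assumptions, not measurements):

```python
# Back-of-the-envelope throughput model for a single worker:
# each request costs db_ms of database time plus script_ms of script time.
def max_requests_per_sec(db_ms, script_ms):
    """Upper bound on requests/sec one worker can sustain."""
    return 1000.0 / (db_ms + script_ms)

# Hypothetical numbers: a 20 ms query with 5 ms of PHP vs 0.5 ms of C.
php_bound = max_requests_per_sec(db_ms=20, script_ms=5.0)   # 40 req/s
c_bound   = max_requests_per_sec(db_ms=20, script_ms=0.5)   # ~48.8 req/s
# A 10x faster language buys only ~22% more throughput when the DB dominates.
```

The same model shows the reverse, too: with a 1 ms query, the language choice dominates the bound.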
Thanks for your advice, guys!
We assume that our DB will do the job and will be fine-tuned.
We are just worried about server/script performance. Let's say we will have to manage 1000 requests per second.
1. We are sure that the DB will handle this.
2. We are sure that PHP won't. Apache reloads mod_php every 100 requests because of memory leaks. We don't know how this works in LSWS, which we are very impressed with.
So... does anyone know the performance factor for FastCGI C/C++ vs. FastCGI PHP?
They both use persistent connections (which is a great performance boost).
LSWS needs to handle PHP memory leaks as well; this is controlled by an environment variable of the PHP process.
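For reference, with standard PHP over FastCGI the usual knob for recycling leaky children is the `PHP_FCGI_MAX_REQUESTS` environment variable (the values below are illustrative; LiteSpeed's LSAPI has its own equivalent setting, so check the LSWS docs):

```shell
# Restart each persistent PHP child after it has served this many requests,
# so any memory leaked by PHP scripts is reclaimed (illustrative value).
export PHP_FCGI_MAX_REQUESTS=500
# Number of persistent PHP children to keep running (illustrative value).
export PHP_FCGI_CHILDREN=8
```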
For the performance factor, just check our benchmark page: http://www.litespeedtech.com/benchmark.html
All benchmark results are close to the best request rate you can get with each server API.
I know you said your DB can handle it, but I still want to reiterate the point.
Are you sure your DB can handle 1000 queries per second (assuming you want 1000 web req/s) under real-world conditions? If it can't, it really doesn't matter which scripting language you choose, since scripts are CPU-bound whereas databases are heavily I/O-bound.
If you have almost no DB writes, then the DB is fine. But if you are using non-transaction-safe tables such as MySQL's MyISAM, then even a few writes per second will absolutely destroy your performance goal due to table locks. InnoDB would be the ticket for MySQL.
If LiteSpeed+PHP+LSAPI doesn't do it for you, use an app-cluster setup and have it all connect to the same DB, provided it can spawn/maintain 1000 active query threads/processes. Or use a master/slave DB setup and have each app machine host its own SQL environment, so that long read queries don't block cluster performance.
If the queries are very redundant and the data returned does not change often, use a middle-tier cache such as memcached as a lightning-fast buffer for your DB queries. If you can scale your DB setup and its data I/O (fetches/writes), you can scale everything else. With LSWS/PHP, you can always add more machines.
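The middle-tier cache idea is the classic cache-aside pattern. A minimal sketch, assuming a plain dict as a stand-in for a memcached client and a hypothetical `slow_db_query` in place of a real query:

```python
import time

cache = {}          # stands in for a memcached client
TTL_SECONDS = 60    # hypothetical time-to-live for cached results

def slow_db_query(key):
    """Stand-in for an expensive, mostly-static DB query."""
    return f"result-for-{key}"

def cached_query(key):
    """Cache-aside: serve from cache while fresh, else hit the DB and store."""
    entry = cache.get(key)
    now = time.monotonic()
    if entry is not None and now - entry[1] < TTL_SECONDS:
        return entry[0]                # cache hit: no DB round trip
    value = slow_db_query(key)         # cache miss: go to the DB
    cache[key] = (value, now)          # store with a timestamp for expiry
    return value
```

With a real memcached client the dict get/set become network calls (and the TTL is handled server-side), but the control flow is the same.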