[IDEA] Multi user support

Discussion in 'LSWS 4.1 Release' started by gkulewski, Sep 14, 2008.

  1. gkulewski

    gkulewski New Member

    Hello,

    I would be glad to see one nice feature (mainly) for those running shared hosting environments.

    The goal is to run the whole httpd process under the UID and GID of the user owning the content, not just suEXEC and similar hacks. That way the user does not have to grant any additional permissions, and the whole setup should be much more secure.

    Implementation? I would guess: a main process running under some global user, plus a "spawner" process running as root, plus at most one process per user (which exists only while it is needed). The global process receives the connection and transparently proxies it (preferably over a UNIX socket) to the user's process (selected, for example, by virtual host or path), which the spawner starts if it does not already exist. IMHO the implementation should be fairly simple if you already have all the rest. :)
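    To make the idea concrete, here is a minimal sketch of the dispatch half (my illustration, not anyone's actual code; the socket directory and vhost map are made up):

    ```python
    import os

    SOCKET_DIR = "/var/run/peruser"  # hypothetical location, one socket per user

    class Spawner:
        """Tracks at most one worker per user and starts one on demand."""

        def __init__(self, vhost_to_user, spawn_fn):
            self.vhost_to_user = vhost_to_user   # e.g. {"example.com": "alice"}
            self.spawn_fn = spawn_fn             # called as spawn_fn(user, socket_path)
            self.running = set()                 # users with a live worker

        def socket_path(self, user):
            return os.path.join(SOCKET_DIR, user + ".sock")

        def route(self, host):
            """Return the UNIX socket to proxy this request to, spawning if needed."""
            user = self.vhost_to_user[host]
            path = self.socket_path(user)
            if user not in self.running:
                self.spawn_fn(user, path)        # a real spawner would fork + setuid here
                self.running.add(user)
            return path
    ```

    In a real implementation spawn_fn would run as root, fork(), drop privileges with setgid()/setuid(), and exec a per-user httpd bound to that socket; the front process only ever proxies bytes.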

    Also, Gentoo ebuild/overlay would be nice. And of course open sourcing the free edition would be even nicer!

    Not to mention some price reduction for the paid versions, since my hosting business in Poland is too small to afford the paid version in its current state, especially as it is spread across several VPSes on several servers. Maybe some day when it gets bigger... But I am not sure; with the level of competition on the current hosting market you have to cut prices, not raise them. Anyway, there are customers who are big enough to pay, and there are many small ones who simply don't have that much revenue. Giving them the free version for free makes your product well tested, well known, and better.

    Thanks!
    Last edited: Sep 14, 2008
  2. mistwang

    mistwang LiteSpeed Staff

    Thank you for the suggestion.

    LSWS's architecture is a single-process, event-driven, non-blocking server for maximum performance and scalability, so what you describe cannot apply. suEXEC should be good enough, as we only need to worry about the security of CGI scripts; static content can safely be served from one process.

    Our VPS pricing is very affordable, even for small hosting companies, and the benefits should be worth far more than what is paid. Our past experience tells us that offering something for free will not help much.
  3. gkulewski

    gkulewski New Member

    I am currently using a setup similar to what I described, with lighttpd (also event-driven and single-process), and it works very well. The only problems are that lighttpd does not have all the features you have, and that this setup (the proxy) is configured statically by hand, not automatically. But I am sure (I am a professional Linux system programmer, so I know what I am talking about) that it is technically possible to make the spawner automatic and transparent, and to kill processes (or let them exit) for users who are not currently needed.
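    The hand-maintained variant looks roughly like this in a lighttpd 1.4 config (hostnames and ports are made up; mod_proxy of that era spoke TCP, not UNIX sockets):

    ```
    server.modules += ( "mod_proxy" )

    # one backend httpd per user, each running under that user's own UID
    $HTTP["host"] == "alice.example.com" {
        proxy.server = ( "" => (( "host" => "127.0.0.1", "port" => 8101 )) )
    }
    $HTTP["host"] == "bob.example.com" {
        proxy.server = ( "" => (( "host" => "127.0.0.1", "port" => 8102 )) )
    }
    ```

    Every new account means another stanza like this plus another backend, which is exactly the manual work an automatic spawner would remove.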

    Also, I have tested that processing the request twice instead of once (the first pass to proxy it, the second to serve it) does not increase latency or decrease performance too much.
  4. mistwang

    mistwang LiteSpeed Staff

    What you want is a hybrid of an event-driven and a prefork server, and it has a serious scalability issue: imagine putting 2000 accounts on one server. There would be far too many processes, and forking processes frequently is not a good idea either.

    We will stick with our current architecture. On the security side, you only need to make all static content readable by the user ID that the web server runs as; you can even use role-based security. There is no need to worry about the User/Group that owns the document root of each web site.
  5. gkulewski

    gkulewski New Member

    If we have 2000 accounts active at once (during, say, a 1-2 minute window), then we are going to die even with suEXEC, and the only way to survive is to disable all such features.

    But on typical hosting we may have 2000 accounts with, say, only 20 active at any one time. If so, we can enable suEXEC, or my idea, and selectively spawn and kill processes as needed (still at most one httpd per user).
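    The kill half of that policy can be sketched as an idle reaper (again my illustration, with a made-up timeout): track each user's last request and reap workers idle longer than the timeout.

    ```python
    class IdleReaper:
        """Keeps at most one worker per user, reaping any worker idle too long."""

        def __init__(self, idle_timeout):
            self.idle_timeout = idle_timeout
            self.last_used = {}              # user -> timestamp of last request

        def touch(self, user, now):
            """Record that a request for this user arrived at time `now`."""
            self.last_used[user] = now

        def reap(self, now):
            """Return users whose workers should be killed, and forget them."""
            idle = [u for u, t in self.last_used.items()
                    if now - t > self.idle_timeout]
            for u in idle:
                del self.last_used[u]        # real code would also SIGTERM the worker
            return idle
    ```

    With 2000 accounts but ~20 active, the reaper keeps the live process count near 20 rather than 2000, which is the whole point of the proposal.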

    I don't know how big an lsws process is, but lighttpd uses < 1 MB of RAM, and PHP with an accelerator is usually between 5 MB and 50 MB. So I would say that if you can do suPHP, then you can do suHTTPD as well.
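    A rough memory budget under those figures (1 MB per per-user httpd, the midpoint of the quoted 5-50 MB PHP range, and the ~20 concurrently active users mentioned above):

    ```python
    ACTIVE_USERS = 20
    HTTPD_MB = 1          # lighttpd-sized per-user httpd (poster's figure)
    PHP_MB = 30           # midpoint of the quoted 5-50 MB range

    total_mb = ACTIVE_USERS * (HTTPD_MB + PHP_MB)
    print(total_mb)       # 620 MB: comfortably inside a 2-3 GB machine
    ```

    The extra httpd per active user adds only ~20 MB on top of what suPHP already spends on the PHP workers themselves.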

    But I accept your decision to stay with your current architecture.
  6. mistwang

    mistwang LiteSpeed Staff

    suHTTPD needs an httpd process for serving static content, and needs to start twice as many processes, even for purely dynamic PHP content, compared with suPHP. That is a pretty big scalability issue.

    I think your lighttpd solution may not be able to go beyond a couple hundred accounts.
  7. gkulewski

    gkulewski New Member

    I have several hostings:

    1. Big (about 500 users). Standard mod_php under one UID. Not very secure; security problems all the time. But it should be OK even for 1000-2000 users. Only about 5-10 users are really active at any one time, but nearly every request for static content is followed (or preceded) by some PHP request.

    2. Testing (about 50 users). A static lighttpd proxy plus a static per-UID httpd and PHP via FastCGI. A big amount of memory is wasted on processes not doing anything useful at a given time. It probably won't scale to installations bigger than 100 users (in 2-3 GB of RAM).

    But setup 2 with a dynamic as-needed spawner and killer, keeping only the needed processes alive, should be able to scale to 1000 users (given that no more than several are active at once). That's not 2000 (at least on this hardware), but no suANYTHING solution will scale better.

    In particular, I claim that suPHP scales not much better than suHTTPD (an httpd proxy + a dynamic per-UID httpd spawner/killer + PHP via LSAPI/FastCGI).

    Of course that only holds when the server is event-driven and only a more or less constant number of processes/threads runs at once (not one per request).
  8. mistwang

    mistwang LiteSpeed Staff

    That's the part I cannot agree with, for the reason I mentioned earlier: suPHP should be 200% or more as scalable as suHTTPD. It is debatable under different workloads, so I will skip that; just note that in the typical shared hosting situation, many web sites are purely static.
  9. gkulewski

    gkulewski New Member

    I am not saying it should be mandatory. It should be optional.

    I can only find one case where there is a big overhead: when a user with no spawned httpd process receives only a very small number of static HTML requests, then waits until his process gets killed, and then the cycle repeats, with no PHP requests in between. In that case we pay the fork overhead. It is not very small, but it can be reduced by keeping several spare processes pre-forked that only need to call setuid().
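    The spare-process trick can be sketched like this (my illustration; the fork callback stands in for a real fork() of a generic, not-yet-setuid worker):

    ```python
    import collections

    class SparePool:
        """Pre-forked generic workers; assigning one to a user avoids a fork
        on the request path (the worker only has to setuid and start serving)."""

        def __init__(self, size, fork_fn):
            self.fork_fn = fork_fn                   # forks one generic worker
            self.spares = collections.deque(fork_fn() for _ in range(size))

        def claim(self, user):
            """Hand a spare worker to `user`, then top the pool back up."""
            worker = self.spares.popleft()           # no fork on the hot path
            self.spares.append(self.fork_fn())       # real code would refill async
            return (user, worker)
    ```

    The fork cost is thus paid ahead of time and off the request path; the request itself only waits for a setuid().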

    But in all the real hosting loads I have ever seen, the situation was completely different. First, there was much more than one request at a time (... every hundred seconds). Second, every few requests there is some PHP request. In both cases (and especially in both at once) the accumulated performance loss is very small.

    From my load:
    # grep -E '(GET|POST) [^ ]+\.php' /var/log/syscplogs/*access.log* | wc -l
    4193833
    # cat /var/log/syscplogs/*access.log* | wc -l
    45254231

    And that pretty much means that roughly one request in ten is a PHP one.
