====== Set Up LSWS as a Cache Reverse Proxy ======


===== Summary =====

LSWS has been able to act as a reverse proxy since version 2.0. Combined with its built-in cache, it can serve as a caching reverse proxy. This setup extends the benefits of LSCache to backends that are not necessarily running an LSWS web server, and it also puts LSWS's anti-DDoS protection in front of them.

===== Steps =====


==== 1. Create Web Server External App ====
<file>
Admin CP => Configuration => Server => External App
Type: Web Server
Name: test-proxy
Address: 10.1.2.3:80
Max Connections: 150
Initial Request Timeout (secs): 60
Retry Timeout (secs): 0
</file>
{{ http://i48.tinypic.com/11lkg9l.png?700 |Web Server type external app}}

**Note:**
  * Max Connections is per CPU core. In other words, for a 2-CPU license (the minimum requirement for the cache feature), the total Max Connections is 300 (150 x 2).
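
Before pointing LSWS at the backend, it can be worth confirming that the backend address is reachable from the LSWS machine. A minimal check using the example address above (substitute your real backend):
<file>
# Run from the LSWS server; the backend should answer with an HTTP status line.
curl -I http://10.1.2.3:80/
</file>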


==== 2. Create a vhost for the proxy service ====
<file>
Admin CP => Configuration => Virtual Hosts
Basic
=====
Base
  Virtual Host Name: proxy-vhost
  Virtual Host Root: $SERVER_ROOT/proxy/
  Config File: $VH_ROOT/conf/vhconf.xml
Connection
  Max Keep-Alive Requests: 1000
Security
  Follow Symbolic Link: No
  Enable Scripts/ExtApps: No
  Restrained: Yes
Leave the defaults for the rest.
</file>

<file>
General
=======
General
  Document Root: $VH_ROOT/html/
</file>

{{ http://i50.tinypic.com/elaf0y.png?700 |vhost general section}}
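
The vhost root, config file, and document root referenced above are expected to exist on disk. A minimal sketch of creating the directories, assuming the default server root of /usr/local/lsws (adjust the paths to your installation):
<file>
# Create the vhost root, its conf directory, and the document root
mkdir -p /usr/local/lsws/proxy/conf /usr/local/lsws/proxy/html
</file>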


==== 3. Create a vhost-level context ====
<file>
Admin CP => Configuration => Virtual Hosts => Context
Type: proxy
URI: exp: /*
Web Server: [Server Level]: test-proxy
Leave the defaults for the rest.
</file>
{{ http://i46.tinypic.com/33kb2at.png?700 |vhost level context}}
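
The context URI of exp: /* proxies every request on this vhost to the backend. If only part of the URL space should be proxied, the same proxy context type can be scoped to a prefix instead. A sketch with a hypothetical /app/ path (everything else would then be served by the vhost itself):
<file>
Admin CP => Configuration => Virtual Hosts => Context
Type: proxy
URI: /app/
Web Server: [Server Level]: test-proxy
</file>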

==== 4. Map the vhost to a listener ====

{{ http://i45.tinypic.com/35c0hgl.png?700 }}
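
The mapping is done on the listener configuration; roughly, the fields involved look like the sketch below (the domain is a placeholder for whatever hostname(s) clients will use to reach the proxy):
<file>
Admin CP => Configuration => Listeners => [your listener] => Virtual Host Mappings
Virtual Host: proxy-vhost
Domains: www.example.com
</file>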


==== 5. Enable cache for the proxy vhost ====

5a. Enable cache at the server level:
{{ http://i49.tinypic.com/2r4kgih.png?700 |enable cache at server}}

5b. Set up the cache policy at the vhost level:
{{ http://i49.tinypic.com/2i7v3h5.png?700 |vhost cache policy}}
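
The exact cache policy fields vary a bit between LSWS versions, but a typical policy for caching responses from an unmodified backend looks roughly like the sketch below. The values are illustrative only and should be tuned to the site:
<file>
Enable Cache: Yes
Cache Expire Time (seconds): 120
Cache Request with Query String: Yes
Cache Request with Cookie: Yes
Cache Response with Cookie: Yes
Ignore Request Cache-Control: Yes
Ignore Response Cache-Control: Yes
</file>
Once caching is working, repeated requests for the same URL should come back with an X-LiteSpeed-Cache: hit response header, which is easy to check with curl -I against the proxy vhost.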


==== 6. Enable Per Client Throttling for the vhost (HTTP-level anti-DDoS) ====

{{ http://i48.tinypic.com/2i0f902.png?700 |vhost per client throttling}}
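
LSWS's per-client throttling exposes fields along the following lines; the values here are placeholders and should be tuned to the expected legitimate traffic:
<file>
Static Requests/Second: 100
Dynamic Requests/Second: 20
Outbound Bandwidth (bytes/sec): 0
Inbound Bandwidth (bytes/sec): 0
Connection Soft Limit: 50
Connection Hard Limit: 100
Grace Period (sec): 15
Banned Period (sec): 300
</file>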

**Note:**

  - Some of the configuration settings (vhost name, IP addresses, etc.) shown in the screenshots are for illustration purposes only.
  - This setup has been tested on a production server running varnish+nginx+tomcat+postgresql, where it was able to withstand HTTP DDoS attacks of 20,000 requests/sec (with the LiteSpeed Advanced Anti-DDoS setup).
  - The setup above can easily be extended to proxy multiple backends, with one vhost (each with its own Web Server type context) per backend.
  - The setup above can also be extended to load balance multiple backends by creating a Load Balancer context at the vhost level that points to a Load Balancer external app (defined at the server or vhost level) with proxy::backend* as its workers; see the sketch below.
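
A rough sketch of that load-balancing variant; backend1, backend2, and backend-cluster are placeholder names, and each backendN is a Web Server type external app defined as in step 1:
<file>
Admin CP => Configuration => Server => External App
Type: Load Balancer
Name: backend-cluster
Workers: proxy::backend1, proxy::backend2

Admin CP => Configuration => Virtual Hosts => Context
Type: Load Balancer
URI: exp: /*
Load Balancer: [Server Level]: backend-cluster
</file>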