This article is part of the "Linux Operations Enterprise Architecture in Practice" series.
Today I remembered how awkward nginx's own upstream backend configuration felt back when I was studying nginx reverse proxying and load balancing. At the time I used Tengine, Taobao's fork of nginx, so let me summarize it here.
Official site: http://tengine.taobao.org/download.html
[root@along app]# wget http://tengine.taobao.org/download/tengine-2.2.3.tar.gz
[root@along app]# tar -xvf tengine-2.2.3.tar.gz
[root@along app]# groupadd nginx
[root@along app]# useradd -s /sbin/nologin -g nginx -M nginx
[root@along app]# yum -y install gc gcc gcc-c++ pcre-devel zlib-devel openssl-devel
[root@along app]# cd tengine-2.2.3/
[root@along tengine]# ./configure --user=nginx --group=nginx --prefix=/app/tengine --with-http_stub_status_module --with-http_ssl_module --with-http_gzip_static_module
[root@along tengine]# make && make install
[root@along tengine]# chown -R nginx.nginx /app/tengine
[root@along tengine]# ll /app/tengine
total 8
drwxr-xr-x 2 nginx nginx 4096 Feb 20 14:55 conf
drwxr-xr-x 2 nginx nginx   40 Feb 20 14:50 html
drwxr-xr-x 2 nginx nginx 4096 Feb 20 14:50 include
drwxr-xr-x 2 nginx nginx    6 Feb 20 14:50 logs
drwxr-xr-x 2 nginx nginx    6 Feb 20 14:50 modules
drwxr-xr-x 2 nginx nginx   35 Feb 20 14:50 sbin
Note: set up a systemd unit file so the service can be managed with systemctl:
[root@along nginx]# vim /usr/lib/systemd/system/nginx.service
[Unit]
Description=nginx - high performance web server
Documentation=http://nginx.org/en/docs/
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/app/tengine/logs/nginx.pid
ExecStartPre=/app/tengine/sbin/nginx -t -c /app/tengine/conf/nginx.conf
ExecStart=/app/tengine/sbin/nginx -c /app/tengine/conf/nginx.conf
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target
[root@along ~]# systemctl start nginx
[root@along ~]# ss -nutlp |grep 80
tcp  LISTEN  0  128  *:80  *:*  users:(("nginx",pid=4933,fd=6),("nginx",pid=4932,fd=6))
Verify by visiting the page in a browser.
Since Tengine's other features are configured much like nginx's, I will not demonstrate them here; the focus is the reverse proxy configuration, which I find more convenient.
Tengine's reverse-proxy configuration format is quite similar to haproxy's.
Prepare a web service (nginx, httpd, etc.) on each of the two backend servers beforehand.
[root@along tengine]# cd /app/tengine/conf/
[root@along conf]# vim nginx.conf
http {
    ... ...
    # Backend proxy pool; the default algorithm is round robin
    upstream srv {
        server 192.168.10.101:80;
        server 192.168.10.106:80;
        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        check_http_send "HEAD / HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }
    ... ...
    # Reverse proxy in the server's location block
    server {
        location / {
            proxy_pass http://srv;
        }
    }
    ... ...
}
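The `check` directives above probe each backend every 3000 ms with a `HEAD / HTTP/1.0` request and treat a 2xx or 3xx reply as healthy (`rise=2` consecutive successes mark it up, `fall=5` failures mark it down). The "is this reply alive?" decision can be sketched in Python (function name is mine, not Tengine's):

```python
import re

def expect_alive(status_line: str, expect=("http_2xx", "http_3xx")) -> bool:
    """Mimic check_http_expect_alive: classify an HTTP status line
    as http_Nxx and test it against the expected classes."""
    m = re.match(r"HTTP/\d\.\d (\d)\d\d", status_line)
    if not m:
        return False  # not a valid status line -> treat the backend as dead
    return f"http_{m.group(1)}xx" in expect
```

A 502 from a broken backend, for example, fails the check even though it is a syntactically valid HTTP response.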
(1) Verify the configuration
[root@along tengine]# ./sbin/nginx -t
nginx: the configuration file /app/tengine/conf/nginx.conf syntax is ok
nginx: configuration file /app/tengine/conf/nginx.conf test is successful
(2) Restart the service
[root@along tengine]# systemctl restart nginx
(3) Verify in a browser
Since the default algorithm is round robin, refreshing the page cycles requests between the two backend web servers.
Round robin is upstream's default distribution method: each request is assigned to a different backend server in turn, in order of arrival, and if a backend goes down it is automatically removed from rotation.
upstream srv {
    server 192.168.10.101:80;
    server 192.168.10.106:80;
}
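The behavior amounts to cycling through the server list, which a couple of lines of Python illustrate (a simplification: real nginx also skips servers it has marked down):

```python
import itertools

servers = ["192.168.10.101:80", "192.168.10.106:80"]
rr = itertools.cycle(servers)          # each request takes the next server in turn

picks = [next(rr) for _ in range(4)]   # alternates .101, .106, .101, .106
```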
Weighted round robin is an enhanced round robin that lets you set a rotation ratio: a server's share of requests is proportional to its weight. It is mainly used when the backend servers have unequal capacity.
upstream srv {
    server 192.168.10.101:80 weight=1;
    server 192.168.10.106:80 weight=2;
}
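Modern nginx implements this as "smooth" weighted round robin, which spreads the heavier server's picks out instead of bunching them (A, B, B becomes B, A, B). A sketch of that idea, assuming current nginx behavior:

```python
def smooth_wrr(weights: dict, n: int) -> list:
    """Smooth weighted round robin: every round each server gains its
    weight; the leader is picked and pays back the total weight."""
    current = {s: 0 for s in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        for s, w in weights.items():
            current[s] += w
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

# With the weights from the config above, .106 gets 2 of every 3 requests:
picks = smooth_wrr({"192.168.10.101:80": 1, "192.168.10.106:80": 2}, 6)
```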
Each request is assigned by a hash of the access IP (the client IP, or the IP of a front server in front of nginx), so a given visitor always reaches the same backend server, which solves session-consistency problems.
upstream srv {
    ip_hash;
    server 192.168.10.101:80;
    server 192.168.10.106:80;
}
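A simplified model of the mapping: nginx's ip_hash keys on the first three octets of an IPv4 address, so a whole /24 network lands on the same backend (crc32 here is only a stand-in for nginx's internal hash function):

```python
import zlib

def ip_hash(client_ip: str, servers: list) -> str:
    # Key on the first three octets, as nginx's ip_hash does for IPv4,
    # so clients from the same /24 always hit the same backend.
    key = ".".join(client_ip.split(".")[:3])
    return servers[zlib.crc32(key.encode()) % len(servers)]

servers = ["192.168.10.101:80", "192.168.10.106:80"]
```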
Note: with ip_hash, a server that needs to be taken out of rotation temporarily should be marked with the `down` parameter rather than deleted, so the hash mapping of the remaining clients is preserved; ip_hash also cannot be combined with `backup` servers.
fair, as the name suggests, distributes requests fairly according to backend response time (rt): backends with a shorter response time are given priority. To use this scheduling algorithm you must download the third-party nginx upstream_fair module.
upstream srv {
    fair;
    server 192.168.10.101:80;
    server 192.168.10.106:80;
}
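The core idea can be sketched as "pick the backend with the lowest average observed response time" (a simplification of what the module actually tracks):

```python
def pick_fair(response_times: dict) -> str:
    """response_times maps server -> list of recent response times (s);
    choose the server whose average is lowest."""
    def avg(ts):
        return sum(ts) / len(ts) if ts else 0.0  # no history -> most attractive
    return min(response_times, key=lambda s: avg(response_times[s]))

best = pick_fair({
    "192.168.10.101:80": [0.12, 0.30],   # avg 0.21 s
    "192.168.10.106:80": [0.05, 0.07],   # avg 0.06 s -> wins
})
```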
Similar to ip_hash, but requests are distributed by a hash of the requested URL, so each URL is always directed to the same backend server. This is mainly useful when the backends are cache servers.
upstream srv {
    server 192.168.10.101:80;
    server 192.168.10.106:80;
    hash $request_uri;
    hash_method crc32;
}
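The effect of `hash $request_uri` with `hash_method crc32` can be modeled as crc32 of the URI modulo the server count, so every request for a given URI reaches the same caching backend (a sketch, not the exact internal bucketing):

```python
import zlib

def uri_hash(request_uri: str, servers: list) -> str:
    # crc32 of the URI decides the backend, so identical URIs always
    # map to the same (cache) server.
    return servers[zlib.crc32(request_uri.encode()) % len(servers)]

servers = ["192.168.10.101:80", "192.168.10.106:80"]
```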
(1) Parameter description
The `server` directive inside an upstream block accepts these common parameters:
weight: scheduling weight, default 1
max_fails: number of failed attempts allowed within fail_timeout before the server is considered unavailable, default 1
fail_timeout: the window in which max_fails failures mark the server unavailable, and how long it then stays excluded, default 10s
backup: marks a backup server, used only when the primary servers are unavailable
down: marks the server as permanently unavailable
(2) Example:
upstream backend {
    server backend1.example.com weight=5;
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server unix:/tmp/backend3;
}
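The max_fails/fail_timeout pair implements passive health checking: after max_fails failures the peer is skipped for fail_timeout seconds. A sketch of that bookkeeping (the concept, not nginx's exact accounting):

```python
class Peer:
    """After max_fails failures, the peer is considered unavailable
    for fail_timeout seconds, then tried again."""
    def __init__(self, max_fails=3, fail_timeout=30):
        self.max_fails, self.fail_timeout = max_fails, fail_timeout
        self.fails, self.down_until = 0, 0.0

    def report_failure(self, now: float):
        self.fails += 1
        if self.fails >= self.max_fails:
            self.down_until = now + self.fail_timeout  # exclude the peer
            self.fails = 0

    def available(self, now: float) -> bool:
        return now >= self.down_until

# Matches the 127.0.0.1:8080 line above: max_fails=3 fail_timeout=30s
p = Peer(max_fails=3, fail_timeout=30)
```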