An In-Depth Look at nginx's Health Check (health_check) Mechanism

Many people know that nginx can act as a reverse proxy and load balancer, but far fewer understand its health check (health_check) mechanism. The health_check support in the nginx community edition is actually quite weak: it is implemented simply by configuring max_fails and fail_timeout on the servers in an upstream block. This article takes an in-depth look at the community edition's health_check mechanism. There are better options, of course, such as the commercial NGINX Plus or Alibaba's Tengine, both of which ship far more complete and efficient health checking; and if you insist on sticking with the community edition, you can always write your own module or compile in a third-party one.


First, my test environment: CentOS release 6.4 (Final) + nginx 1.6.0 + two Tomcat 8.0.15 instances as backend servers. (Disclaimer: all of the configuration below is for testing only and does not reflect a real production setup; a real production environment needs far more configuration and tuning.)
The nginx configuration is as follows:

#user  nobody;
worker_processes  1;
#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/access.log  main;

    sendfile        on;
    keepalive_timeout  65;

    upstream backend {
        server localhost:9090 max_fails=1 fail_timeout=40s;
        server localhost:9191 max_fails=1 fail_timeout=40s;
    }

    server {
        listen       80;
        server_name  localhost;

        location / {
            proxy_pass http://backend;
            proxy_connect_timeout 1;
            proxy_read_timeout 1;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

I won't go over basic nginx and Tomcat setup here; see the official documentation for that.
Note that the upstream block declares two servers, each with max_fails and fail_timeout set.
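Roughly speaking (per the nginx upstream module documentation), max_fails is the number of failed attempts within the fail_timeout window after which a server is marked unavailable, and fail_timeout is also how long the server then stays marked down. Annotated, our upstream reads:

upstream backend {
    # max_fails=1: a single failed attempt (a connect error or timeout,
    # by default) marks this server as unavailable;
    # fail_timeout=40s: the window in which failures are counted, and
    # also how long the server is then skipped before being tried again.
    server localhost:9090 max_fails=1 fail_timeout=40s;
    server localhost:9191 max_fails=1 fail_timeout=40s;
}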


Now start nginx, then start the two backend servers, but deliberately sleep for 10 minutes in a Tomcat Listener, so that Tomcat takes about 10 minutes to start: the port is open, but no requests are being served yet. Then we request http://localhost/response/ (/response/ is a simple servlet I wrote on the Tomcat side: the server on port 9090 returns 9090, and the one on port 9191 returns 9191) and watch how nginx behaves.


Let's look at the nginx logs.

access.log

192.168.42.254 - - [29/Dec/2014:11:24:23 +0800] "GET /response/ HTTP/1.1" 504 537 720 380 "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36" 2.004 host:health.iflytek.com
192.168.42.254 - - [29/Dec/2014:11:24:24 +0800] "GET /favicon.ico HTTP/1.1" 502 537 715 311 "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36" 0.000 host:health.iflytek.com

error.log

2014/12/29 11:24:22 [error] 6318#0: *4785892017 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.42.254, server: health.iflytek.com, request: "GET /response/ HTTP/1.1", upstream: "http://192.168.42.249:9090/response/", host: "health.iflytek.com"
2014/12/29 11:24:23 [error] 6318#0: *4785892017 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.42.254, server: health.iflytek.com, request: "GET /response/ HTTP/1.1", upstream: "http://192.168.42.249:9191/response/", host: "health.iflytek.com"
2014/12/29 11:24:24 [error] 6318#0: *4785892017 no live upstreams while connecting to upstream, client: 192.168.42.254, server: health.iflytek.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://health/favicon.ico", host: "health.iflytek.com"

(Why sleep for 10 minutes in the listener? Because our application needs to warm up its caches on startup, so these 10 minutes simulate a window during startup in which the server is unavailable.)


The logs show that while both Tomcats were starting up, a single request made nginx automatically retry all of the backend servers for us, finally reporting a "no live upstreams while connecting to upstream" error. This counts as one way nginx does health checking. One point deserves special emphasis here: we set proxy_read_timeout to 1 second. This parameter is important, and I'll come back to it later.
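Which failures cause nginx to move on to the next server is governed by the proxy_next_upstream directive (its default is error timeout), and each such failure also counts toward max_fails. A sketch of how the location above could be tuned; the http_502/http_504 values are documented options of this directive, not something used in my test:

location / {
    proxy_pass http://backend;
    proxy_connect_timeout 1;
    proxy_read_timeout 1;
    # Move on to the next upstream server on connect errors, timeouts,
    # and 502/504 responses; each counts as one failure toward max_fails.
    proxy_next_upstream error timeout http_502 http_504;
}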


Wait 40 seconds, let the 9090 server finish starting while 9191 is still starting up, and watch the nginx logs again.

access.log

192.168.42.254 - - [29/Dec/2014:11:54:18 +0800] "GET /response/ HTTP/1.1" 200 19 194 423 "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36" 0.210 host:health.iflytek.com
192.168.42.254 - - [29/Dec/2014:11:54:18 +0800] "GET /favicon.ico HTTP/1.1" 404 453 674 311 "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36" 0.212 host:health.iflytek.com

error.log

No errors were logged.

The browser returned 9090, so nginx was serving requests normally.

Let's send one more request.

access.log

192.168.42.254 - - [29/Dec/2014:13:43:13 +0800] "GET /response/ HTTP/1.1" 200 19 194 423 "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36" 1.005 host:health.iflytek.com

A normal response came back, again returning 9090.

error.log

2014/12/29 13:43:13 [error] 6323#0: *4801368618 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.42.254, server: health.iflytek.com, request: "GET /response/ HTTP/1.1", upstream: "http://192.168.42.249:9191/response/", host: "health.iflytek.com"

nginx's error.log gained one more "upstream timed out" line, yet the client still got a normal response. The upstream defaults to round-robin load balancing, so this request was first forwarded to the 9191 machine; since 9191 was still starting up, that attempt failed, and nginx then retried the request against 9090.


OK, but what does fail_timeout=40s actually mean? Shall we demonstrate why this parameter matters? Let's go! All you have to do now is sit back and wait for the 9191 machine to finish starting, then send a few more requests. And then, hey, you'll see responses of 9191 coming back!

fail_timeout=40s means that once a request to 9191 fails, that server is considered unavailable for 40 seconds; as soon as the 40 seconds are up, however, requests are forwarded to it again, whether or not it has actually recovered. This shows just how weak the community edition's health_check mechanism is: it is nothing more than a temporary blacklist, applied over and over again.

If you have used NGINX Plus, you'll find that the health_check mechanism it provides is far more powerful. A few keywords for you to look up yourselves: zone, slow_start, health_check, match! slow_start in particular solves the cache warm-up problem nicely: when nginx notices that a machine has come back, it ramps traffic up to that server gradually over the configured slow_start time instead of sending it full load at once, which gives the cache time to warm up.
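For comparison, here is a minimal sketch of what active health checking looks like in NGINX Plus, based on its documented zone/slow_start/health_check/match directives (the interval and match values are illustrative, not taken from my test above):

upstream backend {
    # Shared memory zone, required for active health checks.
    zone backend 64k;
    # Ramp traffic back up to a recovered server over 30s instead of
    # hitting it with full load immediately, giving caches time to warm up.
    server localhost:9090 slow_start=30s;
    server localhost:9191 slow_start=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        # Probe /response/ on every server every 5s; one failed probe
        # marks a server down, one successful probe brings it back.
        health_check interval=5s fails=1 passes=1 uri=/response/ match=ok;
    }
}

# A probe succeeds only if the response status is 200.
match ok {
    status 200;
}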
