The Pain of FastDFS Clustering and Load Balancing (5): Configuring a Reverse Proxy on the Trackers

###Interesting things

Continuing from the previous post.

###What did you do today

  • We need to set up a reverse proxy service on tracker1 and tracker2. You may well ask: what is a reverse proxy?

A reverse proxy is a proxy server that accepts connection requests from the internet, forwards them to servers on the internal network, and returns the results obtained from those servers to the client that made the request. To the outside world, the proxy server itself appears to be the server.
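
As a rough illustration (a minimal sketch using the addresses from this series; the file path below is a made-up example), the client only ever talks to the tracker's nginx on port 8000, while nginx fetches the file from a storage server's nginx on port 8888 behind the scenes:

# Direct request to a storage node on its internal port (normally not exposed):
curl -I http://192.168.12.33:8888/group1/M00/00/00/example.jpg

# The same file requested through the reverse proxy on tracker1; nginx picks a
# storage server from the matching upstream group and relays the response:
curl -I http://192.168.12.11:8000/group1/M00/00/00/example.jpg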

  • On tracker1 and tracker2, extract ngx_cache_purge-2.3.tar.gz into /usr/local/fast/. Command: tar zxvf ngx_cache_purge-2.3.tar.gz -C /usr/local/fast/


  • We can see that the /usr/local/fast/ directory now contains an ngx_cache_purge-2.3 folder.


  • Install the build dependencies: yum install pcre, yum install pcre-devel, yum install zlib, yum install zlib-devel (a one-liner is shown below).
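
The same four packages can be installed in a single command (a sketch; package names as above):

# install the pcre and zlib libraries and headers nginx needs to compile
yum install -y pcre pcre-devel zlib zlib-devel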

  • Extract nginx-1.6.2.tar.gz into /usr/local/. Command: tar -zxvf nginx-1.6.2.tar.gz -C /usr/local/


  • Change into the /usr/local/nginx-1.6.2/ directory: cd /usr/local/nginx-1.6.2/


  • Add the ngx_cache_purge-2.3 module and run the configuration check. Command: ./configure --add-module=/usr/local/fast/ngx_cache_purge-2.3/


  • The usual drill: compile and install nginx with make && make install (the full sequence is sketched below).
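
Putting the build steps together (a sketch of what was run above, assuming the default install prefix /usr/local/nginx):

cd /usr/local/nginx-1.6.2/

# compile ngx_cache_purge into nginx
./configure --add-module=/usr/local/fast/ngx_cache_purge-2.3/
make && make install

# confirm the module was compiled in
/usr/local/nginx/sbin/nginx -V 2>&1 | grep ngx_cache_purge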

  • Go to /usr/local/nginx/conf/, open nginx.conf, and configure the reverse proxy.

#user nobody;
worker_processes  1;

error_log  /usr/local/nginx/logs/error.log;
error_log  /usr/local/nginx/logs/error.log  notice;
error_log  /usr/local/nginx/logs/error.log  info;

pid        /usr/local/nginx/logs/nginx.pid;


events {
    worker_connections  1024;
    use epoll;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /usr/local/nginx/logs/access.log  main;

    sendfile        on;
    tcp_nopush      on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout  65;

    #gzip on;
        server_names_hash_bucket_size   128;
        client_header_buffer_size       32k;
        large_client_header_buffers     4       32k;
        client_max_body_size    300m;

        proxy_redirect  off;
        proxy_set_header        Host    $http_host;
        proxy_set_header        X-Real-IP       $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout   90;
        proxy_send_timeout      90;
        proxy_read_timeout      90;
        proxy_buffer_size       16k;
        proxy_buffers   4       64k;
        proxy_busy_buffers_size 128k;
        proxy_temp_file_write_size      128k;

        proxy_cache_path        /fastdfs/cache/nginx/proxy_cache levels=1:2
        keys_zone=http-cache:200m       max_size=1g     inactive=30d;
        proxy_temp_path /fastdfs/cache/nginx/proxy_cache/tmp;

        upstream fdfs_group1 {
                server 192.168.12.33:8888 weight=1  max_fails=2 fail_timeout=30s;
                server 192.168.12.44:8888 weight=1  max_fails=2 fail_timeout=30s;

        }

        upstream fdfs_group2 {
                server 192.168.12.55:8888 weight=1 max_fails=2 fail_timeout=30s;
                server 192.168.12.66:8888 weight=1 max_fails=2 fail_timeout=30s;

        }
    server {

        listen      8000;
        server_name  localhost;

        #charset koi8-r;

       access_log  /usr/local/nginx/logs/host.access.log  main;

       location / {
           root   html;
           index  index.html index.htm;
       }

        location /group1/M00 {
                proxy_next_upstream http_502 http_504 error timeout invalid_header;
                proxy_cache http-cache;
                proxy_cache_valid 200 304 12h;
                proxy_cache_key $uri$is_args$args;
                proxy_pass http://fdfs_group1;
                expires 30d;

        }

        location /group2/M00 {
                proxy_next_upstream http_502 http_504 error timeout invalid_header;
                proxy_cache http-cache;
                proxy_cache_valid 200 304 12h;
                proxy_cache_key $uri$is_args$args;
                proxy_pass http://fdfs_group2;
                expires 30d;
        }

        location ~/purge(/.*) {
                allow 127.0.0.1;
                allow 192.168.12.0/24;
                deny all;
                proxy_cache_purge http-cache $1$is_args$args;

        }

       # location ~/group([0-9])/M00 { 
            # ngx_fastdfs_module; 
        # }

        #error_page 404 /404.html;

        #redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;

        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        # proxy_pass http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        # root html;
        # fastcgi_pass 127.0.0.1:9000;
        # fastcgi_index index.php;
        # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        # include fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        # deny all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    # listen 8000;
    # listen somename:8080;
    # server_name somename alias another.alias;

    # location / {
    # root html;
    # index index.html index.htm;
    # }
    #}


    # HTTPS server
    #
    #server {
    # listen 443 ssl;
    # server_name localhost;

    # ssl_certificate cert.pem;
    # ssl_certificate_key cert.key;

    # ssl_session_cache shared:SSL:1m;
    # ssl_session_timeout 5m;

    # ssl_ciphers HIGH:!aNULL:!MD5;
    # ssl_prefer_server_ciphers on;

    # location / {
    # root html;
    # index index.html index.htm;
    # }
    #}

}
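
With this configuration in place, a cached copy of a file can be removed through the /purge location provided by ngx_cache_purge (allowed only from 127.0.0.1 and the 192.168.12.0/24 network). A sketch, reusing a file ID that appears later in this post:

# prime the cache, then purge the cached copy on tracker1
curl -I http://192.168.12.11:8000/group1/M00/00/00/wKgMIVpEgoSAcs8VAADRd6mMX3g514.jpg
curl http://192.168.12.11:8000/purge/group1/M00/00/00/wKgMIVpEgoSAcs8VAADRd6mMX3g514.jpg
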
  • Create /fastdfs/cache/nginx/proxy_cache and /fastdfs/cache/nginx/proxy_cache/tmp. Since proxy_cache_path and proxy_temp_path point to these paths, we have to create them ourselves (see the command below).
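
Both directories can be created in one go (a sketch; mkdir -p also creates the parent paths):

# create the cache directory and its temp subdirectory used by
# proxy_cache_path and proxy_temp_path in nginx.conf
mkdir -p /fastdfs/cache/nginx/proxy_cache/tmp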

  • Since tracker1 and tracker2 listen on port 8000, port 8000 must be opened in the firewall: -A INPUT -p tcp -m state --state NEW -m tcp --dport 8000 -j ACCEPT. Then restart the firewall so the rule takes effect (a sketch follows below).

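Assuming an iptables-based firewall (the rule syntax above suggests one), this is roughly:

# add the rule for port 8000, persist it, and restart the firewall
iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 8000 -j ACCEPT
service iptables save
service iptables restart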

  • Start nginx on tracker1 and tracker2. Command: /usr/local/nginx/sbin/nginx

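A quick way to start nginx and confirm it is listening on port 8000 (a sketch):

# check the configuration file first, then start nginx
/usr/local/nginx/sbin/nginx -t
/usr/local/nginx/sbin/nginx

# verify the worker processes and the listening port
ps -ef | grep nginx
netstat -ntlp | grep 8000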

  • We upload two images via tracker1 and find that one is stored in group1 and the other in group2.

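One way to reproduce this is the fdfs_upload_file tool that ships with FastDFS (a sketch; 1.jpg and 2.jpg are just example files, and /etc/fdfs/client.conf is assumed to point at the two trackers):

# upload two files via the trackers; each command prints a file ID such as
# group1/M00/00/00/xxxxxx.jpg, and the group may differ between uploads
fdfs_upload_file /etc/fdfs/client.conf 1.jpg
fdfs_upload_file /etc/fdfs/client.conf 2.jpg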

  • We can access both images through port 8000 on tracker1 (192.168.12.11) and tracker2 (192.168.12.22).

  • Visit http://192.168.12.11:8000/group1/M00/00/00/wKgMIVpEgoSAcs8VAADRd6mMX3g514.jpg


  • Visit http://192.168.12.22:8000/group2/M00/00/00/wKgMQlpEgoaABUrWAADRd6mMX3g168.jpg

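Both URLs can also be checked from the command line; each should return HTTP 200 (a sketch):

# each file is reachable through a tracker's port 8000; here the group1 file
# is fetched via tracker1 and the group2 file via tracker2
curl -I http://192.168.12.11:8000/group1/M00/00/00/wKgMIVpEgoSAcs8VAADRd6mMX3g514.jpg
curl -I http://192.168.12.22:8000/group2/M00/00/00/wKgMQlpEgoaABUrWAADRd6mMX3g168.jpg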

###Summary

Nginx serving external requests can itself go down, so we need a highly available nginx cluster built with nginx and keepalived. That will be covered in the next post.
