Cluster Deployment
Deployment Environment
IP address         | Hostname       | Deployed services
172.16.10.10       | node1.fastdfs  | Storage (Group1)
172.16.10.11       | node2.fastdfs  | Storage (Group1)
172.16.10.12       | node3.fastdfs  | Storage (Group2)
172.16.10.13       | node4.fastdfs  | Storage (Group2)
172.16.10.17       | node5.fastdfs  | Tracker1
172.16.10.18       | node6.fastdfs  | Tracker2
172.16.10.14       | node1.nginx    | nginx, HAProxy, keepalived
172.16.10.15       | node2.nginx    | nginx, HAProxy, keepalived
172.16.10.16 (VIP) |                |
Server OS: CentOS Linux release 7.3.1611 (Core)
SELinux: disabled
Iptables: rules flushed
Time: kept in sync across all nodes
FastDFS version: v5.08 (the latest release as of 2016-02-14)
Hosts file entries:
172.16.10.10 node1.fastdfs
172.16.10.11 node2.fastdfs
172.16.10.12 node3.fastdfs
172.16.10.13 node4.fastdfs
172.16.10.17 node5.fastdfs
172.16.10.18 node6.fastdfs
172.16.10.14 node1.nginx
172.16.10.15 node2.nginx
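The following is a minimal sketch of applying the hosts entries and time sync on every node (it assumes ntpdate is available and pool.ntp.org is reachable; any internal NTP source works just as well):

# Append the cluster host mappings to /etc/hosts (run once per node)
cat >> /etc/hosts << 'EOF'
172.16.10.10 node1.fastdfs
172.16.10.11 node2.fastdfs
172.16.10.12 node3.fastdfs
172.16.10.13 node4.fastdfs
172.16.10.17 node5.fastdfs
172.16.10.18 node6.fastdfs
172.16.10.14 node1.nginx
172.16.10.15 node2.nginx
EOF
# Keep clocks in sync across the cluster
yum -y install ntpdate
ntpdate pool.ntp.org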
Software packages used:
FastDFS_v5.08.tar.gz
fastdfs-nginx-module_v1.16.tar.gz
libfastcommon-master.zip
nginx-1.6.2.tar.gz
ngx_cache_purge-2.3.tar.gz
Download address for all of the software: click here
The architecture is shown in the diagram below
Environment Setup
Install FastDFS on the tracker and storage nodes
Perform the following steps on the tracker and storage nodes (172.16.10.10, 172.16.10.11, 172.16.10.12, 172.16.10.13, 172.16.10.17, 172.16.10.18)
Install the basic development packages
yum -y install gcc gcc-c++
First, libfastcommon must be installed
Download address: https://github.com/happyfish100/libfastcommon; the INSTALL file in the source package describes the procedure
After downloading, extract the archive, change into the extracted directory, and run the following commands
./make.sh
./make.sh install
A successful installation produces the file /usr/lib64/libfastcommon.so
We need to create a symbolic link, because FastDFS expects the library in a different directory
ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
Install FastDFS
After downloading, change into the source directory and run the following commands
./make.sh
./make.sh install
After installation, the configuration files are located in /etc/fdfs
Configure the tracker nodes
Run the following commands on the tracker nodes (172.16.10.17, 172.16.10.18)
Create the data and log storage directory
mkdir -pv /data/fastdfs-tracker
Rename the tracker configuration file
cd /etc/fdfs && mv tracker.conf.sample tracker.conf
Edit the tracker.conf configuration file
Set base_path to the directory just created
Set store_lookup to 0
Note: store_lookup defaults to 2. A value of 2 means load-balancing mode (pick the group with the most free space), 0 means round robin, and 1 means a specified group. To make the upcoming test easier to follow we change it to 0. The store_group option only takes effect when store_lookup is set to 1.
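For reference, a minimal sketch of making these two edits with sed (assuming the stock tracker.conf sample defaults, i.e. base_path=/home/yuqing/fastdfs and store_lookup=2):

sed -i 's#^base_path *=.*#base_path=/data/fastdfs-tracker#' /etc/fdfs/tracker.conf   # data/log directory created above
sed -i 's#^store_lookup *=.*#store_lookup=0#' /etc/fdfs/tracker.conf                 # 0 = round robin
grep -E '^(base_path|store_lookup)' /etc/fdfs/tracker.conf                           # verify both values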
Start the fdfs_trackerd service
service fdfs_trackerd start
After a successful start, data and logs directories are created under the directory we just made
/data/fastdfs-tracker
├── data
│   ├── fdfs_trackerd.pid
│   └── storage_changelog.dat
└── logs
    └── trackerd.log
The log output looks roughly like the figure below
Check that port 22122 is being listened on
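For example, either of the following should show the tracker listening (ss is part of iproute; netstat requires the net-tools package):

ss -tlnp | grep 22122
# or
netstat -lntp | grep fdfs_trackerd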
Enable start on boot
chkconfig fdfs_trackerd on
Configure the storage nodes
Perform the following steps on the storage nodes (172.16.10.10, 172.16.10.11, 172.16.10.12, 172.16.10.13)
Create the data and log storage directory
mkdir -p /data/fastdfs-storage
Rename the storage configuration file
cd /etc/fdfs/ && mv storage.conf.sample storage.conf
Edit the configuration file
Set base_path to the directory just created
Set store_path0 to the directory just created
Set tracker_server to the IP and listening port of the tracker servers; even on the same machine, 127.0.0.1 must not be used. We also need to add a second tracker_server line pointing at the other tracker, as shown below
base_path=/data/fastdfs-storage
store_path0=/data/fastdfs-storage
tracker_server=172.16.10.17:22122
tracker_server=172.16.10.18:22122
Note: the configuration file contains a group_name option that sets the group name. We have two groups, so be careful: this value differs between storage nodes while everything else stays the same. It defaults to group1; on nodes 172.16.10.12 and 172.16.10.13 it must be changed to group2.
Start the service
service fdfs_storaged start
After a successful start, data and logs directories are created under the directory we just made, the data directory contains many subdirectories, and the service listens on port 23000
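A quick sanity check, same idea as for the tracker:

ss -tlnp | grep 23000                     # storage service listening
ls /data/fastdfs-storage/data | head      # the pre-created data subdirectories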
Log explanation: the log below shows that the storage connected to both tracker servers successfully, that 172.16.10.13 was elected leader, and finally that it connected to the storage servers in the same group successfully
Enable start on boot
chkconfig fdfs_storaged on
First test
This mainly tests the high availability of the tracker servers. We can try stopping the tracker server that was elected leader; under normal circumstances this triggers a new leader election, and the logs report errors about being unable to connect to the stopped tracker server. If the stopped tracker server is started again, the logs report a successful connection. All of this can be observed in the logs.
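A rough outline of this test (the current leader can be identified with fdfs_monitor, shown next; here we assume it happens to be 172.16.10.17):

# On the tracker currently acting as leader, stop the tracker service
service fdfs_trackerd stop
# On any storage node, watch the storage log: it should report the lost connection
# and then the election of a new tracker leader
tail -f /data/fastdfs-storage/logs/storaged.log
# Start the stopped tracker again; the log should now report a successful reconnect
service fdfs_trackerd start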
Run the following command on any storage server node in the cluster
fdfs_monitor /etc/fdfs/storage.conf
It outputs the same information on every node: the output contains both groups, along with the storage information for each group
Client upload test
Test on either tracker node
Rename the client configuration file
cd /etc/fdfs/ && mv client.conf.sample client.conf
Edit the configuration file
Set base_path to the same path as base_path in the tracker configuration file
Set tracker_server to the IP and port the trackers listen on; even on the same machine, 127.0.0.1 must not be used
As shown below
base_path=/data/fastdfs-tracker
tracker_server=172.16.10.17:22122
tracker_server=172.16.10.18:22122
Test uploading an image by running the following command
fdfs_upload_file client.conf test.png
fdfs_upload_file: the command
client.conf: the client configuration file to use
test.png: the path of the image to upload
A successful upload returns a path similar to the following
group1/M00/00/00/rBAKCloXyT2AFH_AAAD4kx1mwCw538.png
group1 means the file was uploaded to a storage node in group1
M00 is the store path (disk) index; with a single disk there is only M00, with more disks there are M01 and so on
00/00 are the data directories; each level contains 256 directories from 00 to FF, so the two levels give 256*256 directories
rBAKCloXyT2AFH_AAAD4kx1mwCw538.png is the file that was actually stored
So now we know exactly which servers and which directory our image was uploaded to, and we can find it directly on those servers. Files within the same group are identical. Because we configured the tracker with mode 0 (round robin) earlier, uploads alternate between group1 and group2; if one group goes down, uploads always go to the remaining group, as shown in the figure below
At this point, the 00/00 directory on disk M00 of every storage node in group1 contains rBAKClobtG6AS0JKAANxJpb_3dc838.png and rBAKC1obtHCAEMpMAANxJpb_3dc032.png, while the 00/00 directory on disk M00 of every storage node in group2 contains rBAKDFobtG-AIj2EAANxJpb_3dc974.png and rBAKDVobtHGAJgzTAANxJpb_3dc166.png
As shown in the figure below
Note: if one storage node in a group fails, uploaded files can only be stored on the other nodes of that group; once the failed node recovers, the data is synchronized to it automatically, with no manual intervention required
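To watch the round-robin behaviour, the same file can simply be uploaded several times in a row (assuming client.conf is configured as above and test.png exists):

# The returned file IDs should alternate between group1 and group2
for i in 1 2 3 4; do
    fdfs_upload_file /etc/fdfs/client.conf test.png
done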
Integrating with Nginx
So far we have only tested with the command-line client. We need to upload and download over HTTP, so we also need Nginx or Apache; here we use Nginx, which is the more widely used of the two.
Deploy Nginx on all storage nodes
Copy all the source packages to /usr/local/src and extract them there
Change into /usr/local/src/fastdfs-nginx-module/src/
cd /usr/local/src/fastdfs-nginx-module/src
In the config file, change /usr/local/include/fastdfs to /usr/include/fastdfs
In the config file, change /usr/local/include/fastcommon/ to /usr/include/fastcommon/
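A possible way to make these two substitutions with sed (a sketch; the module's config file ships with the /usr/local/include/... paths by default):

cd /usr/local/src/fastdfs-nginx-module/src
# The fastdfs and fastcommon headers live under /usr/include on this setup
sed -i 's#/usr/local/include/fastdfs#/usr/include/fastdfs#g' config
sed -i 's#/usr/local/include/fastcommon/#/usr/include/fastcommon/#g' config
grep -n '/usr/include' config    # verify the substitutions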
Change into the extracted Nginx directory and run the following commands
yum -y install zlib-devel openssl-devel
./configure --prefix=/usr/local/nginx --with-pcre --add-module=/usr/local/src/fastdfs-nginx-module/src
make
make install
Add the nginx executable to the PATH
cat >> /etc/profile.d/nginx.sh << EOF
#!/bin/sh
PATH=$PATH:/usr/local/nginx/sbin
export PATH
EOF
Reload the environment variables
source /etc/profile.d/nginx.sh
Copy the configuration files
cp /usr/local/src/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/
cp /usr/local/src/FastDFS/conf/{http.conf,mime.types} /etc/fdfs/
Create the Nginx configuration files
nginx.conf(/usr/local/nginx/conf/nginx.conf)
worker_processes 2;
worker_rlimit_nofile 51200;
events {
    use epoll;
    worker_connections 51200;
}
http {
    include mime.types;
    default_type application/octet-stream;
    client_max_body_size 50m;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    gzip on;
    server_tokens off;
    include vhost/*.conf;
}
FastDFS.conf(/usr/local/nginx/conf/vhost/FastDFS.conf)
server {
    listen 9000;
    location ~ /group[1-3]/M00 {
        ngx_fastdfs_module;
    }
}
Increase the maximum number of open files on Linux
Edit /etc/security/limits.conf and append the following lines
* soft nofile 65536
* hard nofile 65536
Log out and log back in for the change to take effect
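After logging back in, the new limits can be verified with:

ulimit -Sn    # soft limit, should print 65536
ulimit -Hn    # hard limit, should print 65536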
Edit the /etc/fdfs/mod_fastdfs.conf configuration file
Set connect_timeout to 10
Set tracker_server to the IP and port the tracker servers listen on; 127.0.0.1 must not be used
Set url_have_group_name to true
Set store_path0 to the path configured in the storage configuration file
Set group_count to 2 (because we only have two groups)
Append the following configuration at the end
[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/data/fastdfs-storage
[group2]
group_name=group2
storage_server_port=23000
store_path_count=1
store_path0=/data/fastdfs-storage
The result looks like this
connect_timeout=10
tracker_server=172.16.10.17:22122
tracker_server=172.16.10.18:22122
group_name=group1
url_have_group_name = true
store_path0=/data/fastdfs-storage
group_count = 2
[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/data/fastdfs-storage
[group2]
group_name=group2
storage_server_port=23000
store_path_count=1
store_path0=/data/fastdfs-storage
Nginx start/stop/restart/reload/configtest script, saved as /etc/init.d/nginx
#!/bin/bash
# chkconfig: - 30 21
# description: http service.
# Source Function Library
. /etc/init.d/functions
# Nginx Settings
NGINX_SBIN="/usr/local/nginx/sbin/nginx"
NGINX_CONF="/usr/local/nginx/conf/nginx.conf"
NGINX_PID="/usr/local/nginx/logs/nginx.pid"
RETVAL=0
prog="Nginx"

start() {
    echo -n $"Starting $prog: "
    mkdir -p /dev/shm/nginx_temp
    daemon $NGINX_SBIN -c $NGINX_CONF
    RETVAL=$?
    echo
    return $RETVAL
}

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $NGINX_PID $NGINX_SBIN -TERM
    rm -rf /dev/shm/nginx_temp
    RETVAL=$?
    echo
    return $RETVAL
}

reload() {
    echo -n $"Reloading $prog: "
    killproc -p $NGINX_PID $NGINX_SBIN -HUP
    RETVAL=$?
    echo
    return $RETVAL
}

restart() {
    stop
    start
}

configtest() {
    $NGINX_SBIN -c $NGINX_CONF -t
    return 0
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    reload)
        reload
        ;;
    restart)
        restart
        ;;
    configtest)
        configtest
        ;;
    *)
        echo $"Usage: $0 {start|stop|reload|restart|configtest}"
        RETVAL=1
esac
exit $RETVAL
Register nginx as a system service, enable start on boot, then start it
chkconfig --add nginx
chkconfig nginx on
service nginx start
Test: every uploaded image can be accessed through the Nginx of any storage node
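For example, using the file ID returned by the earlier upload (substitute a file ID you actually uploaded; 172.16.10.10 stands for any group1 storage node):

# 200 means the storage node's nginx served the file via ngx_fastdfs_module
curl -s -o /dev/null -w '%{http_code}\n' \
    http://172.16.10.10:9000/group1/M00/00/00/rBAKCloXyT2AFH_AAAD4kx1mwCw538.png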
Deploy Nginx on all tracker nodes
Extract nginx-1.6.2.tar.gz and ngx_cache_purge-2.3.tar.gz under /usr/local/src
Install the dependencies
yum -y install zlib-devel openssl-devel
Change into /usr/local/src/nginx-1.6.2 and run the following commands to install
./configure --prefix=/usr/local/nginx --with-pcre --add-module=/usr/local/src/ngx_cache_purge-2.3
make
make install
Add the nginx executable to the PATH: see the Nginx deployment on the storage nodes
Edit the Nginx configuration file (nginx.conf)
worker_processes 2;
worker_rlimit_nofile 51200;
events {
    use epoll;
    worker_connections 51200;
}
http {
    include mime.types;
    default_type application/octet-stream;
    client_max_body_size 50m;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    gzip on;
    server_tokens off;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffer_size 16k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;
    proxy_cache_path /data/cache/nginx/proxy_cache levels=1:2 keys_zone=http-cache:200m max_size=1g inactive=30d;
    proxy_temp_path /data/cache/nginx/proxy_cache/tmp;
    include vhost/*.conf;
}
Create the cache directory and the sub-configuration directory
mkdir -p /data/cache/nginx/proxy_cache/tmp
mkdir /usr/local/nginx/conf/vhost
Edit the sub-configuration file (/usr/local/nginx/conf/vhost/FastDFS.conf)
upstream fdfs_group1 {
    server 172.16.10.10:9000 weight=1 max_fails=2 fail_timeout=30s;
    server 172.16.10.11:9000 weight=1 max_fails=2 fail_timeout=30s;
}
upstream fdfs_group2 {
    server 172.16.10.12:9000 weight=1 max_fails=2 fail_timeout=30s;
    server 172.16.10.13:9000 weight=1 max_fails=2 fail_timeout=30s;
}
server {
    listen 8000;
    location /group1/M00 {
        proxy_next_upstream http_500 http_502 http_503 error timeout invalid_header;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_cache http-cache;
        proxy_cache_valid 200 304 12h;
        proxy_cache_key $uri$is_args$args;
        proxy_pass http://fdfs_group1;
        expires 30d;
    }
    location /group2/M00 {
        proxy_next_upstream http_500 http_502 http_503 error timeout invalid_header;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_cache http-cache;
        proxy_cache_valid 200 304 12h;
        proxy_cache_key $uri$is_args$args;
        proxy_pass http://fdfs_group2;
        expires 30d;
    }
    location ~ /purge(/.*) {
        allow all;
        proxy_cache_purge http-cache $1$is_args$args;
    }
}
Nginx start/stop/restart/reload/configtest script, saved as /etc/init.d/nginx: see the Nginx deployment on the storage nodes
Register nginx as a system service, enable start on boot, then start it: see the Nginx deployment on the storage nodes
Test: images in any group on the back end can be accessed without problems through port 8000 of either tracker node
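A quick spot check against the tracker-side proxies (again, substitute file IDs that were actually uploaded):

# Requests for group1/group2 are proxied to the matching upstream and cached
curl -s -o /dev/null -w '%{http_code}\n' \
    http://172.16.10.17:8000/group1/M00/00/00/rBAKCloXyT2AFH_AAAD4kx1mwCw538.png
curl -s -o /dev/null -w '%{http_code}\n' \
    http://172.16.10.18:8000/group2/M00/00/00/rBAKDFobtG-AIj2EAANxJpb_3dc974.png
# Drop a cached copy through the ngx_cache_purge location
curl http://172.16.10.17:8000/purge/group1/M00/00/00/rBAKCloXyT2AFH_AAAD4kx1mwCw538.png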
Deploy Nginx + HAProxy + Keepalived for high availability on the nginx nodes
Perform the following steps on 172.16.10.14 and 172.16.10.15
Install the software
yum -y install nginx haproxy keepalived
Keepalived configuration on node1
! Configuration File for keepalived
global_defs {
    router_id NodeA
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight 20
}
vrrp_script chk_haproxy {
    script "/etc/keepalived/haproxy_check.sh"
    interval 2
    weight 20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1314
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        172.16.10.16/24
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        172.16.10.16/24
    }
}
Keepalived configuration on node2
! Configuration File for keepalived
global_defs {
    router_id NodeB
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight 20
}
vrrp_script chk_haproxy {
    script "/etc/keepalived/haproxy_check.sh"
    interval 2
    weight 20
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1314
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        172.16.10.16/24
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        172.16.10.16/24
    }
}
Contents of nginx_check.sh (/etc/keepalived/nginx_check.sh), identical on 172.16.10.14 and 172.16.10.15
#!/bin/bash
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ];then
    nginx
    sleep 2
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
        pkill keepalived
    fi
fi
Contents of haproxy_check.sh (/etc/keepalived/haproxy_check.sh), identical on 172.16.10.14 and 172.16.10.15
#!/bin/bash
A=`ps -C haproxy --no-header | wc -l`
if [ $A -eq 0 ];then
    haproxy -f /etc/haproxy/haproxy.cfg
    sleep 2
    if [ `ps -C haproxy --no-header | wc -l` -eq 0 ];then
        pkill keepalived
    fi
fi
nginx.conf configuration (identical on 172.16.10.14 and 172.16.10.15)
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 51200;
    use epoll;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;
}
FastDFS.conf configuration (/etc/nginx/conf.d/FastDFS.conf)
upstream tracker_server {
    server 172.16.10.17:8000;
    server 172.16.10.18:8000;
}
server {
    listen 80;
    location /fastdfs {
        proxy_pass http://tracker_server/;
        proxy_set_header Host $http_host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        client_max_body_size 300m;
    }
}
Contents of haproxy.cfg (identical on 172.16.10.14 and 172.16.10.15)
global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 4000

listen fdfs_tracker_cluster
    bind 0.0.0.0:22122
    mode tcp
    option tcplog
    timeout client 1m
    timeout server 1m
    timeout connect 1m
    balance roundrobin
    server node1 172.16.10.17:22122 check inter 5000 rise 2 fall 3
    server node2 172.16.10.18:22122 check inter 2000 rise 2 fall 3
Start Keepalived
chmod 755 /etc/keepalived/*.sh
systemctl start keepalived
Enable start on boot
systemctl enable nginx
systemctl enable keepalived
systemctl enable haproxy
Check the VIP
ip addr
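Once the VIP is active, the whole chain can be exercised through it (a sketch; the /fastdfs prefix is stripped by the trailing slash in proxy_pass, and port 22122 on the VIP is the HAProxy frontend for the trackers):

# VIP -> nginx (80) -> tracker nginx (8000) -> storage nginx (9000)
curl -s -o /dev/null -w '%{http_code}\n' \
    http://172.16.10.16/fastdfs/group1/M00/00/00/rBAKCloXyT2AFH_AAAD4kx1mwCw538.png
# Optionally point tracker_server in client.conf at 172.16.10.16:22122 so that
# uploads also pass through the HAProxy layer, then upload again:
fdfs_upload_file /etc/fdfs/client.conf test.png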
Enable HAProxy logging (identical on 172.16.10.14 and 172.16.10.15)
Edit /etc/rsyslog.conf and uncomment the following four lines
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514
Add one line of configuration
local2.* /var/log/haproxy.log
Restart the rsyslog service
systemctl restart rsyslog
At this point the FastDFS cluster deployment is complete; all that remains is to test the cluster
Cluster test
Shut down half of the servers that share the same role; the cluster should keep running (a sketch of this test follows the two groups listed below)
For example:
Group 1: 172.16.10.11, 172.16.10.13, 172.16.10.15, 172.16.10.18
Group 2: 172.16.10.10, 172.16.10.12, 172.16.10.14, 172.16.10.17
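A rough sketch of this test (assuming password-less root SSH from a control host; the services on the Group 1 nodes are stopped, the cluster is exercised, then everything is started again):

# Stop the FastDFS and nginx services on the Group 1 storage/tracker nodes
for host in 172.16.10.11 172.16.10.13; do
    ssh root@$host 'service fdfs_storaged stop; service nginx stop'
done
ssh root@172.16.10.18 'service fdfs_trackerd stop; service nginx stop'
# Stop keepalived first on the Group 1 nginx node so its check scripts
# do not restart nginx/haproxy
ssh root@172.16.10.15 'systemctl stop keepalived nginx haproxy'
# The remaining half must still accept uploads and serve downloads
fdfs_upload_file /etc/fdfs/client.conf test.png
curl -s -o /dev/null -w '%{http_code}\n' \
    http://172.16.10.16/fastdfs/group1/M00/00/00/rBAKCloXyT2AFH_AAAD4kx1mwCw538.png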
Cluster startup order
1. First start Nginx on all nodes
2. Then start the Tracker nodes
3. Finally start the Storage nodes