As application systems keep growing in scale, the requirements on data safety and reliability also keep rising, and rsync has gradually shown several shortcomings in high-end business systems.
First, when rsync synchronizes data it has to scan and compare every file before doing the delta transfer. Once the number of files reaches the millions or tens of millions, scanning all of them is very time-consuming, while usually only a small fraction of them is actually changing, so this is a very inefficient way to work.
Second, rsync cannot watch and synchronize data in real time. Syncs can be triggered periodically (for example from a cron job or daemon), but there is always a time gap between two runs, so the data on the server and the client may become inconsistent and cannot be fully recovered when the application fails.
For these two reasons, the combination of rsync + inotify can be used to achieve real-time data synchronization.
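For contrast, a purely periodic setup would be something like the crontab sketch below (the 10-minute interval is only an illustrative assumption; the paths and module are the ones used later in this setup). Anything changed between two runs only reaches the backup server on the next run.

# illustrative crontab entry: full rsync of one source directory every 10 minutes
*/10 * * * * /usr/bin/rsync -avH --port=873 --delete /Data/fangfull_upload/ RSYNC_USER@192.168.1.5::fangfull_upload --password-file=/etc/rsync.pass >> /tmp/rsync_cron.log 2>&1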
inotify is a powerful, fine-grained, asynchronous file system event notification mechanism. The Linux kernel has supported inotify since 2.6.13. Through inotify you can monitor events such as file creation, deletion, modification and moves in the file system; using this kernel interface, third-party software can watch for all kinds of changes to files, and inotify-tools is exactly such a monitoring tool.
After an initial full sync with rsync, inotify is used to watch the source directory in real time: as soon as a file changes or a new file appears, it is immediately synchronized to the target directory, which is very efficient.
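The overall pattern looks roughly like the sketch below (the source path /data/src and the module name backup_module are placeholders; the actual scripts used in this setup appear further down):

#!/bin/bash
# watch /data/src recursively and push every change to the rsync daemon module "backup_module"
/usr/local/inotify/bin/inotifywait -mrq -e close_write,create,delete,move /data/src | while read file
do
    /usr/bin/rsync -avH --port=873 --delete-before /data/src/ RSYNC_USER@192.168.1.5::backup_module --password-file=/etc/rsync.pass
done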
The goal is to synchronize, in real time:
/Data/fangfull_upload and /Data/erp_upload on 192.168.1.1
/Data/xqsj_upload/ and /Data/fanghu_upload_src on 192.168.1.2
/Data/Static_img/webroot/ssapp-prod and /usr/local/nginx/html/ssapp.prod on 192.168.1.3
to the corresponding fangfull_upload, erp_upload, xqsj_upload, fanghu_upload_src, ssapp-prod and ssapp.prod directories under /home/backup/image-back on 192.168.1.5.
In this setup:
(1) 192.168.1.1, 192.168.1.2 and 192.168.1.3 are the source servers and act as rsync clients; rsync + inotify is deployed on them.
(2) 192.168.1.5 is the target server and acts as the rsync server. It only needs rsync installed and configured; inotify is not needed there.
On the target server 192.168.1.5, disable SELinux first:

vim /etc/selinux/config
SELINUX=disabled

setenforce 0
firewall-cmd --permanent --add-rich-rule="rule family="ipv4" source address="192.168.1.1" port protocol="tcp" port="22" accept"
firewall-cmd --permanent --add-rich-rule="rule family="ipv4" source address="192.168.1.1" port protocol="tcp" port="873" accept"
firewall-cmd --permanent --add-rich-rule="rule family="ipv4" source address="192.168.1.2" port protocol="tcp" port="22" accept"
firewall-cmd --permanent --add-rich-rule="rule family="ipv4" source address="192.168.1.2" port protocol="tcp" port="873" accept"
firewall-cmd --permanent --add-rich-rule="rule family="ipv4" source address="192.168.1.3" port protocol="tcp" port="22" accept"
firewall-cmd --permanent --add-rich-rule="rule family="ipv4" source address="192.168.1.3" port protocol="tcp" port="873" accept"
systemctl restart firewalld
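If you want to double-check the rules afterwards, something like the following should list them (default zone assumed):

firewall-cmd --list-rich-rules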
yum install rsync xinetd

vim /etc/xinetd.d/rsync
.....
disable = no        # change the default "yes" to "no" so that xinetd starts rsync at boot
/etc/init.d/xinetd start
vim /etc/rsyncd.conf

log file = /var/log/rsyncd.log        # log file location; created automatically when rsync starts, no need to create it beforehand
pid file = /var/run/rsyncd.pid        # location of the pid file
lock file = /var/run/rsync.lock       # lock file used by the "max connections" parameter
secrets file = /etc/rsync.pass        # authentication file holding user names and passwords; created later
motd file = /etc/rsyncd.Motd          # welcome message file shown when rsync starts (create it yourself, content is arbitrary)

[fangfull_upload]                     # module name, self-defined
path = /home/backup/image-back/fangfull_upload    # data directory on the rsync server, i.e. where the synced files end up
comment = fangfull_upload             # comment, same as the module name [fangfull_upload]
uid = nobody                          # uid rsync runs as; keep ownership consistent with the source directory (here both sides use nobody)
gid = nobody                          # gid rsync runs as
port = 873                            # default rsync port
use chroot = no                       # defaults to true; set to no/false so that symlinked directories and files are backed up as well
read only = no                        # make the server-side files writable
list = no                             # do not expose the module list on the server
max connections = 200                 # maximum number of connections
timeout = 600                         # timeout
auth users = RSYNC_USER               # user name used for syncing, set up manually later; multiple users may be listed, comma-separated
hosts allow = 192.168.1.1             # client IPs allowed to sync; multiple IPs may be listed, comma-separated
hosts deny = 192.168.1.194            # client IPs denied; comma-separated (omit this line if nothing needs to be denied)

[erp_upload]
path = /home/backup/image-back/erp_upload
comment = erp_upload
uid = nobody
gid = nobody
port = 873
use chroot = no
read only = no
list = no
max connections = 200
timeout = 600
auth users = RSYNC_USER
hosts allow = 192.168.1.1

[xqsj_upload]
path = /home/backup/image-back/xqsj_upload
comment = xqsj_upload
uid = nobody
gid = nobody
port = 873
use chroot = no
read only = no
list = no
max connections = 200
timeout = 600
auth users = RSYNC_USER
hosts allow = 192.168.1.2

[fanghu_upload_src]
path = /home/backup/image-back/fanghu_upload_src
comment = fanghu_upload_src
uid = nobody
gid = nobody
port = 873
use chroot = no
read only = no
list = no
max connections = 200
timeout = 600
auth users = RSYNC_USER
hosts allow = 192.168.1.2

[ssapp-prod]
path = /home/backup/image-back/ssapp-prod
comment = ssapp-prod
uid = nginx
gid = nginx
port = 873
use chroot = no
read only = no
list = no
max connections = 200
timeout = 600
auth users = RSYNC_USER
hosts allow = 192.168.1.3

[ssapp.prod]
path = /home/backup/image-back/ssapp.prod
comment = ssapp.prod
uid = nginx
gid = nginx
port = 873
use chroot = no
read only = no
list = no
max connections = 200
timeout = 600
auth users = RSYNC_USER
hosts allow = 192.168.1.3
Create the authentication file; on the server side the format is username:password, and the user name must match the auth users entry (RSYNC_USER) in rsyncd.conf:

vim /etc/rsync.pass
RSYNC_USER:123456@rsync
chmod 600 /etc/rsyncd.conf
chmod 600 /etc/rsync.pass
/etc/init.d/xinetd restart
lsof -i:873
COMMAND   PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
xinetd  22041 root    5u  IPv6 3336440      0t0  TCP *:rsync (LISTEN)
cd /home/backup/image-back/
mkdir fangfull_upload erp_upload xqsj_upload fanghu_upload_src ssapp-prod ssapp.prod
On the source servers (192.168.1.1, 192.168.1.2 and 192.168.1.3), also disable SELinux:

vim /etc/selinux/config
SELINUX=disabled

setenforce 0
yum install rsync xinetd

vim /etc/xinetd.d/rsync
.....
disable = no        # change the default "yes" to "no" so that xinetd starts rsync at boot
/etc/init.d/xinetd start
lsof -i:873
COMMAND   PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
xinetd  22041 root    5u  IPv6 3336440      0t0  TCP *:rsync (LISTEN)
On the client side the password file contains only the password itself:

vim /etc/rsync.pass
123456@rsync
chmod 600 /etc/rsync.pass
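At this point the connection to the rsync daemon can be tested from a source server with a dry run; -n (--dry-run) only reports what would be transferred without changing anything, for example:

rsync -avHn --port=873 /Data/fangfull_upload/ RSYNC_USER@192.168.1.5::fangfull_upload --password-file=/etc/rsync.pass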
Next, install inotify-tools on the source servers. First confirm that the kernel supports inotify:

ll /proc/sys/fs/inotify
max_queued_events  max_user_instances  max_user_watches
yum install make gcc gcc-c++        # install build tools
cd /usr/local/src
wget http://github.com/downloads/rvoicilas/inotify-tools/inotify-tools-3.14.tar.gz
tar zxvf inotify-tools-3.14.tar.gz
cd inotify-tools-3.14
./configure --prefix=/usr/local/inotify
make && make install
vim /etc/profile
export PATH=$PATH:/usr/local/inotify/bin

source /etc/profile
vim /etc/ld.so.conf
/usr/local/inotify/lib

ldconfig
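Optionally, a quick smoke test of the freshly built tools: watch a scratch directory in one terminal and create a file in it from another (the paths here are arbitrary):

/usr/local/inotify/bin/inotifywait -mrq -e create,delete,modify /tmp
# in a second terminal:
touch /tmp/inotify_test && rm /tmp/inotify_test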
Check the system's default parameter values:
sysctl -a | grep max_queued_events
fs.inotify.max_queued_events = 16384
sysctl -a | grep max_user_watches
fs.inotify.max_user_watches = 8192
sysctl -a | grep max_user_instances
fs.inotify.max_user_instances = 128
Raise them (a sketch for making the values persistent follows the parameter descriptions below):

sysctl -w fs.inotify.max_queued_events="99999999"
sysctl -w fs.inotify.max_user_watches="99999999"
sysctl -w fs.inotify.max_user_instances="65535"
max_queued_events:
Maximum length of the inotify event queue. If the value is too small, an "Event Queue Overflow" error occurs and file monitoring becomes unreliable.
max_user_watches:
Must cover the number of directories being watched. Count the directories under a source tree with, for example, find /Data/xqsj_upload -type d | wc -l, and make sure max_user_watches is larger than the result (here /Data/xqsj_upload is one of the source directories).
max_user_instances:
Maximum number of inotify instances each user can create.
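To keep these values across reboots they can also be written to /etc/sysctl.conf and reloaded, along the lines of:

cat >> /etc/sysctl.conf <<'EOF'
fs.inotify.max_queued_events = 99999999
fs.inotify.max_user_watches = 99999999
fs.inotify.max_user_instances = 65535
EOF
sysctl -p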
On the 192.168.1.1 server:
First full sync:
rsync -avH --port=873 --progress --delete /Data/fangfull_upload/ RSYNC_USER@192.168.1.5::fangfull_upload --password-file=/etc/rsync.pass
rsync -avH --port=873 --progress --delete /Data/erp_upload/ RSYNC_USER@192.168.1.5::erp_upload --password-file=/etc/rsync.pass
The real-time sync scripts below use the --delete-before option rather than the --delete option used in the first full sync. The difference between the two:
--delete: remove files from the destination that no longer exist in the source; by default these deletions are interleaved with the transfer itself.
--delete-before: scan the destination first and delete the extraneous files before the transfer starts, keeping the deletion phase clearly separated from the transfer.
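In either case, a dry run (-n) can be used first to preview which files would be deleted or transferred without changing anything, for example:

rsync -avHn --port=873 --delete /Data/fangfull_upload/ RSYNC_USER@192.168.1.5::fangfull_upload --password-file=/etc/rsync.pass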
cd /home/rsync/
cat rsync_fangfull_upload_inotify.sh

#!/bin/bash
SRCDIR=/Data/fangfull_upload/
USER=RSYNC_USER
IP=192.168.1.5
DESTDIR=fangfull_upload
/usr/local/inotify/bin/inotifywait -mrq --timefmt '%d/%m/%y %H:%M' --format '%T %w%f%e' -e close_write,modify,delete,create,attrib,move $SRCDIR | while read file
do
    /usr/bin/rsync -avH --port=873 --progress --delete-before $SRCDIR $USER@$IP::$DESTDIR --password-file=/etc/rsync.pass
    echo " ${file} was rsynced" >> /tmp/rsync.log 2>&1
done
cat rsync_erp_upload_inotify.sh

#!/bin/bash
SRCDIR=/Data/erp_upload/
USER=RSYNC_USER
IP=192.168.1.5
DESTDIR=erp_upload
/usr/local/inotify/bin/inotifywait -mrq --timefmt '%d/%m/%y %H:%M' --format '%T %w%f%e' -e close_write,modify,delete,create,attrib,move $SRCDIR | while read file
do
    /usr/bin/rsync -avH --port=873 --progress --delete-before $SRCDIR $USER@$IP::$DESTDIR --password-file=/etc/rsync.pass
    echo " ${file} was rsynced" >> /tmp/rsync.log 2>&1
done
nohup sh rsync_fangfull_upload_inotify.sh &
nohup sh rsync_erp_upload_inotify.sh &
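If the watchers should survive a reboot, one simple option is appending the same commands to /etc/rc.local (a systemd unit would work just as well; this sketch assumes the scripts live in /home/rsync/):

cat >> /etc/rc.local <<'EOF'
nohup sh /home/rsync/rsync_fangfull_upload_inotify.sh &
nohup sh /home/rsync/rsync_erp_upload_inotify.sh &
EOF
chmod +x /etc/rc.local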
ps -ef|grep inotify
root 11390 1 0 13:41 ? 00:00:00 sh rsync_erp_upload_inotify.sh
root 11392 11390 0 13:41 ? 00:00:00 sh rsync_erp_upload_inotify.sh
root 11397 1 0 13:41 ? 00:00:00 sh rsync_fangfull_upload_inotify.sh
root 11399 11397 0 13:41 ? 00:00:00 sh rsync_fangfull_upload_inotify.sh
root 21842 11702 0 17:22 pts/0 00:00:00 grep --color=auto inotify
For example, if a file or directory is created in the source directory /Data/fangfull_upload, it is automatically synchronized in real time to the target directory /home/backup/image-back/fangfull_upload on 192.168.1.5.
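A quick way to verify this, assuming SSH access from the source server to 192.168.1.5 (the test file name is arbitrary):

touch /Data/fangfull_upload/inotify_sync_test
ssh root@192.168.1.5 "ls -l /home/backup/image-back/fangfull_upload/inotify_sync_test"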
On the 192.168.1.2 server, first full sync:
rsync -avH --port=873 --progress --delete /Data/xqsj_upload/ RSYNC_USER@192.168.1.5::xqsj_upload --password-file=/etc/rsync.pass
rsync -avH --port=873 --progress --delete /Data/fanghu_upload_src/ RSYNC_USER@192.168.1.5::fanghu_upload_src --password-file=/etc/rsync.pass
rsync + inotify real-time sync:
cd /home/rsync/
cat rsync_xqsj_upload_inotify.sh

#!/bin/bash
SRCDIR=/Data/xqsj_upload/
USER=RSYNC_USER
IP=192.168.1.5
DESTDIR=xqsj_upload
/usr/local/inotify/bin/inotifywait -mrq --timefmt '%d/%m/%y %H:%M' --format '%T %w%f%e' -e close_write,modify,delete,create,attrib,move $SRCDIR | while read file
do
    /usr/bin/rsync -avH --port=873 --progress --delete-before $SRCDIR $USER@$IP::$DESTDIR --password-file=/etc/rsync.pass
    echo " ${file} was rsynced" >> /tmp/rsync.log 2>&1
done
cat rsync_fanghu_upload_src_inotify.sh

#!/bin/bash
SRCDIR=/Data/fanghu_upload_src/
USER=RSYNC_USER
IP=192.168.1.5
DESTDIR=fanghu_upload_src
/usr/local/inotify/bin/inotifywait -mrq --timefmt '%d/%m/%y %H:%M' --format '%T %w%f%e' -e close_write,modify,delete,create,attrib,move $SRCDIR | while read file
do
    /usr/bin/rsync -avH --port=873 --progress --delete-before $SRCDIR $USER@$IP::$DESTDIR --password-file=/etc/rsync.pass
    echo " ${file} was rsynced" >> /tmp/rsync.log 2>&1
done
nohup sh rsync_xqsj_upload_inotify.sh &
nohup sh rsync_fanghu_upload_src_inotify.sh &
For example, if a file or directory is created in the source directory /Data/xqsj_upload, it is automatically synchronized in real time to the target directory /home/backup/image-back/xqsj_upload on 192.168.1.5.
On the 192.168.1.3 server, first full sync:
rsync -avH --port=873 --progress --delete /Data/Static_img/webroot/ssapp-prod/ RSYNC_USER@192.168.1.5::ssapp-prod --password-file=/etc/rsync.pass
rsync -avH --port=873 --progress --delete /usr/local/nginx/html/ssapp.prod/ RSYNC_USER@192.168.1.5::ssapp.prod --password-file=/etc/rsync.pass
cd /home/rsync/
cat rsync_ssapp-prod_inotify.sh

#!/bin/bash
SRCDIR=/Data/Static_img/webroot/ssapp-prod/
USER=RSYNC_USER
IP=192.168.1.5
DESTDIR=ssapp-prod
/usr/local/inotify/bin/inotifywait -mrq --timefmt '%d/%m/%y %H:%M' --format '%T %w%f%e' -e close_write,modify,delete,create,attrib,move $SRCDIR | while read file
do
    /usr/bin/rsync -avH --port=873 --progress --delete-before $SRCDIR $USER@$IP::$DESTDIR --password-file=/etc/rsync.pass
    echo " ${file} was rsynced" >> /tmp/rsync.log 2>&1
done
cat rsync_ssapp.prod_inotify.sh

#!/bin/bash
SRCDIR=/usr/local/nginx/html/ssapp.prod/
USER=RSYNC_USER
IP=192.168.1.5
DESTDIR=ssapp.prod
/usr/local/inotify/bin/inotifywait -mrq --timefmt '%d/%m/%y %H:%M' --format '%T %w%f%e' -e close_write,modify,delete,create,attrib,move $SRCDIR | while read file
do
    /usr/bin/rsync -avH --port=873 --progress --delete-before $SRCDIR $USER@$IP::$DESTDIR --password-file=/etc/rsync.pass
    echo " ${file} was rsynced" >> /tmp/rsync.log 2>&1
done
nohup sh rsync_ssapp-prod_inotify.sh &
nohup sh rsync_ssapp.prod_inotify.sh &
For example, if a file or directory is created in the source directory /Data/Static_img/webroot/ssapp-prod, it is automatically synchronized in real time to the target directory /home/backup/image-back/ssapp-prod on 192.168.1.5.
If the sync fails partway through, and re-running the sync command keeps producing the following error:
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1505)
the cause turned out to be:
there were symbolic link files in the source directory.
To sync symbolic links, rsync needs the -l option.
So it is best to run the rsync commands with the -avpgolr option combination (i.e. change the -avH used above to -avpgolr; note that -a by itself already implies -rlptgoD, so spelling the flags out mainly makes the behaviour explicit).
-a: archive mode (equivalent to -rlptgoD)
-v: verbose, print the details of the transfer
-p: preserve permissions
-g: preserve group
-o: preserve owner
-l: copy symlinks as symlinks
-r: recurse into directories
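With that change, the first full-sync command for /Data/fangfull_upload, for example, becomes:

rsync -avpgolr --port=873 --progress --delete /Data/fangfull_upload/ RSYNC_USER@192.168.1.5::fangfull_upload --password-file=/etc/rsync.pass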