1. Introduction
MHA is a mature high-availability solution for MySQL that handles failover and slave promotion. During a MySQL failover, MHA can complete the switchover automatically within 0–30 seconds, and while doing so it preserves data consistency to the greatest extent possible, delivering genuine high availability.
The software consists of two components: MHA Manager (the management node) and MHA Node (the data node):
MHA Manager can be deployed on a dedicated machine to manage several master-slave clusters, or it can run on one of the slaves. The Manager probes the nodes of the cluster; when it detects that the master has failed, it automatically promotes the slave with the most recent data to be the new master and repoints all the other slaves to it. The whole failover is transparent to the application.
MHA Node runs on every MySQL server (master/slave/manager); it speeds up failover by providing scripts that parse and purge binary and relay logs.
MHA currently targets one-master, multi-slave topologies. To build an MHA setup, a replication cluster must contain at least three database servers: one master, one candidate master, and one slave.
GTID-based replication is supported starting with MHA 0.56.
Tips:
Be sure to disable the firewall or open the required ports/policies.
Update the system time (optional):
Check whether the time zone is CST (Asia/Shanghai); if not, change it: ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
Sync the clock from the network: ntpdate -u asia.pool.ntp.org
Note: before building MHA, make sure the master's and slave's data are consistent, i.e. both sides have the same binlog execution state. Otherwise, during a failover the slave cannot be repointed to the new master automatically and the records pointing to the old master have to be cleaned up by hand. To avoid this, always verify that the slave's binlog execution state matches the master's.
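A quick way to sanity-check this before continuing is to compare the slave's applied position with the master's current position; a minimal sketch, using the credentials used throughout this setup:

# on the master
mysql -uroot -p123456 -e "show master status\G"
# on each slave: both threads should be Yes and the slave should have caught up
mysql -uroot -p123456 -e "show slave status\G" | egrep 'Running|Master_Log_File|Exec_Master_Log_Pos|Seconds_Behind_Master'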
Environment: CentOS 7, MHA 0.57

Role | IP | Purpose
Master | 50.116 | writes
Candidate master | 50.115 | reads
Slave / Manager | 50.28 | reads
Note: this setup follows these two excellent blog posts (some of the log output below was copied from them rather than reproduced):
http://www.cnblogs.com/gomysql/p/3675429.html
http://www.cnblogs.com/xuanzhi201111/p/4231412.html?spm=5176.100239.blogcont52048.7.HR7na7
2. Installing MHA
1.) Install the Perl module MHA Node depends on (DBD::mysql):
rpm -ivh http://dl.fedoraproject.org/pub/epel/7Server/x86_64/e/epel-release-7-10.noarch.rpm
yum install perl-DBD-MySQL -y
2.) Install the other dependencies MHA Node needs (on all nodes):
yum install -y perl-devel
yum install -y perl-CPAN
3.) The MHA project page hosted by Google has not been updated since 2012, so you have to find the latest release yourself (links are given above).
Install these dependencies before installing the Manager:
yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes -y
Download the Node and Manager packages, unpack them, and run the following inside each directory to install:
perl Makefile.PL
make && make install
After the Node package is installed, the following files are generated under /usr/local/bin/:
-r-xr-xr-x 1 root root 15498 Jan 18 11:02 apply_diff_relay_logs
-r-xr-xr-x 1 root root  4807 Jan 18 11:02 filter_mysqlbinlog
-r-xr-xr-x 1 root root  7401 Jan 18 11:02 purge_relay_logs
-r-xr-xr-x 1 root root  7263 Jan 18 11:02 save_binary_logs

Node scripts (these tools are normally invoked by MHA Manager scripts; no manual action is required):
save_binary_logs        // saves and copies the master's binary logs
apply_diff_relay_logs   // identifies differential relay log events and applies the differences to the other slaves
filter_mysqlbinlog      // strips unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs        // purges relay logs (without blocking the SQL thread)
After the Manager package is installed, the following files are generated under /usr/local/bin:
-r-xr-xr-x. 1 root root 15498 Jan 11 22:55 apply_diff_relay_logs
-r-xr-xr-x. 1 root root  4807 Jan 11 22:55 filter_mysqlbinlog
-r-xr-xr-x. 1 root root  1995 Jan 11 22:55 masterha_check_repl
-r-xr-xr-x. 1 root root  1779 Jan 11 22:55 masterha_check_ssh
-r-xr-xr-x. 1 root root  1865 Jan 11 22:55 masterha_check_status
-r-xr-xr-x. 1 root root  3201 Jan 11 22:55 masterha_conf_host
-r-xr-xr-x. 1 root root  2517 Jan 11 22:55 masterha_manager
-r-xr-xr-x. 1 root root  2165 Jan 11 22:55 masterha_master_monitor
-r-xr-xr-x. 1 root root  2373 Jan 11 22:55 masterha_master_switch
-r-xr-xr-x. 1 root root  3749 Jan 11 22:55 masterha_secondary_check
-r-xr-xr-x. 1 root root  1739 Jan 11 22:55 masterha_stop
-r-xr-xr-x. 1 root root  7401 Jan 11 22:55 purge_relay_logs
-r-xr-xr-x. 1 root root  7263 Jan 11 22:55 save_binary_logs
Copy the scripts from mha4mysql-manager-0.53/samples/scripts/ to /usr/local/bin:
-rwxr-xr-x. 1 root root  3443 Jan 8 2012 master_ip_failover       // manages the VIP during automatic failover; not required. If keepalived is used you can write your own script to manage the VIP, e.g. monitor MySQL and stop keepalived when MySQL fails so the VIP floats away by itself
-rwxr-xr-x. 1 root root  9186 Jan 8 2012 master_ip_online_change  // manages the VIP during an online switchover; not required, a simple shell script works as well
-rwxr-xr-x. 1 root root 11867 Jan 8 2012 power_manager             // shuts down the failed host after a failure; not required
-rwxr-xr-x. 1 root root  1360 Jan 8 2012 send_report               // sends an alert after a failover; not required, a simple shell script works as well
3. Configuring MHA
1.) Set up passwordless SSH login (key-based login is common in production; it is best not to disable password login entirely, otherwise you may run into problems).
manager_slave: (since the management node runs on this host, it also needs passwordless SSH to itself)
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.50.115
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.50.116
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.50.28
slave:
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.50.116
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.50.28
master:
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.50.115
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.50.28
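Before running the MHA checks it is worth confirming that every host can reach the others without a password prompt; a small sketch run from each node in turn:

for host in 192.168.50.115 192.168.50.116 192.168.50.28; do
    ssh -o BatchMode=yes root@$host hostname || echo "passwordless login to $host failed"
done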
2.) Create the MHA working directory and the configuration file:
mkdir -p /etc/masterha
cp mha4mysql-manager-0.56/samples/conf/app1.cnf /etc/masterha/
[server default]
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
master_binlog_dir=/opt/mysql/log
master_ip_failover_script= /usr/local/bin/master_ip_failover
master_ip_online_change_script= /usr/local/bin/master_ip_online_change
password=123456
user=root
ping_interval=2
remote_workdir=/tmp
repl_password=123456
repl_user=root
report_script=/usr/local/bin/send_report
secondary_check_script= /usr/local/bin/masterha_secondary_check -s server03 -s server02
shutdown_script=""
ssh_user=root

[server1]
hostname=192.168.50.115
port=3306

[server2]
hostname=192.168.50.116
candidate_master=1
check_repl_delay=0

[server3]
hostname=192.168.50.28
port=3306
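The user/password pair (used by the Manager to monitor MySQL) and repl_user/repl_password (used when repointing slaves) must be able to connect from every node in the cluster. A minimal sketch of the grant, matching the credentials above and assuming the 192.168.50.0/24 network (granting ALL to root over the network is lab-only; tighten it in production); run it on the master so it replicates to the slaves:

grant all privileges on *.* to 'root'@'192.168.50.%' identified by '123456';
flush privileges;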
3.) Configure relay log purging (on every slave node)
On 50.28 and 50.116 run: mysql -uroot -p123456 -e "set global relay_log_purge=0"
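You can confirm the change took effect with a quick check on each slave:

mysql -uroot -p123456 -e "show variables like 'relay_log_purge'"   # should report OFF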
Note:
During a failover, MHA relies on relay log information to recover the other slaves, so automatic relay log purging must be turned OFF and the relay logs cleaned up manually instead. By default a slave deletes its relay logs once the SQL thread has applied them, but in an MHA environment those relay logs may be needed to recover other slaves, which is why automatic deletion has to be disabled. When purging relay logs periodically you also have to account for replication lag: on an ext3 filesystem, deleting a large file takes a noticeable amount of time and can cause serious replication delay. To avoid this, a hard link to the relay log is created first, because on Linux removing a large file through a hard link is fast (the same hard-link trick is commonly used when dropping large tables in MySQL).
Set up a script to purge relay logs periodically (on both slave servers):
[root@bogon ~]# vim purge_relay_log.sh
#!/bin/bash
user=root
passwd=123456
port=3306
log_dir='/data/masterha/log'
work_dir='/data'
purge='/usr/local/bin/purge_relay_logs'

if [ ! -d $log_dir ]
then
    mkdir $log_dir -p
fi

$purge --user=$user --password=$passwd --disable_relay_log_purge --port=$port --workdir=$work_dir >> $log_dir/purge_relay_logs.log 2>&1
Parameter notes:
--user mysql                 // MySQL user name
--password mysql             // MySQL password
--port                       // port number
--workdir                    // where the hard links to the relay logs are created; defaults to /var/tmp. Creating a hard link across filesystems fails, so point this at a location on the same partition as the relay logs. After the script finishes successfully the hard-linked relay log files are removed.
--disable_relay_log_purge    // by default, if relay_log_purge=1 the script cleans nothing and exits. With this option the script temporarily enables relay_log_purge and flushes logs so the old relay logs are removed, then sets relay_log_purge back to 0 (OFF) once the cleanup is done.
[root@bogon ~]# crontab -l
0 6 * * * /bin/bash /root/purge_relay_log.sh   # do not schedule the two slaves at the same time, or you may be caught out when those relay logs are needed for recovery.
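For example, stagger the two slaves (the exact times are only an illustration):

# on 192.168.50.115
0 6 * * * /bin/bash /root/purge_relay_log.sh
# on 192.168.50.28
0 7 * * * /bin/bash /root/purge_relay_log.sh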
purge_relay_logs removes relay logs without blocking the SQL thread. Let's run it by hand and see what happens:
[root@bogon ~]# purge_relay_logs --user=root --password=123456 --port=3306 -disable_relay_log_purge --workdir=/data/
2015-01-18 12:30:51: purge_relay_logs script started.
 Found relay_log.info: /data/mysql/relay-log.info
 Removing hard linked relay log files localhost-relay-bin* under /data/.. done.
 Current relay log file: /data/mysql/localhost-relay-bin.000002
 Archiving unused relay log files (up to /data/mysql/localhost-relay-bin.000001) ...
 Creating hard link for /data/mysql/localhost-relay-bin.000001 under /data//localhost-relay-bin.000001 .. ok.
 Creating hard links for unused relay log files completed.
 Executing SET GLOBAL relay_log_purge=1; FLUSH LOGS; sleeping a few seconds so that SQL thread can delete older relay log files (if it keeps up); SET GLOBAL relay_log_purge=0; .. ok.
 Removing hard linked relay log files localhost-relay-bin* under /data/.. done.
2015-01-18 12:30:54: All relay log purging operations succeeded.
4.) Check the SSH configuration (on the monitor node, 192.168.50.28), as follows:
[root@bogon ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Fri Nov 3 15:29:01 2017 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Nov 3 15:29:01 2017 - [info] Reading application default configurations from /etc/masterha/app1.cnf..
Fri Nov 3 15:29:01 2017 - [info] Reading server configurations from /etc/masterha/app1.cnf..
Fri Nov 3 15:29:01 2017 - [info] Starting SSH connection tests..
Fri Nov 3 15:29:03 2017 - [debug]
Fri Nov 3 15:29:02 2017 - [debug] Connecting via SSH from root@192.168.50.28(192.168.50.28:22) to root@192.168.50.116(192.168.50.116:22)..
Fri Nov 3 15:29:03 2017 - [debug] ok.
Fri Nov 3 15:29:03 2017 - [debug] Connecting via SSH from root@192.168.50.28(192.168.50.28:22) to root@192.168.50.115(192.168.50.115:22)..
Fri Nov 3 15:29:03 2017 - [debug] ok.
Fri Nov 3 15:29:03 2017 - [debug]
Fri Nov 3 15:29:01 2017 - [debug] Connecting via SSH from root@192.168.50.115(192.168.50.115:22) to root@192.168.50.116(192.168.50.116:22)..
Fri Nov 3 15:29:03 2017 - [debug] ok.
Fri Nov 3 15:29:03 2017 - [debug] Connecting via SSH from root@192.168.50.115(192.168.50.115:22) to root@192.168.50.28(192.168.50.28:22)..
Fri Nov 3 15:29:03 2017 - [debug] ok.
Fri Nov 3 15:29:03 2017 - [debug]
Fri Nov 3 15:29:01 2017 - [debug] Connecting via SSH from root@192.168.50.116(192.168.50.116:22) to root@192.168.50.115(192.168.50.115:22)..
Fri Nov 3 15:29:02 2017 - [debug] ok.
Fri Nov 3 15:29:02 2017 - [debug] Connecting via SSH from root@192.168.50.116(192.168.50.116:22) to root@192.168.50.28(192.168.50.28:22)..
Fri Nov 3 15:29:03 2017 - [debug] ok.
Fri Nov 3 15:29:03 2017 - [info] All SSH connection tests passed successfully.
5.) Check the health of the whole replication setup (on the monitor node, 192.168.50.28), as follows:
[root@bogon ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf   (the broken environment is not reproduced here — this output is copied from the blogs linked above, so ignore the exact IPs and focus on the procedure and on how the errors are resolved.)
Sun Jan 18 13:08:11 2015 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/data/mysql --output_file=/tmp/save_binary_logs_test --manager_version=0.56 --start_file=mysql-bin.000004
Sun Jan 18 13:08:11 2015 - [info] Connecting to root@192.168.2.128(192.168.2.128)..
 Creating /tmp if not exists.. ok.
 Checking output directory is accessible or not.. ok.
 Binlog found at /data/mysql, up to mysql-bin.000004
Sun Jan 18 13:08:11 2015 - [info] Master setting check done.
Sun Jan 18 13:08:11 2015 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Sun Jan 18 13:08:11 2015 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user=root --slave_host=192.168.2.129 --slave_ip=192.168.2.129 --slave_port=3306 --workdir=/tmp --target_version=5.5.60-log --manager_version=0.56 --relay_log_info=/data/mysql/relay-log.info --relay_dir=/data/mysql/ --slave_pass=xxx
Sun Jan 18 13:08:11 2015 - [info] Connecting to root@192.168.2.129(192.168.2.129:22)..
Can't exec "mysqlbinlog": No such file or directory at /usr/local/share/perl5/MHA/BinlogManager.pm line 99.
mysqlbinlog version not found! at /usr/local/bin/apply_diff_relay_logs line 463
Sun Jan 18 13:08:12 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln193] Slaves settings check failed!
Sun Jan 18 13:08:12 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln372] Slave configuration failed.
Sun Jan 18 13:08:12 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln383] Error happend on checking configurations. at /usr/local/bin/masterha_check_repl line 48
Sun Jan 18 13:08:12 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln478] Error happened on monitoring servers.
Sun Jan 18 13:08:12 2015 - [info] Got exit code 1 (Not master dead).

MySQL Replication Health is NOT OK!
If you see errors like the following:
Can't exec "mysqlbinlog": No such file or directory at /usr/local/share/perl5/MHA/BinlogManager.pm line 99. mysqlbinlog version not found!
Testing mysql connection and privileges..sh: mysql: command not found mysql command failed with rc 127:0!
they can be fixed as follows (run on all nodes):
ln -s /usr/local/mysql/bin/mysqlbinlog /usr/local/bin/mysqlbinlog
ln -s /usr/local/mysql/bin/mysql /usr/local/bin/mysql
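After creating the links, a quick check confirms both binaries now resolve on the PATH:

which mysql mysqlbinlog
mysqlbinlog --version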
The check still failed, and after much head-scratching the cause turned out to be this: failover can manage the writer address in two ways, via a virtual IP or via a global configuration file. MHA does not mandate either — the choice is left to the user. The virtual-IP approach involves extra software such as keepalived and also requires modifying the master_ip_failover script.
So comment out the master_ip_failover_script= /usr/local/bin/master_ip_failover option for now; it will be re-enabled after keepalived is introduced and the script has been modified, as follows:
192.168.2.131 [root ~]$ grep master_ip_failover /etc/masterha/app1.cnf
#master_ip_failover_script= /usr/local/bin/master_ip_failover
192.168.2.131 [root ~]$ masterha_check_repl --conf=/etc/masterha/app1.cnf
Sun Jan 18 13:23:57 2015 - [info] Slaves settings check done.
Sun Jan 18 13:23:57 2015 - [info]
192.168.2.128 (current master)
 +--192.168.2.129
 +--192.168.2.130
Sun Jan 18 13:23:57 2015 - [info] Checking replication health on 192.168.2.129..
Sun Jan 18 13:23:57 2015 - [info] ok.
Sun Jan 18 13:23:57 2015 - [info] Checking replication health on 192.168.2.130..
Sun Jan 18 13:23:57 2015 - [info] ok.
Sun Jan 18 13:23:57 2015 - [warning] master_ip_failover_script is not defined.
Sun Jan 18 13:23:57 2015 - [warning] shutdown_script is not defined.
Sun Jan 18 13:23:57 2015 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
6.) Controlling the MHA Manager process
Check status: masterha_check_status --conf=/etc/masterha/app1.cnf
Start: nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &
Stop: masterha_stop --conf=/etc/masterha/app1.cnf
View the log:
tail -f /var/log/masterha/app1/manager.log
Sun Jan 18 13:27:22 2015 - [warning] master_ip_failover_script is not defined.
Sun Jan 18 13:27:22 2015 - [warning] shutdown_script is not defined.
Sun Jan 18 13:27:22 2015 - [info] Set master ping interval 1 seconds.
Sun Jan 18 13:27:22 2015 - [info] Set secondary check script: /usr/local/bin/masterha_secondary_check -s server03 -s server02
Sun Jan 18 13:27:22 2015 - [info] Starting ping health check on 192.168.50.116(192.168.50.116:3306)..
Sun Jan 18 13:27:22 2015 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
Startup option notes:
--remove_dead_master_conf    // after a master switchover, the old master's entry is removed from the configuration file.
--manger_log                 // log file location.
--ignore_last_failover       // by default, if MHA detects two outages less than 8 hours apart it refuses to fail over again, to avoid ping-pong switching. After a failover MHA writes a flag file named app1.failover.complete into the manager working directory (configured above); as long as that file exists the next failover is refused, unless the file is deleted after the first switch. For convenience this option is used here so the flag file is ignored.
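If the Manager is started without --ignore_last_failover, that flag file has to be removed by hand before another failover is allowed; a minimal sketch, assuming the manager_workdir from the configuration above:

rm -f /var/log/masterha/app1/app1.failover.complete
nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &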
7.) Configuring the VIP
The VIP can be managed in two ways: with keepalived floating the virtual IP, or with a script that brings the virtual IP up and down directly (no keepalived/heartbeat-like software required).
First, managing the floating virtual IP with keepalived:
(1) Download and install keepalived (on both masters — strictly speaking one is the master and the other is the candidate master, which acts as a slave until a switchover happens).
http://www.keepalived.org/software/keepalived-1.3.8.tar.gz
Enter the unpacked directory and run:
./configure --prefix=/usr/local/keepalived ;make && make install
Set up the service files so keepalived can be started conveniently:
cp keepalived/etc/init.d/keepalived /etc/init.d/
cp keepalived/etc/sysconfig/keepalived /etc/sysconfig
mkdir /etc/keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
Configuration on the primary master:
! Configuration File for keepalived

global_defs {
   notification_email {
     1*******@qq.com
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id MySQL-HA
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.123
    }
}
Configuration on the candidate (backup) master:
! Configuration File for keepalived

global_defs {
   notification_email {
     11*******@qq.com
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id MySQL-HA
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 51
    priority 120
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.50.123
    }
}
Here router_id MySQL-HA names the keepalived group; the virtual IP 192.168.50.123 is bound to the host's ens NIC, and the instance state is set to BACKUP. priority 150 sets that node's priority to 150. nopreempt lets a node with a lower priority remain master even after a higher-priority node comes back up; nopreempt only takes effect on nodes whose state is BACKUP. (One more detail: double-check which NIC actually carries the VIP — eth0 vs eth1, or ens160 vs ens192 here.)
Start keepalived on the master and then on the candidate master:
/etc/init.d/keepalived start
Run ip a to verify (note that ifconfig does not show the configured virtual IP).
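For example, a one-liner to confirm which node currently holds the VIP:

ip addr show | grep 192.168.50.123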
Note:
Both servers above are configured with state BACKUP. keepalived has two modes, master->backup and backup->backup, and they behave very differently. In master->backup mode, when the master fails the VIP floats to the backup, but once the old master is repaired and keepalived restarts it takes the VIP back — even with nopreempt set, the preemption still happens. In backup->backup mode, the VIP floats to the backup when the master fails, and when the original master and its keepalived come back they do not grab the VIP from the new master, even if their priority is higher. To minimise the number of VIP moves, the repaired old master is usually kept as the new standby.
8.) Hooking keepalived into MHA (so that MHA stops keepalived when the MySQL process dies):
To bring keepalived under MHA's control, we only need to modify master_ip_failover, the script triggered during a switchover, adding the keepalived handling to run when the master goes down.
1. Edit /usr/local/bin/master_ip_failover; after the changes it reads as follows:
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);

my $vip = '192.168.0.88';
my $ssh_start_vip = "/etc/init.d/keepalived start";
my $ssh_stop_vip  = "/etc/init.d/keepalived stop";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
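The script can be exercised by hand before wiring it into MHA; the status branch only prints a message and touches nothing (hosts below are the ones from this setup):

/usr/local/bin/master_ip_failover --command=status --orig_master_host=192.168.50.116 --orig_master_ip=192.168.50.116 --orig_master_port=3306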
Uncomment #master_ip_failover_script= /usr/local/bin/master_ip_failover:
[root ~]$ grep 'master_ip_failover_script' /etc/masterha/app1.cnf
master_ip_failover_script= /usr/local/bin/master_ip_failover
Then run the check again:
masterha_check_repl --conf=/etc/masterha/app1.cnf
IN SCRIPT TEST====/etc/init.d/keepalived stop==/etc/init.d/keepalived start===

Checking the Status of the script.. OK
Tue Nov 7 13:48:16 2017 - [info] OK.
Tue Nov 7 13:48:16 2017 - [warning] shutdown_script is not defined.
Tue Nov 7 13:48:16 2017 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
If the log looks like the line below, the Manager is monitoring: tail -f /var/log/masterha/app1/manager.log
Tue Nov 7 12:46:18 2017 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
9.) Update /etc/hosts on the management node
[root@bogon ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.50.115 server01
192.168.50.116 server02
192.168.50.28 server03
10.) Stop the MySQL service on the master to simulate a crash, then watch the log on the management node:
tail -f /var/log/masterha/app1/manager.log   (only the last part of the output is shown)
----- Failover Report -----
app1: MySQL Master failover 192.168.50.116(192.168.50.116:3306) to 192.168.50.115(192.168.50.115:3306) succeeded
Master 192.168.50.116(192.168.50.116:3306) is down!
Check MHA Manager logs at bogon:/var/log/masterha/app1/manager.log for details.
Started automated(non-interactive) failover.
Invalidated master IP address on 192.168.50.116(192.168.50.116:3306)
Selected 192.168.50.115(192.168.50.115:3306) as a new master.
192.168.50.115(192.168.50.115:3306): OK: Applying all logs succeeded.
192.168.50.115(192.168.50.115:3306): OK: Activated master IP address.
192.168.50.28(192.168.50.28:3306): OK: Slave started, replicating from 192.168.50.115(192.168.50.115:3306)
192.168.50.115(192.168.50.115:3306): Resetting slave info succeeded.
Master failover to 192.168.50.115(192.168.50.115:3306) completed successfully.
Tue Nov 21 16:32:05 2017 - [info] Sending mail..
Unknown option: conf
11.) Check the VIP on the former master (192.168.50.116) — it has disappeared; on the candidate master it is now present. The switchover is complete.
4. Error cases:
1. During the replication check:
ERROR 1142 (42000) at line 1: CREATE command denied to user 'root'@'192.168.50.28' for table 'apply_diff_relay_logs_test'
Fix: grant all privileges on *.* to 'root'@'192.168.50.%' identified by '12345678';
2. Bugs in MHA 0.53
1.) When simulating a failure, the failover log reports: Got ERROR: Use of uninitialized value $msg in scalar chomp at /usr/local/share/perl5/MHA/ManagerConst.pm line 90.
There are two ways to fix it:
(1.1) This is reported to be a bug in 0.53; upgrading to 0.56 is recommended.
(1.2) Add the line marked with + below to /usr/local/share/perl5/MHA/ManagerConst.pm:
our $log_fmt = sub {
my %args = @_;
my $msg = $args{message};
+ $msg = "" unless($msg);
chomp $msg;
if ( $args{level} eq "error" ) {
my ( $ln, $script ) = ( caller(4) )[ 2, 1 ];
2.) With GTID-based replication, a simulated failure switches over successfully but the slave cannot be repointed to the new master. This is a bug in MHA 0.53; upgrading to 0.56 or later fixes it.
[error][/usr/local/share/perl5/MHA/Server.pm, ln714] Checking slave status failed on 192.168.50.28(192.168.50.28:3306).
[error][/usr/local/share/perl5/MHA/Server.pm, ln817] Starting slave IO/SQL thread on 192.168.50.28(192.168.50.28:3306) failed!
Mon Nov 20 10:37:30 2017 - [info] End of log messages from 192.168.50.28.
Mon Nov 20 10:37:30 2017 - [error][/usr/local/share/perl5/MHA/MasterFailover.pm, ln1537] Master failover to 192.168.50.115(192.168.50.115:3306) done, but recovery on slave partially failed.

----- Failover Report -----

app1: MySQL Master failover 192.168.50.116 to 192.168.50.115

Master 192.168.50.116 is down!

Check MHA Manager logs at bogon:/var/log/masterha/app1/manager.log for details.

Started automated(non-interactive) failover.
Invalidated master IP address on 192.168.50.116.
The latest slave 192.168.50.115(192.168.50.115:3306) has all relay logs for recovery.
Selected 192.168.50.115 as a new master.
192.168.50.115: OK: Applying all logs succeeded.
192.168.50.115: OK: Activated master IP address.
192.168.50.28: This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
192.168.50.28: WARN: Applying all logs succeeded. But starting slave failed.
Master failover to 192.168.50.115(192.168.50.115:3306) done, but recovery on slave partially failed.
Mon Nov 20 10:37:30 2017 - [info] Sending mail..
Option new_slave_hosts requires an argument
Unknown option: conf
3. Log output when starting the monitor after a successful failover:
[warning] SQL Thread is stopped(no error) on 192.168.50.115(192.168.50.115:3306)
[error][/usr/local/share/perl5/MHA/ServerManager.pm, ln732] Multi-master configuration is Master configurations are as below:
Master 192.168.50.115(192.168.50.115:3306), replicating from 192.168.50.116(192.168.50.116:3306)
Master 192.168.50.116(192.168.50.116:3306), dead
Fix:
Start the Manager with --remove_dead_master_conf so that the old master's entry is removed automatically. This only works when the failover itself completed cleanly, without any errors; otherwise the option has no effect and the stale entry has to be removed from the configuration file by hand.
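A sketch of that manual cleanup, assuming the dead master is the [server2] block from the configuration shown earlier:

masterha_stop --conf=/etc/masterha/app1.cnf
vim /etc/masterha/app1.cnf     # delete the whole block of the dead master, e.g.
                               #   [server2]
                               #   hostname=192.168.50.116
                               #   candidate_master=1
                               #   check_repl_delay=0
nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &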
4. Log output when starting the monitor after a successful failover:
[warning] SQL Thread is stopped(no error) on 192.168.50.115(192.168.50.115:3306)
[error][/usr/local/share/perl5/MHA/ServerManager.pm, ln622] Master 192.168.50.116:3306 from which slave 192.168.50.115(192.168.50.115:3306) replicates is not defined in the configuration file!
The new master still carries slave state pointing at the old master, so MHA treats the new master as a slave whose master is the old, already-dead host, and reports the error. The fix is to clear the slave information on the new master:
stop slave;
reset slave all;
Normally MHA clears the new master's slave state automatically, so this is most likely caused either by an inconsistent binlog execution state between master and slave or by the MHA 0.53 bug.
5. Setting MHA up in master->backup mode
When the old master comes back up it grabs the VIP again. In that case: stop the Manager, add the old master back into the configuration file as a [server] block, clear the old master's slave information and CHANGE MASTER it so that it replicates from the new master, restart the Manager's monitoring, and then stop MySQL on the new master — MHA will then fail over back. (keepalived has to be set up in master->backup mode so that the VIP automatically floats to the node with the higher priority.)
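A minimal sketch of repointing the recovered old master (192.168.50.116) at the new master (192.168.50.115), assuming GTID replication and the replication credentials used throughout this setup:

-- run on the recovered old master
stop slave;
reset slave all;
change master to
    master_host='192.168.50.115',
    master_port=3306,
    master_user='root',
    master_password='123456',
    master_auto_position=1;
start slave;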
老master恢复后抢占主,此时须要将manager关闭,而后在配置文件中将老master添加回[server]字段,接着清除老master的slave信息并将新master change到老master,而后开启manager监控,在将新master服务关闭,此时MHA会进行故障转移的操做。(须要将keeplived设置为master——》backup模式,VIP会自动飘到优先级高的去)