1. Principle Analysis
1.1 Introduction to MHA
MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability. It was developed by youshimaton (now at Facebook) of the Japanese company DeNA, and is an excellent suite of software for failover and master promotion in MySQL high-availability environments. During a MySQL failover, MHA can automatically complete the database failover within 0 to 30 seconds, and while doing so it preserves data consistency to the greatest extent possible, achieving high availability in the true sense.
1.2 MHA Components
The software consists of two parts: MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed on a dedicated machine to manage multiple master-slave clusters, or it can be deployed on one of the slave nodes. MHA Node runs on every MySQL server. MHA Manager periodically probes the master node of the cluster; when the master fails, it automatically promotes the slave holding the most recent data to be the new master, and then re-points all the other slaves at the new master. The entire failover process is completely transparent to the application.
The Manager toolkit mainly includes the following tools:
masterha_check_ssh        # check MHA's SSH configuration
masterha_check_repl       # check the MySQL replication status
masterha_manager          # start MHA
masterha_check_status     # check the current running state of MHA
masterha_master_monitor   # detect whether the master is down
masterha_master_switch    # control failover (automatic or manual)
masterha_conf_host        # add or remove configured server information
The Node toolkit (these tools are normally triggered by MHA Manager scripts and require no manual operation) mainly includes the following tools:
save_binary_logs          # save and copy the master's binary logs
apply_diff_relay_logs     # identify differential relay log events and apply them to the other slaves
filter_mysqlbinlog        # strip unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs          # purge relay logs (without blocking the SQL thread)
Note:
To minimize data loss when the master goes down due to hardware failure, it is recommended to configure MySQL 5.5 semi-synchronous replication alongside MHA.
Asynchronous replication
MySQL replication is asynchronous by default: after executing a transaction committed by a client, the master immediately returns the result to the client without caring whether any slave has received and applied it. This creates a problem: if the master crashes, transactions already committed on the master may not have reached the slaves, and if a slave is then forcibly promoted to master, the new master's data may be incomplete.
Fully synchronous replication
The master returns to the client only after it has executed a transaction and all slaves have executed it too. Because it must wait for every slave to finish the transaction, the performance of fully synchronous replication inevitably suffers severely, and a timeout is required.
Semi-synchronous replication
This sits between asynchronous and fully synchronous replication: after executing a transaction committed by a client, the master does not return immediately, but waits until at least one slave has received the transaction and written it to its relay log before returning to the client. Compared with asynchronous replication, semi-synchronous replication improves data safety, but it also introduces a certain amount of latency, at minimum one TCP/IP round trip. Semi-synchronous replication is therefore best used on low-latency networks.
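As a rough sketch, semi-synchronous replication can be enabled on MySQL 5.5+ by loading the semisync plugins in my.cnf; the fragment below is illustrative, and the timeout value is an assumption to adjust for your network:

```ini
# --- master side (illustrative) ---
[mysqld]
plugin-load = "rpl_semi_sync_master=semisync_master.so"
rpl_semi_sync_master_enabled = 1
rpl_semi_sync_master_timeout = 1000   # ms to wait for a slave ACK before falling back to async

# --- slave side (illustrative) ---
# [mysqld]
# plugin-load = "rpl_semi_sync_slave=semisync_slave.so"
# rpl_semi_sync_slave_enabled = 1
```

The plugins can also be installed at runtime with INSTALL PLUGIN instead of plugin-load.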
1.3 How MHA Works
(1) Save the binary log events (binlog events) from the crashed master;
(2) Identify the slave with the most recent updates;
(3) Apply the differential relay logs to the other slaves;
(4) Apply the binary log events saved from the master;
(5) Promote one slave to be the new master;
(6) Point the other slaves at the new master and resume replication.
During automatic failover, MHA tries to save the binary logs from the crashed master to minimize data loss, but this is not always possible. For example, if the master's hardware has failed or it is unreachable over SSH, MHA cannot save the binary logs and performs the failover anyway, losing the most recent data. Using MySQL 5.5 semi-synchronous replication greatly reduces this risk: MHA can be combined with semi-synchronous replication, so as long as at least one slave has received the latest binary log events, MHA can apply them to all the other slaves and thereby keep every node's data consistent.
MHA currently supports mainly one-master, multiple-slave topologies. To build an MHA setup, a replication cluster must contain at least three database servers: one master, one standby (candidate) master, and one additional slave.
2. Lab Environment
2.1 OS version
cat /etc/redhat-release
Use one consistent version across all hosts.
2.2 Kernel version: uname -r
2.3 Hosts: prepare four clean machines, node{1,2,3,4}
The machines must be able to resolve one another's hostnames. Since the configuration files on the nodes are largely identical, you only need to edit one copy and push it to the other nodes with a for loop, which is simple and convenient; that is why hostname resolution is configured here.
Role         IP address       Hostname  server_id  Type
Master       192.168.159.11   node1     1          writes
Slave        192.168.159.151  node2     2          reads
Slave        192.168.159.120  node3     3          reads
MHA-Manager  192.168.159.121  node4     -          monitors the replication group
2.4 Make the hostnames mutually resolvable
[root@vin ~]# cat /etc/hosts
192.168.159.11 node1.com node1
192.168.159.151 node2.com node2
192.168.159.120 node3.com node3
192.168.159.121 node4.com node4
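Since /etc/hosts is identical on all four nodes, one loop can push it out. The helper below is a sketch: it only prints the scp commands (pipe the output to `sh` to actually run them, which requires the passwordless SSH configured in the next step), and the node list is assumed from the table above.

```shell
# Generate the scp commands that push /etc/hosts to the other nodes.
gen_hosts_push() {
  for host in node2 node3 node4; do
    echo "scp /etc/hosts root@${host}:/etc/hosts"
  done
}
gen_hosts_push
```

The same pattern works for any file that must be identical cluster-wide.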
2.5 Set up passwordless SSH between the hosts
When MHA runs, the Manager needs to verify SSH connectivity between the nodes, so passwordless SSH must be established among them. A simple shortcut is used here: generate an SSH key pair on one node, authorize it for that host itself, then copy the authorization file together with the public and private keys to the other nodes. That way you do not need to create a key pair and set up authorization on every single node.
On node4, the Manager node:
[root@node4 ~]# ssh-keygen -t rsa
[root@node4 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.159.151
[root@node4 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.159.120
[root@node4 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.159.121
On the master node:
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.159.120
ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.159.121
On the slave1 node:
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.159.151
ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.159.121
On the slave2 node:
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.159.120
ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.159.151
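Before moving on, it is worth confirming that the keys work non-interactively, since masterha_check_ssh will fail otherwise. A minimal check loop (the helper name is made up, hostnames assumed from the table above):

```shell
# Probe each node over SSH in batch mode (no password prompts allowed);
# prints OK or FAILED for every host.
check_ssh() {
  for host in "$@"; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "root@${host}" true 2>/dev/null; then
      echo "${host}: OK"
    else
      echo "${host}: FAILED"
    fi
  done
}
check_ssh node1 node2 node3 node4
```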
3. Build the Master-Slave Replication Cluster
3.1 Initial configuration on the master node:
vim /etc/my.cnf
[mysqld]
server-id = 1
log-bin = master-log
relay-log = relay-log
skip_name_resolve = ON
3.2 Configuration required on every slave node:
[mysqld]
server-id = 2        # every node in the replication cluster must have a unique id
relay-log = relay-log
log-bin = master-log
read_only = ON
relay_log_purge = 0  # 0 = do not automatically purge relay logs that are no longer needed
skip_name_resolve = ON
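Because only server-id differs between the slaves, the fragment can be templated and pushed with a loop, in the spirit of the earlier note about editing one copy and distributing it. A sketch (the function name is made up):

```shell
# Emit the [mysqld] fragment for a slave with the given server-id.
slave_cnf() {
  cat <<EOF
[mysqld]
server-id = $1
relay-log = relay-log
log-bin = master-log
read_only = ON
relay_log_purge = 0
skip_name_resolve = ON
EOF
}
slave_cnf 2   # fragment for node2; use 3 for node3, and so on
```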
3.3 With the master and slaves configured as above, complete the setup following the standard MySQL replication procedure:
start the master and every slave node, start the IO and SQL threads on each slave, and confirm that replication runs without errors. Proceed as follows:
On the master node, create a user the slaves can use to replicate data:
MariaDB [(none)]> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO slave@'192.168.%.%' IDENTIFIED BY 'magedu';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> SHOW MASTER STATUS;
On each slave node:
[root@node3 ~]# mysql
MariaDB [(none)]> CHANGE MASTER TO
MASTER_HOST='192.168.159.151',
MASTER_USER='slave',
MASTER_PASSWORD='magedu',
MASTER_LOG_FILE='master-bin.000003',
MASTER_LOG_POS=415;
MariaDB [(none)]> START SLAVE;
MariaDB [(none)]> SHOW SLAVE STATUS\G
3.4 Start the slaves
After logging in to the database, run: START SLAVE;
Check the slave's state with: SHOW SLAVE STATUS\G
If the output shows:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
then the slave has started successfully and is running normally.
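The two Yes flags can also be checked from a script; the helper below simply counts them in SHOW SLAVE STATUS\G output (the name and usage line are illustrative):

```shell
# Count how many of the two replication threads report "Yes"; 2 means healthy.
repl_ok() {
  grep -Ec 'Slave_(IO|SQL)_Running: Yes'
}
# Usage on a slave:
#   mysql -e 'SHOW SLAVE STATUS\G' | repl_ok    # prints 2 when both threads run
```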
3.5 Create the replication user on every slave as well
This is needed because once the master goes down, any slave may become the new master, and the remaining slaves will then need an authorized account to synchronize data from it. So every MySQL server must be given, in advance, a user that the other slaves can use for replication. During a master switch, MHA Manager hands this account to the slaves; it is named by the repl_user=xxx option in the MHA Manager configuration file, so every server must define the same replication user as the MHA configuration. In this lab that user is named slave.
Create it on master, slave1, and slave2:
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO slave@'%' IDENTIFIED BY 'magedu';
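Rather than logging in to each server, the grant can be replayed across the nodes with one loop. This sketch only prints the per-node commands (hostnames are assumed, and the password is inlined for brevity); pipe the output to `sh` to run them.

```shell
# Print the command that creates the replication user on every node.
gen_grant_cmds() {
  sql="GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO slave@'%' IDENTIFIED BY 'magedu'; FLUSH PRIVILEGES;"
  for host in node1 node2 node3; do
    printf 'ssh root@%s mysql -e "%s"\n' "$host" "$sql"
  done
}
gen_grant_cmds
```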
4. Install and Configure MHA
4.1 Install the MHA packages
On the Manager node:
# yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-devel
# yum install mha4mysql-manager-0.56-0.el6.noarch.rpm
# yum install mha4mysql-node-0.56-0.el6.noarch.rpm
On all nodes:
# yum install perl-DBD-MySQL
# yum install mha4mysql-node-0.56-0.el6.noarch.rpm
4.2 Initialize and configure MHA
The Manager node needs a dedicated configuration file for every monitored master/slave cluster, and all clusters may also share a global configuration. The global configuration file defaults to /etc/masterha_default.cnf and is optional. If only one master/slave cluster is monitored, the per-application configuration alone can supply the default settings for each server. The path of each application's configuration file is up to you.
4.3 Define the MHA management configuration file
Create a dedicated management user for MHA so it can be reused later. Do this on the MySQL master node; it replicates to the three nodes automatically.
mkdir -p /etc/masterha       # create a directory for the configuration file (path is up to you)
vim /etc/masterha/app1.cnf   # with the following content:
[server default]            # settings shared by all servers
user=admin                  # MHA management user
password=magedu             # MHA management password
manager_workdir=/etc/masterha/app1      # masterha's own working directory
manager_log=/etc/masterha/manager.log   # masterha's own log file
remote_workdir=/tmp/masterha/app1       # working directory on each remote host
ssh_user=root               # user for SSH key-based authentication
repl_user=slave             # replication user name
repl_password=magedu        # replication user password
ping_interval=1             # ping interval in seconds
[server1]                   # node 1
hostname=192.168.159.151    # node 1 address
ssh_port=22                 # node 1 SSH port
candidate_master=1          # whether this node may become a candidate master
[server2]
hostname=192.168.159.120
ssh_port=22
candidate_master=1
[server3]
hostname=192.168.159.121
ssh_port=22
candidate_master=1
4.4 Check that the SSH trust between the nodes is OK:
[root@node4 ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
If the last line of the output looks like the following, the check passed:
[info] All SSH connection tests passed successfully.
Then check that the connection parameters of the managed MySQL replication cluster are OK:
[root@node4 ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
It should report: MySQL Replication Health is OK.
If this test fails, the replication account may be missing on a slave node; in this architecture any slave can become the master, so the account must exist on every node.
In that case, simply run the following on the master node again (it replicates to the slaves):
MariaDB [(none)]> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO slave@'192.168.%.%' IDENTIFIED BY 'magedu';
MariaDB [(none)]> FLUSH PRIVILEGES;
Re-run the check on the Manager node and it will report OK.
5. Start MHA
[root@node4 ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf &> /etc/masterha/manager.log &
# After a successful start, check the state of the master node with:
[root@node4 ~]# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:4978) is running(0:PING_OK), master:192.168.159.11
"app1 (pid:4978) is running(0:PING_OK)" in the output means the MHA service is running normally;
otherwise it shows something like "app1 is stopped(1:NOT_RUNNING)."
To stop MHA, use the masterha_stop command:
[root@node4 ~]# masterha_stop --conf=/etc/masterha/app1.cnf
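For monitoring scripts it is handy to reduce the masterha_check_status output to an exit code. The one-line helper below (illustrative, not part of MHA) succeeds only on the healthy PING_OK form shown above:

```shell
# Succeeds (exit 0) when the status line contains the healthy PING_OK marker.
mha_running() {
  grep -q '(0:PING_OK)'
}
# Usage:
#   masterha_check_status --conf=/etc/masterha/app1.cnf | mha_running && echo running
```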
6. Test MHA Failover
(1) Kill the mariadb service on the master node to simulate a crash that destroys the master's data:
# killall -9 mysqld mysqld_safe
# rm -rf /var/lib/mysql/*
(2) Check the log on the manager node:
When /etc/masterha/manager.log contains messages like the following, the manager has detected the failure of node 192.168.159.151 and automatically performed the failover, promoting 192.168.159.121 to master.
Note that once the failover completes, the manager stops automatically, and masterha_check_status will then report an error, as shown below:
# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 is stopped(2:NOT_RUNNING).
7. Troubleshooting
Error 1:
[root@data01 ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Tue Apr 7 22:31:06 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue Apr 7 22:31:07 2015 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Tue Apr 7 22:31:07 2015 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Tue Apr 7 22:31:07 2015 - [info] MHA::MasterMonitor version 0.56.
Tue Apr 7 22:31:07 2015 - [error][/usr/local/share/perl5/MHA/Server.pm, ln303] Getting relay log directory or current relay logfile from replication table failed on 192.168.52.130(192.168.52.130:3306)!
Tue Apr 7 22:31:07 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations. at /usr/local/share/perl5/MHA/ServerManager.pm line 315
Tue Apr 7 22:31:07 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Tue Apr 7 22:31:07 2015 - [info] Got exit code 1 (Not master dead).
MySQL Replication Health is NOT OK!
[root@centos7 ~]#
Fix: on 192.168.159.151, edit /etc/my.cnf with vim and add:
relay-log=/home/data/mysql/binlog/mysql-relay-bin
Then restart mysql and set up the slave connection again:
STOP SLAVE;
RESET SLAVE;
CHANGE MASTER TO
MASTER_HOST='192.168.159.151',
MASTER_USER='slave',
MASTER_PASSWORD='magedu',
MASTER_LOG_FILE='master-bin.000003',
MASTER_LOG_POS=415;
START SLAVE;
OK, that resolves it.
Error 2:
[root@data01 perl]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Thu Apr 9 00:54:32 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Apr 9 00:54:32 2015 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Thu Apr 9 00:54:32 2015 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Thu Apr 9 00:54:32 2015 - [info] MHA::MasterMonitor version 0.56.
Thu Apr 9 00:54:32 2015 - [error][/usr/local/share/perl5/MHA/Server.pm, ln306] Getting relay log directory or current relay logfile from replication table failed on 192.168.52.130(192.168.52.130:3306)!
Thu Apr 9 00:54:32 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations. at /usr/local/share/perl5/MHA/ServerManager.pm line 315
Thu Apr 9 00:54:32 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Thu Apr 9 00:54:32 2015 - [info] Got exit code 1 (Not master dead).
MySQL Replication Health is NOT OK!
[root@data01 perl]#
Fix:
In /etc/masterha/app1.cnf, both user and repl_user refer to MySQL accounts that must already exist. Here only repl_user had been created, while the user account had not:
user=root
password=magedu
repl_user=repl
repl_password=magedu
On the MySQL nodes, create the manager account that allows the manager to access the databases, mainly for SHOW SLAVE STATUS and RESET SLAVE, by running:
GRANT SUPER, RELOAD, REPLICATION CLIENT, SELECT ON *.* TO slave@'192.168.%.%' IDENTIFIED BY 'magedu';
Error 3:
[root@oraclem1 ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Thu Apr 9 23:09:05 2015 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Apr 9 23:09:05 2015 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Thu Apr 9 23:09:05 2015 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Thu Apr 9 23:09:05 2015 - [info] MHA::MasterMonitor version 0.56.
Thu Apr 9 23:09:05 2015 - [error][/usr/local/share/perl5/MHA/ServerManager.pm, ln781] Multi-master configuration is detected, but two or more masters are either writable (read-only is not set) or dead! Check configurations for details. Master configurations are as below:
Master 192.168.52.130(192.168.52.130:3306), replicating from 192.168.52.129(192.168.52.129:3306)
Master 192.168.52.129(192.168.52.129:3306), replicating from 192.168.52.130(192.168.52.130:3306)
Thu Apr 9 23:09:05 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations. at /usr/local/share/perl5/MHA/MasterMonitor.pm line 326
Thu Apr 9 23:09:05 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Thu Apr 9 23:09:05 2015 - [info] Got exit code 1 (Not master dead)
MySQL Replication Health is NOT OK!
[root@oraclem1 ~]#
Fix: set the second writable master back to read-only:
mysql> set global read_only=1;
Query OK, 0 rows affected (0.00 sec)
Error 4:
Thu Apr 9 23:54:32 2015 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Thu Apr 9 23:54:32 2015 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='manager' --slave_host=192.168.52.130 --slave_ip=192.168.52.130 --slave_port=3306 --workdir=/var/tmp --target_version=5.6.12-log --manager_version=0.56 --relay_dir=/home/data/mysql/data --current_relay_log=mysqld-relay-bin.000011 --slave_pass=xxx
Thu Apr 9 23:54:32 2015 - [info] Connecting to root@192.168.52.130(192.168.52.130:22)..
Can't exec "mysqlbinlog": No such file or directory at /usr/local/share/perl5/MHA/BinlogManager.pm line 106.
mysqlbinlog version command failed with rc 1:0, please verify PATH, LD_LIBRARY_PATH, and client options
at /usr/local/bin/apply_diff_relay_logs line 493
Thu Apr 9 23:54:32 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln205] Slaves settings check failed!
Thu Apr 9 23:54:32 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln413] Slave configuration failed.
Thu Apr 9 23:54:32 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations. at /usr/local/bin/masterha_check_repl line 48
Thu Apr 9 23:54:32 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Thu Apr 9 23:54:32 2015 - [info] Got exit code 1 (Not master dead).
MySQL Replication Health is NOT OK!
[root@oraclem1 ~]#
Fix:
[root@data02 ~]# type mysqlbinlog
mysqlbinlog is /usr/local/mysql/bin/mysqlbinlog
[root@data02 ~]#
[root@data02 ~]# ln -s /usr/local/mysql/bin/mysqlbinlog /usr/bin/mysqlbinlog
Error 5:
Thu Apr 9 23:57:24 2015 - [info] Connecting to root@192.168.52.130(192.168.52.130:22)..
Checking slave recovery environment settings..
Relay log found at /home/data/mysql/data, up to mysqld-relay-bin.000013
Temporary relay log file is /home/data/mysql/data/mysqld-relay-bin.000013
Testing mysql connection and privileges..
sh: mysql: command not found
mysql command failed with rc 127:0!
at /usr/local/bin/apply_diff_relay_logs line 375
main::check() called at /usr/local/bin/apply_diff_relay_logs line 497
eval {...} called at /usr/local/bin/apply_diff_relay_logs line 475
main::main() called at /usr/local/bin/apply_diff_relay_logs line 120
Thu Apr 9 23:57:24 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln205] Slaves settings check failed!
Thu Apr 9 23:57:24 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln413] Slave configuration failed.
Thu Apr 9 23:57:24 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations. at /usr/local/bin/masterha_check_repl line 48
Thu Apr 9 23:57:24 2015 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Thu Apr 9 23:57:24 2015 - [info] Got exit code 1 (Not master dead).
MySQL Replication Health is NOT OK!
Fix:
ln -s /usr/local/mysql/bin/mysql /usr/bin/mysql
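Errors 4 and 5 share one root cause: MHA's Node scripts run over non-interactive SSH, where /usr/local/mysql/bin is typically not on PATH. A quick loop to verify the required clients are visible (the helper name is made up):

```shell
# Report where each required client resolves, or NOT FOUND if it is missing
# from the PATH that MHA's remote commands will see.
check_bins() {
  for bin in "$@"; do
    if command -v "$bin" >/dev/null 2>&1; then
      echo "$bin: $(command -v "$bin")"
    else
      echo "$bin: NOT FOUND"
    fi
  done
}
check_bins mysql mysqlbinlog
```

Run it through `ssh root@node ...` as well, since an interactive login shell may have a different PATH than the non-interactive one MHA uses.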