1. MHA environment

Role              IP address     Hostname  server_id  Purpose
Monitor host      192.168.0.20   server01  -          monitors the replication group
Master            192.168.0.50   server02  1          writes
Candidate master  192.168.0.60   server03  2          reads
Slave             192.168.0.70   server04  3          reads
The master serves writes, the candidate master (actually a slave, hostname server03) serves reads, and the slave also serves reads. If the master goes down, the candidate master is promoted to be the new master and the slave is repointed at it.

1) Install dependency packages
yum install -y gcc ntpdate wget lrzsz vim net-tools openssh-clients*
2) Install the EPEL repository
yum install -y epel-release
3) Install the MHA components

1) On all nodes, install the perl module that MHA node requires (DBD::mysql):
yum install perl-DBD-MySQL -y
2) On all nodes, install mha node. Download link:
https://code.google.com/archive/p/mysql-master-ha/downloads
wget https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/mysql-master-ha/mha4mysql-node-0.54.tar.gz
tar xf mha4mysql-node-0.54.tar.gz
cd mha4mysql-node-0.54
perl Makefile.PL
make && make install
Error 1
Fix:
root># yum install perl-ExtUtils-MakeMaker
Error 2
Fix:
root># yum install perl-CPAN -y
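Once `perl Makefile.PL` and `make install` succeed, a quick way to confirm the install is to look for the utilities the node package puts on PATH. The check loop below is my own sketch (it is not part of MHA); the four tool names are the utilities shipped by mha4mysql-node.

```shell
# Sanity check after `make install`: report each mha4mysql-node utility
# as found or missing on PATH.
check_node_tools() {
  for tool in save_binary_logs apply_diff_relay_logs filter_mysqlbinlog purge_relay_logs; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "found: $tool"
    else
      echo "missing: $tool"
    fi
  done
}

check_node_tools
```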
2. Install MHA Manager
MHA Manager ships several administrative command-line tools, such as masterha_manager and masterha_master_switch. MHA Manager also depends on a set of perl modules, as follows:
(1) Install the dependencies needed before installing the MHA Node package. I use yum here; if you do not have the EPEL repository, you can use the script mentioned above (installing the EPEL repository is also straightforward). Note: MHA Node must be installed on the MHA Manager host as well.
rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum install perl-DBD-MySQL -y
(2) Install MHA Manager. First install the perl modules it depends on (via yum here):
yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes -y
Install the MHA Manager package:
wget https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/mysql-master-ha/mha4mysql-manager-0.54.tar.gz
tar xf mha4mysql-manager-0.54.tar.gz
cd mha4mysql-manager-0.54
perl Makefile.PL
make && make install
Copy the bundled scripts to /usr/local/bin:
root># cp -rp /root/mha4mysql-manager-0.54/samples/scripts/* /usr/local/bin/
Passwordless SSH setup
root># ssh-copy-id 192.168.56.131
root># ssh-copy-id 192.168.56.132
root># ssh-copy-id 192.168.56.133
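The three commands above can be wrapped in a loop when there are more nodes. This sketch is a dry run: each command is echoed rather than executed, so drop the `echo` to really push keys (and run `ssh-keygen` first if the host has no key yet).

```shell
# Push the root SSH key to every node in the cluster (dry run: commands
# are printed, not executed).
push_keys() {
  for host in 192.168.56.131 192.168.56.132 192.168.56.133; do
    echo ssh-copy-id "root@$host"
  done
}

push_keys
```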
Database configuration
1) Take a backup on server02 (192.168.56.131):
mysqldump --master-data=2 --single-transaction -R --triggers -A > all.sql
Create the replication user on server02:
mysql> grant replication slave on *.* to 'repl'@'192.168.56.%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
# Change the root password
mysql> grant all privileges on *.* to root@"%" identified by '123456';
2) Look up the binlog coordinates on the master:
root># head -n 30 all.sql | grep 'CHANGE MASTER TO'
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=214;
3) Run on both slaves:
CHANGE MASTER TO MASTER_HOST='192.168.56.131',MASTER_USER='repl',MASTER_PASSWORD='123456',MASTER_LOG_FILE='mysql-bin.000003',MASTER_LOG_POS=214;
4) Check that replication is healthy:
root># mysql -e 'show slave status\G' | egrep 'Slave_IO|Slave_SQL'
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
5) Set read_only on both slaves (the slaves serve reads; the setting is not written into the config file because either slave may be promoted to master at any time):
server03<2018-08-14 17:25:23> /data
root># mysql -e 'set global read_only=1'
server04<2018-08-14 17:24:42> /data
root># mysql -e 'set global read_only=1'
6) Create the monitoring user (on the master, i.e. 192.168.56.131):
mysql> grant all privileges on *.* to 'mha'@'192.168.56.%' identified by '123456';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)
7) On the master, check the master's view of its slaves:
mysql> SHOW SLAVE HOSTS;
+-----------+------+------+-----------+--------------------------------------+
| Server_id | Host | Port | Master_id | Slave_UUID                           |
+-----------+------+------+-----------+--------------------------------------+
|         3 |      | 3306 |         1 | b9543701-a052-11e8-b2f6-000c29c77f26 |
|         2 |      | 3306 |         1 | bcdbef2c-a052-11e8-b2f6-000c293612be |
+-----------+------+------+-----------+--------------------------------------+
2 rows in set (0.00 sec)
At this point the replication cluster itself is fully set up; what remains is configuring the MHA software.
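Steps 2) and 3) above can be automated: read the coordinates out of the dump header and build the CHANGE MASTER statement from them. A sketch, using a sample header line in place of the real `head -n 30 all.sql` output:

```shell
# Build the CHANGE MASTER statement from the header that mysqldump
# writes when run with --master-data=2. The line below stands in for:
#   head -n 30 all.sql | grep 'CHANGE MASTER TO'
header="-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=214;"

# Pull out the binlog file name and position with sed.
log_file=$(printf '%s\n' "$header" | sed -n "s/.*MASTER_LOG_FILE='\([^']*\)'.*/\1/p")
log_pos=$(printf '%s\n' "$header" | sed -n "s/.*MASTER_LOG_POS=\([0-9]*\).*/\1/p")

printf "CHANGE MASTER TO MASTER_HOST='192.168.56.131',MASTER_USER='repl',MASTER_PASSWORD='123456',MASTER_LOG_FILE='%s',MASTER_LOG_POS=%s;\n" "$log_file" "$log_pos"
```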
5. Configure MHA
1) Create MHA's working directory and the related config files (the unpacked source tree includes sample configs).
server01<2018-08-14 17:30:36> ~
root># mkdir -p /etc/masterha
root># cp /root/mha4mysql-manager-0.54/samples/conf/app1.cnf /etc/masterha/

### MHA config file for the ProxySQL setup
root># cat /etc/masterha/proxy_app_default.cnf
[server default]
manager_workdir=/etc/masterha/app1
manager_log=/etc/masterha/app1/manager_proxy.log
ssh_user=root
ssh_port=22
user=mha
password=123456
ping_type=connect
repl_user=repl
repl_password=123456
master_binlog_dir=/var/lib/mysql/
ping_interval=3
# Runs on master failover; not needed when no VIP is configured
master_ip_failover_script=/etc/masterha/app1/scripts/master_ip_failover_proxy
#shutdown_script=/etc/masterha/power_manager
report_script=/etc/masterha/app1/scripts/send_report_proxy
master_ip_online_change_script=/etc/masterha/app1/scripts/master_ip_online_change_proxy
#secondary_check_script=masterha_secondary_check -s 192.168.56.130

[server1]
hostname=192.168.56.131
port=3306
candidate_master=1
#check_repl_delay=0

[server2]
hostname=192.168.56.132
port=3306
candidate_master=1
#check_repl_delay=0

[server3]
hostname=192.168.56.133
port=3306
candidate_master=1
#check_repl_delay=0

### MHA config file for the VIP setup
root># cat vip_app_default.cnf
[server default]
manager_workdir=/etc/masterha/app1
manager_log=/etc/masterha/app1/manager_vip.log
ssh_user=root
ssh_port=22
user=mha
password=123456
ping_type=connect
repl_user=repl
repl_password=123456
master_binlog_dir=/var/lib/mysql/
ping_interval=3
# Runs on master failover; not needed when no VIP is configured
master_ip_failover_script=/etc/masterha/app1/scripts/master_ip_failover
#shutdown_script=/etc/masterha/power_manager
report_script=/etc/masterha/app1/scripts/send_report
master_ip_online_change_script=/etc/masterha/app1/scripts/master_ip_online_change
#secondary_check_script=masterha_secondary_check -s 192.168.56.130

[server1]
hostname=192.168.56.131
port=3306
candidate_master=1
#check_repl_delay=0

[server2]
hostname=192.168.56.132
port=3306
candidate_master=1
#check_repl_delay=0

[server3]
hostname=192.168.56.133
port=3306
candidate_master=1
#check_repl_delay=0
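With the config file in place, the usual next steps are to validate SSH trust and replication, then start the monitor. The commands below are the standard MHA Manager tools with the config path from above; they are printed here rather than executed, since they require the live cluster, and the nohup log path is illustrative.

```shell
# Print the standard follow-up commands: validate SSH, validate
# replication, then start the MHA monitor in the background.
next_steps='masterha_check_ssh --conf=/etc/masterha/proxy_app_default.cnf
masterha_check_repl --conf=/etc/masterha/proxy_app_default.cnf
nohup masterha_manager --conf=/etc/masterha/proxy_app_default.cnf > /etc/masterha/app1/manager.log 2>&1 &'
printf '%s\n' "$next_steps"
```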