Deploying a MySQL High-Availability Cluster

 

1. Introduction

This article describes how to build a high-availability database architecture with MySQL-MMM.

2. Environment

          

Servers

Role      Hostname   IP              Server-id   MySQL version   OS
Master1   master1    192.168.4.10    10          5.6.15          CentOS 6.9
Master2   master2    192.168.4.11    11          5.6.15
Slave1    slave1     192.168.4.12    12          5.6.15
Slave2    slave2     192.168.4.13    13          5.6.15
Monitor   monitor    192.168.4.100
Client    client     192.168.4.120               5.6.15

 

 

Virtual IPs

Virtual IP       Function   Description
192.168.4.200    write      write VIP for the active master
192.168.4.201    read       read VIP
192.168.4.202    read       read VIP

 

Topology diagram

 

 

3. MMM Architecture

Server roles

Type              Service process   Main purpose
Management node   mmm-monitor       Monitoring daemon responsible for all monitoring work; decides whether a failed node is removed or restored.
Database node     mmm-agent         Agent daemon running on each MySQL server; provides a simple set of remote services to the monitor node (e.g. switching read-only mode or changing the replication master).

 

    

Core software packages

Package                   Purpose
Net-ARP-1.0.8.tgz         Assigns the virtual IPs.
mysql-mmm-2.2.1.tar.gz    Core MySQL-MMM processes; once installed, both the monitor and the agent daemons can be started.

 

4. Deploying the Basic Cluster Structure

The deployment work is split into two parts; the first is building the basic cluster environment with four RHEL 6 servers, as shown in the topology diagram above: 192.168.4.10 and 192.168.4.11 act as the MySQL dual masters, and 192.168.4.12 and 192.168.4.13 act as slaves of the masters.

When installing the servers, remember to manage the firewall and SELinux.
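For a test environment, a minimal way to do this (a sketch assuming RHEL 6 defaults; in production you would open only the required ports instead of disabling the firewall) is to switch SELinux to permissive mode and stop iptables on every node:

[root@master1 ~]# setenforce 0                                                       # permissive for the running system
[root@master1 ~]# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config     # persist across reboots
[root@master1 ~]# service iptables stop                                              # stop the firewall
[root@master1 ~]# chkconfig iptables off                                             # keep it off after reboot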

 

 

4.1 Installing the MySQL servers

      

Below is the MySQL installation procedure. This article uses the 64-bit RHEL 6 operating system with MySQL 5.6.15.

Visit http://dev.mysql.com/downloads/mysql/, open the MySQL Community Server download page, choose the platform "Red Hat Enterprise Linux 6 / Oracle Linux 6", and download the 64-bit RPM bundle.

 

 

Note: downloading MySQL requires signing in with an Oracle account; if you do not have one, register (free of charge) as prompted on the page.

    

4.1.1 Remove the mysql-server and mysql packages shipped with the system (if present)

yum -y remove mysql-server mysql

 

4.1.2 Extract the MySQL bundle

[root@master1 ~]# tar xvf MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar
MySQL-shared-5.6.15-1.el6.x86_64.rpm              //shared libraries
MySQL-shared-compat-5.6.15-1.el6.x86_64.rpm       //compatibility package
MySQL-server-5.6.15-1.el6.x86_64.rpm              //server
MySQL-client-5.6.15-1.el6.x86_64.rpm              //client
MySQL-devel-5.6.15-1.el6.x86_64.rpm               //libraries and header files
MySQL-embedded-5.6.15-1.el6.x86_64.rpm            //embedded version
MySQL-test-5.6.15-1.el6.x86_64.rpm                //test suite

4.1.3 Install the MySQL packages

[root@master1]# rpm -Uvh MySQL-*.rpm

4.1.4 Start MySQL

[root@master1 ~]# service mysql start && chkconfig --list mysql
Starting MySQL SUCCESS! 
mysql              0:off    1:off    2:on    3:on    4:on    5:on    6:off

4.1.5 The initial MySQL password

After installation, a random password is automatically written to the .mysql_secret file in root's home directory; read it and use it for the first MySQL login.

[root@master1 ~]# cat .mysql_secret 
# The random password set for the root user at Mon Jan  1 16:48:31 2001 (local time): kZ5j71cyZiKKhSeX          //password file

4.1.6 Log in to MySQL with the password just retrieved and change it

[root@master1 ~]# mysql -u root -p
Enter password:

mysql> SET PASSWORD FOR 'root'@'localhost'=PASSWORD('123456');    

        

After the change, you can log in again with the new password.

Install MySQL on all four servers following the same procedure.
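A quick way to confirm the new password works (123456 as set above) is a one-off, non-interactive login:

[root@master1 ~]# mysql -uroot -p123456 -e "SELECT VERSION();"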

 

4.2 Deploying the dual-master, multi-slave structure

1. Database authorization (run the following on all four database hosts: master1, master2, slave1, slave2)

Deploying master-slave replication only requires granting one replication user, but since we are building the MySQL-MMM architecture we grant the MMM users at the same time, plus one test user for verifying the architecture once it is complete.

mysql> grant replication slave on *.* to slaveuser@"%" identified by "pwd123";
Query OK, 0 rows affected (0.01 sec)         //replication user
mysql> grant replication client on *.* to monitor@"%" identified by "monitor";  
Query OK, 0 rows affected (0.00 sec)         //user required by the MMM monitor
mysql> grant replication client,process,super on *.* to agent@"%" identified by "agent"; 
Query OK, 0 rows affected (0.00 sec)         //user required by the MMM agent
mysql> grant all on *.* to root@"%" identified by "123456";
Query OK, 0 rows affected (0.00 sec)         //test user

2. Enable binlog and set server_id on the master databases (master1, master2)

master1 settings:

[root@master1 ~]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
server_id=10                          //server ID
log_bin                               //enable binlog
log_slave_updates=1                   //enable chained replication
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[root@master1 ~]#
[root@master1 ~]# service mysql restart                 //restart the MySQL service
Shutting down MySQL.. [ OK ]
Starting MySQL.. [ OK ]
[root@master1 ~]# ls /var/lib/mysql/master1-bin*        //confirm the binlog files were created
/var/lib/mysql/master1-bin.000001  /var/lib/mysql/master1-bin.index

master2 settings:

[root@master2 mysql]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
server_id=11
log_slave_updates=1
log-bin
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[root@master2 mysql]# /etc/init.d/mysql restart
Shutting down MySQL.. SUCCESS! 
Starting MySQL. SUCCESS! 
[root@master2 mysql]# ls /var/lib/mysql/master2-bin.*
/var/lib/mysql/master2-bin.000001  /var/lib/mysql/master2-bin.000002  /var/lib/mysql/master2-bin.index

 

      

Set server_id on the slaves.

slave1 settings:

[root@slave1 mysql]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
server_id=12


 

[root@slave1 ~]# service mysql restart

 

slave2 settings:

[root@slave2 mysql]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
server_id=13
…

[root@slave2 ~]# service mysql restart

 

3. Configure the master-slave relationships

Configure master2, slave1, and slave2 as slaves of master1.

Check master1's master status:

mysql> show master status\G
*************************** 1. row ***************************
             File: master1-bin.000002
         Position: 120
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.00 sec)

Using the values above, configure master2 as a slave of master1:

mysql> change master to
    ->   master_host="192.168.4.10",
    ->   master_user="slaveuser",
    ->   master_password="pwd123",
    ->   master_log_file="master1-bin.000002",
    ->   master_log_pos=120;
Query OK, 0 rows affected, 2 warnings (0.01 sec)

mysql> start slave;    
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G 
Slave_IO_Running: Yes                //I/O thread running
Slave_SQL_Running: Yes               //SQL thread running

Set slave1 and slave2 as slaves of master1 in the same way, as shown in the sketch below.
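For example, on slave1 the statements look like this (slave2 is configured identically; the log file name and position must match whatever SHOW MASTER STATUS currently reports on master1, so they may differ from the values shown here):

mysql> change master to
    ->   master_host="192.168.4.10",
    ->   master_user="slaveuser",
    ->   master_password="pwd123",
    ->   master_log_file="master1-bin.000002",
    ->   master_log_pos=120;
mysql> start slave;
mysql> show slave status\G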

4. Configure the master-master relationship: make master1 a slave of master2

Check master2's master status:

mysql> show master status \G
*************************** 1. row ***************************
             File: master2-bin.000002
         Position: 120
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: 
1 row in set (0.00 sec)

Configure master1 as a slave of master2:

mysql>    change  master  to                         
    ->      master_host="192.168.4.11",                
    ->      master_user="slaveuser",                
    ->      master_password="pwd123",               
    ->      master_log_file="master2-bin.000002",     
    ->      master_log_pos=120; 
Query OK, 0 rows affected, 2 warnings (0.01 sec)
mysql> start slave ;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status \G
*************************** 1. row ***************************
             Slave_IO_Running: Yes      //I/O thread running
            Slave_SQL_Running: Yes      //SQL thread running

5. Test whether the replication setup works

Create a database on master1 and check the other hosts; if every host can see the newly created database db1 locally, replication is working.

mysql> create database db1;
Query OK, 1 row affected (0.00 sec)
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| db1                |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.00 sec)

 

At this point, the basic environment is complete.

 

5. Deploying the MySQL-MMM Architecture

5.1 MMM cluster plan

Building on the architecture from chapter 4, 192.168.4.10 and 192.168.4.11 act as the MySQL dual masters and 192.168.4.12 and 192.168.4.13 as their slaves. We add 192.168.4.100 as the MySQL-MMM management and monitoring server, which watches the working state of the MySQL masters and slaves and decides whether failed nodes are removed or restored. Once the architecture is built, the client 192.168.4.120 is used for access testing; it needs the MySQL-client package installed. See Figure 2 for the topology.

 

5.2 Steps

Follow the steps below to implement this setup.

Step 1: Install MySQL-MMM

1. Install the dependencies (required on all five servers in the cluster: master1, master2, slave1, slave2, monitor).

[root@master2 mysql]# yum -y install gcc* perl-Date-Manip perl-XML-DOM-XPath perl-XML-Parser perl-XML-RegExp rrdtool perl-Class-Singleton perl perl-DBD-MySQL perl-Params-Validate perl-MailTools perl-Time-HiRes perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker

 

 

2. Install the MySQL-MMM software dependency packages (required on all five servers: master1, master2, slave1, slave2, monitor).

  1. Install the Log-Log4perl module

    [root@master1 mysql-mmm]# rpm -ivh perl-Log-Log4perl-1.26-1.el6.rf.noarch.rpm
    warning: perl-Log-Log4perl-1.26-1.el6.rf.noarch.rpm: Header V3 DSA/SHA1 Signature, key ID 6b8d79e6: NOKEY
    error: Failed dependencies:
    perl(Test::More) >= 0.45 is needed by perl-Log-Log4perl-1.26-1.el6.rf.noarch
    [root@master1 mysql-mmm]# rpm -ivh perl-Log-Log4perl-1.26-1.el6.rf.noarch.rpm --force --nodeps

During installation a NOKEY warning appeared (caused by old GPG keys) along with a dependency error; adding --force --nodeps forces the installation past both checks.

  2. Install the Algorithm-Diff module
[root@master1 mysql-mmm]#  tar -zxvf Algorithm-Diff-1.1902.tar.gz  
Algorithm-Diff-1.1902/
Algorithm-Diff-1.1902/diffnew.pl
Algorithm-Diff-1.1902/t/
Algorithm-Diff-1.1902/t/oo.t
Algorithm-Diff-1.1902/t/base.t
Algorithm-Diff-1.1902/htmldiff.pl
Algorithm-Diff-1.1902/lib/
Algorithm-Diff-1.1902/lib/Algorithm/
Algorithm-Diff-1.1902/lib/Algorithm/Diff.pm
Algorithm-Diff-1.1902/lib/Algorithm/DiffOld.pm
Algorithm-Diff-1.1902/META.yml
Algorithm-Diff-1.1902/Changes
Algorithm-Diff-1.1902/cdiff.pl
Algorithm-Diff-1.1902/MANIFEST
Algorithm-Diff-1.1902/diff.pl
Algorithm-Diff-1.1902/Makefile.PL
Algorithm-Diff-1.1902/README
[root@master1 mysql-mmm]# cd Algorithm-Diff-1.1902
[root@master1 Algorithm-Diff-1.1902]#  perl  Makefile.PL 
Checking if your kit is complete...
Looks good
Writing Makefile for Algorithm::Diff
[root@master1 Algorithm-Diff-1.1902]# make && make install

3. Install the Proc-Daemon module

[root@master1 mysql-mmm]# tar -zxvf Proc-Daemon-0.03.tar.gz
Proc-Daemon-0.03/
Proc-Daemon-0.03/t/
Proc-Daemon-0.03/t/00modload.t
Proc-Daemon-0.03/t/01filecreate.t
Proc-Daemon-0.03/README
Proc-Daemon-0.03/Makefile.PL
Proc-Daemon-0.03/Daemon.pm
Proc-Daemon-0.03/Changes
Proc-Daemon-0.03/MANIFEST
[root@master1 mysql-mmm]# cd Proc-Daemon-0.03                
[root@master1 Proc-Daemon-0.03]# perl    Makefile.PL 
Checking if your kit is complete...
Looks good
Writing Makefile for Proc::Daemon
[root@master1 Proc-Daemon-0.03]# make && make install
cp Daemon.pm blib/lib/Proc/Daemon.pm
Manifying blib/man3/Proc::Daemon.3pm
Installing /usr/local/share/perl5/Proc/Daemon.pm
Installing /usr/local/share/man/man3/Proc::Daemon.3pm
Appending installation info to /usr/lib64/perl5/perllocal.pod
[root@master1 Proc-Daemon-0.03]#

4. Install the Net-ARP virtual IP assignment tool:

[root@mysql-master1 ~]# gunzip Net-ARP-1.0.8.tgz    
[root@mysql-master1 ~]# tar xvf Net-ARP-1.0.8.tar       
.. ..
[root@mysql-master1 ~]# cd Net-ARP-1.0.8                    
[root@mysql-master1 Net-ARP-1.0.8]# perl Makefile.PL        
Module Net::Pcap is required for make test!
Checking if your kit is complete...
Looks good
Writing Makefile for Net::ARP
[root@mysql-master1 Net-ARP-1.0.8]# make && make install    
.. ..
[root@mysql-master1 Net-ARP-1.0.8]# cd                        
[root@mysql-master1 ~]#

5. Install the MySQL-MMM package:

[root@mysql-master1 ~]# tar xvf mysql-mmm-2.2.1.tar.gz       
.. ..
[root@mysql-master1 ~]# cd mysql-mmm-2.2.1                    
[root@mysql-master1 mysql-mmm-2.2.1]# make && make install    
.. ..
[root@mysql-master1 mysql-mmm-2.2.1]#



  

Step 2: Edit the configuration files

1. Edit the common configuration file

All five servers in the MySQL cluster (master1, master2, slave1, slave2, monitor) need this file; you can configure it on one host and copy it to the others with scp (see the sketch after the listing).

[root@master1 ~]# vim /etc/mysql-mmm/mmm_common.conf 
active_master_role    writer
<host default>
    cluster_interface       eth0                //network interface used by the cluster
    pid_path                /var/run/mmm_agentd.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        slaveuser           //replication user
    replication_password    pwd123              //replication user password
    agent_user              agent               //database user for mmm-agent
    agent_password          agent               //password of the mmm-agent user
</host>
<host master1>                                  //first master
    ip                      192.168.4.10        //master1 IP address
    mode                    master
    peer                    master2             //the other master
</host>
<host master2>                                  //second master
    ip                      192.168.4.11
    mode                    master
    peer                    master1
</host>
<host slave1>                                   //first slave
    ip                      192.168.4.12        //slave1 IP address
    mode                    slave               //this section describes a slave
</host>
<host slave2>
    ip                      192.168.4.13
    mode                    slave
</host>
<role writer>                                   //writer role
    hosts                   master1,master2     //masters that can take writes
    ips                     192.168.4.200       //write VIP
    mode                    exclusive           //exclusive mode
</role>
<role reader>                                   //reader role
    hosts                   slave1,slave2       //servers that serve reads
    ips                     192.168.4.201,192.168.4.202    //multiple read VIPs
    mode                    balanced            //balanced mode
</role>
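For example, once mmm_common.conf has been finished on master1, a quick way to push it to the remaining hosts might be the following (hostnames and IPs are the ones from the environment table above; scp prompts for each root password unless SSH keys are set up):

[root@master1 ~]# for ip in 192.168.4.11 192.168.4.12 192.168.4.13 192.168.4.100; do
>     scp /etc/mysql-mmm/mmm_common.conf root@$ip:/etc/mysql-mmm/
> done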

 

2. Edit the monitor configuration file (on the monitor host)

[root@monitor ~]# vim /etc/mysql-mmm/mmm_mon.conf 
include mmm_common.conf
<monitor>
    ip                      192.168.4.100        //management host IP address
    pid_path                /var/run/mmm_mond.pid
    bin_path                /usr/lib/mysql-mmm/
    status_path             /var/lib/misc/mmm_mond.status
    ping_ips                192.168.4.10,192.168.4.11,192.168.4.12,192.168.4.13
                                                 //databases being monitored
</monitor>
<host default>
    monitor_user            monitor        //MySQL user for monitoring
    monitor_password        monitor        //password of the monitoring user
</host>
debug 0
[root@monitor ~]#

 

3. Edit the agent configuration files

master1, master2, slave1, and slave2 must each be configured with their own name.

[root@master1 /]# cat /etc/mysql-mmm/mmm_agent.conf 
include mmm_common.conf
this master1
[root@master2 /]# cat /etc/mysql-mmm/mmm_agent.conf 
include mmm_common.conf
this master2
[root@slave1 /]# cat /etc/mysql-mmm/mmm_agent.conf 
include mmm_common.conf
this slave1
[root@slave2 /]# cat /etc/mysql-mmm/mmm_agent.conf 
include mmm_common.conf
this slave2    

 

6. Using the MySQL-MMM Architecture

6.1 Starting the MySQL-MMM architecture

1. Start mmm-agent

Run the following on master1, master2, slave1, and slave2.

[root@master1 ~]# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok

2. Start mmm-monitor

[root@monitor ~]#  /etc/init.d/mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok

 

6.2 Set the servers in the cluster to online

The control commands can only be run on the monitor host. Use the command below to check the current state of each server. By default all servers are in the waiting (AWAITING_RECOVERY) state; if anything looks wrong, check SELinux and iptables on each server.

[root@localhost ~]# mmm_control show
  master1(192.168.4.10) master/AWAITING_RECOVERY. Roles: 
  master2(192.168.4.11) master/AWAITING_RECOVERY. Roles: 
  slave1(192.168.4.12) slave/AWAITING_RECOVERY. Roles: 
  slave2(192.168.4.13) slave/AWAITING_RECOVERY. Roles: 

Set the four database hosts online:

[root@monitor ~]# mmm_control set_online master1
OK: State of 'master1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]# mmm_control set_online master2
OK: State of 'master2' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]# mmm_control set_online slave1
OK: State of 'slave1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]# mmm_control set_online slave2
OK: State of 'slave2' changed to ONLINE. Now you can wait some time and check its new roles!
[root@monitor ~]#

Check the state of each server in the cluster again:

[root@monitor ~]# mmm_control show
  master1(192.168.4.10) master/ONLINE. Roles: writer(192.168.4.200)
  master2(192.168.4.11) master/ONLINE. Roles: 
  slave1(192.168.4.12) slave/ONLINE. Roles: reader(192.168.4.201)
  slave2(192.168.4.13) slave/ONLINE. Roles: reader(192.168.4.202)

The output shows that all four hosts are now ONLINE; the writer is master1 with virtual IP 192.168.4.200, and the readers are slave1 and slave2.

6.3 Testing the MySQL-MMM architecture

Install MySQL-client on the client host:

[root@client ~]# tar xvf MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar
.. ..
[root@client ~]# rpm -ivh MySQL-client-5.6.15-1.el6.x86_64.rpm

Test access through the MySQL-MMM virtual IP; you can also test creating databases, inserting rows, and querying.

[root@client /]#  mysql -h192.168.4.200 -uroot -p123456 -e "show databases"
Warning: Using a password on the command line interface can be insecure.
+--------------------+
| Database           |
+--------------------+
| information_schema |
| db1                |
| db2                |
| mysql              |
| performance_schema |
| test               |
+--------------------+
[root@client /]# 

6.4 Master database failure test

We can deliberately stop the master database to test the cluster.

    [root@master1 ~]# /etc/init.d/mysql stop
Shutting down MySQL.. SUCCESS! 
[root@master1 ~]# 

At this point, the monitor log shows the detailed detection and failover process.

2017/10/24 01:37:07  WARN Check 'rep_backlog' on 'master1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.4.10:3306, user = monitor)! Lost connection to MySQL server at 'reading initial communication packet', system error: 111
2017/10/24 01:37:07  WARN Check 'rep_threads' on 'master1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.4.10:3306, user = monitor)! Lost connection to MySQL server at 'reading initial communication packet', system error: 111
2017/10/24 01:37:15 ERROR Check 'mysql' on 'master1' has failed for 10 seconds! Message: ERROR: Connect error (host = 192.168.4.10:3306, user = monitor)! Lost connection to MySQL server at 'reading initial communication packet', system error: 111
2017/10/24 01:37:16 FATAL State of host 'master1' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
2017/10/24 01:37:16  INFO Removing all roles from host 'master1':
2017/10/24 01:37:16  INFO     Removed role 'writer(192.168.4.200)' from host 'master1'
2017/10/24 01:37:16  INFO Orphaned role 'writer(192.168.4.200)' has been assigned to 'master2'

Checking the database server states on the monitor again, master1 is now offline (HARD_OFFLINE) and the writer role with virtual IP 192.168.4.200 has moved to master2.

[root@monitor ~]# mmm_control show
  master1(192.168.4.10) master/HARD_OFFLINE. Roles: 
  master2(192.168.4.11) master/ONLINE. Roles: writer(192.168.4.200)
  slave1(192.168.4.12) slave/ONLINE. Roles: reader(192.168.4.201)
  slave2(192.168.4.13) slave/ONLINE. Roles: reader(192.168.4.202)

Check the replication status on slave1 and slave2; their master has been changed to master2.

mysql> show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.4.11
                  Master_User: slaveuser
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: master2-bin.000002
          Read_Master_Log_Pos: 211
               Relay_Log_File: slave1-relay-bin.000002
                Relay_Log_Pos: 285
        Relay_Master_Log_File: master2-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

  

Note that when a failed server recovers it moves from the offline state to the waiting (AWAITING_RECOVERY) state, but it does not return to ONLINE automatically; it must be enabled manually.
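For example, after master1 has been repaired, the sequence to bring it back (using the same commands as above) would be roughly:

[root@master1 ~]# /etc/init.d/mysql start                # start MySQL on the recovered node first
[root@monitor ~]# mmm_control show                       # master1 should now show AWAITING_RECOVERY
[root@monitor ~]# mmm_control set_online master1         # manually return it to the ONLINE pool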

 

At this point, our MySQL high-availability cluster is fully deployed.

 

7. A Simplified Cluster

We have now deployed a five-server MySQL cluster, but depending on a company's actual traffic, that many servers may not be necessary, while a plain master-slave pair cannot provide hot standby between the active and standby masters. Below, the previous example is reworked into a three-server cluster: it needs far fewer machines yet still provides a hot standby for the database.

 

 

What we need to change is the monitor configuration: first adjust the IPs of the monitored servers in the main MMM monitor configuration file.

 [root@monitor ~]# cat /etc/mysql-mmm/mmm_mon.conf 
include mmm_common.conf
<monitor>
    ip                        192.168.4.100
    pid_path                /var/run/mmm_mond.pid
    bin_path                /usr/lib/mysql-mmm/
    status_path                /var/lib/misc/mmm_mond.status
    ping_ips                192.168.4.10,192.168.4.11  //IPs of the monitored servers
</monitor>
    
<host default>
    monitor_user            monitor
    monitor_password        monitor
</host>

debug 0
[root@monitor ~]# 

Then edit the common configuration file; note that master1, master2, and monitor must stay consistent, so change it on all of them.

[root@monitor ~]# cat /etc/mysql-mmm/mmm_common.conf
active_master_role    writer
<host default>
    cluster_interface        eth0

    pid_path                /var/run/mmm_agentd.pid
    bin_path                /usr/lib/mysql-mmm/

    replication_user        slaveuser
    replication_password    pwd123

    agent_user                agent
    agent_password            agent
</host>

<host master1>
    ip                      192.168.4.10
    mode                    master
    peer                    master2
</host>

<host master2>
    ip                    192.168.4.11
    mode                    master
    peer                    master1
</host>
<role writer>
    hosts                    master1,master2
    ips                192.168.4.200
    mode                    exclusive
</role>

<role reader>
    hosts                    master1,master2
    ips                                   192.168.4.201,192.168.4.202
    mode                    balanced
</role>
[root@monitor ~]# 

 

 

Once the configuration is done, everything else is the same as in the five-server topology: start the mmm processes on master1 and master2, set them online from the monitor server, and then check the MMM status.
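As a reminder, the sequence is the same as in section 6 (a sketch; the agent runs on both masters, the monitor only on the monitor host):

[root@master1 ~]# /etc/init.d/mysql-mmm-agent start
[root@master2 ~]# /etc/init.d/mysql-mmm-agent start
[root@monitor ~]# /etc/init.d/mysql-mmm-monitor start
[root@monitor ~]# mmm_control set_online master1
[root@monitor ~]# mmm_control set_online master2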

 [root@monitor ~]# mmm_control show
  master1(192.168.4.10) master/ONLINE. Roles: reader(192.168.4.201), writer(192.168.4.200)
  master2(192.168.4.11) master/ONLINE. Roles: reader(192.168.4.202)

 

Both master1 and master2 now handle reads, and master1 additionally handles writes. Next, shut down the database on master1 and observe the result.
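The shutdown is the same as in section 6.4; stop MySQL on master1 and then check the status again from the monitor:

[root@master1 ~]# /etc/init.d/mysql stop
Shutting down MySQL.. SUCCESS! 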

 [root@monitor ~]# mmm_control show
  master1(192.168.4.10) master/HARD_OFFLINE. Roles: 
  master2(192.168.4.11) master/ONLINE. Roles: reader(192.168.4.201), reader(192.168.4.202), writer(192.168.4.200)

As you can see, when master1 is shut down, master2 takes over both the read and the write roles. Clients can keep connecting to 192.168.4.200 for database operations throughout, giving hot standby between the two machines.

 

8. Troubleshooting

Two problems came up during testing; they are listed here for reference.

Problem 1

mysql> show slave status \G
*************************** 1. row ***************************
…………..
             Slave_IO_Running: Connecting
            Slave_SQL_Running: Yes
 ………….
                Last_IO_Errno: 2003
                Last_IO_Error: error connecting to master 'slaveuser@192.168.4.10:3306' - retry-time: 60  retries: 2
        
1 row in set (0.00 sec)

This was fixed by disabling SELinux on the master, flushing the firewall rules and stopping the firewall, and then restarting the slave.
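A rough sketch of the fix on RHEL 6 (in production you would open TCP port 3306 rather than disable the firewall entirely):

[root@master1 ~]# setenforce 0              # put SELinux into permissive mode on the master
[root@master1 ~]# iptables -F               # flush the firewall rules
[root@master1 ~]# service iptables stop     # or stop iptables entirely

Then restart replication on the affected slave:

mysql> stop slave;
mysql> start slave;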

 

 

Problem 2

During setup, after the configuration had been changed several times, the following error appeared:

mysql>  start slave;

ERROR 1872 (HY000): Slave failed to initialize relay log info structure from the repository

This was resolved by running RESET SLAVE ALL to clear all replication information, then re-issuing CHANGE MASTER TO (rebuilding master.info) and START SLAVE, after which replication worked normally.
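On the affected slave, the recovery looked roughly like this (the log file and position are illustrative and must be re-read from the current master with SHOW MASTER STATUS):

mysql> stop slave;
mysql> reset slave all;                                 -- clear master.info and all relay log info
mysql> change master to
    ->   master_host="192.168.4.10",
    ->   master_user="slaveuser",
    ->   master_password="pwd123",
    ->   master_log_file="master1-bin.000002",
    ->   master_log_pos=120;
mysql> start slave;
mysql> show slave status\G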
