The master-slave database architecture solves many problems, particularly for read-heavy applications:
1. All writes are executed on the master node; each slave pulls and replays the master's binlog (the slave's I/O thread streams events continuously; 60 s is only the default reconnect interval).
2. User requests are spread across more nodes, relieving the pressure on any single node.
Drawbacks:
1. Slaves lag behind the master, so reads are not fully real-time.
2. High availability: the master is a single point of failure (SPOF).
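A quick way to see drawback 1 in practice is to watch Seconds_Behind_Master on the slave. Below is a minimal sketch (my illustration, not part of the original setup) that flags a slave falling more than 60 s behind; the status text is stubbed in here, but on a live slave you would feed it the output of `mysql -e 'SHOW SLAVE STATUS\G'`:

```shell
# Stubbed SHOW SLAVE STATUS output; replace with a real query on a live slave.
status='Slave_IO_State: Waiting for master to send event
Seconds_Behind_Master: 3'
lag=$(printf '%s\n' "$status" | awk -F': *' '/Seconds_Behind_Master/{print $2}')
if [ "$lag" -gt 60 ]; then
  echo "slave lagging: ${lag}s"
else
  echo "lag ok: ${lag}s"
fi
```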
master-master replication
1. Two MySQL databases, db01 and db02, act as master and slave for each other.
2. From the application's point of view only one of them is the active master at any time.
3. If the node currently acting as slave, db02, fails, the app sends all reads and writes to db01 until db02 comes back.
4. If the node currently acting as slave, db01, fails, the app sends all reads and writes to db02 until db01 comes back.
Steps 3 and 4 are handled by MMM.
MMM has three components:
MMMD_MON -> monitoring script that watches the working nodes and assigns roles
MMM_AGENT -> remote server management agent script (provides a set of services that make server management easier and node monitoring more flexible)
MMM_CONTROL -> manages MMMD_MON through commands and scripts
Every MySQL node runs MMM_AGENT, while one additional machine (either a dedicated box, or one shared with an app server)
runs mmmd_mon, giving a 1 * mmmd_mon + n * mmmd_agent deployment.
MMM relies on virtual IP technology: one NIC can carry several IPs at the same time
(so MMM needs 2*n+1 IPs, where n is the number of MySQL nodes, masters and slaves included).
When a database node fails, mmmd_mon stops receiving the heartbeat/service status from that node's mmmd_agent. mmmd_mon then decides and instructs the mmmd_agent on a healthy node
to take over the failed node's virtual IP, so that the virtual IP now points at the surviving machine.
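As a sanity check of the 2*n+1 rule (a back-of-the-envelope illustration, not from the MMM docs): n real IPs, plus n reader VIPs, plus 1 writer VIP:

```shell
# For the two-node layout used in this article: 2 real IPs (88.149, 88.150),
# 2 reader VIPs (88.102, 88.103) and 1 writer VIP (88.101).
n=2                       # number of MySQL nodes
total=$((2 * n + 1))
echo "IPs needed for $n MySQL nodes: $total"
```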
MMM is a good complement to MySQL master-slave replication.
Request flow: the web client sends database requests to the proxy; the proxy splits reads from writes and forwards them through the MMM layer, which routes each operation to a machine it has verified to be alive.
Plan
Hostname  IP              Port  App    Notes
Node1     192.168.88.149  3306  mysql  database server 1
Node2     192.168.88.150  3306  mysql  database server 2
MON       192.168.88.191  3306  mysql  database management (monitor) server
PROXY     192.168.88.192  4040  proxy  database proxy / NLB
Node1 and node2 replicate in both directions (master-master). Only four VMs were available, so node1 and node2 each serve both reads and writes.
db1    192.168.88.149
db2    192.168.88.150
MON    192.168.88.191
PROXY  192.168.88.192
Configuration steps
Change the network configuration to static IPs, make sure the interface names shown by ifconfig -a match the network-scripts configuration, and confirm the hardware (MAC) addresses agree.
Set up bidirectional master-master replication between node1 and node2.
Install MMM on node1 and node2 and configure mmm_agent.conf.
Install MMM on MON and configure mmm_mon.conf.
Install mysql-proxy on PROXY.
Pre-install preparation
1. Remove the stock MySQL packages [mon, node1, node2, mysql-proxy]
[root@localhost ~]# yum remove mysql-libs-5.1.71-1.el6.x86_64
2. Stop the firewall [mon, node1, node2, mysql-proxy]
[root@localhost ~]# service iptables stop
3. Install MySQL []
[root@localhost ~]# yum install mysql*
yum install perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker
Algorithm-Diff-1.1902.tar.gz :http://ipkg.nslu2-linux.org/sources/Algorithm-Diff-1.1902.tar.gz
Proc-Daemon-0.03.tar.gz:ftp://ftp.auckland.ac.nz/pub/perl/CPAN/modules/by-module/Proc/Proc-Daemon-0.03.tar.gz
Every Perl package installs the same way:
perl Makefile.PL
make
make test
make install
yum install cpan
Install the Perl modules MMM needs:
cpan Proc::Daemon Log::Log4perl Algorithm::Diff DBD::mysql Net::ARP
(Without DBD::mysql, a slave's master will not switch over automatically and the writer VIP cannot float to the primary master, so writes fail; without Net::ARP, VIPs cannot float at all and no virtual IP gets assigned.) Start the agent program on the master and slave machines, or install it as a service and start that; if any other module is missing, the startup will say so, as will testing with /usr/lib/mysql-mmm/agent/configure_ip <VIP>.
Part 1: configure bidirectional master-master replication between node1 and node2 (mind the firewall)
1. Configure replication on node1
my.cnf:
server-id= 1
log_bin =mysql-bin
[root@localhost ~]# /etc/rc.d/init.d/mysqld restart
On node2 (node1 will replicate from these coordinates):
mysql> show master status;
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000006 | 106 | | |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
mysql> grant replication slave on *.* to 'replication'@'%' identified by 'slave';   -- run on both nodes
On node1, execute:
change master to master_host='192.168.88.150',master_user='replication',master_password='slave', master_log_file='mysql-bin.000006',master_log_pos=106;
mysql> start slave;
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.88.150
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000006
Read_Master_Log_Pos: 106
Relay_Log_File: mysqld-relay-bin.000002
Relay_Log_Pos: 251
Relay_Master_Log_File: mysql-bin.000006
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
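Replication is healthy only when both Slave_IO_Running and Slave_SQL_Running report Yes. Here is a small sketch of an automated check (my addition; the status text is stubbed with the values shown above, and on a live node you would pipe in `mysql -e 'SHOW SLAVE STATUS\G'`):

```shell
# Stubbed SHOW SLAVE STATUS output; replace with a real query on a live node.
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes'
# Count how many of the two replication threads report Yes.
ok=$(printf '%s\n' "$status" | awk -F': *' '/Slave_(IO|SQL)_Running/ && $2 == "Yes"' | wc -l)
if [ "$ok" -eq 2 ]; then
  echo "replication healthy"
else
  echo "replication broken"
fi
```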
2. Configure replication on node2
my.cnf:
server-id= 2
log_bin =mysql-bin
mysql> grant replication slave on *.* to 'replication'@'%' identified by 'slave';
On node1 (node2 will replicate from these coordinates):
mysql> show master status;
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000003 | 550 | | |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)
On node2, execute (master_host is node1; use the file/position reported by node1's show master status above):
change master to master_host='192.168.88.149',master_user='replication',master_password='slave', master_log_file='mysql-bin.000003',master_log_pos=550;
mysql> start slave;
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.88.149
Master_User: replication
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 106
Relay_Log_File: mysqld-relay-bin.000002
Relay_Log_Pos: 251
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Part 2: install and deploy MMM
Target hosts:
Node1 192.168.88.149
Node2 192.168.88.150
MON 192.168.88.191
1. Install the mon host's software packages
wget http://download.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
rpm -ivh epel-release-5-4.noarch.rpm
yum -y install mysql-mmm*
Part 3: configure the MMM monitor and agent services
Create the MMM accounts on both database nodes (user names and passwords must match mmm_common.conf below):
grant process, super, replication client on *.* to 'mmm_agent'@'%' identified by 'agent_password'; -- the agent account; the mmm agent uses it to switch a node to read-only, change its master, etc.
grant replication client on *.* to 'mmm_monitor'@'%' identified by 'monitor_password'; -- the monitor account; the mmm monitor server uses it for health checks
flush privileges;
1. On all three servers, edit mmm_common.conf (identical content on all three):
active_master_role writer
<host default>
cluster_interface eth1
pid_path /var/run/mmm_agentd.pid
bin_path /usr/lib/mysql-mmm/
replication_user replication
replication_password slave
agent_user mmm_agent
agent_password agent_password
</host>
<host db1>
ip 192.168.88.149
mode master
peer db2
</host>
<host db2>
ip 192.168.88.150
mode master
peer db1
</host>
<role writer>
hosts db1, db2
ips 192.168.88.101
mode exclusive
</role>
<role reader>
hosts db1, db2
ips 192.168.88.102,192.168.88.103
mode balanced
</role>
2. On Node1, edit mmm_agent.conf:
include mmm_common.conf
this db1
3. On Node2, edit mmm_agent.conf:
include mmm_common.conf
this db2
4. On MON, edit mmm_mon.conf:
include mmm_common.conf
<monitor>
ip 127.0.0.1
pid_path /var/run/mmm_mond.pid
bin_path /usr/lib/mysql-mmm/
status_path /var/lib/misc/mmm_mond.status
ping_ips 192.168.88.149,192.168.88.150
auto_set_online 10 # a recovered host (AWAITING_RECOVERY) is set online automatically after 10 seconds
</monitor>
<host default>
monitor_user mmm_monitor
monitor_password monitor_password
</host>
debug 1
Also update the hosts entries on MON accordingly.
5. Enable the agent (enabled by default; shown here for completeness)
[root@MySQL-M1 mysql-mmm]# cat /etc/default/mysql-mmm-agent
# mysql-mmm-agent defaults
ENABLED=1
[root@MySQL-M2 mysql-mmm]# cat /etc/default/mysql-mmm-agent
# mysql-mmm-agent defaults
ENABLED=1
On the mon node:
[root@localhost mysql-mmm]# pwd
/etc/mysql-mmm
[root@localhost mysql-mmm]# vi mmm_mon_log.conf
#log4perl.logger = FATAL, MMMLog, MailFatal
# MailFatal is the e-mail alert appender; FATAL is the level recorded to the log
log4perl.logger = FATAL, MMMLog
log4perl.appender.MMMLog = Log::Log4perl::Appender::File
log4perl.appender.MMMLog.Threshold = INFO
log4perl.appender.MMMLog.filename = /var/log/mysql-mmm/mmm_mond.log
log4perl.appender.MMMLog.recreate = 1
log4perl.appender.MMMLog.layout = PatternLayout
log4perl.appender.MMMLog.layout.ConversionPattern = %d %5p %m%n
#log4perl.appender.MailFatal = Log::Dispatch::Email::MailSender
#log4perl.appender.MailFatal.Threshold = FATAL
#log4perl.appender.MailFatal.from = mmm@example.com
# sender address
#log4perl.appender.MailFatal.to = root,mmm@example.com
# recipients
#log4perl.appender.MailFatal.buffered = 0
# 0 = send immediately
#log4perl.appender.MailFatal.subject = FATAL error in mysql-mmm-monitor
# e-mail subject
#log4perl.appender.MailFatal.layout = PatternLayout
#log4perl.appender.MailFatal.layout.ConversionPattern = %d %m%n
[root@localhost mysql-mmm]# cd /root/soft/mysql-mmm-2.2.1/etc/mysql-mmm
[root@localhost mysql-mmm]# cp * /etc/mysql-mmm
Part 4: start the services
On node1 and node2:
[root@localhost mysql-mmm]# vi /etc/default/mysql-mmm-agent
ENABLED=1
[root@localhost init.d]# /root/soft/mysql-mmm-2.2.1/etc/init.d/mysql-mmm-agent start
[root@localhost Proc-Daemon-0.03]# /root/soft/mysql-mmm-2.2.1/etc/init.d/mysql-mmm-agent restart
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok
(If startup complains about a missing Perl module, install it, e.g. cpan Class::Singleton.)
Start the monitor on the MON server:
/root/soft/mysql-mmm-2.2.1/etc/init.d/mysql-mmm-monitor start
[root@localhost Proc-Daemon-0.03]# /root/soft/mysql-mmm-2.2.1/etc/init.d/mysql-mmm-monitor restart
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok
Part 5: test MMM
/root/soft/mysql-mmm-2.2.1/sbin/mmm_control show
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control show
db1(192.168.88.149) master/HARD_OFFLINE. Roles:   # HARD_OFFLINE usually means the host is unreachable or the accounts are wrong; verify with the mysql client
db2(192.168.88.150) master/HARD_OFFLINE. Roles:
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control show
db1(192.168.88.149) master/AWAITING_RECOVERY. Roles:
db2(192.168.88.150) master/AWAITING_RECOVERY. Roles:
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control mode
ACTIVE
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control set_online db1   # bring the host online
OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control set_online db2   # bring the host online
OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control show checks all
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control show   # check role assignment
db1(192.168.88.149) master/ONLINE. Roles: reader(192.168.88.103), writer(192.168.88.101)
db2(192.168.88.150) master/ONLINE. Roles: reader(192.168.88.102)
以上证实成功了
如下不成功
db1(192.168.88.149): master/ONLINE. Roles: reader(192.168.1.7;), writer(192.168.1.9;)
db2(192.168.88.150): master/REPLICATION_FAIL. Roles: None
The monitor detected that replication on db2 failed.
Wait a moment and the switchover happens; reads are balanced round-robin, and here the writer role has moved:
# mmm_control show
Servers status:
db1(192.168.88.149): master/ONLINE. Roles: reader(192.168.1.7;)
db2(192.168.88.150): master/ONLINE. Roles: reader(192.168.1.8;), writer(192.168.1.9;)
Telnet to any of the virtual IPs on port 3306 succeeds.
[root@localhost ~]# ps aux |grep mmm
root 12273 0.0 0.1 106064 1396 pts/1 S+ 03:29 0:00 /bin/sh /root/soft/mysql-mmm-2.2.1/etc/init.d/mysql-mmm-monitor restart
root 12276 0.0 0.1 106068 1452 pts/1 S+ 03:29 0:00 /bin/sh /root/soft/mysql-mmm-2.2.1/etc/init.d/mysql-mmm-monitor start
root 12278 0.0 1.4 161684 14964 pts/1 S+ 03:29 0:00 mmm_mond
root 12279 0.4 6.9 700808 70624 pts/1 Sl+ 03:29 0:56 mmm_mond
root 12287 0.1 0.9 150700 10064 pts/1 S+ 03:29 0:13 perl /usr/lib/mysql-mmm//monitor/checker ping_ip
root 12290 0.1 1.2 181776 12512 pts/1 S+ 03:29 0:13 perl /usr/lib/mysql-mmm//monitor/checker mysql
root 12292 0.0 0.9 150700 10060 pts/1 S+ 03:29 0:05 perl /usr/lib/mysql-mmm//monitor/checker ping
root 12294 0.1 1.2 181776 12548 pts/1 S+ 03:29 0:15 perl /usr/lib/mysql-mmm//monitor/checker rep_backlog
root 12296 0.1 1.2 181776 12552 pts/1 S+ 03:29 0:15 perl /usr/lib/mysql-mmm//monitor/checker rep_threads
root 13335 0.0 0.0 103244 864 pts/0 S+ 06:48 0:00 grep mmm
Stop node1:
[root@localhost mysql-mmm]# service mysqld stop
Stopping mysqld: [ OK ]
db1(192.168.88.149) master/HARD_OFFLINE. Roles:   # node1 is down
db2(192.168.88.150) master/ONLINE. Roles: reader(192.168.88.102), reader(192.168.88.103), writer(192.168.88.101)
[root@localhost mysql-mmm]# service mysqld start   # restart node1
Starting mysqld: [ OK ]
[root@localhost mysql-mmm]#
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control show
db1(192.168.88.149) master/AWAITING_RECOVERY. Roles:
db2(192.168.88.150) master/ONLINE. Roles: reader(192.168.88.102), reader(192.168.88.103), writer(192.168.88.101)
mmm_control help
Valid commands are:
help - show this message
ping - ping monitor # check that the monitor daemon is alive
show - show status # display host states and role assignments
checks [<host>|all [<check>|all]] - show checks status # the checks are ping, mysql, rep_threads and rep_backlog
set_online <host> - set host <host> online
set_offline <host> - set host <host> offline
mode - print current mode # ACTIVE, MANUAL or PASSIVE (ACTIVE is the default)
set_active - switch into active mode
set_manual - switch into manual mode
set_passive - switch into passive mode
move_role [--force] <role> <host> - move exclusive role <role> to host <host> # e.g. hand the writer role to the standby master
(Only use --force if you know what you are doing!)
set_ip <ip> <host> - set role with ip <ip> to host <host> # only honoured in passive mode
Part 6: why integrate mysql-proxy with MySQL MMM
1. Load balancing at the database layer
2. HA with dynamic switching of database nodes
3. Read/write splitting to reduce the load on the primary database
MySQL Proxy is exactly such a middle layer. Simply put, it is a connection pool that forwards connection requests from the front-end application to the back-end databases, and with Lua scripts it can implement sophisticated connection routing and filtering,
which yields read/write splitting and load balancing. To the application, MySQL Proxy is completely transparent: the application only needs to connect to the proxy's listening port. Of course, the proxy machine then becomes a potential single point of failure, but you can run several
proxies for redundancy and configure the application's connection pool with all of their addresses.
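The multi-proxy redundancy idea can be sketched as a client-side picker that tries each proxy in turn (purely illustrative: the second proxy address and the `is_up` stub are assumptions; a real probe might be something like `mysqladmin -h HOST -P 4040 ping`):

```shell
# is_up is a stub that simulates the first proxy being down; replace it
# with a real health probe in production.
is_up() {
  case "$1" in
    192.168.88.192) return 1 ;;   # simulated failure
    *) return 0 ;;
  esac
}
# pick_proxy: print the first proxy whose health check passes.
pick_proxy() {
  for p in "$@"; do
    if is_up "$p"; then printf '%s\n' "$p"; return 0; fi
  done
  return 1
}
chosen=$(pick_proxy 192.168.88.192 192.168.88.193)
echo "connect to $chosen:4040"
```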
Part 7: install mysql-proxy
1. Install the MySQL client
[root@localhost soft]# tar -zxf mysql-5.1.45.tar.gz
[root@localhost soft]# cd mysql-5.1.45
[root@localhost mysql-5.1.45]# yum install ncurses
yum -y install ncurses-devel
[root@localhost mysql-5.1.45]# yum install gcc-c++ -y
[root@localhost mysql-5.1.45]# ./configure --without-server
[root@localhost mysql-5.1.45]# make && make install
2. Install Lua
wget http://www.lua.org/ftp/lua-5.1.4.tar.gz
tar zxvf lua-5.1.4.tar.gz
cd lua-5.1.4
Edit the Makefile with vim so that "INSTALL_TOP=/usr/local/lua"; this makes all of Lua install under /usr/local/lua/
make posix
make install
3. Install libevent
wget http://monkey.org/~provos/libevent-1.4.13-stable.tar.gz
tar zxvf libevent-1.4.13-stable.tar.gz
cd libevent-1.4.13
./configure --prefix=/usr/local/libevent
make && make install
4. Set the environment variables mysql-proxy needs by appending the following to /etc/profile:
export LUA_CFLAGS="-I/usr/local/lua/include" LUA_LIBS="-L/usr/local/lua/lib -llua -ldl" LDFLAGS="-L/usr/local/libevent/lib -lm"
export CPPFLAGS="-I/usr/local/libevent/include"
export CFLAGS="-I/usr/local/libevent/include"
Then run: source /etc/profile
5. Install mysql-proxy
wget http://mysql.cdpa.nsysu.edu.tw/Downloads/MySQL-Proxy/mysql-proxy-0.6.1.tar.gz # this link now 404s; upload the tarball manually
tar zxvf mysql-proxy-0.6.1.tar.gz
cd mysql-proxy-0.6.1
yum install glib*
./configure --prefix=/usr/local/mysql-proxy --with-mysql --with-lua
make && make install
6. Start mysql-proxy
On the mon node:
[root@localhost ~]# /root/soft/mysql-mmm-2.2.1/sbin/mmm_control show
db1(192.168.88.149) master/ONLINE. Roles: reader(192.168.88.103), writer(192.168.88.101)
db2(192.168.88.150) master/ONLINE. Roles: reader(192.168.88.102)
This achieves read/write splitting across the two databases: the current writer is read-write, the other node read-only.
#/usr/local/mysql-proxy/sbin/mysql-proxy \
--proxy-address=192.168.88.192:4040 \
--proxy-read-only-backend-addresses=192.168.88.102:3306 \
--proxy-read-only-backend-addresses=192.168.88.103:3306 \
--proxy-backend-addresses=192.168.88.101:3306 \
--proxy-lua-script=/usr/local/share/mysql-proxy/rw-splitting.lua &
Note: on a normal start the terminal prints nothing. mysql-proxy opens two
ports: 4040 forwards SQL and 4041 manages mysql-proxy. If there are more slaves, append more --proxy-read-only-backend-addresses options.
[root@localhost ~]# netstat -tlp | grep mysql-proxy
tcp 0 0 *:yo-main *:* LISTEN 23846/mysql-proxy
tcp 0 0 *:houston *:* LISTEN 23846/mysql-proxy
[root@localhost ~]#
On the writer, create a test account, database and table (the column names here are illustrative; the original statement was incomplete):
mysql> grant all on *.* to 'proxy1'@'%' identified by '123456';
mysql> use tt;
Database changed
mysql> create table first_tb(id int, name varchar(30));
mysql> insert into first_tb values (7,'first');
mysql> insert into first_tb values (8,'second');
Query OK, 1 row affected (0.00 sec)
Stop the slave on node1:
mysql> stop slave;
Query OK, 0 rows affected (0.00 sec)
[root@localhost ~]# vi /etc/hosts.allow
mysqld : ALL : ALLOW
mysqld-max : ALL : ALLOW
Connect through the proxy:
mysql -uproxy1 -P4040 -h192.168.88.192
--proxy-backend-addresses=192.168.88.101:3306 specifies the writable master host and port
--proxy-read-only-backend-addresses=192.168.88.102:3306 specifies a read-only host and port
--proxy-read-only-backend-addresses=192.168.88.103:3306 specifies another read-only host and port
--proxy-lua-script=/usr/local/share/mysql-proxy/rw-splitting.lua specifies the Lua script; rw-splitting.lua implements read/write splitting
The full parameter list is available via:
mysql-proxy --help-all
Start/stop/restart mysql-proxy with:
# /etc/init.d/mysql-proxy start
# /etc/init.d/mysql-proxy stop
# /etc/init.d/mysql-proxy restart
ps -ef | grep mysql-proxy
Part 8: test results
Point a website deployed on a web server (e.g. Apache) at the proxy: change its database connection address to the proxy's IP, port 4040.
1. Write data into db1 and check that the two databases stay in sync.
2. Run mmm_control show on the mon server to check the status.
For a quick check, connect to the proxy on port 4040 and watch how reads and writes are distributed.
(The detailed procedure is omitted here.)
Compilation may fail with various errors; these pages summarise common ones:
http://wenku.baidu.com/view/77897ad53186bceb19e8bb55.html?from=search
http://dev.mysql.com/doc/refman/5.0/en/perl-support-problems.html
http://my.oschina.net/barter/blog/89858html