This article approaches the setup from two angles: 1) MySQL master-slave replication with read/write splitting; 2) RHCS-based high availability for mysql-proxy, mysql-master, and LVS.
Architecture diagram

Yum repositories that may be needed:
http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
System environment:
CentOS release 6.4 (Final), kernel 2.6.32-358.el6.x86_64
Address allocation:
mysql-proxy-fip: 10.0.0.100
mysql-master-fip: 10.0.0.101
lvs-vip: 10.0.0.15
ha1: 10.0.0.11   ha2: 10.0.0.12   ha3: 10.0.0.13   ha4: 10.0.0.14
real1: 10.0.0.21   real2: 10.0.0.22
Background:
When the database server is under heavy load, MySQL read/write splitting is a good way to scale performance, for the following reasons:
1. In a MySQL dual-master setup it is hard to guarantee data consistency (mysql-mmm can do it, but it has yet to prove itself in production), and dual masters do not improve write performance anyway.
2. Using LVS to spread load across the slaves, and pointing all read operations from mysql-proxy at the LVS cluster address, greatly improves read performance.
3. This scenario has three single points of failure: mysql-proxy, the MySQL master, and the LVS director in front of the slave pool. The RHCS high-availability stack removes all three:
   a. a DRBD mirror removes the mysql-master single point of failure;
   b. keepalived covers the LVS single point of failure;
   c. RHCS's own service failover handles automatic switchover of mysql-proxy.
In this design, mysql-proxy, mysql-master, and LVS each get their own failover domain with node ha4 as the backup member; when a service fails it is automatically relocated to the backup host.
Deployment
Install mysql-5.6.10-linux-glibc2.5-x86_64.tar.gz
MySQL 5.6 introduced the GTID mechanism, which makes replication easier to configure, monitor, and manage, and makes it more robust. Briefly, GTIDs provide two main benefits:
1. After pulling data from the master, a slave can apply it with multiple threads, speeding up replication and reducing master-slave lag.
2. With global transaction IDs, the point of the last synchronization is identified automatically, so replicated transactions are easy to track and compare. This allows fast recovery when the master goes down, and can even automatically promote a slave to master while keeping all slaves consistent, making it possible to build HA on MySQL replication itself (the HA in this article is based on RHCS rather than GTIDs).
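For concreteness, a GTID is written as the originating server's UUID followed by a transaction sequence number, for example 3E11FA47-71CA-11E1-9E33-C80AA9429562:23; ranges of executed transactions appear as GTID sets such as 3E11FA47-71CA-11E1-9E33-C80AA9429562:1-23 once replication is running.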
Configure DRBD
First install DRBD. With the yum repository provided by elrepo, it can be installed directly via yum:
rpm -ivh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
Install the DRBD userland tools and kernel module on ha2 and ha4:
yum install drbd84-utils kmod-drbd84 -y
DRBD global configuration
[root@ha4 ~]# grep -v '^[[:space:]]*#' /etc/drbd.d/global_common.conf
global {
    usage-count no;
}
common {
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    }
    startup {
    }
    options {
    }
    disk {
        on-io-error detach;
    }
    net {
        cram-hmac-alg "sha1";
        shared-secret "lust";
    }
    syncer {
        rate 1000M;
    }
}
DRBD resource configuration. A dual-primary model is used, and RHCS controls which node actually mounts the device:
[root@ha4 ~]# cat /etc/drbd.d/mysql.res
resource mysql {
    net {
        protocol C;
        allow-two-primaries yes;
    }
    startup {
        become-primary-on both;
    }
    disk {
        fencing resource-and-stonith;
    }
    handlers {
        # Make sure the other node is confirmed dead after this!
        outdate-peer "/sbin/kill-other-node.sh";
    }
    on ha2 {
        device /dev/drbd0;
        disk /dev/sda5;
        address 10.0.0.12:7789;
        meta-disk internal;
    }
    on ha4 {
        device /dev/drbd0;
        disk /dev/sda5;
        address 10.0.0.14:7789;
        meta-disk internal;
    }
}
Copy the configuration files to ha2 as well; the two nodes simply need identical copies.
Then run the following steps on ha2 and ha4, in order.
Initialize the resource (the resource is named mysql, per mysql.res above):
drbdadm create-md mysql
Start the service:
service drbd start
Format the device:
mkfs.ext4 /dev/drbd0
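Note that mkfs only succeeds once the resource has been promoted on the node doing the formatting; a freshly created DRBD resource comes up as Secondary on both sides. A minimal promotion on one node (using the resource name mysql defined above):
drbdadm primary --force mysql
cat /proc/drbd    # wait for the initial sync; the state should show Primary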
With DRBD in place, install the MySQL server, and when initializing MySQL point its datadir at the directory where the DRBD device will be mounted.
First mount the DRBD device:
mkdir -p /mysql/data
mount /dev/drbd0 /mysql/data/
Configure MySQL replication
Install MySQL; the detailed installation steps are not listed here, but two points deserve attention.
Note 1:
When testing the MySQL installation on ha4, the DRBD device may be mounted on only one node at a time; otherwise the filesystem will be corrupted.
Note 2:
Be sure to disable MySQL at boot and let RHCS manage starting and stopping it. The same applies to the resources configured on the other nodes, and will not be repeated below.
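For example, assuming the MySQL init script was installed as /etc/init.d/mysqld (adjust the name to match your installation), and likewise for the other RHCS-managed services on their respective nodes:
chkconfig mysqld off
chkconfig mysql-proxy off
chkconfig keepalived off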
master: configuration of the [mysqld] section (keep ha2 and ha4 identical; the parts that need changing are marked by comments in the configuration)
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
binlog-format=ROW
log-bin=master-bin
log-slave-updates=true
gtid-mode=on
enforce-gtid-consistency=true
master-info-repository=TABLE
relay-log-info-repository=TABLE
sync-master-info=1
slave-parallel-workers=2
binlog-checksum=CRC32
master-verify-checksum=1
slave-sql-verify-checksum=1
binlog-rows-query-log_events=1
server-id=1
report-port=3306
port=3306
datadir=/mysql/data
socket=/tmp/mysql.sock
report-host=ha2
real1: configuration of the [mysqld] section
binlog-format=ROW
log-slave-updates=true
gtid-mode=on
enforce-gtid-consistency=true
master-info-repository=TABLE
relay-log-info-repository=TABLE
sync-master-info=1
slave-parallel-workers=2
binlog-checksum=CRC32
master-verify-checksum=1
slave-sql-verify-checksum=1
binlog-rows-query-log_events=1
server-id=11          # set to 12 on real2
report-port=3306
port=3306
log-bin=mysql-bin.log
datadir=/mysql/data
socket=/tmp/mysql.sock
report-host=slave1    # set to slave2 on real2
Create the replication user on the master:
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'10.0.0.%' IDENTIFIED BY '123456';
On the slaves, point replication at the master:
mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.101', MASTER_USER='repluser', MASTER_PASSWORD='123456', MASTER_AUTO_POSITION=1;
Start the replication threads:
mysql> START SLAVE;
Check the replication status.
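The standard way to do this is from the slave's mysql prompt:
mysql> SHOW SLAVE STATUS\G
Slave_IO_Running and Slave_SQL_Running should both be Yes, and with GTIDs enabled the Retrieved_Gtid_Set and Executed_Gtid_Set fields show the replication position.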
Configure mysql-proxy
First install the EPEL repository, which provides the mysql-proxy rpm package:
rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Install mysql-proxy:
yum install mysql-proxy -y
Configure mysql-proxy.
The /etc/init.d/mysql-proxy startup script:
#!/bin/bash
#
# mysql-proxy   This script starts and stops the mysql-proxy daemon
#
# chkconfig: - 78 30
# processname: mysql-proxy
# description: mysql-proxy is a proxy daemon for mysql

# Source function library.
. /etc/rc.d/init.d/functions

prog="/usr/bin/mysql-proxy"

# Source networking configuration.
if [ -f /etc/sysconfig/network ]; then
    . /etc/sysconfig/network
fi

# Check that networking is up.
[ "${NETWORKING}" = "no" ] && exit 0

# Set default mysql-proxy configuration.
ADMIN_USER="admin"
ADMIN_PASSWORD="admin"
ADMIN_LUA_SCRIPT="/usr/lib64/mysql-proxy/lua/admin.lua"
PROXY_OPTIONS="--daemon"
PROXY_PID=/var/run/mysql-proxy.pid
PROXY_USER="mysql-proxy"

# Source mysql-proxy configuration.
if [ -f /etc/sysconfig/mysql-proxy ]; then
    . /etc/sysconfig/mysql-proxy
fi

RETVAL=0

start() {
    echo -n $"Starting $prog: "
    daemon $prog $PROXY_OPTIONS --pid-file=$PROXY_PID \
        --proxy-address="$PROXY_ADDRESS" --user=$PROXY_USER \
        --admin-username="$ADMIN_USER" --admin-lua-script="$ADMIN_LUA_SCRIPT" \
        --admin-password="$ADMIN_PASSWORD"
    RETVAL=$?
    echo
    if [ $RETVAL -eq 0 ]; then
        touch /var/lock/subsys/mysql-proxy
    fi
}

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $PROXY_PID -d 3 $prog
    RETVAL=$?
    echo
    if [ $RETVAL -eq 0 ]; then
        rm -f /var/lock/subsys/mysql-proxy
        rm -f $PROXY_PID
    fi
}

# See how we were called.
case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    stop
    start
    ;;
  condrestart|try-restart)
    if status -p $PROXY_PID $prog >&/dev/null; then
        stop
        start
    fi
    ;;
  status)
    status -p $PROXY_PID $prog
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status|condrestart|try-restart}"
    RETVAL=1
    ;;
esac
exit $RETVAL
Configuration parameters:
cat /etc/sysconfig/mysql-proxy
ADMIN_USER="admin"
ADMIN_PASSWORD="admin"
ADMIN_ADDRESS=""
ADMIN_LUA_SCRIPT="/usr/lib64/mysql-proxy/lua/admin.lua"
PROXY_ADDRESS=""
PROXY_USER="mysql-proxy"
PROXY_OPTIONS="--daemon --log-level=info --log-use-syslog --plugins=proxy --plugins=admin --proxy-backend-addresses=10.0.0.101:3306 --proxy-read-only-backend-addresses=10.0.0.15:3306 --proxy-lua-script=/usr/lib64/mysql-proxy/lua/proxy/balance.lua"
Deploy mysql-proxy on ha4 in the same way.
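Before handing mysql-proxy over to RHCS, it is worth a quick sanity check with an ordinary client. This assumes the default proxy port 4040 and reuses the repluser account created earlier purely for illustration; any account that exists on the backends will do:
mysql -urepluser -p123456 -h 127.0.0.1 -P 4040 -e 'SELECT @@hostname;'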
Installing LVS
LVS load-balances read operations across the MySQL slaves. For managing LVS, keepalived is used directly because it is lightweight; here it serves merely as a convenient way to apply the ipvs rules when it starts. Accordingly, only a single keepalived master is configured, and its high availability is provided by RHCS rather than by keepalived's own VRRP.
Install keepalived:
yum install keepalived -y
Configuration file:
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from root
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass lust
    }
    virtual_ipaddress {
        10.0.0.15
    }
}

virtual_server 10.0.0.15 3306 {
    delay_loop 6
    lb_algo wlc
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 10.0.0.21 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
    real_server 10.0.0.22 3306 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}
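After starting keepalived, the resulting ipvs rule set can be inspected directly (ipvsadm comes from the ipvsadm package if it is not already installed):
service keepalived start
ipvsadm -L -n
The output should list 10.0.0.15:3306 with both real servers behind it.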
On each mysql-slave real server, create /etc/init.d/rs with the following content:
#!/bin/bash
#
# Script to start LVS DR real server.
# description: LVS DR real server
#
. /etc/rc.d/init.d/functions

VIP=10.0.0.15
host=`/bin/hostname`

case "$1" in
  start)
    # Start LVS-DR real server on this machine.
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    ;;
  stop)
    # Stop LVS-DR real server loopback device(s).
    /sbin/ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
  status)
    # Status of LVS-DR real server.
    islothere=`/sbin/ifconfig lo:0 | grep $VIP`
    isrothere=`netstat -rn | grep "lo:0" | grep $VIP`
    if [ ! "$islothere" -o ! "$isrothere" ]; then
        # Either the route or the lo:0 device was not found.
        echo "LVS-DR real server Stopped."
    else
        echo "LVS-DR real server Running."
    fi
    ;;
  *)
    # Invalid entry.
    echo "$0: Usage: $0 {start|status|stop}"
    exit 1
    ;;
esac
Make it executable:
chmod +x /etc/init.d/rs
Start rs on both real servers:
service rs start
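The script's status action gives a quick check that the VIP is bound on lo:0, and the ARP kernel parameters can be read back with sysctl:
service rs status
sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce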
At this point all the groundwork for the RHCS cluster is complete; next comes the RHCS-based high-availability configuration.
In the latest releases of Red Hat's RHCS cluster suite, corosync carries the low-level cluster messaging, but the cman mechanism still exists: cman now runs as a corosync plugin, so service cman start still works.
Edit the /etc/hosts file on ha1 through ha4.
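For example, matching the address allocation above:
10.0.0.11  ha1
10.0.0.12  ha2
10.0.0.13  ha3
10.0.0.14  ha4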
The ha4 host doubles as the luci management station; install luci on ha4:
yum install luci -y
Run on each of the four hosts ha1, ha2, ha3, and ha4:
yum install ricci -y
Set a password for the ricci user on each node:
echo 123456 | passwd ricci --stdin
Start luci on ha4:
service luci start
Start ricci on nodes ha1 through ha4:
service ricci start
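Since the nodes must be manageable after a reboot as well, it is sensible to enable these daemons at boot (unlike the RHCS-managed resources, they are safe to autostart):
chkconfig ricci on
chkconfig luci on    # on ha4 only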
Open the luci management page (https://ha4:8084 by default) and create the mysql cluster (log in with the system root account).
Add each node to the new cluster.
Create the three failover domains (note: within a failover domain, priority 1 is the highest; the larger the priority number, the lower the node's priority).
Add two IP resources, 10.0.0.100 and 10.0.0.101; the LVS IP is managed by keepalived itself. A sample is shown here.
Create the filesystem mount resource.
Create the mysqld resource.
Create the mysql-proxy resource.
Create the LVS resource.
Create the three service groups:
mysql-proxy-ha, containing: the IP address 10.0.0.100 and the mysql-proxy service.
mysql-m-ha, containing: the IP address 10.0.0.101, drbd-mount, and mysql.
mysql-lvs-ha, containing: lvs-keepalived.
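For reference, the service groups built in luci end up in /etc/cluster/cluster.conf roughly as below. This is only a sketch of one of the three groups: the failover-domain node list (ha2/ha4 is fixed by DRBD, the others are not) and the resource names are assumptions, so adapt them to your own layout:

<rm>
  <failoverdomains>
    <failoverdomain name="mysql-m-domain" ordered="1">
      <failoverdomainnode name="ha2" priority="1"/>
      <failoverdomainnode name="ha4" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <resources>
    <ip address="10.0.0.101" monitor_link="1"/>
    <fs name="drbd-mount" device="/dev/drbd0" mountpoint="/mysql/data" fstype="ext4"/>
    <script name="mysql" file="/etc/init.d/mysqld"/>
  </resources>
  <service domain="mysql-m-domain" name="mysql-m-ha" recovery="relocate">
    <ip ref="10.0.0.101"/>
    <fs ref="drbd-mount"/>
    <script ref="mysql"/>
  </service>
</rm>

The mysql-proxy-ha and mysql-lvs-ha groups follow the same pattern with their own IP and script resources.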
This completes the configuration.