It has been a few months since I last updated this blog, so it is time to do some weeding. This post shares how to use Consul to achieve high availability for Redis and MySQL. At my previous company MySQL ran as a single instance per server, so MHA plus a VIP was enough for high availability. At my current company MySQL runs multiple instances per server, so that approach obviously does not fit. We later built failover that called a DNS API to update the domain record, but it was still not as convenient as doing high availability with Consul; the advantages are explained below. Running multiple Redis instances per server is completely normal, and making that setup highly available is not easy either. You can certainly use Sentinel and have it call a script that updates DNS records through a DNS API after a failover, but that is not particularly elegant. Some will ask why we don't use Codis or Redis Cluster; those are fine solutions, but they don't suit us — they are not flexible enough and do not handle hot-key data well. So what exactly is Consul? Let's take it step by step:
Consul is an open-source tool from HashiCorp (the company behind Vagrant), written in Go. It is lightweight and is used for service discovery and configuration in distributed systems. Compared with similar products it offers a more "all-in-one" solution: Consul has a built-in KV store, service registration/discovery, health checks, HTTP and DNS APIs, and a web UI. Official site: https://www.consul.io/. Other mainstream open-source products in the same space are ZooKeeper and etcd.
Advantages of Consul:
1. Multi-datacenter support, with services on the internal and external networks listening on different ports. A multi-datacenter cluster avoids the single point of failure of one datacenter; neither ZooKeeper nor etcd offers multi-datacenter support.
2. Built-in health checks; etcd does not provide this.
3. Both HTTP and DNS interfaces. ZooKeeper integration is relatively complex, and etcd only speaks HTTP; Consul has DNS support and a REST API.
4. An official web management UI; etcd has none.
5. Simple to deploy and operations-friendly, with no dependencies: copy the Go binary over and it just works — a single program, which can be pushed out with Ansible.
A comparison table of Consul and the other service-discovery tools (table omitted here; see the official site for the full feature comparison).
Consul architecture and roles
1. A Consul cluster consists of the nodes that run the Consul agent. Within the cluster there are two roles: server and client.
2. The server and client roles have nothing to do with the application services running on top of the cluster; they are roles at the Consul level.
3. Consul server: maintains the state of the Consul cluster, provides data consistency, and answers RPC requests. The official recommendation is to run at least three Consul servers. A leader is elected among the servers; the election is implemented with the Raft protocol, and the Consul data on the server nodes is kept strongly consistent. Servers talk to local clients over the LAN and to other datacenters over the WAN. Consul client: only maintains its own state and forwards HTTP and DNS requests to the servers.
4. Consul supports multiple datacenters. Each datacenter must run its own Consul cluster; datacenters communicate with each other via the gossip protocol, and consistency is implemented with the Raft algorithm.
That is enough background; see the official documentation for the details. Let's now set up Consul and see how to use it for Redis and MySQL high availability.
Test environment (in production, deploy 3 or 5 consul servers):
consul server:192.168.0.10
consul client:192.168.0.20,192.168.0.30,192.168.0.40
Installing Consul is very easy: download it from https://www.consul.io/downloads.html and unzip it — you get a single binary and nothing else. I am using version 0.9.2 here. After unpacking, drop the file into /usr/local/bin and it is ready to use, with no dependencies. Install it on all four servers above.
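For reference, the install on each node is roughly the following (a sketch; the exact release URL and version are assumptions, adjust them to whatever you download):

cd /tmp
wget https://releases.hashicorp.com/consul/0.9.2/consul_0.9.2_linux_amd64.zip
unzip consul_0.9.2_linux_amd64.zip
mv consul /usr/local/bin/
consul version    # should print Consul v0.9.2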
Create directories on all four machines: one for the configuration files, one for the data, and one for the Redis/MySQL health-check scripts.
mkdir /etc/consul.d/ -p && mkdir /data/consul/ -p
mkdir /data/consul/shell -p
Then write the relevant parameters into a configuration file. Strictly speaking you don't have to use a file — the options can also be passed on the command line — but a file is easier to manage.
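For example, the server below could equally be started with everything on the command line (a sketch that mirrors the config file that follows):

consul agent -server -bootstrap-expect=1 \
    -data-dir=/data/consul -datacenter=dc1 \
    -bind=192.168.0.10 -client=192.168.0.10 -ui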
Configuration file for the consul server (192.168.0.10) — see the official docs or the references at the end of this post for what each parameter means:
[root@db-server-yayun-01 ~]# cat /etc/consul.d/server.json
{
  "data_dir": "/data/consul",
  "datacenter": "dc1",
  "log_level": "INFO",
  "server": true,
  "bootstrap_expect": 1,
  "bind_addr": "192.168.0.10",
  "client_addr": "192.168.0.10",
  "ui": true
}
[root@db-server-yayun-01 ~]#
Configuration file for the consul clients (192.168.0.20, 192.168.0.30, 192.168.0.40):
[root@db-server-yayun-02 ~]# cat /etc/consul.d/client.json
{
  "data_dir": "/data/consul",
  "enable_script_checks": true,
  "bind_addr": "192.168.0.20",
  "retry_join": ["192.168.0.10"],
  "retry_interval": "30s",
  "rejoin_after_leave": true,
  "start_join": ["192.168.0.10"]
}
[root@db-server-yayun-02 ~]#
The configuration files on the three clients differ only in bind_addr — change it to each server's own IP. My test machines are virtual machines with several NICs, so the address has to be specified explicitly; otherwise you could simply bind 0.0.0.0.
Now let's start the consul server first:
nohup consul agent -config-dir=/etc/consul.d > /data/consul/consul.log &
Check the log:
[root@db-server-yayun-01 consul]# cat consul.log
==> WARNING: BootstrapExpect Mode is specified as 1; this is the same as Bootstrap mode.
==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v0.9.2'
           Node ID: '5e612623-ec5b-386c-19be-d38876a9a46f'
         Node name: 'db-server-yayun-01'
        Datacenter: 'dc1'
            Server: true (bootstrap: true)
       Client Addr: 192.168.0.10 (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 192.168.0.10 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2017/12/09 09:49:53 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:192.168.0.10:8300 Address:192.168.0.10:8300}]
    2017/12/09 09:49:53 [INFO] raft: Node at 192.168.0.10:8300 [Follower] entering Follower state (Leader: "")
    2017/12/09 09:49:53 [INFO] serf: EventMemberJoin: db-server-yayun-01.dc1 192.168.0.10
    2017/12/09 09:49:53 [INFO] serf: EventMemberJoin: db-server-yayun-01 192.168.0.10
    2017/12/09 09:49:53 [INFO] agent: Started DNS server 192.168.0.10:8600 (udp)
    2017/12/09 09:49:53 [INFO] consul: Adding LAN server db-server-yayun-01 (Addr: tcp/192.168.0.10:8300) (DC: dc1)
    2017/12/09 09:49:53 [INFO] consul: Handled member-join event for server "db-server-yayun-01.dc1" in area "wan"
    2017/12/09 09:49:53 [INFO] agent: Started DNS server 192.168.0.10:8600 (tcp)
    2017/12/09 09:49:53 [INFO] agent: Started HTTP server on 192.168.0.10:8500
    2017/12/09 09:50:00 [ERR] agent: failed to sync remote state: No cluster leader
    2017/12/09 09:50:00 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2017/12/09 09:50:00 [INFO] raft: Node at 192.168.0.10:8300 [Candidate] entering Candidate state in term 2
    2017/12/09 09:50:00 [INFO] raft: Election won. Tally: 1
    2017/12/09 09:50:00 [INFO] raft: Node at 192.168.0.10:8300 [Leader] entering Leader state
    2017/12/09 09:50:00 [INFO] consul: cluster leadership acquired
    2017/12/09 09:50:00 [INFO] consul: New leader elected: db-server-yayun-01
    2017/12/09 09:50:00 [INFO] consul: member 'db-server-yayun-01' joined, marking health alive
    2017/12/09 09:50:03 [INFO] agent: Synced node info
From the log you can see (HTTP: 8500, HTTPS: -1, DNS: 8600): the HTTP port defaults to 8500 and is used for reload and the web UI, and the DNS port is 8600, used for DNS resolution. You can also see that this node became the leader — consul: New leader elected: db-server-yayun-01 — simply because it is the only server. This is exactly why production should run 3 or 5 servers.
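Besides the log, the HTTP API on port 8500 gives the same information (a sketch; run it from any machine that can reach the server):

# current raft leader and known peers
curl http://192.168.0.10:8500/v1/status/leader
curl http://192.168.0.10:8500/v1/status/peers
# members as seen by this agent
curl http://192.168.0.10:8500/v1/agent/members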
Now start the three clients; the start command is the same on all of them. Then look at the log on one of the clients:
nohup consul agent -config-dir=/etc/consul.d > /data/consul/consul.log &
[root@db-server-yayun-02 consul]# cat /data/consul/consul.log
==> Starting Consul agent...
==> Joining cluster...
    Join completed. Synced with 1 initial agents
==> Consul agent running!
           Version: 'v0.9.2'
           Node ID: '0ec901ab-6c66-2461-95e6-50a77a28ed72'
         Node name: 'db-server-yayun-02'
        Datacenter: 'dc1'
            Server: false (bootstrap: false)
       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 192.168.0.20 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2017/12/09 10:06:10 [INFO] serf: EventMemberJoin: db-server-yayun-02 192.168.0.20
    2017/12/09 10:06:10 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
    2017/12/09 10:06:10 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
    2017/12/09 10:06:10 [INFO] agent: Started HTTP server on 127.0.0.1:8500
    2017/12/09 10:06:10 [INFO] agent: (LAN) joining: [192.168.0.10]
    2017/12/09 10:06:10 [INFO] agent: Retry join is supported for: aws azure gce softlayer
    2017/12/09 10:06:10 [INFO] agent: Joining cluster...
    2017/12/09 10:06:10 [INFO] agent: (LAN) joining: [192.168.0.10]
    2017/12/09 10:06:10 [INFO] serf: EventMemberJoin: db-server-yayun-01 192.168.0.10
    2017/12/09 10:06:10 [INFO] agent: (LAN) joined: 1 Err: <nil>
    2017/12/09 10:06:10 [INFO] consul: adding server db-server-yayun-01 (Addr: tcp/192.168.0.10:8300) (DC: dc1)
    2017/12/09 10:06:10 [INFO] agent: (LAN) joined: 1 Err: <nil>
    2017/12/09 10:06:10 [INFO] agent: Join completed. Synced with 1 initial agents
    2017/12/09 10:06:10 [INFO] agent: Synced node info
You can see the messages agent: Join completed. Synced with 1 initial agents and Server: false (bootstrap: false) — this is the difference between a client and a server.
Let's run a couple of commands to look at the cluster:
[root@db-server-yayun-02 ~]# consul members
Node                Address            Status  Type    Build  Protocol  DC
db-server-yayun-01  192.168.0.10:8301  alive   server  0.9.2  2         dc1
db-server-yayun-02  192.168.0.20:8301  alive   client  0.9.2  2         dc1
db-server-yayun-03  192.168.0.30:8301  alive   client  0.9.2  2         dc1
db-server-yayun-04  192.168.0.40:8301  alive   client  0.9.2  2         dc1
[root@db-server-yayun-02 ~]#
[root@db-server-yayun-02 ~]# consul operator raft list-peers
Node                ID                 Address            State   Voter  RaftProtocol
db-server-yayun-01  192.168.0.10:8300  192.168.0.10:8300  leader  true   2
[root@db-server-yayun-02 ~]#
Let's also take a look at the web UI that ships with Consul — it is very lightweight. Visit: http://192.168.0.10:8500/ui/
At this point the Consul cluster is up — simple, isn't it? But as you can see above, the client nodes have not registered any services yet (they show 0 services), and that is exactly what comes next. So how do we actually achieve high availability for Redis and MySQL? Let's get started:
Consul use case 1 (Redis Sentinel)
(1) In our Redis Sentinel setup, Sentinels are deployed on the servers, but the application side does not use the Jedis Sentinel driver to discover the Redis master automatically; instead it connects directly to the master's IP. When the master dies and another Redis node takes over as the new master, the application configuration has to be changed by hand to point at it.
(2) The Redis client driver has no read/write-splitting configuration yet, so there is currently no good way to load-balance reads across the slaves. Our applications do support read/write splitting at the application level, so that part does not affect us.
(3) Consul can meet both needs: define two DNS services — one for the master, using Consul's own health checks to discover the new master automatically, and one for the slaves, where DNS itself round-robins across the IPs of the Redis instances in the slave role.
The architecture looks like this:
The same approach also works for MySQL high availability, with MHA playing the role that Sentinel plays for Redis. The architecture looks like this:
Below I walk through the Redis high-availability setup; I won't repeat it for MySQL, but I will include the MySQL health-check scripts — the idea is exactly the same.
Consul service definition (Redis)
The Consul cluster is already up: the server is 192.168.0.10 and the clients are 20 through 40. We'll use 20 as the Redis master and 30 and 40 as Redis slaves. Now define the services (the definition files must exist on 20, 30 and 40):
The configuration files on 20, 30 and 40 are shown below; they are identical except for the address field, which must be set to each server's own IP.
[root@db-server-yayun-02 consul.d]# pwd
/etc/consul.d
[root@db-server-yayun-02 consul.d]# ll
total 12
-rw-r--r--. 1 root root 221 Dec  9 09:44 client.json
-rw-r--r--. 1 root root 319 Dec  9 10:48 r-6029-redis-test.json
-rw-r--r--. 1 root root 321 Dec  9 10:48 w-6029-redis-test.json
[root@db-server-yayun-02 consul.d]#
Service definition for the master:
[root@db-server-yayun-02 consul.d]# cat w-6029-redis-test.json
{
  "services": [
    {
      "name": "w-6029-redis-test",
      "tags": [
        "master-test-6029"
      ],
      "address": "192.168.0.20",
      "port": 6029,
      "checks": [
        {
          "script": "/data/consul/shell/check_redis_master.sh 6029 ",
          "interval": "15s"
        }
      ]
    }
  ]
}
[root@db-server-yayun-02 consul.d]#
Service definition for the slaves:
[root@db-server-yayun-02 consul.d]# cat r-6029-redis-test.json
{
  "services": [
    {
      "name": "r-6029-redis-test",
      "tags": [
        "slave-test-6029"
      ],
      "address": "192.168.0.20",
      "port": 6029,
      "checks": [
        {
          "script": "/data/consul/shell/check_redis_slave.sh 6029 ",
          "interval": "15s"
        }
      ]
    }
  ]
}
[root@db-server-yayun-02 consul.d]#
Once every agent has registered, two DNS names exist:
w-6029-redis-test.service.consul (resolves to the single master IP)
r-6029-redis-test.service.consul (resolves to the two slave IPs; a client request gets one of them at random)
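Once DNS forwarding for the consul suffix is in place (covered at the end of this post), an application can connect by name instead of by IP, for example (a sketch):

# writes follow whichever node currently passes the master check
redis-cli -h w-6029-redis-test.service.consul -p 6029 SET foo bar
# reads are spread across the healthy slaves
redis-cli -h r-6029-redis-test.service.consul -p 6029 GET foo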
The check "script": "/data/consul/shell/check_redis_slave.sh 6029 " means the script health-checks the Redis instance on port 6029; see the official documentation for more on health checks.
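Note that the "script" form above matches Consul 0.9.x; on newer releases (1.0 and later) script checks are declared with an args array instead, and script checks still have to be allowed with enable_script_checks. Roughly like this (a sketch):

{
  "services": [
    {
      "name": "r-6029-redis-test",
      "address": "192.168.0.20",
      "port": 6029,
      "checks": [
        {
          "args": ["/data/consul/shell/check_redis_slave.sh", "6029"],
          "interval": "15s"
        }
      ]
    }
  ]
}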
[root@db-server-yayun-03 shell]# pwd
/data/consul/shell
[root@db-server-yayun-03 shell]# ll
total 16
-rwxr-xr-x. 1 root root  480 Dec  9 10:56 check_mysql_master.sh
-rwxr-xr-x. 1 root root 3004 Dec  9 10:55 check_mysql_slave.sh
-rwxr-xr-x. 1 root root  254 Dec  9 10:51 check_redis_master.sh
-rwxr-xr-x. 1 root root  379 Dec  9 10:51 check_redis_slave.sh
[root@db-server-yayun-03 shell]#
The four scripts under /data/consul/shell are the health checks for Redis and MySQL. They are fairly simple; the idea is: if there is only a master, both reads and writes go to the master; if a slave is available, reads go to the slave; and if replication on a slave is broken or lagging, the slave service is not registered.
[root@db-server-yayun-03 shell]# cat check_redis_master.sh
#!/bin/bash
myport=$1
auth=$2
if [ ! -n "$auth" ]
then
    auth='\"\"'
fi
comm="/usr/local/bin/redis-cli -p $myport -a $auth "
# the write service is healthy only if this instance reports role:master
role=`echo 'INFO Replication'|$comm |grep -Ec 'role:master'`
echo 'INFO Replication'|$comm
if [ $role -ne 1 ]
then
    exit 2
fi
[root@db-server-yayun-03 shell]#
[root@db-server-yayun-03 shell]# cat check_redis_slave.sh
#!/bin/bash
myport=$1
auth=$2
if [ ! -n "$auth" ]
then
    auth='\"\"'
fi
comm="/usr/local/bin/redis-cli -p $myport -a $auth "
# healthy if this is a slave with its replication link up...
role=`echo 'INFO Replication'|$comm |grep -Ec '^role:slave|^master_link_status:up'`
# ...or a lone master with no slaves (reads then fall back to the master)
single=`echo 'INFO Replication'|$comm |grep -Ec '^role:master|^connected_slaves:0'`
echo 'INFO Replication'|$comm
if [ $role -ne 2 -a $single -ne 2 ]
then
    exit 2
fi
[root@db-server-yayun-03 shell]#
[root@db-server-yayun-02 shell]# cat check_mysql_master.sh
#!/bin/bash
port=$1
user="root"
passwod="123"
comm="/usr/local/mysql/bin/mysql -u$user -h 127.0.0.1 -P $port -p$passwod"
slave_info=`$comm -e "show slave status" |wc -l`
value=`$comm -Nse "select 1"`

# Is this instance a slave? A slave cannot serve as the write service.
if [ $slave_info -ne 0 ]
then
    echo "MySQL $port Instance is Slave........"
    $comm -e "show slave status\G" | egrep -w "Master_Host|Master_User|Master_Port|Master_Log_File|Read_Master_Log_Pos|Relay_Log_File|Relay_Log_Pos|Relay_Master_Log_File|Slave_IO_Running|Slave_SQL_Running|Exec_Master_Log_Pos|Relay_Log_Space|Seconds_Behind_Master"
    exit 2
fi

# Is MySQL alive at all?
if [ -z $value ]
then
    exit 2
fi

echo "MySQL $port Instance is Master........"
$comm -e "select * from information_schema.PROCESSLIST where user='repl' and COMMAND like '%Dump%'"
[root@db-server-yayun-02 shell]#
[root@db-server-yayun-02 shell]# cat check_mysql_slave.sh
#!/bin/bash
port=$1
user="root"
passwod="123"
repl_check_user="root"
repl_check_pwd="123"
master_comm="/usr/local/mysql/bin/mysql -u$user -h 127.0.0.1 -P $port -p$passwod"
slave_comm="/usr/local/mysql/bin/mysql -u$repl_check_user -P $port -p$repl_check_pwd"

# Is MySQL alive at all?
value=`$master_comm -Nse "select 1"`
if [ -z $value ]
then
    echo "MySQL Server is Down....."
    exit 2
fi

get_slave_count=0
is_slave_role=0
slave_mode_repl_delay=0
master_mode_repl_delay=0
master_mode_repl_dead=0
slave_mode_repl_status=0
max_delay=120

get_slave_hosts=`$master_comm -Nse "select substring_index(HOST,':',1) from information_schema.PROCESSLIST where user='repl' and COMMAND like '%Binlog Dump%';" `
get_slave_count=`$master_comm -Nse "select count(1) from information_schema.PROCESSLIST where user='repl' and COMMAND like '%Binlog Dump%';" `
is_slave_role=`$master_comm -e "show slave status\G"|grep -Ewc "Slave_SQL_Running|Slave_IO_Running"`

### Single-master mode (get_slave_count=0 and is_slave_role=0): reads stay on the master
function single_mode
{
    if [ $get_slave_count -eq 0 -a $is_slave_role -eq 0 ]
    then
        echo "MySQL $port Instance is Single Master........"
        exit 0
    fi
}

### Slave mode (get_slave_count=0 and is_slave_role=2)
function slave_mode
{
    # a slave is only readable if replication is running and not lagging
    if [ $is_slave_role -ge 2 ]
    then
        echo "MySQL $port Instance is Slave........"
        $master_comm -e "show slave status\G" | egrep -w "Master_Host|Master_User|Master_Port|Master_Log_File|Read_Master_Log_Pos|Relay_Log_File|Relay_Log_Pos|Relay_Master_Log_File|Slave_IO_Running|Slave_SQL_Running|Exec_Master_Log_Pos|Relay_Log_Space|Seconds_Behind_Master"
        slave_mode_repl_delay=`$master_comm -e "show slave status\G" | grep -w "Seconds_Behind_Master" | awk '{print $NF}'`
        slave_mode_repl_status=`$master_comm -e "show slave status\G"|grep -Ec "Slave_IO_Running: Yes|Slave_SQL_Running: Yes"`
        if [ X"$slave_mode_repl_delay" == X"NULL" ]
        then
            slave_mode_repl_delay=99999
        fi
        if [ $slave_mode_repl_delay != "NULL" -a $slave_mode_repl_delay -lt $max_delay -a $slave_mode_repl_status -ge 2 ]
        then
            exit 0
        fi
    fi
}

### Master mode: the master only serves reads if every slave is lagging or its replication is broken
function master_mode
{
    if [ $get_slave_count -gt 0 -a $is_slave_role -eq 0 ]
    then
        echo "MySQL $port Instance is Master........"
        $master_comm -e "select * from information_schema.PROCESSLIST where user='repl' and COMMAND like '%Dump%'"
        for my_slave in $get_slave_hosts
        do
            master_mode_repl_delay=`$slave_comm -h $my_slave -e "show slave status\G" | grep -w "Seconds_Behind_Master" | awk '{print $NF}' `
            master_mode_repl_thread=`$slave_comm -h $my_slave -e "show slave status\G"|grep -Ec "Slave_IO_Running: Yes|Slave_SQL_Running: Yes"`
            if [ X"$master_mode_repl_delay" == X"NULL" ]
            then
                master_mode_repl_delay=99999
            fi
            if [ $master_mode_repl_delay -lt $max_delay -a $master_mode_repl_thread -ge 2 ]
            then
                exit 2
            fi
        done
        exit 0
    fi
}

single_mode
slave_mode
master_mode
exit 2
[root@db-server-yayun-02 shell]#

The "name" field, r-6029-redis-test, becomes the DNS name; the default suffix is service.consul, and it can be changed with Consul's domain parameter. With the service definitions in place, install Redis and set up master-slave replication (omitted here). Once replication is running, reload Consul. Redis INFO output:
127.0.0.1:6029> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=192.168.0.40,port=6029,state=online,offset=6786,lag=0
slave1:ip=192.168.0.30,port=6029,state=online,offset=6786,lag=1
master_repl_offset:6786
repl_backlog_active:1
repl_backlog_size:67108864
repl_backlog_first_byte_offset:2
repl_backlog_histlen:6785
127.0.0.1:6029>
Reload Consul on the three clients (20 through 40):
[root@db-server-yayun-02 ~]# consul reload
Configuration reload triggered
[root@db-server-yayun-02 ~]#
Check the Consul log on one of the clients (20):
[root@db-server-yayun-02 consul]# tail -f consul.log
2017/12/09 10:09:59 [INFO] serf: EventMemberJoin: db-server-yayun-04 192.168.0.40
2017/12/09 11:14:55 [INFO] Caught signal: hangup
2017/12/09 11:14:55 [INFO] Reloading configuration...
2017/12/09 11:14:55 [INFO] agent: Synced service 'r-6029-redis-test'
2017/12/09 11:14:55 [INFO] agent: Synced service 'w-6029-redis-test'
2017/12/09 11:14:55 [INFO] agent: Synced check 'service:w-6029-redis-test'
2017/12/09 11:15:00 [WARN] agent: Check 'service:r-6029-redis-test' is now critical
2017/12/09 11:15:15 [WARN] agent: Check 'service:r-6029-redis-test' is now critical
2017/12/09 11:15:30 [WARN] agent: Check 'service:r-6029-redis-test' is now critical
2017/12/09 11:15:45 [WARN] agent: Check 'service:r-6029-redis-test' is now critical
Both services, r-6029-redis-test and w-6029-redis-test, have been registered, but only w-6029-redis-test — the write service — passes its check, because the Redis instance on 20 is a master, so the slave service naturally cannot pass. Let's check the web UI.
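The same thing can be confirmed through the HTTP API by asking only for instances whose checks are passing (a sketch):

# should return only the master (192.168.0.20) at this point
curl 'http://192.168.0.10:8500/v1/health/service/w-6029-redis-test?passing'
# should return only the healthy slaves
curl 'http://192.168.0.10:8500/v1/health/service/r-6029-redis-test?passing'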
You can see that each of the three client nodes has registered both services, and you can also see the custom output from our check scripts:
Now let's test DNS resolution and see whether it gives us what we want. We registered two services, r-6029-redis-test and w-6029-redis-test, so we get two DNS names: r-6029-redis-test.service.consul and w-6029-redis-test.service.consul. Let's check with dig:
[root@db-server-yayun-02 ~]# dig @192.168.0.10 -p 8600 r-6029-redis-test.service.consul

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> @192.168.0.10 -p 8600 r-6029-redis-test.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34508
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;r-6029-redis-test.service.consul. IN A

;; ANSWER SECTION:
r-6029-redis-test.service.consul. 0 IN A 192.168.0.30
r-6029-redis-test.service.consul. 0 IN A 192.168.0.40

;; Query time: 1 msec
;; SERVER: 192.168.0.10#8600(192.168.0.10)
;; WHEN: Sat Dec  9 11:26:38 2017
;; MSG SIZE  rcvd: 82

[root@db-server-yayun-02 ~]#
The read name r-6029-redis-test.service.consul resolves to both slaves, so we can load-balance reads across them. What about the write name?
[root@db-server-yayun-02 ~]# dig @192.168.0.10 -p 8600 w-6029-redis-test.service.consul

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> @192.168.0.10 -p 8600 w-6029-redis-test.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7451
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;w-6029-redis-test.service.consul. IN A

;; ANSWER SECTION:
w-6029-redis-test.service.consul. 0 IN A 192.168.0.20

;; Query time: 1 msec
;; SERVER: 192.168.0.10#8600(192.168.0.10)
;; WHEN: Sat Dec  9 11:27:59 2017
;; MSG SIZE  rcvd: 66

[root@db-server-yayun-02 ~]#
Just as expected, it resolves to 20. Now, what happens if we shut down one of the slaves?
[root@db-server-yayun-03 ~]# ifconfig eth1 | grep -oP '(?<=inet addr:)\S+'
192.168.0.30
[root@db-server-yayun-03 ~]# pgrep -fl redis-server | awk '{print $1}' | xargs kill
[root@db-server-yayun-03 ~]#
127.0.0.1:6029> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.0.40,port=6029,state=online,offset=8200,lag=0
master_repl_offset:8200
repl_backlog_active:1
repl_backlog_size:67108864
repl_backlog_first_byte_offset:2
repl_backlog_histlen:8199
127.0.0.1:6029>
Only one slave is left; let's dig the read name again:
[root@db-server-yayun-02 ~]# dig @192.168.0.10 -p 8600 r-6029-redis-test.service.consul

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> @192.168.0.10 -p 8600 r-6029-redis-test.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41984
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;r-6029-redis-test.service.consul. IN A

;; ANSWER SECTION:
r-6029-redis-test.service.consul. 0 IN A 192.168.0.40

;; Query time: 8 msec
;; SERVER: 192.168.0.10#8600(192.168.0.10)
;; WHEN: Sat Dec  9 11:32:46 2017
;; MSG SIZE  rcvd: 66

[root@db-server-yayun-02 ~]#
The stopped machine has been kicked out of the answer. And what if I also stop the slave on 40?
[root@db-server-yayun-04 shell]# ifconfig eth1 | grep -oP '(?<=inet addr:)\S+'
192.168.0.40
[root@db-server-yayun-04 shell]# pgrep -fl redis-server | awk '{print $1}' | xargs kill
[root@db-server-yayun-04 shell]#
Now there is no usable slave left, so both reads and writes should land on the master.
[root@db-server-yayun-02 ~]# dig @192.168.0.10 -p 8600 r-6029-redis-test.service.consul

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> @192.168.0.10 -p 8600 r-6029-redis-test.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58564
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;r-6029-redis-test.service.consul. IN A

;; ANSWER SECTION:
r-6029-redis-test.service.consul. 0 IN A 192.168.0.20

;; Query time: 4 msec
;; SERVER: 192.168.0.10#8600(192.168.0.10)
;; WHEN: Sat Dec  9 11:35:11 2017
;; MSG SIZE  rcvd: 66

[root@db-server-yayun-02 ~]# dig @192.168.0.10 -p 8600 w-6029-redis-test.service.consul

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> @192.168.0.10 -p 8600 w-6029-redis-test.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56965
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;w-6029-redis-test.service.consul. IN A

;; ANSWER SECTION:
w-6029-redis-test.service.consul. 0 IN A 192.168.0.20

;; Query time: 5 msec
;; SERVER: 192.168.0.10#8600(192.168.0.10)
;; WHEN: Sat Dec  9 11:35:16 2017
;; MSG SIZE  rcvd: 66

[root@db-server-yayun-02 ~]#
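Incidentally, besides A records Consul's DNS interface also answers SRV queries, which return the registered port as well — handy when instances do not all listen on 6029 (a sketch; output omitted):

dig @192.168.0.10 -p 8600 w-6029-redis-test.service.consul SRV
dig @192.168.0.10 -p 8600 r-6029-redis-test.service.consul SRV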
That covers the basic tests. Next, let's add Sentinel to get real high availability. I have restored the original environment: 20 is the master, 30 and 40 are slaves, and 10 runs Sentinel (in production, deploy 3 or 5 Sentinels as well). There is already a Sentinel on 10 listening on port 36029, so I simply add monitoring for 6029 on 20:
127.0.0.1:36029> sentinel monitor my-test-6029 192.168.0.20 6029 1
OK
127.0.0.1:36029>
127.0.0.1:36029> info Sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
master0:name=my-test-6029,status=ok,address=192.168.0.20:6029,slaves=2,sentinels=1
127.0.0.1:36029>
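For completeness, the equivalent sentinel.conf entries look roughly like this (a sketch; the timeouts are assumptions, and a quorum of 1 only makes sense because this test runs a single Sentinel):

port 36029
sentinel monitor my-test-6029 192.168.0.20 6029 1
sentinel down-after-milliseconds my-test-6029 10000
sentinel failover-timeout my-test-6029 60000
sentinel parallel-syncs my-test-6029 1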
Check the read and write names again now that the environment has been restored:
[root@db-server-yayun-02 ~]# dig @192.168.0.10 -p 8600 w-6029-redis-test.service.consul

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> @192.168.0.10 -p 8600 w-6029-redis-test.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62669
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;w-6029-redis-test.service.consul. IN A

;; ANSWER SECTION:
w-6029-redis-test.service.consul. 0 IN A 192.168.0.20

;; Query time: 2 msec
;; SERVER: 192.168.0.10#8600(192.168.0.10)
;; WHEN: Sat Dec  9 11:43:04 2017
;; MSG SIZE  rcvd: 66

[root@db-server-yayun-02 ~]# dig @192.168.0.10 -p 8600 r-6029-redis-test.service.consul

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> @192.168.0.10 -p 8600 r-6029-redis-test.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41305
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;r-6029-redis-test.service.consul. IN A

;; ANSWER SECTION:
r-6029-redis-test.service.consul. 0 IN A 192.168.0.30
r-6029-redis-test.service.consul. 0 IN A 192.168.0.40

;; Query time: 2 msec
;; SERVER: 192.168.0.10#8600(192.168.0.10)
;; WHEN: Sat Dec  9 11:43:08 2017
;; MSG SIZE  rcvd: 82

[root@db-server-yayun-02 ~]#
Everything looks normal, so now let's kill the Redis master:
[root@db-server-yayun-02 ~]# ifconfig eth1 | grep -oP '(?<=inet addr:)\S+'
192.168.0.20
[root@db-server-yayun-02 ~]# pgrep -fl redis-server | awk '{print $1}' | xargs kill
Check the Sentinel state:
127.0.0.1:36029> info Sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
master0:name=my-test-6029,status=ok,address=192.168.0.30:6029,slaves=2,sentinels=1
127.0.0.1:36029>
The master is now 30. Dig the names again:
[root@db-server-yayun-02 ~]# dig @192.168.0.10 -p 8600 w-6029-redis-test.service.consul

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> @192.168.0.10 -p 8600 w-6029-redis-test.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55527
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;w-6029-redis-test.service.consul. IN A

;; ANSWER SECTION:
w-6029-redis-test.service.consul. 0 IN A 192.168.0.30

;; Query time: 2 msec
;; SERVER: 192.168.0.10#8600(192.168.0.10)
;; WHEN: Sat Dec  9 11:45:46 2017
;; MSG SIZE  rcvd: 66

[root@db-server-yayun-02 ~]# dig @192.168.0.10 -p 8600 r-6029-redis-test.service.consul

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> @192.168.0.10 -p 8600 r-6029-redis-test.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11563
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;r-6029-redis-test.service.consul. IN A

;; ANSWER SECTION:
r-6029-redis-test.service.consul. 0 IN A 192.168.0.40

;; Query time: 1 msec
;; SERVER: 192.168.0.10#8600(192.168.0.10)
;; WHEN: Sat Dec  9 11:45:50 2017
;; MSG SIZE  rcvd: 66

[root@db-server-yayun-02 ~]#
OK, that is exactly the result we wanted. Finally, a few words about DNS.
For the application side to resolve the consul-suffixed names, there are three options for DNS resolution and forwarding:
1. Keep the existing internal DNS servers and forward consul-suffixed queries to the Consul servers (this is what we use in production).
2. Point all DNS at the Consul DNS server and use the recursors option to forward non-consul queries to the original DNS servers.
3. Forward with dnsmasq: server=/consul/10.16.X.X#8600 resolves the consul suffix.
Our internal DNS is BIND, and the official Consul docs have an example of how to do the forwarding with BIND: https://www.consul.io/docs/guides/forwarding.html. We also load-tested Consul's DNS and found no performance problems.
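For option 1, the BIND forward zone looks roughly like this (a sketch based on the official forwarding guide; point the forwarder at your Consul server, and note that DNSSEC validation may need to be disabled for this zone to resolve):

zone "consul" IN {
    type forward;
    forward only;
    forwarders { 192.168.0.10 port 8600; };
};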
References:
https://book-consul-guide.vnzmi.com/
http://www.liangxiansen.cn/2017/04/06/consul/
Summary:
For MySQL and Redis deployed as multiple instances per server, Consul — combined with MHA or Sentinel — makes high availability easy to achieve. Its biggest strengths are that it is lightweight, convenient, and simple. If your applications support read/write splitting it is even more useful: losing one or several slaves does not affect the service.