Contents
Operations to perform on every node
Create the conf, logs, and pid directories used by Redis
Create the Redis data directories
Edit the redis_6380 and redis_6381 configuration files
Start the redis_6380 and redis_6381 instances
On db01 (10.0.0.51):

mkdir -p /opt/redis_cluster/redis_{6380,6381}/{conf,logs,pid}
mkdir -p /data/redis_cluster/redis_{6380,6381}
cat >/opt/redis_cluster/redis_6380/conf/redis_6380.conf<<EOF
bind 10.0.0.51
port 6380
daemonize yes
pidfile "/opt/redis_cluster/redis_6380/pid/redis_6380.pid"
logfile "/opt/redis_cluster/redis_6380/logs/redis_6380.log"
dbfilename "redis_6380.rdb"
dir "/data/redis_cluster/redis_6380/"
cluster-enabled yes
cluster-config-file nodes_6380.conf
cluster-node-timeout 15000
EOF
cat >/opt/redis_cluster/redis_6381/conf/redis_6381.conf<<EOF
bind 10.0.0.51
port 6381
daemonize yes
pidfile "/opt/redis_cluster/redis_6381/pid/redis_6381.pid"
logfile "/opt/redis_cluster/redis_6381/logs/redis_6381.log"
dbfilename "redis_6381.rdb"
dir "/data/redis_cluster/redis_6381/"
cluster-enabled yes
cluster-config-file nodes_6381.conf
cluster-node-timeout 15000
EOF
redis-server /opt/redis_cluster/redis_6380/conf/redis_6380.conf
redis-server /opt/redis_cluster/redis_6381/conf/redis_6381.conf
On db02 (10.0.0.52):

mkdir -p /opt/redis_cluster/redis_{6380,6381}/{conf,logs,pid}
mkdir -p /data/redis_cluster/redis_{6380,6381}
cat >/opt/redis_cluster/redis_6380/conf/redis_6380.conf<<EOF
bind 10.0.0.52
port 6380
daemonize yes
pidfile "/opt/redis_cluster/redis_6380/pid/redis_6380.pid"
logfile "/opt/redis_cluster/redis_6380/logs/redis_6380.log"
dbfilename "redis_6380.rdb"
dir "/data/redis_cluster/redis_6380/"
cluster-enabled yes
cluster-config-file nodes_6380.conf
cluster-node-timeout 15000
EOF
cat >/opt/redis_cluster/redis_6381/conf/redis_6381.conf<<EOF
bind 10.0.0.52
port 6381
daemonize yes
pidfile "/opt/redis_cluster/redis_6381/pid/redis_6381.pid"
logfile "/opt/redis_cluster/redis_6381/logs/redis_6381.log"
dbfilename "redis_6381.rdb"
dir "/data/redis_cluster/redis_6381/"
cluster-enabled yes
cluster-config-file nodes_6381.conf
cluster-node-timeout 15000
EOF
redis-server /opt/redis_cluster/redis_6380/conf/redis_6380.conf
redis-server /opt/redis_cluster/redis_6381/conf/redis_6381.conf
On db03 (10.0.0.53):

mkdir -p /opt/redis_cluster/redis_{6380,6381}/{conf,logs,pid}
mkdir -p /data/redis_cluster/redis_{6380,6381}
cat >/opt/redis_cluster/redis_6380/conf/redis_6380.conf<<EOF
bind 10.0.0.53
port 6380
daemonize yes
pidfile "/opt/redis_cluster/redis_6380/pid/redis_6380.pid"
logfile "/opt/redis_cluster/redis_6380/logs/redis_6380.log"
dbfilename "redis_6380.rdb"
dir "/data/redis_cluster/redis_6380/"
cluster-enabled yes
cluster-config-file nodes_6380.conf
cluster-node-timeout 15000
EOF
cat >/opt/redis_cluster/redis_6381/conf/redis_6381.conf<<EOF
bind 10.0.0.53
port 6381
daemonize yes
pidfile "/opt/redis_cluster/redis_6381/pid/redis_6381.pid"
logfile "/opt/redis_cluster/redis_6381/logs/redis_6381.log"
dbfilename "redis_6381.rdb"
dir "/data/redis_cluster/redis_6381/"
cluster-enabled yes
cluster-config-file nodes_6381.conf
cluster-node-timeout 15000
EOF
redis-server /opt/redis_cluster/redis_6380/conf/redis_6380.conf
redis-server /opt/redis_cluster/redis_6381/conf/redis_6381.conf
Verify that both instances are listening on each node (the 1638x ports are the cluster bus, i.e. the data port plus 10000):

[root@db01 ~]# netstat -lntup|grep redis
tcp 0 0 10.0.0.51:6380  0.0.0.0:* LISTEN 32568/redis-server
tcp 0 0 10.0.0.51:6381  0.0.0.0:* LISTEN 32564/redis-server
tcp 0 0 10.0.0.51:16380 0.0.0.0:* LISTEN 32568/redis-server
tcp 0 0 10.0.0.51:16381 0.0.0.0:* LISTEN 32564/redis-server
A distributed store needs a mechanism for maintaining node metadata — which data each node is responsible for, whether a node has failed, and other state information. Redis Cluster uses the Gossip protocol for this: nodes continuously exchange messages with each other, and after a while every node knows the complete state of the cluster, much like a rumor spreading.
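As a quick illustration of gossip convergence (a minimal sketch using the addresses from this walkthrough; it is only meaningful once the nodes have met, which happens below), you can ask every instance how many nodes it knows about — after gossip settles, all six report the same count:

for ip in 10.0.0.51 10.0.0.52 10.0.0.53; do
  for port in 6380 6381; do
    echo -n "$ip:$port -> "
    # each node's own view of the cluster, propagated via gossip
    redis-cli -h $ip -p $port cluster info | grep cluster_known_nodes
  done
done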
meet
meet message: used to notify a new node to join. The sender invites the receiver into the current cluster; once the meet exchange completes successfully, the receiving node joins the cluster and starts exchanging ping and pong messages.
fail
fail message: when a node decides that another node in the cluster is offline, it broadcasts a fail message to the whole cluster; when the other nodes receive it, they mark the corresponding node as offline.
At this point each node can only see itself in the cluster; the nodes have not discovered each other yet:
[root@db01 ~]# sh redis_shell.sh login 6380
10.0.0.51:6380> cluster nodes
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380 myself,master - 0 0 0 connected
The generated cluster configuration files:
[root@db01 ~]# tree /data/redis_cluster/redis_638*
/data/redis_cluster/redis_6380
└── nodes_6380.conf
/data/redis_cluster/redis_6381
└── nodes_6381.conf
For now the configuration file only contains the node's own ID; after the meet step the other nodes' IDs are written in as well, and the IDs shown by cluster nodes match the ones in the configuration file.
10.0.0.51:6380> cluster nodes
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380 myself,master - 0 0 0 connected
[root@db01 ~]# cat /data/redis_cluster/redis_6380/nodes_6380.conf
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
In cluster mode, Redis maintains an extra cluster configuration file in addition to the normal one. Whenever cluster membership changes — a node is added, a node goes offline, a failover happens — each node automatically saves the cluster state to this file. Note that Redis maintains this file by itself; do not edit it manually, to avoid inconsistencies when a node restarts.
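To convince yourself that the file really mirrors the runtime view (a sketch assuming the paths used above; run it on db01), diff the node IDs reported by cluster nodes against the ones recorded in nodes_6380.conf:

diff <(redis-cli -h 10.0.0.51 -p 6380 cluster nodes | awk '{print $1}' | sort) \
     <(grep -v '^vars' /data/redis_cluster/redis_6380/nodes_6380.conf | awk '{print $1}' | sort) \
  && echo "nodes_6380.conf matches the live cluster view"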
Configure node discovery and test it:
[root@db01 ~]# sh redis_shell.sh login 6380
10.0.0.51:6380> CLUSTER MEET 10.0.0.51 6381
OK
10.0.0.51:6380> CLUSTER MEET 10.0.0.52 6380
OK
10.0.0.51:6380> CLUSTER MEET 10.0.0.52 6381
OK
10.0.0.51:6380> CLUSTER MEET 10.0.0.53 6380
OK
10.0.0.51:6380> CLUSTER MEET 10.0.0.53 6381
OK
10.0.0.51:6380> cluster nodes
68af205aad42909db61013ae2d0f9d2ec49cb5b9 10.0.0.52:6380 master - 0 1562242149129 2 connected
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380 myself,master - 0 0 1 connected
f5248261ef32638fc11966cdeedff96ab197b812 10.0.0.53:6380 master - 0 1562242147114 0 connected
c847d86eb040a5cbeaeddef225beecba22f401b2 10.0.0.51:6381 master - 0 1562242146109 3 connected
0815edc37378ce8c1bed1b46a460b410f231937d 10.0.0.53:6381 master - 0 1562242148121 5 connected
eb67970ebb0eb3512f09b7be79128bf736eaccb5 10.0.0.52:6381 master - 0 1562242150135 4 connected
After discovery completes, every node's configuration file contains the other nodes' information:
[root@db01 ~]# cat /data/redis_cluster/redis_6380/nodes_6380.conf
68af205aad42909db61013ae2d0f9d2ec49cb5b9 10.0.0.52:6380 master - 0 1562242117909 2 connected
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380 myself,master - 0 0 1 connected
f5248261ef32638fc11966cdeedff96ab197b812 10.0.0.53:6380 master - 0 1562242116499 0 connected
c847d86eb040a5cbeaeddef225beecba22f401b2 10.0.0.51:6381 master - 0 1562242118914 3 connected
0815edc37378ce8c1bed1b46a460b410f231937d 10.0.0.53:6381 master - 0 1562242118010 5 connected
eb67970ebb0eb3512f09b7be79128bf736eaccb5 10.0.0.52:6381 master - 0 1562242119920 4 connected
vars currentEpoch 5 lastVoteEpoch 0
[root@db01 ~]# cat /data/redis_cluster/redis_6381/nodes_6381.conf
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380 master - 0 1562242122540 1 connected
68af205aad42909db61013ae2d0f9d2ec49cb5b9 10.0.0.52:6380 master - 0 1562242118512 2 connected
0815edc37378ce8c1bed1b46a460b410f231937d 10.0.0.53:6381 master - 0 1562242123546 5 connected
eb67970ebb0eb3512f09b7be79128bf736eaccb5 10.0.0.52:6381 master - 0 1562242121533 4 connected
f5248261ef32638fc11966cdeedff96ab197b812 10.0.0.53:6380 master - 0 1562242120524 0 connected
c847d86eb040a5cbeaeddef225beecba22f401b2 10.0.0.51:6381 myself,master - 0 0 3 connected
vars currentEpoch 5 lastVoteEpoch 0
Although the nodes have now discovered each other, the cluster is still unusable, because no slots have been assigned to the nodes — and the cluster only becomes available once all of the slots have been assigned.
Put the other way around: if even a single slot is left unassigned, the whole cluster is unavailable.
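You can see this from cluster info as well (a quick check; before any slots are assigned, the state is fail and the assigned count is 0):

redis-cli -h 10.0.0.51 -p 6380 cluster info | grep -E 'cluster_state|cluster_slots_assigned'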
Check the error:
[root@db01 ~]# sh redis_shell.sh login 6380
10.0.0.51:6380> set k1 v1
(error) CLUSTERDOWN Hash slot not served    # the cluster is down: the hash slots are not being served
Assigning slots
Although there are 6 nodes, only 3 of them actually handle data writes; the other 3 act as replicas of the masters. That means slots only need to be assigned to three of the nodes.
How to assign slots:
Slot assignment has to be performed on each master node; there are two ways to do it:
1. Log in to each master's client and run the command locally.
2. From one machine, use the redis client to log in to the masters on the other machines remotely and run the command.
The commands below use the second approach.
redis-cli -h 10.0.0.51 -p 6380 cluster addslots {0..5461}
redis-cli -h 10.0.0.52 -p 6380 cluster addslots {5462..10922}
redis-cli -h 10.0.0.53 -p 6380 cluster addslots {10923..16383}
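For reference, these ranges are just the 16384 slots split as evenly as possible across 3 masters, with the single leftover slot going to the first. A small bash sketch of that arithmetic (the loop is illustrative, not part of the original procedure):

N=3; TOTAL=16384
start=0
for i in $(seq 1 $N); do
  # spread the remainder (TOTAL % N slots) over the first masters
  size=$(( TOTAL / N + (i <= TOTAL % N ? 1 : 0) ))
  echo "master $i: slots $start..$(( start + size - 1 ))"
  start=$(( start + size ))
done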
Check the cluster node information — the slots are now assigned correctly:
10.0.0.51:6380> cluster nodes
68af205aad42909db61013ae2d0f9d2ec49cb5b9 10.0.0.52:6380 master - 0 1562243748164 2 connected 5462-10922
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380 myself,master - 0 0 1 connected 0-5461
f5248261ef32638fc11966cdeedff96ab197b812 10.0.0.53:6380 master - 0 1562243745141 0 connected 10923-16383
c847d86eb040a5cbeaeddef225beecba22f401b2 10.0.0.51:6381 master - 0 1562243742121 3 connected
0815edc37378ce8c1bed1b46a460b410f231937d 10.0.0.53:6381 master - 0 1562243749173 5 connected
eb67970ebb0eb3512f09b7be79128bf736eaccb5 10.0.0.52:6381 master - 0 1562243746147 4 connected
Check the cluster information — the state is now ok:
10.0.0.51:6380> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_sent:3478
cluster_stats_messages_received:3478
The cluster is usable now, but if any one machine dies, the whole cluster becomes unavailable again. So the next step is to make the other three nodes replicas of the three current masters, so that when a master fails the cluster can fail over automatically and stay available.
Notes:
1. Never have a replica replicate the master on its own machine — if that machine dies, the cluster is still unavailable. A replica must replicate a master on a different server.
2. When the redis-trib tool assigns replicas automatically, a replica can end up on the same machine as its master, so watch out for that.
Paste the cluster nodes output into a text file; the replication relationships should be as follows:
db01:6381-->db02:6380
db02:6381-->db03:6380
db03:6381-->db01:6380
List the node IDs to make it easy to verify that the replication pairs are correct:
[root@db01 ~]# redis-cli -c -h db01 -p 6381 cluster nodes|grep -v "6381"|awk '{print $1,$2}'
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380
68af205aad42909db61013ae2d0f9d2ec49cb5b9 10.0.0.52:6380
f5248261ef32638fc11966cdeedff96ab197b812 10.0.0.53:6380
[root@db01 ~]# redis-cli -c -h db01 -p 6381 cluster nodes|grep -v "6380"|awk '{print $1,$2}'
0815edc37378ce8c1bed1b46a460b410f231937d 10.0.0.53:6381
eb67970ebb0eb3512f09b7be79128bf736eaccb5 10.0.0.52:6381
c847d86eb040a5cbeaeddef225beecba22f401b2 10.0.0.51:6381

Set up the replication relationships:
redis-cli -c -h db01 -p 6381 cluster replicate 68af205aad42909db61013ae2d0f9d2ec49cb5b9
redis-cli -c -h db02 -p 6381 cluster replicate f5248261ef32638fc11966cdeedff96ab197b812
redis-cli -c -h db03 -p 6381 cluster replicate 215158ede75cadd1c9a8fccb99278d0da3c5de48
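To verify the pairing (a quick check against the live cluster; each line prints a replica's address followed by the node ID of its master, which should match the plan above):

redis-cli -c -h db01 -p 6380 cluster nodes | awk '$3~/slave/{print $2, "replicates", $4}'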
Let's write data into the cluster the ordinary way and see what happens:
[root@db01 ~]# redis-cli -h db01 -p 6380 set k1 v1
(error) MOVED 12706 10.0.0.53:6380
The result is an error, but it includes the address of another cluster node. Note that the data is not written to the .53 node at this point either — the plain client is only told where to go.
Because data in a cluster is sharded, writing on a given machine does not mean the data lands on that machine's node. Reading and writing in a cluster therefore involves another concept: ASK routing.
In cluster mode, when Redis receives any key-related command it first computes the key's slot, then looks up the node that owns that slot.
If that node is itself, it executes the command;
otherwise it replies with a MOVED redirection error telling the client which node to ask instead. This process is called MOVED redirection.
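You can compute a key's slot yourself with the CLUSTER KEYSLOT command (the slot is a CRC16 hash of the key taken modulo 16384); it reproduces the slot number from the MOVED error above:

[root@db01 ~]# redis-cli -h db01 -p 6380 cluster keyslot k1
(integer) 12706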
With the -c flag, redis-cli follows the redirection and routes to the correct node automatically:
[root@db01 ~]# redis-cli -h db01 -c -p 6380
db01:6380> set k1 v1
-> Redirected to slot [12706] located at 10.0.0.53:6380
OK
Insert a batch of data; it should be distributed quite evenly across the nodes — a spread within about 2% is normal.
for i in {0..1000};do redis-cli -c -h db01 -p 6380 set 58NB_${i} 58V5_${i};done
[root@db01 ~]# redis-cli -h db01 -c -p 6380 DBSIZE
(integer) 670
[root@db01 ~]# redis-cli -h db02 -c -p 6380 DBSIZE
(integer) 660
[root@db01 ~]# redis-cli -h db03 -c -p 6380 DBSIZE
(integer) 672
The requested keys are spread across slots on different servers:
10.0.0.51:6380> get 58NB_1000
"58V5_1000"
10.0.0.51:6380> get 58NB_999
-> Redirected to slot [6540] located at 10.0.0.52:6380
"58V5_999"
10.0.0.52:6380> get 58NB_699
-> Redirected to slot [13757] located at 10.0.0.53:6380
"58V5_699"
Now let's simulate a failure: stop the Redis node on one of the machines and watch how the cluster changes.
We'll brutally kill the cluster node on db02, then observe the node states.
Ideally, the 6381 replica on db01 should be promoted to master.
[root@db02 ~]# ps -ef | grep redis
root 8257 1 0 19:55 ? 00:00:07 redis-server 10.0.0.52:6380 [cluster]
root 8261 1 0 19:55 ? 00:00:07 redis-server 10.0.0.52:6381 [cluster]
root 8576 8526 0 22:49 pts/0 00:00:00 grep --color=auto redis
[root@db02 ~]# kill 8257
Check the node information after the failover (the ID column has been removed below for readability).
You can see that db01's 6381 node has become a master:
[root@db01 ~]# redis-cli -h db01 -c -p 6380 cluster nodes
10.0.0.52:6380 master,fail - 1562251836868 1562251833945 2 disconnected
10.0.0.51:6380 myself,master - 0 0 1 connected 0-5461
10.0.0.53:6380 master - 0 1562251999666 0 connected 10923-16383
10.0.0.51:6381 master - 0 1562251998652 6 connected 5462-10922
10.0.0.53:6381 slave 215158ede75cadd1c9a8fccb99278d0da3c5de48 0 1562251999159 5 connected
10.0.0.52:6381 slave f5248261ef32638fc11966cdeedff96ab197b812 0 1562252000675 4 connected
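A quick way to confirm which node now owns the failed master's slot range (a one-liner sketch; 5462-10922 is the range assigned to db02's 6380 earlier):

redis-cli -c -h db01 -p 6380 cluster nodes | awk '/connected 5462-10922/{print $2}'
# expected output: 10.0.0.51:6381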
We've now tested the failover function, but a repaired node still has to be brought back online, so next we test what happens when the node rejoins.
Restart 6380 on db02, then watch the logs.
[root@db02 ~]# redis-server /opt/redis_cluster/redis_6380/conf/redis_6380.conf
[root@db02 ~]# ps -ef | grep redis
root 8261 1 0 19:55 ? 00:00:08 redis-server 10.0.0.52:6381 [cluster]
root 8619 1 0 22:59 ? 00:00:00 redis-server 10.0.0.52:6380 [cluster]
You can see that after db02's 6380 comes back online, it syncs from .51's 6381 (its former replica, now the master):
[root@db02 ~]# sh redis_shell.sh tail 6380
8619:S 04 Jul 22:59:54.217 * Connecting to MASTER 10.0.0.51:6381
8619:S 04 Jul 22:59:54.217 * MASTER <-> SLAVE sync started
8619:S 04 Jul 22:59:54.217 * Non blocking connect for SYNC fired the event.
8619:S 04 Jul 22:59:54.218 * Master replied to PING, replication can continue...
8619:S 04 Jul 22:59:54.218 * Partial resynchronization not possible (no cached master)
8619:S 04 Jul 22:59:54.223 * Full resync from master: c75849d817eac34104812970ea0d80ef06448e91:1
8619:S 04 Jul 22:59:54.303 * MASTER <-> SLAVE sync: receiving 10524 bytes from master
8619:S 04 Jul 22:59:54.303 * MASTER <-> SLAVE sync: Flushing old data
8619:S 04 Jul 22:59:54.303 * MASTER <-> SLAVE sync: Loading DB in memory
8619:S 04 Jul 22:59:54.304 * MASTER <-> SLAVE sync: Finished with success
Now look at db01's log:
the FAIL state for db02's 6380 is cleared, and the sync with slave 10.0.0.52:6380 succeeds:
[root@db01 ~]# sh redis_shell.sh tail 6381
32564:M 04 Jul 22:50:53.868 # Cluster state changed: ok
32564:M 04 Jul 22:59:53.294 * Clear FAIL state for node 68af205aad42909db61013ae2d0f9d2ec49cb5b9: master without slots is reachable again.
32564:M 04 Jul 22:59:54.217 * Slave 10.0.0.52:6380 asks for synchronization
32564:M 04 Jul 22:59:54.217 * Full resync requested by slave 10.0.0.52:6380
32564:M 04 Jul 22:59:54.217 * Starting BGSAVE for SYNC with target: disk
32564:M 04 Jul 22:59:54.219 * Background saving started by pid 37006
37006:C 04 Jul 22:59:54.230 * DB saved on disk
37006:C 04 Jul 22:59:54.230 * RDB: 6 MB of memory used by copy-on-write
32564:M 04 Jul 22:59:54.300 * Background saving terminated with success
32564:M 04 Jul 22:59:54.301 * Synchronization with slave 10.0.0.52:6380 succeeded
If we now want the repaired node to take back its master role, we can run CLUSTER FAILOVER on the replica that should become the master.
Here we run it on db02's 6380:
[root@db02 ~]# sh redis_shell.sh login 6380
10.0.0.52:6380> CLUSTER FAILOVER
OK
[root@db02 ~]# sh redis_shell.sh tail 6380
8619:M 04 Jul 23:10:31.899 * Caching the disconnected master state.
8619:M 04 Jul 23:10:31.899 * Discarding previously cached master state.
8619:M 04 Jul 23:10:32.404 * Slave 10.0.0.51:6381 asks for synchronization
8619:M 04 Jul 23:10:32.404 * Full resync requested by slave 10.0.0.51:6381
8619:M 04 Jul 23:10:32.404 * Starting BGSAVE for SYNC with target: disk
8619:M 04 Jul 23:10:32.405 * Background saving started by pid 8686
8686:C 04 Jul 23:10:32.410 * DB saved on disk
8686:C 04 Jul 23:10:32.411 * RDB: 6 MB of memory used by copy-on-write
8619:M 04 Jul 23:10:32.499 * Background saving terminated with success
8619:M 04 Jul 23:10:32.500 * Synchronization with slave 10.0.0.51:6381 succeeded
The log shows that 10.0.0.51:6381 has become a replica of 10.0.0.52:6380 again.
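To double-check the roles after the failback (a small verification; INFO replication reports each instance's current role):

redis-cli -h 10.0.0.52 -p 6380 info replication | grep ^role   # should print role:master
redis-cli -h 10.0.0.51 -p 6381 info replication | grep ^role   # should print role:slave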
Building the cluster by hand is useful for understanding the process and its details, but it takes many steps, and with a large number of nodes the complexity and operational cost grow quickly. For this reason the official distribution provides the redis-trib.rb tool for building a cluster quickly.
redis-trib.rb is a Redis cluster management tool written in Ruby. Internally it uses the Cluster commands to simplify common operations such as cluster creation, checking, slot migration, and rebalancing. A Ruby environment must be installed before using it.
Installation commands:
yum makecache fast
yum install rubygems
gem sources --remove https://rubygems.org/
gem sources -a http://mirrors.aliyun.com/rubygems/
gem update --system
gem install redis -v 3.3.5
We can stop all the nodes and wipe their data to get back to a brand-new cluster. Run on every node server:
pkill redis
rm -rf /data/redis_cluster/redis_6380/*
rm -rf /data/redis_cluster/redis_6381/*
After everything is wiped, start all the nodes again. Run on every node server:
sh redis_shell.sh start 6380
sh redis_shell.sh start 6381
Run the cluster-creation command on db01:
cd /opt/redis_cluster/redis/src/
./redis-trib.rb create --replicas 1 10.0.0.51:6380 10.0.0.52:6380 10.0.0.53:6380 10.0.0.51:6381 10.0.0.52:6381 10.0.0.53:6381
Check cluster integrity and slot status.
The tool has a bug: one node always ends up replicating the master on its own machine, so we still need to fix the replication relationships afterwards.
[root@db01 /opt/redis_cluster/redis/src]# ./redis-trib.rb check 10.0.0.51:6380
>>> Performing Cluster Check (using node 10.0.0.51:6380)
M: ac14a416ef65d4d03fb4ad528ecbd7271296ba3a 10.0.0.51:6380
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 37e728a2d12aedc1e5b7732d88d2aed9f684fd73 10.0.0.51:6381
   slots: (0 slots) slave
   replicates 876e7ced4441cda59aa19d51051af6459a5c90d4
M: c2349ca206f3747c140a83cfef10e78845bed2b3 10.0.0.53:6380
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 876e7ced4441cda59aa19d51051af6459a5c90d4 10.0.0.52:6380
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: aa10948d4289aa3eabaf661ba5dc7459eac37adf 10.0.0.53:6381
   slots: (0 slots) slave
   replicates c2349ca206f3747c140a83cfef10e78845bed2b3
S: 3ca828a23de48c997ce3d6515bde225016c57b68 10.0.0.52:6381
   slots: (0 slots) slave
   replicates ac14a416ef65d4d03fb4ad528ecbd7271296ba3a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Check the slot status:
[root@db01 /opt/redis_cluster/redis/src]# ./redis-trib.rb rebalance 10.0.0.51:6380
>>> Performing Cluster Check (using node 10.0.0.51:6380)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
*** No rebalancing needed! All nodes are within the 2.0% threshold.
Sure enough, 53's 6381 replicates 53's 6380:
S: aa10948d4289aa3eabaf661ba5dc7459eac37adf 10.0.0.53:6381
   slots: (0 slots) slave
   replicates c2349ca206f3747c140a83cfef10e78845bed2b3
M: c2349ca206f3747c140a83cfef10e78845bed2b3 10.0.0.53:6380

Current replication relationships:
10.0.0.51:6381-->10.0.0.52:6380 876e7ced4441cda59aa19d51051af6459a5c90d4
10.0.0.52:6381-->10.0.0.51:6380 ac14a416ef65d4d03fb4ad528ecbd7271296ba3a
10.0.0.53:6381-->10.0.0.53:6380 c2349ca206f3747c140a83cfef10e78845bed2b3
The 10.0.0.51:6381-->10.0.0.52:6380 pair is already correct and needs no change; fix the other two:
redis-cli -c -h db02 -p 6381 cluster replicate c2349ca206f3747c140a83cfef10e78845bed2b3
redis-cli -c -h db03 -p 6381 cluster replicate ac14a416ef65d4d03fb4ad528ecbd7271296ba3a
The resulting replication layout:
10.0.0.51:6381-->10.0.0.52:6380 876e7ced4441cda59aa19d51051af6459a5c90d4
10.0.0.52:6381-->10.0.0.53:6380 c2349ca206f3747c140a83cfef10e78845bed2b3
10.0.0.53:6381-->10.0.0.51:6380 ac14a416ef65d4d03fb4ad528ecbd7271296ba3a
Compared with the layout above, the result is correct:
[root@db01 ~]# redis-cli -c -h db03 -p 6381 cluster nodes|awk '$3~/slave/{print $2,$3,$4}'
10.0.0.51:6381 slave 876e7ced4441cda59aa19d51051af6459a5c90d4
10.0.0.52:6381 slave c2349ca206f3747c140a83cfef10e78845bed2b3
10.0.0.53:6381 myself,slave ac14a416ef65d4d03fb4ad528ecbd7271296ba3a
That completes the cluster deployment.
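As a final smoke test (a minimal sketch; the key name final_check is just an example), write a key through the cluster-aware client, read it back via another node, and confirm the cluster state:

redis-cli -c -h db01 -p 6380 set final_check ok
redis-cli -c -h db02 -p 6380 get final_check                    # -c follows any MOVED redirection
redis-cli -c -h db01 -p 6380 cluster info | grep cluster_state  # should print cluster_state:ok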