The previous post mainly covered deploying and configuring redis cluster, setting up the ruby environment required by the redis-trib.rb tool, and using redis-trib.rb to create a cluster and inspect cluster information; for a refresher see http://www.javashuo.com/article/p-fqvfxfou-mx.html. Today we continue with redis-trib.rb and look at how to manage the nodes of a redis 3/4 cluster.
Adding a new node to an existing cluster
Environment
To add new nodes to an existing cluster, the new instances must first run the same redis version and use the same auth password as the cluster, and the hardware should ideally match as well; then start the two redis servers. To save machines, in this demo I simply start two more instances on node03 to stand in for two new redis servers. The environment is as follows.
Directory layout
[root@node03 redis]# ll
total 12
drwxr-xr-x 5 root root  40 Aug  5 22:57 6379
drwxr-xr-x 5 root root  40 Aug  5 22:57 6380
drwxr-xr-x 2 root root 134 Aug  5 22:16 bin
-rw-r--r-- 1 root root 175 Aug  8 08:35 dump.rdb
-rw-r--r-- 1 root root 803 Aug  8 08:35 redis-cluster_6379.conf
-rw-r--r-- 1 root root 803 Aug  8 08:35 redis-cluster_6380.conf
[root@node03 redis]# mkdir {6381,6382}/{etc,logs,run} -p
[root@node03 redis]# tree
.
├── 6379
│   ├── etc
│   │   ├── redis.conf
│   │   └── sentinel.conf
│   ├── logs
│   │   └── redis_6379.log
│   └── run
├── 6380
│   ├── etc
│   │   ├── redis.conf
│   │   └── sentinel.conf
│   ├── logs
│   │   └── redis_6380.log
│   └── run
├── 6381
│   ├── etc
│   ├── logs
│   └── run
├── 6382
│   ├── etc
│   ├── logs
│   └── run
├── bin
│   ├── redis-benchmark
│   ├── redis-check-aof
│   ├── redis-check-rdb
│   ├── redis-cli
│   ├── redis-sentinel -> redis-server
│   └── redis-server
├── dump.rdb
├── redis-cluster_6379.conf
└── redis-cluster_6380.conf

17 directories, 15 files
[root@node03 redis]#
Copy the config file into each new instance's etc/ directory
[root@node03 redis]# cp 6379/etc/redis.conf 6381/etc/
[root@node03 redis]# cp 6379/etc/redis.conf 6382/etc/
Update the port-related settings in the copied config files
[root@node03 redis]# sed -ri 's@6379@6381@g' 6381/etc/redis.conf
[root@node03 redis]# sed -ri 's@6379@6382@g' 6382/etc/redis.conf
Verify the config files
[root@node03 redis]# grep -E "^(port|cluster|logfile)" 6381/etc/redis.conf
port 6381
logfile "/usr/local/redis/6381/logs/redis_6381.log"
cluster-enabled yes
cluster-config-file redis-cluster_6381.conf
[root@node03 redis]# grep -E "^(port|cluster|logfile)" 6382/etc/redis.conf
port 6382
logfile "/usr/local/redis/6382/logs/redis_6382.log"
cluster-enabled yes
cluster-config-file redis-cluster_6382.conf
[root@node03 redis]#
Tip: if the config files under each directory look correct, we can go ahead and start the redis instances.
Start redis
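A minimal sketch of the startup step, assuming the directory layout above (start the service however you normally manage it):

[root@node03 redis]# redis-server /usr/local/redis/6381/etc/redis.conf
[root@node03 redis]# redis-server /usr/local/redis/6382/etc/redis.conf
[root@node03 redis]# ss -tnl | grep -E "638[12]"

If both instances start cleanly, 6381 and 6382 (plus their cluster bus ports 16381 and 16382) should show up as listening.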
Tip: we can see that the new ports are now listening; next we can use redis-trib.rb to add the two nodes to the cluster.
Add the new node to the cluster
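The add-node run looks roughly like this (a sketch, pointing the new node at the existing node 192.168.0.41:6379):

[root@node01 ~]# redis-trib.rb add-node 192.168.0.43:6381 192.168.0.41:6379

redis-trib.rb performs a cluster check, sends CLUSTER MEET to the new node, and should finish with "[OK] New node added correctly."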
Tip: add-node adds a node to the cluster; you first specify the ip address and port of the node being added, followed by the ip address and port of any node that is already in the cluster. From the information above we can see that 192.168.0.43:6381 has successfully joined the cluster, but it holds no slots and has no slave yet.
Assign slots to the new node
Tip: running reshard against the address and port of any node in the cluster starts a resharding operation. Resharding asks how many slots to move, the ID of the node that will receive those slots, and which nodes the slots should come from; answering all means every node that currently holds slots. If you specify the sources by hand, enter each source node's ID and finish with done to indicate the source node list is complete. redis-trib.rb then prints a slot-movement plan for us to confirm.
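The interactive prompts at the start of the session look like the following sketch (the answers shown — 4096 slots, the new node's ID as receiver, and all as the source — are the ones that produce the plan below):

[root@node01 ~]# redis-trib.rb reshard 192.168.0.41:6379
>>> Performing Cluster Check (using node 192.168.0.41:6379)
......cluster check output omitted......
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 0449aa43657d46f487107bfe49344701526b11d8
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all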
Ready to move 4096 slots.
  Source nodes:
    M: 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379
       slots:0-5460 (5461 slots) master
       1 additional replica(s)
    M: a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379
       slots:10923-16383 (5461 slots) master
       1 additional replica(s)
    M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
       slots:5461-10922 (5462 slots) master
       1 additional replica(s)
  Destination node:
    M: 0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381
       slots: (0 slots) master
       0 additional replica(s)
  Resharding plan:
    Moving slot 5461 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 5462 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 5463 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 5464 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 5465 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 5466 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 5467 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 5468 from 91169e71359deed96f8778cf31c823dbd6ded350
    ......output truncated......
    Moving slot 12281 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 12282 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 12283 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 12284 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 12285 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 12286 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 12287 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Tip: entering yes accepts the plan above.
Moving slot 1177 from 192.168.0.41:6379 to 192.168.0.43:6381:
Moving slot 1178 from 192.168.0.41:6379 to 192.168.0.43:6381:
Moving slot 1179 from 192.168.0.41:6379 to 192.168.0.43:6381:
Moving slot 1180 from 192.168.0.41:6379 to 192.168.0.43:6381:
[ERR] Calling MIGRATE: ERR Syntax error, try CLIENT (LIST | KILL | GETNAME | SETNAME | PAUSE | REPLY)
[root@node01 ~]#
Tip: the error above occurs because slot 1180 on 192.168.0.41:6379 has data bound to it. Note that when reassigning slots in a cluster, the slots being moved must not have data bound to them; slots that still hold data cannot be moved this way. So in practice, reassigning slots usually means taking a maintenance window, copying the data off to another server, and loading it back after the slots have been reassigned.
Clear the data
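Roughly what the clean-up looks like (a sketch, assuming the keys on 192.168.0.41:6379 have already been copied elsewhere and can simply be dropped):

[root@node01 ~]# redis-cli -h 192.168.0.41 -p 6379
192.168.0.41:6379> AUTH admin
OK
192.168.0.41:6379> FLUSHDB
OK
192.168.0.41:6379> quit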
Fix the cluster
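The fix step uses the same command we will see again later in this post; it scans for open (half-migrated) slots and completes the pending migration:

[root@node01 ~]# redis-trib.rb fix 192.168.0.41:6379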
Reassign the slots again
[root@node01 ~]# redis-trib.rb reshard 192.168.0.41:6379
>>> Performing Cluster Check (using node 192.168.0.41:6379)
M: 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379
   slots:1181-5460 (4280 slots) master
   1 additional replica(s)
S: 62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380
   slots: (0 slots) slave
   replicates a7ace08c36f7d55c4f28463d72865aa1ff74829e
S: dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380
   slots: (0 slots) slave
   replicates 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855
M: a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
M: 0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381
   slots:0-1180,5461-6826 (2547 slots) master
   0 additional replica(s)
S: e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379
   slots: (0 slots) slave
   replicates 91169e71359deed96f8778cf31c823dbd6ded350
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 0449aa43657d46f487107bfe49344701526b11d8
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all

Ready to move 4096 slots.
  Source nodes:
    M: 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379
       slots:1181-5460 (4280 slots) master
       1 additional replica(s)
    M: a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379
       slots:10923-16383 (5461 slots) master
       1 additional replica(s)
    M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
       slots:6827-10922 (4096 slots) master
       1 additional replica(s)
  Destination node:
    M: 0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381
       slots:0-1180,5461-6826 (2547 slots) master
       0 additional replica(s)
  Resharding plan:
    Moving slot 10923 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 10924 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 10925 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 10926 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 10927 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 10928 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    Moving slot 10929 from a7ace08c36f7d55c4f28463d72865aa1ff74829e
    ......output truncated......
    Moving slot 8033 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 8034 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 8035 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 8036 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 8037 from 91169e71359deed96f8778cf31c823dbd6ded350
    Moving slot 8038 from 91169e71359deed96f8778cf31c823dbd6ded350
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 10923 from 192.168.0.43:6379 to 192.168.0.43:6381:
Moving slot 10924 from 192.168.0.43:6379 to 192.168.0.43:6381:
Moving slot 10925 from 192.168.0.43:6379 to 192.168.0.43:6381:
Moving slot 10926 from 192.168.0.43:6379 to 192.168.0.43:6381:
Moving slot 10927 from 192.168.0.43:6379 to 192.168.0.43:6381:
......output truncated......
Moving slot 8035 from 192.168.0.43:6380 to 192.168.0.43:6381:
Moving slot 8036 from 192.168.0.43:6380 to 192.168.0.43:6381:
Moving slot 8037 from 192.168.0.43:6380 to 192.168.0.43:6381:
Moving slot 8038 from 192.168.0.43:6380 to 192.168.0.43:6381:
[root@node01 ~]#
Tip: if the second reshard runs without errors, the slot reassignment has completed.
Verify the cluster's slot allocation
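redis-trib.rb info gives the quickest view; at this point it reports roughly the following (four masters, with the new 192.168.0.43:6381 holding 6642 slots and no slave yet):

[root@node01 ~]# redis-trib.rb info 192.168.0.41:6379
192.168.0.41:6379 (8c785e6e...) -> 0 keys | 3014 slots | 1 slaves.
192.168.0.43:6379 (a7ace08c...) -> 0 keys | 3844 slots | 1 slaves.
192.168.0.43:6380 (91169e71...) -> 0 keys | 2884 slots | 1 slaves.
192.168.0.43:6381 (0449aa43...) -> 0 keys | 6642 slots | 0 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.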
Tip: from the output above we can see the new node ended up with 6642 slots rather than an even share. The reason is that the first reshard had already moved 2547 slots before it errored out, and those slots are not rolled back to zero; when we then ran a second reshard of 4096 slots, the new node ended up with 6642 slots in total. The slots are now assigned, but this master still has no slave.
Add a slave for the new node
Tip: to give the new master a slave, first add the slave node to the cluster, then configure it to replicate the chosen master.
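Joining 192.168.0.43:6382 to the cluster uses the same add-node subcommand as before (a sketch; the node joins as an empty master until we make it a replica):

[root@node01 ~]# redis-trib.rb add-node 192.168.0.43:6382 192.168.0.41:6379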
Make the newly added node 192.168.0.43:6382 replicate the new master
[root@node01 ~]# redis-trib.rb info 192.168.0.41:6379
192.168.0.41:6379 (8c785e6e...) -> 0 keys | 3014 slots | 1 slaves.
192.168.0.43:6382 (6df33baf...) -> 0 keys | 0 slots | 0 slaves.
192.168.0.43:6379 (a7ace08c...) -> 0 keys | 3844 slots | 1 slaves.
192.168.0.43:6380 (91169e71...) -> 0 keys | 2884 slots | 1 slaves.
192.168.0.43:6381 (0449aa43...) -> 0 keys | 6642 slots | 0 slaves.
[OK] 0 keys in 5 masters.
0.00 keys per slot on average.
[root@node01 ~]#
[root@node01 ~]# redis-cli -h 192.168.0.43 -p 6382
192.168.0.43:6382> AUTH admin
OK
192.168.0.43:6382> info replication
# Replication
role:master
connected_slaves:0
master_replid:69716e1d83cd44fba96d10e282a6534983b3ab8c
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
192.168.0.43:6382> CLUSTER NODES
0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381@16381 master - 0 1596851725000 12 connected 0-2446 5461-8038 10923-12539
91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380@16380 master - 0 1596851725354 8 connected 8039-10922
8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379@16379 master - 0 1596851726377 11 connected 2447-5460
62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380@16380 slave a7ace08c36f7d55c4f28463d72865aa1ff74829e 0 1596851725762 3 connected
e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379@16379 slave 91169e71359deed96f8778cf31c823dbd6ded350 0 1596851724334 8 connected
6df33baf68995c61494a06c06af18045ca5a04f6 192.168.0.43:6382@16382 myself,master - 0 1596851723000 0 connected
dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380@16380 slave 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 0 1596851723000 11 connected
a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379@16379 master - 0 1596851723311 3 connected 12540-16383
192.168.0.43:6382> CLUSTER REPLICATE 0449aa43657d46f487107bfe49344701526b11d8
OK
192.168.0.43:6382> CLUSTER NODES
0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381@16381 master - 0 1596851781000 12 connected 0-2446 5461-8038 10923-12539
91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380@16380 master - 0 1596851784708 8 connected 8039-10922
8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379@16379 master - 0 1596851784000 11 connected 2447-5460
62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380@16380 slave a7ace08c36f7d55c4f28463d72865aa1ff74829e 0 1596851782000 3 connected
e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379@16379 slave 91169e71359deed96f8778cf31c823dbd6ded350 0 1596851781000 8 connected
6df33baf68995c61494a06c06af18045ca5a04f6 192.168.0.43:6382@16382 myself,slave 0449aa43657d46f487107bfe49344701526b11d8 0 1596851783000 0 connected
dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380@16380 slave 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 0 1596851783688 11 connected
a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379@16379 master - 0 1596851785730 3 connected 12540-16383
192.168.0.43:6382> quit
[root@node01 ~]# redis-trib.rb info 192.168.0.41:6379
192.168.0.41:6379 (8c785e6e...) -> 0 keys | 3014 slots | 1 slaves.
192.168.0.43:6379 (a7ace08c...) -> 0 keys | 3844 slots | 1 slaves.
192.168.0.43:6380 (91169e71...) -> 0 keys | 2884 slots | 1 slaves.
192.168.0.43:6381 (0449aa43...) -> 0 keys | 6642 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
[root@node01 ~]#
Tip: to make a node in the cluster a slave of a given master, connect to that slave node and run CLUSTER REPLICATE followed by the master's ID. With that, adding new nodes to the cluster is complete.
Verification: write some data on the newly added node and see whether it works.
[root@node01 ~]# redis-cli -h 192.168.0.43 -p 6381
192.168.0.43:6381> AUTH admin
OK
192.168.0.43:6381> get aa
(nil)
192.168.0.43:6381> set aa a1
OK
192.168.0.43:6381> get aa
"a1"
192.168.0.43:6381> set bb b1
(error) MOVED 8620 192.168.0.43:6380
192.168.0.43:6381>
Tip: reads and writes work on the new master for keys whose slots it owns; a key that hashes to a slot on another node (bb above) gets a MOVED redirect instead.
Verification: take the new master down and see whether its slave gets promoted to master.
[root@node01 ~]# redis-cli -h 192.168.0.43 -p 6381
192.168.0.43:6381> AUTH admin
OK
192.168.0.43:6381> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.0.43,port=6382,state=online,offset=1032,lag=1
master_replid:d65b59178dd70a13e75c866d4de738c4f248c84c
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1032
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1032
192.168.0.43:6381> quit
[root@node01 ~]# redis-cli -h 192.168.0.43 -p 6382
192.168.0.43:6382> AUTH admin
OK
192.168.0.43:6382> info replication
# Replication
role:slave
master_host:192.168.0.43
master_port:6381
master_link_status:up
master_last_io_seconds_ago:8
master_sync_in_progress:0
slave_repl_offset:1046
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:d65b59178dd70a13e75c866d4de738c4f248c84c
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1046
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1046
192.168.0.43:6382> quit
[root@node01 ~]# ssh node03
Last login: Sat Aug  8 10:07:15 2020 from node01
[root@node03 ~]# ps -ef |grep redis
root       1425      1  0 08:34 ?        00:00:18 redis-server 0.0.0.0:6379 [cluster]
root       1431      1  0 08:35 ?        00:00:18 redis-server 0.0.0.0:6380 [cluster]
root       1646      1  0 09:04 ?        00:00:14 redis-server 0.0.0.0:6381 [cluster]
root       1651      1  0 09:04 ?        00:00:07 redis-server 0.0.0.0:6382 [cluster]
root       5888   5868  0 10:08 pts/1    00:00:00 grep --color=auto redis
[root@node03 ~]# kill -9 1646
[root@node03 ~]# redis-cli -p 6382
127.0.0.1:6382> AUTH admin
OK
127.0.0.1:6382> info replication
# Replication
role:master
connected_slaves:0
master_replid:34d6ec0e58f12ffe9bc5fbcb0c16008b5054594f
master_replid2:d65b59178dd70a13e75c866d4de738c4f248c84c
master_repl_offset:1102
second_repl_offset:1103
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1102
127.0.0.1:6382>
Tip: we can see that when the master goes down, its slave is promoted to master.
Removing a node
To remove a node from the cluster, all we need to ensure is that the node being removed holds no data (and no slots).
If the node is not empty, migrate its slots to other masters first
[root@node01 ~]# redis-trib.rb reshard 192.168.0.41:6379
>>> Performing Cluster Check (using node 192.168.0.41:6379)
M: 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379
   slots:2447-5460 (3014 slots) master
   1 additional replica(s)
M: 6df33baf68995c61494a06c06af18045ca5a04f6 192.168.0.43:6382
   slots:0-2446,5461-8038,10923-12539 (6642 slots) master
   0 additional replica(s)
S: 62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380
   slots: (0 slots) slave
   replicates a7ace08c36f7d55c4f28463d72865aa1ff74829e
S: dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380
   slots: (0 slots) slave
   replicates 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855
M: a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379
   slots:12540-16383 (3844 slots) master
   1 additional replica(s)
M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
   slots:8039-10922 (2884 slots) master
   1 additional replica(s)
S: e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379
   slots: (0 slots) slave
   replicates 91169e71359deed96f8778cf31c823dbd6ded350
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 6642
What is the receiving node ID? 91169e71359deed96f8778cf31c823dbd6ded350
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:6df33baf68995c61494a06c06af18045ca5a04f6
Source node #2:done

Ready to move 6642 slots.
  Source nodes:
    M: 6df33baf68995c61494a06c06af18045ca5a04f6 192.168.0.43:6382
       slots:0-2446,5461-8038,10923-12539 (6642 slots) master
       0 additional replica(s)
  Destination node:
    M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
       slots:8039-10922 (2884 slots) master
       1 additional replica(s)
  Resharding plan:
    Moving slot 0 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 1 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 2 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 3 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 4 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 5 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 6 from 6df33baf68995c61494a06c06af18045ca5a04f6
    ......output truncated......
    Moving slot 12536 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 12537 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 12538 from 6df33baf68995c61494a06c06af18045ca5a04f6
    Moving slot 12539 from 6df33baf68995c61494a06c06af18045ca5a04f6
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 0 from 192.168.0.43:6382 to 192.168.0.43:6380:
Moving slot 1 from 192.168.0.43:6382 to 192.168.0.43:6380:
Moving slot 2 from 192.168.0.43:6382 to 192.168.0.43:6380:
Moving slot 3 from 192.168.0.43:6382 to 192.168.0.43:6380:
......output truncated......
Moving slot 1178 from 192.168.0.43:6382 to 192.168.0.43:6380:
Moving slot 1179 from 192.168.0.43:6382 to 192.168.0.43:6380:
Moving slot 1180 from 192.168.0.43:6382 to 192.168.0.43:6380:
[ERR] Calling MIGRATE: ERR Syntax error, try CLIENT (LIST | KILL | GETNAME | SETNAME | PAUSE | REPLY)
[root@node01 ~]#
Tip: this error is the same one we hit when adding the node — it tells us the slot being moved has data bound to it. The fix is the same: copy the data off the node, clear it, and then move the slots. To move a node's slots to other masters, we specify how many slots to move and the receiving node by its ID; the source nodes are also given by their IDs (one per prompt), ending with done. In other words, it is exactly the same procedure as reassigning slots.
Clear the data
[root@node01 ~]# redis-cli -h 192.168.0.43 -p 6382
192.168.0.43:6382> AUTH admin
OK
192.168.0.43:6382> KEYS *
1) "aa"
192.168.0.43:6382> FLUSHDB
OK
192.168.0.43:6382> KEYS *
(empty list or set)
192.168.0.43:6382> BGSAVE
Background saving started
192.168.0.43:6382> quit
[root@node01 ~]#
Move the slots to other nodes again
Tip: before the slots can be moved again, the cluster has to be fixed first; only then can the slots be reassigned.
Fix the cluster
[root@node01 ~]# redis-trib.rb fix 192.168.0.41:6379
>>> Performing Cluster Check (using node 192.168.0.41:6379)
M: 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379
   slots:2447-5460 (3014 slots) master
   1 additional replica(s)
M: 6df33baf68995c61494a06c06af18045ca5a04f6 192.168.0.43:6382
   slots:1180-2446,5461-8038,10923-12539 (5462 slots) master
   0 additional replica(s)
S: 62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380
   slots: (0 slots) slave
   replicates a7ace08c36f7d55c4f28463d72865aa1ff74829e
S: dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380
   slots: (0 slots) slave
   replicates 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855
M: a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379
   slots:12540-16383 (3844 slots) master
   1 additional replica(s)
M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
   slots:0-1179,8039-10922 (4064 slots) master
   1 additional replica(s)
S: e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379
   slots: (0 slots) slave
   replicates 91169e71359deed96f8778cf31c823dbd6ded350
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
[WARNING] Node 192.168.0.43:6382 has slots in migrating state (1180).
[WARNING] Node 192.168.0.43:6380 has slots in importing state (1180).
[WARNING] The following slots are open: 1180
>>> Fixing open slot 1180
Set as migrating in: 192.168.0.43:6382
Set as importing in: 192.168.0.43:6380
Moving slot 1180 from 192.168.0.43:6382 to 192.168.0.43:6380:
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@node01 ~]# redis-trib.rb info 192.168.0.41:6379
192.168.0.41:6379 (8c785e6e...) -> 0 keys | 3014 slots | 1 slaves.
192.168.0.43:6382 (6df33baf...) -> 0 keys | 5461 slots | 0 slaves.
192.168.0.43:6379 (a7ace08c...) -> 0 keys | 3844 slots | 1 slaves.
192.168.0.43:6380 (91169e71...) -> 0 keys | 4065 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
[root@node01 ~]#
Tip: after fixing the cluster we can see the master in question still holds 5461 slots; next we hand those 5461 slots out to the other nodes (they don't all have to be moved in one pass — it can be done over several reshards).
Reassign slots to other nodes (assign 1461 slots to 192.168.0.43:6379)
Reassign slots to other nodes (assign 2000 slots to 192.168.0.41:6379)
Reassign slots to other nodes (assign 2000 slots to 192.168.0.43:6380)
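Each of these rounds is just another reshard run. A sketch of the first round (1461 slots to 192.168.0.43:6379, with 192.168.0.43:6382 as the only source) looks like the following; the other two rounds only change the slot count and the receiving node ID:

[root@node01 ~]# redis-trib.rb reshard 192.168.0.41:6379
......cluster check output omitted......
How many slots do you want to move (from 1 to 16384)? 1461
What is the receiving node ID? a7ace08c36f7d55c4f28463d72865aa1ff74829e
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:6df33baf68995c61494a06c06af18045ca5a04f6
Source node #2:done
Do you want to proceed with the proposed reshard plan (yes/no)? yes
......slot movements omitted......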
Verify the cluster's slot allocation
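redis-trib.rb info again shows the distribution; with all of its slots handed out, 192.168.0.43:6382 now reports roughly:

[root@node01 ~]# redis-trib.rb info 192.168.0.41:6379
192.168.0.41:6379 (8c785e6e...) -> 0 keys | 5014 slots | 1 slaves.
192.168.0.43:6382 (6df33baf...) -> 0 keys | 0 slots | 0 slaves.
192.168.0.43:6379 (a7ace08c...) -> 0 keys | 5305 slots | 1 slaves.
192.168.0.43:6380 (91169e71...) -> 0 keys | 6065 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.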
Tip: we can see that 192.168.0.43:6382 no longer holds any slots, so next we can remove it from the cluster.
Remove the node (192.168.0.43:6382) from the cluster
Tip: to remove a node from the cluster, specify the address and port of any node in the cluster followed by the ID of the node being removed.
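For example, with the node ID belonging to 192.168.0.43:6382:

[root@node01 ~]# redis-trib.rb del-node 192.168.0.41:6379 6df33baf68995c61494a06c06af18045ca5a04f6

Note that del-node also sends a SHUTDOWN to the removed instance, which is why 6382 is no longer listening on node03 in the checks further below.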
Verification: check the current cluster state
[root@node01 ~]# redis-trib.rb info 192.168.0.41:6379
192.168.0.41:6379 (8c785e6e...) -> 0 keys | 5014 slots | 1 slaves.
192.168.0.43:6379 (a7ace08c...) -> 0 keys | 5305 slots | 1 slaves.
192.168.0.43:6380 (91169e71...) -> 0 keys | 6065 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
[root@node01 ~]# redis-trib.rb check 192.168.0.41:6379
>>> Performing Cluster Check (using node 192.168.0.41:6379)
M: 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379
   slots:2447-5460,5656-7655 (5014 slots) master
   1 additional replica(s)
S: 62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380
   slots: (0 slots) slave
   replicates a7ace08c36f7d55c4f28463d72865aa1ff74829e
S: dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380
   slots: (0 slots) slave
   replicates 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855
M: a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379
   slots:1181-2446,5461-5655,12540-16383 (5305 slots) master
   1 additional replica(s)
M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
   slots:0-1180,7656-12539 (6065 slots) master
   1 additional replica(s)
S: e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379
   slots: (0 slots) slave
   replicates 91169e71359deed96f8778cf31c823dbd6ded350
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@node01 ~]#
Tip: we can see the cluster is now down to 6 nodes, 3 masters and 3 slaves.
Verification: start the previously downed 192.168.0.43:6381 and see whether it is still part of the cluster.
[root@node01 ~]# ssh node03
Last login: Sat Aug  8 10:08:40 2020 from node01
[root@node03 ~]# ss -tnl
State      Recv-Q Send-Q Local Address:Port   Peer Address:Port
LISTEN     0      128            *:22                *:*
LISTEN     0      100    127.0.0.1:25                *:*
LISTEN     0      128            *:16379             *:*
LISTEN     0      128            *:16380             *:*
LISTEN     0      128            *:6379              *:*
LISTEN     0      128            *:6380              *:*
LISTEN     0      128         [::]:22             [::]:*
LISTEN     0      100        [::1]:25             [::]:*
LISTEN     0      128         [::]:2376           [::]:*
[root@node03 ~]# redis-server /usr/local/redis/6381/etc/redis.conf
[root@node03 ~]# ss -tnl
State      Recv-Q Send-Q Local Address:Port   Peer Address:Port
LISTEN     0      128            *:22                *:*
LISTEN     0      100    127.0.0.1:25                *:*
LISTEN     0      128            *:16379             *:*
LISTEN     0      128            *:16380             *:*
LISTEN     0      128            *:16381             *:*
LISTEN     0      128            *:6379              *:*
LISTEN     0      128            *:6380              *:*
LISTEN     0      128            *:6381              *:*
LISTEN     0      128         [::]:22             [::]:*
LISTEN     0      100        [::1]:25             [::]:*
LISTEN     0      128         [::]:2376           [::]:*
[root@node03 ~]# exit
logout
Connection to node03 closed.
[root@node01 ~]# redis-trib.rb check 192.168.0.41:6379
[ERR] Sorry, can't connect to node 192.168.0.43:6382
>>> Performing Cluster Check (using node 192.168.0.41:6379)
M: 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379
   slots:2447-5460,5656-7655 (5014 slots) master
   2 additional replica(s)
S: 62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380
   slots: (0 slots) slave
   replicates a7ace08c36f7d55c4f28463d72865aa1ff74829e
S: dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380
   slots: (0 slots) slave
   replicates 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855
M: a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379
   slots:1181-2446,5461-5655,12540-16383 (5305 slots) master
   1 additional replica(s)
M: 91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380
   slots:0-1180,7656-12539 (6065 slots) master
   1 additional replica(s)
S: 0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381
   slots: (0 slots) slave
   replicates 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855
S: e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379
   slots: (0 slots) slave
   replicates 91169e71359deed96f8778cf31c823dbd6ded350
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@node01 ~]# redis-trib.rb info 192.168.0.41:6379
[ERR] Sorry, can't connect to node 192.168.0.43:6382
192.168.0.41:6379 (8c785e6e...) -> 0 keys | 5014 slots | 2 slaves.
192.168.0.43:6379 (a7ace08c...) -> 0 keys | 5305 slots | 1 slaves.
192.168.0.43:6380 (91169e71...) -> 0 keys | 6065 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
[root@node01 ~]# redis-cli -a admin
127.0.0.1:6379> CLUSTER NODES
62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380@16380 slave a7ace08c36f7d55c4f28463d72865aa1ff74829e 0 1596855739865 15 connected
dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380@16380 slave 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 0 1596855738000 16 connected
8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379@16379 myself,master - 0 1596855736000 16 connected 2447-5460 5656-7655
a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379@16379 master - 0 1596855737000 15 connected 1181-2446 5461-5655 12540-16383
91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380@16380 master - 0 1596855740877 18 connected 0-1180 7656-12539
0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381@16381 slave 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 0 1596855738000 16 connected
30a34b27d343883cbfe9db6ba2ad52a1936d8b67 192.168.0.43:6382@16382 handshake - 1596855726853 0 0 disconnected
e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379@16379 slave 91169e71359deed96f8778cf31c823dbd6ded350 0 1596855739000 18 connected
127.0.0.1:6379>
Tip: we can see that once 192.168.0.43:6381 (the slave of the master we just removed) is started, it automatically becomes a slave of one of the cluster's masters; from the output above the cluster is now 3 masters and 4 slaves, with 192.168.0.43:6381 replicating 192.168.0.41:6379. You may also have noticed that the 6382 instance on node03 is no longer running; in the cluster's node table its state has become handshake disconnected.
Verification: start 192.168.0.43:6382 again and see whether it comes back into the cluster.
[root@node01 ~]# ssh node03
Last login: Sat Aug  8 11:00:50 2020 from node01
[root@node03 ~]# ss -tnl
State      Recv-Q Send-Q Local Address:Port   Peer Address:Port
LISTEN     0      128            *:22                *:*
LISTEN     0      100    127.0.0.1:25                *:*
LISTEN     0      128            *:16379             *:*
LISTEN     0      128            *:16380             *:*
LISTEN     0      128            *:16381             *:*
LISTEN     0      128            *:6379              *:*
LISTEN     0      128            *:6380              *:*
LISTEN     0      128            *:6381              *:*
LISTEN     0      128         [::]:22             [::]:*
LISTEN     0      100        [::1]:25             [::]:*
LISTEN     0      128         [::]:2376           [::]:*
[root@node03 ~]# redis-server /usr/local/redis/6382/etc/redis.conf
[root@node03 ~]# ss -tnl
State      Recv-Q Send-Q Local Address:Port   Peer Address:Port
LISTEN     0      128            *:22                *:*
LISTEN     0      100    127.0.0.1:25                *:*
LISTEN     0      128            *:16379             *:*
LISTEN     0      128            *:16380             *:*
LISTEN     0      128            *:16381             *:*
LISTEN     0      128            *:16382             *:*
LISTEN     0      128            *:6379              *:*
LISTEN     0      128            *:6380              *:*
LISTEN     0      128            *:6381              *:*
LISTEN     0      128            *:6382              *:*
LISTEN     0      128         [::]:22             [::]:*
LISTEN     0      100        [::1]:25             [::]:*
LISTEN     0      128         [::]:2376           [::]:*
[root@node03 ~]# redis-cli
127.0.0.1:6379> AUTH admin
OK
127.0.0.1:6379> CLUSTER NODES
0449aa43657d46f487107bfe49344701526b11d8 192.168.0.43:6381@16381 slave 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 0 1596856251000 16 connected
a7ace08c36f7d55c4f28463d72865aa1ff74829e 192.168.0.43:6379@16379 myself,master - 0 1596856250000 15 connected 1181-2446 5461-5655 12540-16383
dbfff4c49a94c0ee55d14401ccc9245af3655427 192.168.0.42:6380@16380 slave 8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 0 1596856250973 16 connected
e99b0b450e78719d63520cb6efc068d5e8d4d081 192.168.0.42:6379@16379 slave 91169e71359deed96f8778cf31c823dbd6ded350 0 1596856253018 18 connected
8c785e6ec3f8f7ff4fb7768765da8b8a93f26855 192.168.0.41:6379@16379 master - 0 1596856252000 16 connected 2447-5460 5656-7655
6df33baf68995c61494a06c06af18045ca5a04f6 192.168.0.43:6382@16382 master - 0 1596856253000 17 connected
62ece0b80b83c0f1f078b07fc1687bb8376f76b3 192.168.0.41:6380@16380 slave a7ace08c36f7d55c4f28463d72865aa1ff74829e 0 1596856252000 15 connected
91169e71359deed96f8778cf31c823dbd6ded350 192.168.0.43:6380@16380 master - 0 1596856254043 18 connected 0-1180 7656-12539
127.0.0.1:6379> quit
[root@node03 ~]# exit
logout
Connection to node03 closed.
[root@node01 ~]# redis-trib.rb info 192.168.0.41:6379
192.168.0.41:6379 (8c785e6e...) -> 0 keys | 5014 slots | 2 slaves.
192.168.0.43:6382 (6df33baf...) -> 0 keys | 0 slots | 0 slaves.
192.168.0.43:6379 (a7ace08c...) -> 0 keys | 5305 slots | 1 slaves.
192.168.0.43:6380 (91169e71...) -> 0 keys | 6065 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
[root@node01 ~]#
Tip: we can see that once 192.168.0.43:6382 is started again and the cluster is re-checked, it is back in the cluster, only now it holds no slots; and with no slots, no requests will be routed to it.