1. Redis Cluster Design Essentials
Redis Cluster was designed from the start to be decentralized and free of middleware: every node in the cluster is an equal peer, and each node holds both its own share of the data and the state of the entire cluster. Every node maintains live connections to all the other nodes, which guarantees that connecting to any single node is enough to reach data held by the rest of the cluster.
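Because each node knows the full cluster state, you can inspect the whole topology from any node. A minimal sketch (the addresses and the zjl123 password are the example values used later in this article):

```bash
# Ask any single node for the full cluster topology; every node returns
# the same view, since cluster state is shared among all nodes.
./redis-cli -h 192.168.244.128 -p 6379 -a zjl123 CLUSTER NODES
```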
So how does Redis distribute the data across these nodes?
Redis Cluster does not use conventional consistent hashing to distribute data; instead it uses a scheme called hash slots. Redis Cluster defines 16384 slots in total. When we set a key, the slot it belongs to is computed as CRC16(key) % 16384, and the key is stored on whichever node owns that slot. Note that a cluster requires at least 3 master nodes, or cluster creation will fail. So suppose three nodes A, B, and C already form a cluster; they could be three ports on one machine or three separate servers. Under the hash-slot scheme, the 16384 slots are divided among the three nodes as follows:

- Node A covers slots 0-5460
- Node B covers slots 5461-10922
- Node C covers slots 10923-16383
Now suppose I want to set a key, say my_name:

set my_name yangyi

By the Redis Cluster hash-slot algorithm, CRC16('my_name') % 16384 = 2412, so storage of this key is assigned to node A. Likewise, when I connect to any of the nodes A, B, or C and try to get my_name, the same computation runs and the request is internally redirected to node A, where the data lives.
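You don't have to compute CRC16 by hand: CLUSTER KEYSLOT asks a node to do it for you. A minimal sketch against this article's example deployment:

```bash
# Ask the cluster which slot 'my_name' hashes to; the article's
# computation gives 2412, which lands in node A's 0-5460 range.
./redis-cli -h 192.168.244.128 -p 6379 -a zjl123 CLUSTER KEYSLOT my_name
# (integer) 2412
```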
This hash-slot allocation scheme has pros and cons. The upside is that it is very transparent. For example, if I want to add a new node D, Redis Cluster simply takes a portion of slots from the front of each existing node's range and hands them to D. The layout then becomes roughly:
- Node A covers slots 1365-5460
- Node B covers slots 6827-10922
- Node C covers slots 12288-16383
- Node D covers slots 0-1364, 5461-6826, and 10923-12287
Removing a node works the same way in reverse: first move its slots to the remaining nodes, and once the migration completes the node can be dropped. A sketch of the tooling follows below.
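With the newer redis-cli tooling used later in this article, adding a node and moving slots onto it might look like the following sketch (the 6381 address for node D is a hypothetical example):

```bash
# Add a hypothetical new master D to the cluster, using any existing
# node as the entry point.
./redis-cli --cluster add-node 192.168.244.128:6381 192.168.244.128:6379 -a zjl123

# Interactively move slots onto D: redis-cli prompts for the number of
# slots to move, the target node ID, and the source nodes.
./redis-cli --cluster reshard 192.168.244.128:6379 -a zjl123
```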
So that, in outline, is the shape of a Redis Cluster deployment.
2. Redis Cluster Master-Slave Mode
To keep data highly available, Redis Cluster adds a master-slave model: each master node has one or more slave nodes. The master serves reads and writes, while the slaves replicate the master's data as a backup. When a master goes down, one of its slaves is elected to take its place, so the cluster as a whole keeps running.

In the example above, the cluster has the three masters A, B, and C. If none of them has a slave and B goes down, we can no longer use the cluster as a whole: by default the cluster stops serving once any slot range is uncovered, so even the slots on A and C become unavailable.

So when building the cluster, be sure to add a slave for every master. With masters A, B, C and slaves A1, B1, C1, the system keeps working correctly even if B fails.

B1 takes over for B: Redis Cluster promotes B1 to be the new master, and the cluster continues to serve correctly. When B comes back up, it rejoins as a slave of B1.

Note, however, that if B and B1 fail at the same time, the cluster can no longer serve correctly.
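To watch this promotion happen without killing a process, you can trigger a manual failover from a replica. A hedged sketch against this article's example deployment (run it on whichever node is currently a replica):

```bash
# Run on a node that is currently a replica: it negotiates with its
# master and takes over the master's slots.
./redis-cli -h 192.168.244.128 -p 6380 -a zjl123 CLUSTER FAILOVER

# Confirm the role change afterwards.
./redis-cli -h 192.168.244.128 -p 6380 -a zjl123 INFO replication | grep role:
```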
Step 1. Redis instances:

- 192.168.244.128:6379 (master), 192.168.244.128:6380 (slave)
- 192.168.244.130:6379 (master), 192.168.244.130:6380 (slave)
- 192.168.244.131:6379 (master), 192.168.244.131:6380 (slave)

Step 2. Create the cluster.
First, uncomment cluster-enabled yes in each instance's redis.conf.
Then run:

./redis-trib.rb create --replicas 1 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380

In newer Redis versions this command has moved into redis-cli:
./redis-cli --cluster create 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380 --cluster-replicas 1 -a zjl123
Note that when one server runs multiple instances, the following settings must differ per instance:

- port 6380
- pidfile /var/run/redis/redis_6380.pid
- logfile /var/log/redis/redis_6380.log
- dbfilename dump_6380.rdb
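Putting the pieces together, a minimal sketch of the cluster-related portion of one instance's redis.conf (paths, filenames, and the zjl123 password are this article's example values; cluster-config-file and cluster-node-timeout are additions shown with their common defaults):

```conf
port 6380
pidfile /var/run/redis/redis_6380.pid
logfile /var/log/redis/redis_6380.log
dbfilename dump_6380.rdb

# Required: run this instance in cluster mode.
cluster-enabled yes
# Per-instance cluster state file, managed by Redis itself.
cluster-config-file nodes-6380.conf
# Milliseconds before an unreachable node is considered failing.
cluster-node-timeout 15000

# Matches the -a zjl123 used on the command line below;
# masterauth lets replicas authenticate to their masters.
requirepass zjl123
masterauth zjl123
```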
Step 3. Problems encountered:
Problem 1. Error: /usr/bin/env: ruby: No such file or directory

Fix: install the ruby and rubygems dependencies:
yum -y install ruby rubygems
Problem 2. Errors:
./redis-trib.rb:6: odd number list for Hash
white: 29,
^
./redis-trib.rb:6: syntax error, unexpected ':', expecting '}'
white: 29,
^
./redis-trib.rb:7: syntax error, unexpected ',', expecting kEND
Fix: remove the old Ruby and build a newer one from source:

yum remove -y ruby
yum remove -y rubygems

Download ruby-2.6.5.tar.gz, then:

tar -zxvf ruby-2.6.5.tar.gz
cd ruby-2.6.5
./configure
make
make install

Problem 3. Re-running the cluster-creation command now prints:

You should use redis-cli instead. All commands and features belonging to redis-trib.rb have been moved to redis-cli. In order to use them you should call redis-cli with the --cluster option followed by the subcommand name, arguments and options. Use the following syntax:

redis-cli --cluster SUBCOMMAND [ARGUMENTS] [OPTIONS]

Example:

redis-cli --cluster create 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380 --cluster-replicas 1

To get help about all subcommands, type: redis-cli --cluster help

[root@zjltest3 src]# redis-cli --cluster create 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380 --cluster-replicas 1
-bash: redis-cli: command not found

Fix: redis-cli is not on the PATH, so run the freshly built binary from the Redis src directory (./redis-cli), as done below.
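A hedged alternative the article doesn't take: install the compiled binaries onto the PATH instead.

```bash
# From the top of the unpacked Redis source tree: installs redis-cli,
# redis-server, etc. into /usr/local/bin so they resolve without ./
make install
```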
Problem 4. Running the command from the src directory:

./redis-cli --cluster create 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380 --cluster-replicas 1

[ERR] Node 192.168.244.128:6379 NOAUTH Authentication required.

Problem 5. Running the same command with the password:
./redis-cli --cluster create 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380 --cluster-replicas 1 -a zjl123

Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
[ERR] Node 192.168.244.128:6379 is not configured as a cluster node.

Problem 6. After uncommenting cluster-enabled yes and restarting each instance, the command succeeds:
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.244.130:6380 to 192.168.244.128:6379
Adding replica 192.168.244.131:6380 to 192.168.244.130:6379
Adding replica 192.168.244.128:6380 to 192.168.244.131:6379
M: 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4 192.168.244.128:6379
   slots:[0-5460],[5634],[8157] (5461 slots) master
S: 1d6b7a10046a75b3ab5461a8d29f411837a3c0d8 192.168.244.128:6380
   replicates 0f30ac78eba3be20aa307ea64c09b5025de165af
M: d34845ed63f35645e820946cc0dc24460621a386 192.168.244.130:6379
   slots:[5461-10922] (5462 slots) master
S: 1332da24115473f73e04dfe8b67cd1e595a34a11 192.168.244.130:6380
   replicates 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4
M: 0f30ac78eba3be20aa307ea64c09b5025de165af 192.168.244.131:6379
   slots:[10923-16383] (5461 slots) master
S: 1a57a63faa17e5fda4025a5f088fc70055990a07 192.168.244.131:6380
   replicates d34845ed63f35645e820946cc0dc24460621a386
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
......
>>> Performing Cluster Check (using node 192.168.244.128:6379)
M: 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4 192.168.244.128:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 1d6b7a10046a75b3ab5461a8d29f411837a3c0d8 192.168.244.128:6380
   slots: (0 slots) slave
   replicates 0f30ac78eba3be20aa307ea64c09b5025de165af
M: d34845ed63f35645e820946cc0dc24460621a386 192.168.244.130:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 1a57a63faa17e5fda4025a5f088fc70055990a07 192.168.244.131:6380
   slots: (0 slots) slave
   replicates d34845ed63f35645e820946cc0dc24460621a386
S: 1332da24115473f73e04dfe8b67cd1e595a34a11 192.168.244.130:6380
   slots: (0 slots) slave
   replicates 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4
M: 0f30ac78eba3be20aa307ea64c09b5025de165af 192.168.244.131:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
1. Check the cluster status:

./redis-cli --cluster check 192.168.244.128:6379 -a zjl123
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.244.128:6379 (0f6f4aab...) -> 0 keys | 5461 slots | 1 slaves.
192.168.244.130:6379 (d34845ed...) -> 0 keys | 5462 slots | 1 slaves.
192.168.244.131:6379 (0f30ac78...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.244.128:6379)
M: 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4 192.168.244.128:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 1d6b7a10046a75b3ab5461a8d29f411837a3c0d8 192.168.244.128:6380
   slots: (0 slots) slave
   replicates 0f30ac78eba3be20aa307ea64c09b5025de165af
M: d34845ed63f35645e820946cc0dc24460621a386 192.168.244.130:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 1a57a63faa17e5fda4025a5f088fc70055990a07 192.168.244.131:6380
   slots: (0 slots) slave
   replicates d34845ed63f35645e820946cc0dc24460621a386
S: 1332da24115473f73e04dfe8b67cd1e595a34a11 192.168.244.130:6380
   slots: (0 slots) slave
   replicates 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4
M: 0f30ac78eba3be20aa307ea64c09b5025de165af 192.168.244.131:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

2. Log in to a node and set/get a value:
[root@zjltest3 src]# ./redis-cli -c -h 192.168.244.128 -p 6379 -a zjl123
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.244.128:6379> set zjl 123
-> Redirected to slot [5634] located at 192.168.244.130:6379
OK
192.168.244.130:6379>

As shown above, the value is stored on the 130 server.
[root@zjltest2 redis]# src/redis-cli -c -h 192.168.244.128 -p 6379 -a zjl123
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.244.128:6379> get zjl
-> Redirected to slot [5634] located at 192.168.244.130:6379
"123"

As shown above, the value is fetched from the 130 server.
This confirms that the cluster is configured and working correctly.