Redis 6.0: trying out the redis-cluster-proxy cluster proxy

Along with the release of Redis 6.0, and as one of its most exciting features, Redis officially shipped a proxy for Redis Cluster: redis-cluster-proxy, https://github.com/RedisLabs/redis-cluster-proxy
Compared with the old way of accessing a Redis cluster, which required listing every node IP in the cluster:
1. redis-cluster-proxy provides a proxy (an abstraction) over the Redis Cluster nodes. It is similar to a VIP (virtual IP) but simpler: clients do not need to know how many nodes the cluster has or which are masters and which are replicas; they can access the cluster directly through the proxy.
2. Beyond that, it brings some very practical improvements, such as support for multiple-key operations in cluster mode, cross-slot operations and so on (it feels a bit like a sharding middleware for relational databases), as sketched below.
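To make the difference concrete, here is a minimal sketch (the proxy address 192.168.0.12:7777 and the cluster node addresses are the ones used later in this article; the key name is made up and authentication is omitted):

# cluster-aware access: the client must know a cluster node and follow MOVED redirections
redis-cli -c -h 192.168.0.61 -p 8888 set k1 v1
# through the proxy: a single fixed address, used like a standalone Redis instance
redis-cli -h 192.168.0.12 -p 7777 set k1 v1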

Main features of redis-cluster-proxy
The following information comes from the official description:
redis-cluster-proxy is a proxy for Redis Cluster. Redis is able to run in cluster mode, providing automatic failover and sharding.
This special mode (Redis Cluster mode) normally requires special clients that understand the cluster protocol: with the proxy, the cluster is abstracted away, and it can be accessed just like a single-instance Redis.
Redis Cluster Proxy is multi-threaded. By default it currently uses a multiplexing communication model, so that every thread has its own connection to the cluster, shared by all the clients belonging to that thread.
However, in some special cases (MULTI transactions or blocking commands), multiplexing is disabled and the client gets its own private cluster connection.
In this way, clients that only send simple commands (such as GET and SET) do not need a private set of connections to the Redis cluster.

The main features of Redis Cluster Proxy are:
1. Automatic routing: each query is automatically routed to the correct node of the cluster.
2. Multi-threaded (it currently uses a multiplexing communication model, so every thread has its own connection to the cluster).
3. Both multiplexing and private connection models are supported.
4. Query execution and reply order are guaranteed, even in the multiplexing context.
5. Automatic update of the cluster configuration after ASK|MOVED errors: when these kinds of errors occur in replies, the proxy automatically updates its internal representation of the cluster by fetching the updated configuration and remapping all the slots.
    All queries are re-executed after the update has finished, so from the client's point of view everything keeps working normally (clients will not receive ASK|MOVED errors: after the cluster configuration has been updated, they directly receive the expected results).
6. Cross-slot/cross-node queries: many commands involving multiple keys that belong to different slots (or even different cluster nodes) are supported.
    These commands split the query into multiple queries that are routed to the different slots/nodes.
    Reply handling is command-specific. Some commands, such as MGET, merge all the replies as if they were a single reply.
    Other commands, such as MSET or DEL, sum up the results of all the replies. Since these queries actually break the atomicity of the commands, their use is optional (disabled by default). See the sketch after this list.
7. Some commands that have no specific node/slot (such as DBSIZE) are delivered to all the nodes, and the replies are map-reduced so that the sum of all the values contained in the replies is returned.
8. Additional PROXY commands that can be used to perform some proxy-specific actions.
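As an illustration of points 6 and 7, here is a hedged sketch of what such queries look like through the proxy (the addresses are the ones from the setup below, the key names are made up, and cross-slot commands require enable-cross-slot yes):

# MGET on keys that hash to different slots: the proxy splits the query,
# routes the pieces to the owning nodes and merges the replies into a single reply
redis-cli -h 192.168.0.12 -p 7777 mget user:1 user:2 user:3
# DBSIZE has no specific slot: it is fanned out to all nodes and the replies
# are summed, giving the key count of the whole cluster
redis-cli -h 192.168.0.12 -p 7777 dbsize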

Redis 6.0 and redis-cluster-proxy require GCC 5+ to build
Building Redis 6.0 and redis-cluster-proxy depends on GCC 5 or newer. The default GCC on CentOS 7 is 4.x, which does not meet the requirement, so compilation fails with errors like the following:
server.h:1022:5: error: expected specifier-qualifier-list before '_Atomic
A similar error is discussed here: https://wanghenshui.github.io/2019/12/31/redis-ce
Possible solutions (my environment is CentOS 7, and this cost me the better part of half a day):
1. https://stackoverflow.com/questions/55345373/how-to-install-gcc-g-8-on-centos — tested and works.
2. https://blog.csdn.net/displayMessage/article/details/85602701 — building GCC from the source package. The source tarball is 120 MB and some people say the build takes about 40 minutes, but on my machine it had not finished after more than an hour, so I went with the first option.
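A quick way to check whether the compiler meets the requirement, before and after installing the toolset described in the next section (a sketch for CentOS 7; devtoolset-8 is the package installed below):

# stock CentOS 7 typically reports gcc 4.8.x
gcc --version
# with devtoolset-8 installed, the same check inside an scl shell reports gcc 8.x
scl enable devtoolset-8 -- gcc --version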

 

Setting up the Redis Cluster environment
Test environment topology: a docker-based Redis Cluster of 6 nodes, 3 masters and 3 replicas (topology diagram not reproduced here).

For the Redis Cluster setup itself, see the earlier article on automated Redis Cluster installation, scale-out and scale-in, which makes it quick to build such a cluster.
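For reference, a cluster with this topology can also be created by hand with redis-cli once the six docker nodes are running Redis on port 8888 (a sketch only; the addresses and password placeholder match the proxy configuration shown later, and the automated approach from the article above is what was actually used):

redis-cli -a your_redis_cluster_password --cluster create \
  192.168.0.61:8888 192.168.0.62:8888 192.168.0.63:8888 \
  192.168.0.64:8888 192.168.0.65:8888 192.168.0.66:8888 \
  --cluster-replicas 1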

 
Installing redis-cluster-proxy
Installation steps:
1. git clone https://github.com/artix75/redis-cluster-proxy
   cd redis-cluster-proxy
2. Solve the GCC version dependency. I struggled with this for quite a while; building GCC 5+ from source ran for more than an hour without success.
 The following approach worked for me; see https://stackoverflow.com/questions/55345373/how-to-install-gcc-g-8-on-centos
On CentOS 7, you can install GCC 8 from Developer Toolset. First you need to enable the Software Collections repository:
yum install centos-release-scl

Then you can install GCC 8 and its C++ compiler:
yum install devtoolset-8-gcc devtoolset-8-gcc-c++

To switch to a shell which defaults gcc and g++ to this GCC version, use:
scl enable devtoolset-8 -- bash

You need to wrap all commands under the scl call, so that the process environment changes performed by this command affect all subshells. For example, you could use the scl command to invoke a shell script that performs the required actions.
3. make PREFIX=/usr/local/redis_cluster_proxy install
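Since make has to run with the devtoolset compiler rather than the stock gcc 4.8, steps 2 and 3 can be combined by wrapping the build under scl (a sketch assuming devtoolset-8 from the previous step):

cd redis-cluster-proxy
scl enable devtoolset-8 -- make PREFIX=/usr/local/redis_cluster_proxy install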
 
4. About the redis-cluster-proxy configuration file
You can pass options directly on the command line at startup, but it is better to start from a configuration file. The options in the file are listed below; it is very clean and the comments are clear. I have added brief notes, and look forward to discovering more new features.
# Redis Cluster Proxy configuration file example.
# If you start the proxy with a configuration file, the -c option must be specified
# ./redis-cluster-proxy -c /path/to/proxy.conf
 

################################## INCLUDES ###################################
# Include one or more other config files here.  Include files can include other files.
# Path of the config file(s) to include
# If instead you are interested in using includes to override configuration options, it is better to use include as the last line.
# include /path/to/local.conf
# include /path/to/other.conf

######################## CLUSTER ENTRY POINT ADDRESS ##########################
# Indicate the entry point address in the same way it can be indicated in the
# redis-cluster-proxy command line arguments.
# These are the nodes of the Redis cluster itself: 6 nodes, 3 masters and 3 replicas,
# at 192.168.0.61 ~ 192.168.0.66
# Note that it can be overridden by the command line argument itself.
# You can also specify multiple entry-points, by adding more lines, ie:
# cluster 127.0.0.1:7000
# cluster 127.0.0.1:7001
# You can also use the "entry-point" alias instead of cluster, ie:
# entry-point 127.0.0.1:7000
#
# cluster 127.0.0.1:7000
cluster 192.168.0.61:8888
cluster 192.168.0.62:8888
cluster 192.168.0.63:8888
cluster 192.168.0.64:8888
cluster 192.168.0.65:8888
cluster 192.168.0.66:8888


################################### MAIN ######################################
# Set the port used by Redis Cluster Proxy to listen to incoming connections
# from clients (default 7777)
# This is the listening port of redis-cluster-proxy
port 7777
 
#  Bind address; here it is set to the IP of the host running redis-cluster-proxy
# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for incoming connections.
# You can also bind on multiple interfaces by declaring bind on multiple lines
#
# bind 127.0.0.1
bind 192.168.0.12
 
#  Unix socket file path
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis Cluster Proxy won't
# listen on a Unix socket when not specified.
#
# unixsocket /path/to/proxy.socket

# Set the Unix socket file permissions (default 0)
#
# unixsocketperm 760
 
#  Number of threads
# Set the number of threads.
threads 8

# Set the TCP keep-alive value on the Redis Cluster Proxy's socket
#
# tcpkeepalive 300

# Set the TCP backlog on the Redis Cluster Proxy's socket
#
# tcp-backlog 511

#  Connection pool settings
# Size of the connections pool used to provide ready-to-use sockets to
# private connections. The number (size) indicates the number of starting
# connections in the pool.
# Use 0 to disable connections pool at all.
# Every thread will have its pool of ready-to-use connections.
# When the proxy starts, every thread will populate a pool containing
# connections to all the nodes of the cluster.
# Whenever a client needs a private connection, it can take a connection
# from the pool, if available. This will speed-up the client transition from
# the thread's shared connection to its own private connection, since the
# connection from the thread's pool should be already connected and
# ready-to-use. Otherwise, clients with private connections must re-connect
# to the nodes of the cluster (this re-connection will act in a 'lazy' way).
#
# connections-pool-size 10

# Minimum number of connections in the pool. Below this value, the
# thread will start re-spawning connections at the defined rate until
# the pool will be full again.
#
# connections-pool-min-size 10

# Interval in milliseconds used to re-spawn connections in the pool.
# Whenever the number of connections in the pool drops below the minimum
# (see 'connections-pool-min-size' above), the thread will start
# re-spawning connections in the pool, until the pool will be full again.
# New connections will be added at this specified interval.
#
# connections-pool-spawn-every 50

# Number of connections to re-spawn in the pool at every cycle that will
# happen with an interval defined by 'connections-pool-spawn-every' (see above).
#
# connections-pool-spawn-rate 50
 
#  Daemonize or not. It is best to set this to no at first, so that the startup log and any errors are printed straight to the console, which makes startup problems easy to spot
#  Strangely, when I first set this to yes, the error log written to the file did not match what was printed directly to the console
# Run Redis Cluster Proxy as a daemon.
daemonize yes
 
#  pid file
# If a pid file is specified, the proxy writes it where specified at startup
# and removes it at exit.
#
# When the proxy runs non daemonized, no pid file is created if none is
# specified in the configuration. When the proxy is daemonized, the pid file
# is used even if not specified, defaulting to
# "/var/run/redis-cluster-proxy.pid".
#
# Creating a pid file is best effort: if the proxy is not able to create it
# nothing bad happens, the server will start and run normally.
#
#pidfile /var/run/redis-cluster-proxy.pid


#  Log file. Once the proxy can start normally, it is strongly recommended to specify a log file; all runtime exceptions and errors can then be found in the log
# Specify the log file name. Also the empty string can be used to force
# Redis Cluster Proxy to log on the standard output. Note that if you use
# standard output for logging but daemonize, logs will be sent to /dev/null
#
#logfile ""
logfile "/usr/local/redis_cluster_proxy/redis_cluster_proxy.log"


#  Cross-slot operations; set to yes here to allow them
# Enable cross-slot queries that can use multiple keys belonging to different
# slots or even different nodes.
# WARN: these queries will break the atomicity design of many Redis
# commands.
# NOTE: cross-slots queries are not supported by all the commands, even if
# this feature is enabled
#
# enable-cross-slot no
enable-cross-slot yes
 
# Maximum number of clients allowed
#
# max-clients 10000
 
# Authentication password used when connecting to the Redis cluster. If the cluster nodes require authentication, it is strongly recommended to set one and the same auth password on all nodes of the cluster
# Authentication password used to authenticate on the cluster in case its nodes
# are password-protected. The password will be used both for fetching cluster's
# configuration and to automatically authenticate proxy's internal connections
# to the cluster itself (both multiplexing shared connections and clients'
# private connections). So, clients connected to the proxy won't need to issue
# the Redis AUTH command in order to be authenticated.
#
# auth mypassw
auth your_redis_cluster_password
 
#  This option is the username introduced in Redis 6.0 (ACL); it is not set here
# Authentication username (supported by Redis >= 6.0)
#
# auth-user myuser

################################# LOGGING #####################################
# Log level: can be debug, info, success, warning or error.
log-level error

# Dump queries received from clients in the log (log-level debug required)
#
# dump-queries no

# Dump buffer in the log (log-level debug required)
#
# dump-buffer no

# Dump requests' queues (requests to send to cluster, request pending, ...)
# in the log (log-level debug required)
#
# dump-queues no

Start redis-cluster-proxy: ./bin/redis-cluster-proxy -c ./proxy.conf
Note that on the first run the proxy should be started in the foreground, so that the startup log and any errors are printed directly and you can make sure it starts correctly, before switching to daemonize mode.
The reason is that I ran into some errors at first, and for the same error the log printed to the console did not fully match the log written to the file in daemonize mode.
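A minimal sketch of this first foreground run and a basic sanity check (addresses and paths are the ones from the configuration above):

# run in the foreground first (daemonize no in proxy.conf) and watch the startup output
./bin/redis-cluster-proxy -c ./proxy.conf
# from another shell, check that the proxy answers on its port
redis-cli -h 192.168.0.12 -p 7777 ping
# once it starts cleanly, switch to daemonize yes and follow the log file instead
tail -f /usr/local/redis_cluster_proxy/redis_cluster_proxy.log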

 

Trying out redis-cluster-proxy
Unlike the ordinary way of connecting to a Redis cluster, with redis-cluster-proxy the client connects to the redis-cluster-proxy node and does not need to know anything about the Redis cluster itself. Here I try a multiple-key operation (the original screenshot is not reproduced).
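The test was roughly the following; the key names are made up for illustration and the proxy address is the one configured above:

# a multiple-key write that spans several slots, sent to the proxy as a single command
redis-cli -h 192.168.0.12 -p 7777 mset key:1 a key:2 b key:3 c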

Using the traditional cluster connection to look at the data written by the multiple-key operation above, you can see that it has indeed been written to different nodes of the cluster (screenshot not reproduced).
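Roughly, the check looks like this with a cluster-aware connection (same made-up key names as above):

# -c follows the MOVED redirections, so each key is fetched from the node that owns its slot
redis-cli -c -h 192.168.0.61 -p 8888 -a your_redis_cluster_password get key:1
redis-cli -c -h 192.168.0.61 -p 8888 -a your_redis_cluster_password get key:2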

 

Failover test
Crudely shut down one master node, 192.168.0.61, and see whether redis-cluster-proxy can still read and write normally.
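The screenshots of this test are not reproduced; one way to run it looks roughly like this (the container name redis-61 is hypothetical, since the nodes run in docker):

# stop the container hosting 192.168.0.61, or shut the instance down directly
docker stop redis-61
redis-cli -h 192.168.0.61 -p 8888 -a your_redis_cluster_password shutdown nosave
# then watch the failover from any surviving node
redis-cli -h 192.168.0.62 -p 8888 -a your_redis_cluster_password cluster nodes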
1. The failover of the Redis cluster itself is not a problem and completes successfully.

2. 192.168.0.64 takes over from 192.168.0.61 and becomes a master node.

3. Reads and writes against the proxy node hang.

Checking the redis-cluster-proxy log: it reports that node 192.168.0.61 cannot be connected to, and the proxy fails and exits.

As the log itself indicates, this is Redis Cluster Proxy v999.999.999 (unstable); a more stable release is something to look forward to.
The author has responded to a similar issue himself; see: https://github.com/RedisLabs/redis-cluster-proxy/issues/36
The Proxy currently requires that all nodes of the cluster must be up at startup when it fetches the cluster's internal map.
I'll probably change this in the next weeks.

 

Is redis-cluster-proxy the perfect solution?
Since it has only just been released, it will hardly have seen much real use in production yet, and there are surely quite a few pitfalls in it, but that is no reason not to look forward to it.
From this first try you can already feel that redis-cluster-proxy is a very lightweight, clean and simple proxy layer. It solves some real problems of Redis Cluster and also brings some convenience to applications.
If you lack the ability to develop against the source code yourself, then compared with other third-party proxy middleware you have to acknowledge the reliability and authority of the official solution.
So, is redis-cluster-proxy a perfect solution? Two questions remain:
1. How do you deal with the single point of failure of redis-cluster-proxy itself?
2. How will the proxy node cope with a network traffic storm?
