Author: Zhang Shoufu (张首富) | Date: 2019-02-19 | Blog: https://www.zhangshoufu.com | QQ group: 895291458 | Prerequisite: a basic knowledge of Redis
Each master can own multiple slaves. When a master goes offline, the Redis Cluster elects a new master from among its slaves to replace it; when the old master comes back online, it becomes a slave of the new master.
For stateful services such as Redis and MySQL, a StatefulSet is the preferred deployment method, and the StatefulSet approach is what this article covers.
PS: the design model behind StatefulSet covers two kinds of state. Topology state: the instances of an application are not fully interchangeable; they must start in a certain order (for example, master node A before slave node B), and if Pods A and B are deleted, they must be recreated in strictly that same order, with the same network identities as before, so that existing clients can reach the new Pods in the same way. Storage state: each instance is bound to its own storage; the data Pod A reads now and the data it reads ten minutes later should be the same, even if Pod A has been recreated in between (think of the multiple storage instances of a database).
Both masters and slaves run as replicas of a single StatefulSet, persist their data through PV/PVC, and a Service is exposed to accept client requests.
Because Pods in Kubernetes drift between nodes, we need shared storage so that a Pod can reach its data volume no matter which node it lands on. Here we use NFS as the shared storage; it can be swapped for another backend later.
yum -y install nfs-utils rpcbind

vim /etc/exports
/usr/local/kubernetes/redis/pv1 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv2 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv3 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv4 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv5 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv6 0.0.0.0/0(rw,all_squash)

mkdir -p /usr/local/kubernetes/redis/pv{1..6}
chmod 777 /usr/local/kubernetes/redis/pv{1..6}
Later these client entries can be written as domain names or wildcards.
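For instance, if the worker nodes shared a DNS suffix (a hypothetical example.com here), an export entry could use a hostname wildcard instead of an open CIDR:

/usr/local/kubernetes/redis/pv1 *.example.com(rw,all_squash)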
Start the services:

systemctl enable nfs
systemctl enable rpcbind
systemctl start nfs
systemctl start rpcbind
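To confirm the directories are actually exported, we can query the export list on the NFS server itself:

showmount -e localhost    # lists the exported directories and the clients allowed to mount them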
Create six PVs for the PVCs to bind to later.
vim pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
spec:
  capacity:
    storage: 200M          # volume size: 200M
  accessModes:
  - ReadWriteMany          # readable and writable by many clients
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv2
spec:
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv3
spec:
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv3"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv4
spec:
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv4"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv5
spec:
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv5"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv6
spec:
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv6"
Field notes:
apiVersion: the API version
kind: the resource type this manifest creates (a PV)
metadata: metadata
spec.capacity: the capacity of the volume
spec.accessModes: access modes (read/write permissions)
spec.nfs: this PV is backed by NFS
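Before creating the PVs, it does not hurt to test-mount one export from a worker node. A minimal sanity check, with <NFS server address> standing in for the real IP just as in the manifest above (this assumes nfs-utils is installed on the node):

mount -t nfs <NFS server address>:/usr/local/kubernetes/redis/pv1 /mnt
touch /mnt/test && rm /mnt/test    # confirm write access under rw,all_squash
umount /mnt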
Create the PVs:
kubectl create -f pv.yaml
kubectl get pv    # view the created PVs
Because the Redis configuration file may change, we externalize it with a ConfigMap; then a configuration change no longer forces us to rebuild the Docker image every time. The redis.conf used here:
appendonly yes                                  # enable Redis AOF persistence
cluster-enabled yes                             # enable cluster mode
cluster-config-file /var/lib/redis/nodes.conf   # explained below
cluster-node-timeout 5000                       # node timeout
dir /var/lib/redis                              # where the AOF persistence files live
port 6379                                       # listening port
cluster-config-file sets the path where the node saves its cluster configuration. If the file does not exist, each node generates a new ID for itself at startup and writes it to this file; the instance then keeps using that same ID, preserving a unique name within the cluster. Nodes record one another by ID rather than by IP or port, because in Kubernetes IP addresses are not fixed, while this unique identifier stays the same for the node's entire lifetime. The node IDs are what this file stores.
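As a quick illustration (to be run once the Pods exist, later in this article), the ID reported by CLUSTER MYID should match the "myself" line in nodes.conf, and it survives a Pod restart:

kubectl exec -it redis-app-0 -- redis-cli cluster myid                   # print this node's ID
kubectl exec -it redis-app-0 -- grep myself /var/lib/redis/nodes.conf    # the same ID, persisted on the PV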
Create a ConfigMap named redis-conf:
kubectl create configmap redis-conf --from-file=redis.conf
Check:
[root@rke ~]# kubectl get cm
NAME         DATA   AGE
redis-conf   1      22h
[root@rke ~]# kubectl describe cm redis-conf
Name:         redis-conf
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379

Events:  <none>
A headless Service is the basis of the stable network identities a StatefulSet provides, so we create it ahead of time. Prepare headless-service.yml as follows:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    port: 6379
  clusterIP: None
  selector:
    app: redis
    appCluster: redis-cluster
Create it:
kubectl create -f headless-service.yml
Check:
[root@k8s-node1 redis]# kubectl get svc redis-service
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
redis-service   ClusterIP   None         <none>        6379/TCP   53s
As shown, the Service is named redis-service and its CLUSTER-IP is None, which marks it as a "headless" service.
This is the core of the article: create the redis.yml file.
[root@rke ~]# cat /home/docker/redis/redis.yml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: redis-app
spec:
  serviceName: "redis-service"
  replicas: 6
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: "redis"
        command:
        - "redis-server"                 # Redis startup command
        args:
        - "/etc/redis/redis.conf"        # arguments passed to redis-server; each list item is one word
        - "--protected-mode"             # disable protected mode so external clients can connect
        - "no"
        # equivalent to: redis-server /etc/redis/redis.conf --protected-mode no
        resources:                       # resources
          requests:                      # requested resources
            cpu: "100m"                  # m means one-thousandth, so 100m is 0.1 CPU
            memory: "100Mi"              # 100 MiB of memory
        ports:
        - name: redis
          containerPort: 6379
          protocol: "TCP"
        - name: cluster
          containerPort: 16379
          protocol: "TCP"
        volumeMounts:
        - name: "redis-conf"             # mount the file generated from the ConfigMap
          mountPath: "/etc/redis"        # the path to mount it under
        - name: "redis-data"             # mount the persistent volume
          mountPath: "/var/lib/redis"
      volumes:
      - name: "redis-conf"               # reference the ConfigMap volume
        configMap:
          name: "redis-conf"             # the name given when the ConfigMap was created
          items:
          - key: "redis.conf"            # the key in the ConfigMap (the file passed to --from-file)
            path: "redis.conf"           # the file name it appears as under the mount path
  volumeClaimTemplates:                  # PVC templates; one claim is created per replica
  - metadata:
      name: redis-data
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 200M
podAntiAffinity expresses anti-affinity: it decides which Pods a given Pod may not (or should preferably not) share a topology domain with, and is used to spread the Pods of one service across different hosts or topology domains, improving the service's resilience. The matchExpressions rule here asks the scheduler to avoid, where possible, placing a Redis Pod on a node that already runs a Pod labelled app: redis.
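The manifest is created the same way as the earlier resources (this step is implied by the Pod listing that follows):

kubectl create -f redis.yml
kubectl get pods -w    # watch the six Pods come up one by one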
Also, by StatefulSet's rules, the hostnames of our six Redis Pods are assigned in sequence as $(statefulset name)-$(ordinal), as shown below:
[root@rke ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE            NOMINATED NODE
redis-app-0   1/1     Running   0          40m   10.42.2.17   192.168.1.21    <none>
redis-app-1   1/1     Running   0          40m   10.42.0.15   192.168.1.114   <none>
redis-app-2   1/1     Running   0          40m   10.42.1.13   192.168.1.20    <none>
redis-app-3   1/1     Running   0          40m   10.42.2.18   192.168.1.21    <none>
redis-app-4   1/1     Running   0          40m   10.42.0.16   192.168.1.114   <none>
redis-app-5   1/1     Running   0          40m   10.42.1.14   192.168.1.20    <none>
As shown, the Pods are created one by one in {0..N-1} order. Note that redis-app-1 does not start until redis-app-0 has reached the Running state.
At the same time, each Pod gets a DNS name inside the cluster, in the format $(podname).$(service name).$(namespace).svc.cluster.local, namely:

redis-app-0.redis-service.default.svc.cluster.local
redis-app-1.redis-service.default.svc.cluster.local
... and so on ...
Inside the K8S cluster, the Pods can communicate with one another through these names. We can verify the names with nslookup from a busybox image:
[root@k8s-node1 ~]# kubectl run -i --tty --image busybox dns-test --restart=Never --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup redis-app-1.redis-service.default.svc.cluster.local
Server:         10.43.0.10
Address:        10.43.0.10:53

Name:   redis-app-1.redis-service.default.svc.cluster.local
Address: 10.42.0.15

*** Can't find redis-app-1.redis-service.default.svc.cluster.local: No answer

/ # nslookup redis-app-0.redis-service.default.svc.cluster.local
Server:         10.43.0.10
Address:        10.43.0.10:53

Name:   redis-app-0.redis-service.default.svc.cluster.local
Address: 10.42.2.17
As shown, redis-app-0's IP is 10.42.2.17. Of course, if a Redis Pod migrates or restarts (we can test this by deleting a Pod manually), its IP changes, but the Pod's DNS name, SRV records, and A record do not.
We can also see that the PVs created earlier have all been bound:
[root@k8s-node1 ~]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
nfs-pv1   200M       RWX            Retain           Bound    default/redis-data-redis-app-2                           1h
nfs-pv2   200M       RWX            Retain           Bound    default/redis-data-redis-app-3                           1h
nfs-pv3   200M       RWX            Retain           Bound    default/redis-data-redis-app-4                           1h
nfs-pv4   200M       RWX            Retain           Bound    default/redis-data-redis-app-5                           1h
nfs-pv5   200M       RWX            Retain           Bound    default/redis-data-redis-app-0                           1h
nfs-pv6   200M       RWX            Retain           Bound    default/redis-data-redis-app-1                           1h
With the six Redis Pods created, we still need to initialize the cluster using the common redis-trib tool.
Create a CentOS container
Since a Redis cluster can only be initialized after all its nodes are up, baking the initialization logic into the StatefulSet would be complex and inefficient. Here I have to credit the original project author's approach, which is worth learning from: run one extra container inside the K8S cluster, dedicated to managing and controlling certain services within it.
Here we launch a CentOS container in which we can install redis-trib and then initialize the Redis cluster. Run:
kubectl run -i --tty centos --image=centos --restart=Never /bin/bash
Once it is up, enter the centos container; the original project installs the basic software environment with the following:
cat >> /etc/yum.repos.d/epel.repo <<'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
EOF
Initialize the Redis cluster
First, install redis-trib (the Redis cluster command-line tool):
yum -y install redis-trib.noarch bind-utils-9.9.4-72.el7.x86_64
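bind-utils is what provides the dig command used in the next step; a quick check that both tools landed on the PATH:

command -v redis-trib dig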
Then create the cluster, pairing each master with one slave:
redis-trib create --replicas 1 \
`dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-1.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-2.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-3.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-4.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-5.redis-service.default.svc.cluster.local`:6379
#create: create a new cluster
#--replicas 1: give each master in the cluster one slave, yielding 3 masters and 3 slaves
#the remaining arguments are the addresses of the Redis instances
As above, each dig +short redis-app-N.redis-service.default.svc.cluster.local resolves a Pod's DNS name to its IP, because redis-trib does not accept domain names when creating a cluster.
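For example, resolving redis-app-0 from the centos container returns the IP we saw in the Pod listing:

dig +short redis-app-0.redis-service.default.svc.cluster.local
10.42.2.17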
When it finishes planning, redis-trib prints a proposed configuration for you to review; if it looks right, type yes and redis-trib applies the configuration to the cluster:
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
10.42.2.17:6379
10.42.0.15:6379
10.42.1.13:6379
Adding replica 10.42.2.18:6379 to 10.42.2.17:6379
Adding replica 10.42.0.16:6379 to 10.42.0.15:6379
Adding replica 10.42.1.14:6379 to 10.42.1.13:6379
M: 4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379
   slots:0-5460 (5461 slots) master
M: 505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379
   slots:5461-10922 (5462 slots) master
M: 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.13:6379
   slots:10923-16383 (5461 slots) master
S: 366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379
   replicates 4676f8913cdcd1e256db432531c80591ae6c5fc3
S: cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379
   replicates 505f3e126882c0c5115885e54f9b361bc7e74b97
S: e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379
   replicates 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f
Can I set the above configuration? (type 'yes' to accept):
After you type yes, cluster creation begins:
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 10.42.2.17:6379)
M: 4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.13:6379@16379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379
   slots: (0 slots) slave
   replicates 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f
S: 366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379
   slots: (0 slots) slave
   replicates 4676f8913cdcd1e256db432531c80591ae6c5fc3
M: 505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379
   slots: (0 slots) slave
   replicates 505f3e126882c0c5115885e54f9b361bc7e74b97
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
The last line says that all 16384 slots in the cluster are served by at least one master: the cluster is operating normally.
With that, our Redis cluster is truly up. Connect to any Redis Pod to verify:
[root@k8s-node1 ~]# kubectl exec -it redis-app-2 /bin/bash
root@redis-app-2:/data# /usr/local/bin/redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:186
cluster_stats_messages_pong_sent:199
cluster_stats_messages_sent:385
cluster_stats_messages_ping_received:194
cluster_stats_messages_pong_received:186
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:385
127.0.0.1:6379> cluster nodes
589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.13:6379@16379 master - 0 1550555011000 3 connected 10923-16383
e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379 slave 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 0 1550555011512 6 connected
366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379 slave 4676f8913cdcd1e256db432531c80591ae6c5fc3 0 1550555010507 4 connected
505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379 master - 0 1550555011000 2 connected 5461-10922
cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379 slave 505f3e126882c0c5115885e54f9b361bc7e74b97 0 1550555011713 5 connected
4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379@16379 myself,master - 0 1550555010000 1 connected 0-5460
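A simple write and read, from inside any of the Redis Pods, is another quick check that slot redirection works; the key name testkey here is arbitrary:

/usr/local/bin/redis-cli -c set testkey hello    # -c makes redis-cli follow MOVED redirects to the slot's owner
/usr/local/bin/redis-cli -c get testkey          # should print "hello" no matter which node owns the slot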
We can also look at the data Redis has written on the NFS server:
[root@rke ~]# tree /usr/local/kubernetes/redis/
/usr/local/kubernetes/redis/
├── pv1
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv2
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv3
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv4
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv5
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
└── pv6
    ├── appendonly.aof
    ├── dump.rdb
    └── nodes.conf

6 directories, 18 files
Earlier we created the headless service used by the StatefulSet, but that service has no cluster IP and therefore cannot serve outside access. So we create one more Service, dedicated to providing access and load balancing for the Redis cluster:
apiVersion: v1
kind: Service
metadata:
  name: redis-access-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    protocol: "TCP"
    port: 6379
    targetPort: 6379
  selector:
    app: redis
    appCluster: redis-cluster
As above, this Service is named redis-access-service; it exposes port 6379 inside the K8S cluster and load-balances across the Pods labelled with both app: redis and appCluster: redis-cluster.
After creating it, check:
[root@rke ~]# kubectl get svc redis-access-service -o wide
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE   SELECTOR
redis-access-service   ClusterIP   10.43.40.62   <none>        6379/TCP   47m   app=redis,appCluster=redis-cluster
As shown, any application in the k8s cluster can reach the Redis cluster through 10.43.40.62:6379. Of course, to make testing easier, we could also give the Service a NodePort mapping onto the host machines; that is left for later testing.
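For example, the Service type could be switched to NodePort with a one-line patch; this is only a sketch for testing, not a production setup:

kubectl patch svc redis-access-service -p '{"spec":{"type":"NodePort"}}'
kubectl get svc redis-access-service    # note the node port that was assigned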
After building a complete Redis cluster on K8S, what we care about most is whether its native high-availability mechanism still works. Here we can pick any master Pod to test the master-slave failover, for example redis-app-2:
[root@rke ~]# kubectl get pods redis-app-2 -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE
redis-app-2   1/1     Running   0          1h    10.42.1.13   192.168.1.20   <none>
Enter redis-app-2 and check:
[root@rke ~]# kubectl exec -it redis-app-2 /bin/bash
root@redis-app-2:/data# redis-cli
127.0.0.1:6379> role
1) "master"
2) (integer) 9478
3) 1) 1) "10.42.1.14"
      2) "6379"
      3) "9478"
As shown, it is a master, and its slave is 10.42.1.14, i.e. redis-app-5.
Next, we delete redis-app-2 manually:
[root@rke ~]# kubectl delete pods redis-app-2
pod "redis-app-2" deleted
[root@rke ~]# kubectl get pods redis-app-2 -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE
redis-app-2   1/1     Running   0          19s   10.42.1.15   192.168.1.20   <none>
As shown, its IP has changed to 10.42.1.15. Enter redis-app-2 again:
[root@rke ~]# kubectl exec -it redis-app-2 /bin/bash
root@redis-app-2:/data# redis-cli
127.0.0.1:6379> ROLE
1) "slave"
2) "10.42.1.14"
3) (integer) 6379
4) "connected"
5) (integer) 9688
As shown, redis-app-2 has become a slave, subordinate to its former slave 10.42.1.14, i.e. redis-app-5.
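We can also confirm the promotion from the other side, assuming redis-app-5 still holds 10.42.1.14:

kubectl exec -it redis-app-5 -- redis-cli role    # should now report "master"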
Our cluster currently has six nodes, three masters and three slaves; now let's add two more Pods to grow it to four masters and four slaves. First extend the NFS exports:
cat >> /etc/exports <<'EOF'
/usr/local/kubernetes/redis/pv7 192.168.0.0/16(rw,all_squash)
/usr/local/kubernetes/redis/pv8 192.168.0.0/16(rw,all_squash)
EOF
systemctl restart nfs rpcbind

[root@rke ~]# mkdir /usr/local/kubernetes/redis/pv{7..8}
[root@rke ~]# chmod 777 /usr/local/kubernetes/redis/*
[root@rke redis]# cat pv_add.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv7
spec:
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.1.253
    path: "/usr/local/kubernetes/redis/pv7"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv8
spec:
  capacity:
    storage: 200M
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.1.253
    path: "/usr/local/kubernetes/redis/pv8"
Create and check the PVs:
[root@rke redis]# kubectl create -f pv_add.yml
persistentvolume/nfs-pv7 created
persistentvolume/nfs-pv8 created
[root@rke redis]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                            STORAGECLASS   REASON   AGE
nfs-pv1   200M       RWX            Retain           Bound       default/redis-data-redis-app-1                           2h
nfs-pv2   200M       RWX            Retain           Bound       default/redis-data-redis-app-2                           2h
nfs-pv3   200M       RWX            Retain           Bound       default/redis-data-redis-app-4                           2h
nfs-pv4   200M       RWX            Retain           Bound       default/redis-data-redis-app-5                           2h
nfs-pv5   200M       RWX            Retain           Bound       default/redis-data-redis-app-0                           2h
nfs-pv6   200M       RWX            Retain           Bound       default/redis-data-redis-app-3                           2h
nfs-pv7   200M       RWX            Retain           Available                                                            7s
nfs-pv8   200M       RWX            Retain           Available                                                            7s
Change the replicas: field in the Redis yml file from 6 to 8, then apply the update:
[root@rke redis]# kubectl apply -f redis.yml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
statefulset.apps/redis-app configured
[root@rke redis]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
redis-app-0   1/1     Running   0          2h
redis-app-1   1/1     Running   0          2h
redis-app-2   1/1     Running   0          19m
redis-app-3   1/1     Running   0          2h
redis-app-4   1/1     Running   0          2h
redis-app-5   1/1     Running   0          2h
redis-app-6   1/1     Running   0          57s
redis-app-7   1/1     Running   0          30s
Add the two new instances to the cluster from the centos container:

[root@rke redis]# kubectl exec -it centos /bin/bash
[root@centos /]# redis-trib add-node \
`dig +short redis-app-6.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379
[root@centos /]# redis-trib add-node \
`dig +short redis-app-7.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379
add-node takes the new node's address first, followed by the address of any node already in the cluster.
Check the cluster state after the new nodes join:

[root@rke redis]# kubectl exec -it redis-app-0 bash
root@redis-app-0:/data# redis-cli
127.0.0.1:6379> cluster nodes
589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.15:6379@16379 slave e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 0 1550564776000 7 connected
e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379 master - 0 1550564776000 7 connected 10923-16383
366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379 slave 4676f8913cdcd1e256db432531c80591ae6c5fc3 0 1550564777051 4 connected
505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379 master - 0 1550564776851 2 connected 5461-10922
cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379 slave 505f3e126882c0c5115885e54f9b361bc7e74b97 0 1550564775000 5 connected
e4697a7ba460ae2979692116b95fbe1f2c8be018 10.42.0.20:6379@16379 master - 0 1550564776549 0 connected
246c79682e6cc78b4c2c28d0e7166baf47ecb265 10.42.2.23:6379@16379 master - 0 1550564776548 8 connected
4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379@16379 myself,master - 0 1550564775000 1 connected 0-5460
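Note that add-node joins both new instances as empty masters (no slots). If the goal is strictly four masters and four slaves, one of the new nodes should instead be attached as a replica; redis-trib supports this through its --slave option, sketched here with <master-id> standing in for the ID of the master to replicate:

redis-trib add-node --slave --master-id <master-id> \
`dig +short redis-app-7.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379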
Next, reshard hash slots onto a new master:

redis-trib reshard `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379
## prompts for the number of hash slots to move
## then for the node ID of the master that receives them
## answering "all" takes slots from every existing master
After the reshard, the slot distribution looks like this:

127.0.0.1:6379> cluster nodes
589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.15:6379@16379 slave e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 0 1550566162000 7 connected
e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379 master - 0 1550566162909 7 connected 11377-16383
366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379 slave 4676f8913cdcd1e256db432531c80591ae6c5fc3 0 1550566161600 4 connected
505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379 master - 0 1550566161902 2 connected 5917-10922
cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379 slave 505f3e126882c0c5115885e54f9b361bc7e74b97 0 1550566162506 5 connected
246c79682e6cc78b4c2c28d0e7166baf47ecb265 10.42.2.23:6379@16379 master - 0 1550566161600 8 connected 0-453 5461-5916 10923-11376
4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379@16379 myself,master - 0 1550566162000 1 connected 454-5460
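Finally, a consistency check after resharding (run from the centos container) verifies slot coverage and node agreement:

redis-trib check `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379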
For more on Redis cluster operations, see: http://redisdoc.com/topic/cluster-tutorial.html#id10