Author: mendickxiao
In the chapter on deploying a Kubernetes cluster we already managed to stand up a cluster for development and testing, but before using it in production we have to address high availability of the master node. Right now the services on our master node — kube-apiserver, kube-scheduler, and kube-controller-manager — are all single instances, and they all sit on the same node. If that node goes down, applications that are already running keep working, but the Kubernetes cluster can no longer be changed. This article walks you through building a highly available master.
gzmzj's masterful ansible playbooks for building Kubernetes clusters already cover how to configure multiple Masters, but in practice I still hit quite a few pitfalls that had to be fixed before things would work. Link to that guide: Cluster planning and basic parameter settings.
As the guide describes, HA is implemented with keepalived + haproxy: keepalived provides a VIP that fronts all of the Master nodes, and haproxy provides port forwarding. Because the VIP still lives on one of the Master machines, and the API Server's default port 6443 is already taken there, we have to attach a different port to the VIP — typically 8443.
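To make this concrete, here is a minimal sketch of how a client would address the cluster through the VIP once it is up. The address 10.86.13.36 is the VIP used later in this article and 8443 is the haproxy frontend port; treat both as placeholders for your own environment:

```bash
# Reach the apiserver through the VIP and the haproxy frontend port.
# Even an authorization error here proves the VIP and port forwarding work.
curl -k https://10.86.13.36:8443/healthz

# Client kubeconfigs should point at the VIP, never at a single Master
kubectl config set-cluster kubernetes --server=https://10.86.13.36:8443
```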
Figure: Master HA architecture diagram
Following the guide's steps, I found that keepalived and haproxy have to be installed manually on each Master:
```bash
yum install keepalived
yum install haproxy
```
HAProxy's balance setting has to be changed from the default source to roundrobin. The haproxy configuration file haproxy.cfg lives at /etc/haproxy/haproxy.cfg by default. You also have to create the /run/haproxy directory by hand, otherwise haproxy will fail to start.

Note the roundrobin balance mode: the default is source, and in my tests the source mode did not work.

```
# haproxy.cfg sample
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1

defaults
    log global
    timeout connect 5000
    timeout client 50000
    timeout server 50000

listen kube-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance roundrobin
    server s1 <Master 1 IP>:6443 check inter 10000 fall 2 rise 2 weight 1
    server s2 <Master 2 IP>:6443 check inter 10000 fall 2 rise 2 weight 1
```
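Before (re)starting haproxy it is worth validating the file; a short sketch using standard haproxy options and the paths above:

```bash
# /run/haproxy must exist, or haproxy refuses to start
mkdir -p /run/haproxy

# -c parses and checks the configuration without starting the proxy
haproxy -c -f /etc/haproxy/haproxy.cfg
```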
Edit keepalived's configuration and set the correct VIP. The keepalived configuration file keepalived.conf lives at /etc/keepalived/keepalived.conf by default.

Note virtual_router_id: it sets the VRRP router ID for this VIP. The VIP effectively acts as a virtual router, and its router ID must be unique within the subnet.

```
# keepalived.conf sample
global_defs {
    router_id lb-backup
}

vrrp_instance VI-kube-master {
    state BACKUP
    priority 110
    dont_track_primary
    interface eth0
    virtual_router_id 51
    advert_int 3
    virtual_ipaddress {
        10.86.13.36
    }
}
```
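The remaining Masters carry the same vrrp_instance with the same virtual_router_id and VIP but a lower priority, so the node above wins the VRRP election. A sketch for a hypothetical second Master — the value 100 is my illustration, not taken from the original guide:

```
# keepalived.conf on a second Master (hypothetical)
vrrp_instance VI-kube-master {
    state BACKUP
    priority 100           # lower than 110, so this node stays backup
    dont_track_primary
    interface eth0
    virtual_router_id 51   # must match across all Masters
    advert_int 3
    virtual_ipaddress {
        10.86.13.36        # same VIP on every Master
    }
}
```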
Once everything is configured, start keepalived and haproxy on the primary Master first:
```bash
systemctl enable keepalived
systemctl start keepalived
systemctl enable haproxy
systemctl start haproxy
```
Then use the ip a s command to check whether the VIP has been assigned. If the VIP shows up on the eth0 interface, keepalived started successfully (a one-line version of this check is sketched after the output below):
```
[root@kube32 ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:a9:d5:be brd ff:ff:ff:ff:ff:ff
    inet 10.86.13.32/23 brd 10.86.13.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.86.13.36/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fea9:d5be/64 scope link
       valid_lft forever preferred_lft forever
```
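For scripts, the same check collapses to one line; a minimal sketch, assuming the VIP and interface from above:

```bash
# prints "VIP present" only when this node currently holds the VIP
ip a s eth0 | grep -q '10.86.13.36' && echo "VIP present" || echo "VIP absent"
```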
To be extra safe, you can also look at keepalived's state with systemctl status keepalived -l:
```
[root@kube32 ~]# systemctl status keepalived -l
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-02-01 10:24:51 CST; 1 months 16 days ago
 Main PID: 13448 (keepalived)
   Memory: 6.0M
   CGroup: /system.slice/keepalived.service
           ├─13448 /usr/sbin/keepalived -D
           ├─13449 /usr/sbin/keepalived -D
           └─13450 /usr/sbin/keepalived -D

Mar 20 04:51:15 kube32 Keepalived_vrrp[13450]: VRRP_Instance(VI-kube-master) Dropping received VRRP packet...
Mar 20 04:51:18 kube32 Keepalived_vrrp[13450]: (VI-kube-master): ip address associated with VRID 51 not present in MASTER advert : 10.86.13.36
Mar 20 04:51:18 kube32 Keepalived_vrrp[13450]: bogus VRRP packet received on eth0 !!!
```
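If VRRP log lines like the ones above look suspicious, it can help to stream the log while the other nodes come up; this is plain systemd journal usage:

```bash
# follow keepalived's log output live as VRRP elections happen
journalctl -u keepalived -f
```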
Then check haproxy's status with systemctl status haproxy -l:
```
[root@kube32 ~]# systemctl status haproxy -l
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-02-01 10:33:22 CST; 1 months 16 days ago
 Main PID: 15116 (haproxy-systemd)
   Memory: 3.2M
   CGroup: /system.slice/haproxy.service
           ├─15116 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─15117 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─15118 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
```
At this point the kubectl version command can retrieve the server information (a way to exercise the VIP path explicitly is sketched after the output):
```
[root@kube32 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-03T22:31:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-03T22:18:41Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
```
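kubectl normally talks to whatever server its kubeconfig names, so to confirm the VIP path specifically you can override the server on the command line. A sketch, assuming the VIP and the 8443 frontend from the haproxy config; skip TLS verification only if the apiserver certificate does not list the VIP:

```bash
# force the request through the VIP and haproxy frontend
kubectl -s https://10.86.13.36:8443 --insecure-skip-tls-verify=true version
```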
If you have gotten this far, your keepalived and haproxy are both working, and you can now start keepalived and haproxy on the other Master nodes one by one. If you then run ip a s on one of the other (non-primary) Masters, you will not see the VIP. This is normal: the VIP only ever lives on one Master at a time and fails over to another Master only when the current holder goes down (a quick failover drill is sketched after the listing below).
```
[root@kube31 ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:a9:07:23 brd ff:ff:ff:ff:ff:ff
    inet 10.86.13.31/23 brd 10.86.13.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fea9:723/64 scope link
       valid_lft forever preferred_lft forever
```
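To convince yourself that failover actually works, you can force it by hand; a hypothetical drill (run the stop on the Master that currently holds the VIP, and the watch on another Master):

```bash
# on the Master currently holding the VIP
systemctl stop keepalived

# on another Master: the VIP should appear within a few advert intervals
watch -n1 "ip a s eth0 | grep 10.86.13.36"

# restore the stopped node afterwards
systemctl start keepalived
```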
In my own runs, using the guide's scripts to start several Master nodes at once meant the primary Master could never acquire the VIP, with very strange errors at the time. After some digging I found that acquiring the VIP takes time, and if multiple Masters start simultaneously they conflict with each other. I don't know whether this counts as a keepalived bug, but the most reliable approach is to start one primary Master first, wait until the VIP is settled, and only then start the other Masters; that order is sketched below.
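A minimal sketch of that startup order, assuming the VIP and interface used throughout this article:

```bash
# 1. on the primary Master only
systemctl start keepalived haproxy

# 2. wait until the VIP has actually been claimed
until ip a s eth0 | grep -q '10.86.13.36'; do sleep 2; done

# 3. only now start keepalived and haproxy on the remaining Masters
```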
Keepalived + HAProxy is how we make multiple Kubernetes Masters highly available here, but you could just as well use another approach, such as an external load balancer. Kubernetes Masters in fact have no primary/secondary roles — every one of them can serve consistently — so Keepalived + HAProxy is really just a simple load-balancing layer in front of them.