Deploying a Highly Available Kubernetes Cluster with kubeasz

Preparation

Tool

https://github.com/easzlab/kubeasz

kubeasz uses Ansible to quickly deploy a non-containerized, highly available k8s cluster.

kubeasz has two main branches, 1.x and 2.x; for a comparison see: https://github.com/easzlab/kubeasz/blob/master/docs/mixes/branch.md

This guide uses the 2.x branch.

Environment

Host    Internal IP   External IP      OS
k8s-1   10.0.0.18     61.184.241.187   Ubuntu 18.04
k8s-2   10.0.0.19     -                Ubuntu 18.04
k8s-3   10.0.0.20     -                Ubuntu 18.04

Architecture


Inside the k8s cluster, access to the apiserver is load-balanced and made highly available by a haproxy instance installed on each node. External access to the apiserver goes through a separate haproxy + keepalived pair. That external load balancer is optional: it only determines whether the k8s management endpoint can be reached in a highly available way from outside, and has no effect on HA within the cluster itself, so you can choose not to install it.
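As a rough illustration (not the exact configuration kubeasz generates), the node-local load balancing described above conceptually looks like the haproxy fragment below. The listen address, port, and backend names are assumptions based on this guide's node plan; kubeasz produces its own equivalent automatically.

```
# Hypothetical sketch of a node-local haproxy config for apiserver HA.
listen kube-apiserver
    bind 127.0.0.1:6443
    mode tcp
    balance roundrobin
    server master1 10.0.0.18:6443 check
    server master2 10.0.0.19:6443 check
```

Each node talks to the apiserver through its local proxy, so losing one master does not break in-cluster access.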

Reference: https://github.com/easzlab/kubeasz/issues/585#issuecomment-502948966

When the master and node roles share the same machines, haproxy + keepalived for external apiserver access cannot be deployed on the masters; details: https://github.com/easzlab/kubeasz/issues/585#issuecomment-575954720

Plan

Deploy node:   k8s-1
etcd nodes:    k8s-1 k8s-2 k8s-3
master nodes:  k8s-1 k8s-2
worker nodes:  k8s-1 k8s-2 k8s-3

Deployment

By default, all operations are performed as the root user.

Configure name resolution (hosts file)

# on all nodes
vim /etc/hosts
10.0.0.18 k8s-1
10.0.0.19 k8s-2
10.0.0.20 k8s-3
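The same entries can be appended idempotently, so re-running the setup never duplicates lines. This is a sketch with a hypothetical `add_hosts` helper; `HOSTS_FILE` points at a scratch copy for demonstration, while on the real nodes it would be /etc/hosts.

```shell
# Append each entry only if it is not already present (idempotent).
HOSTS_FILE="$(mktemp)"   # demo target; use /etc/hosts on the real nodes
add_hosts() {
  while read -r entry; do
    grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
  done <<'EOF'
10.0.0.18 k8s-1
10.0.0.19 k8s-2
10.0.0.20 k8s-3
EOF
}
add_hosts
add_hosts   # second run adds nothing
```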

Switch apt sources

# on all nodes
cp /etc/apt/sources.list /etc/apt/sources.list.backup
cat > /etc/apt/sources.list <<EOF
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
EOF

apt update

Configure passwordless SSH login

# on the deploy node k8s-1
ssh-keygen
ssh-copy-id k8s-1
ssh-copy-id k8s-2
ssh-copy-id k8s-3

Kernel upgrade

Ubuntu 18.04 ships kernel 4.15, which already meets the requirement, so no upgrade is needed. For kernel upgrades on other systems see:
https://github.com/easzlab/kubeasz/blob/master/docs/guide/kernel_upgrade.md

Install dependencies

# on all nodes
apt install -y python2.7

Download

cd /opt
# download the easzup helper script; kubeasz release 2.2.0 is used as the example
export release=2.2.0
curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/easzup
chmod +x ./easzup
# download everything with the helper script
./easzup -D

Once the script finishes successfully, all files (kubeasz code, binaries, offline images) are in place under /etc/ansible:

  • /etc/ansible contains the kubeasz release code for version ${release}
  • /etc/ansible/bin contains the k8s/etcd/docker/cni binaries
  • /etc/ansible/down contains the offline container images needed for installation
  • /etc/ansible/down/packages contains the base system packages needed for installation
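The layout above can be sanity-checked before continuing. `check_layout` below is a hypothetical helper; the demo runs it against a scratch directory, while on the deploy node you would pass /etc/ansible.

```shell
# Verify that the expected kubeasz directory layout exists under a base dir.
check_layout() {
  base="$1"
  for d in "$base" "$base/bin" "$base/down" "$base/down/packages"; do
    [ -d "$d" ] || { echo "missing: $d"; return 1; }
  done
  echo "layout OK under $base"
}

# Demo against a scratch dir; on the deploy node: check_layout /etc/ansible
scratch="$(mktemp -d)"
mkdir -p "$scratch/bin" "$scratch/down/packages"
check_layout "$scratch"
```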

Install the cluster

# run kubeasz itself in a container
cd /opt
./easzup -S
# install the cluster
# create a cluster context
docker exec -it kubeasz easzctl checkout myk8s
# edit the hosts (inventory) file
cd /etc/ansible && cp example/hosts.multi-node hosts

The resulting /etc/ansible/hosts (the sections that matter here):
# 'etcd' cluster should have odd member(s) (1,3,5,...)
# variable 'NODE_NAME' is the distinct name of a member in 'etcd' cluster
[etcd]
10.0.0.18 NODE_NAME=etcd1
10.0.0.19 NODE_NAME=etcd2
10.0.0.20 NODE_NAME=etcd3

# master node(s)
[kube-master]
10.0.0.18
10.0.0.19

# work node(s)
[kube-node]
10.0.0.18
10.0.0.19
10.0.0.20

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'yes' to install a harbor server; 'no' to integrate with existed one
# 'SELF_SIGNED_CERT': 'no' you need put files of certificates named harbor.pem and harbor-key.pem in directory 'down'
[harbor]
#192.168.1.8 HARBOR_DOMAIN="harbor.yourdomain.com" NEW_INSTALL=no SELF_SIGNED_CERT=yes

# [optional] loadbalance for accessing k8s from outside
[ex-lb]
#192.168.1.6 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
#192.168.1.7 LB_ROLE=master EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443

# [optional] ntp server for the cluster
[chrony]
10.0.0.18

[all:vars]
# --------- Main Variables ---------------
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="flannel"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="172.20.0.0/16"

# NodePort Range
NODE_PORT_RANGE="20000-40000"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local."

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/ansible"
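Before running the setup, it is worth confirming that the [etcd] group really has an odd member count, as the inventory comment requires. A small pre-flight sketch, parsing an inline copy of the section above; on the deploy node the same sed/grep pipeline could read /etc/ansible/hosts instead.

```shell
# Count the etcd members between [etcd] and the next section header;
# quorum requires an odd number (1, 3, 5, ...).
etcd_count=$(sed -n '/^\[etcd\]/,/^\[kube-master\]/p' <<'EOF' | grep -c '^10\.'
[etcd]
10.0.0.18 NODE_NAME=etcd1
10.0.0.19 NODE_NAME=etcd2
10.0.0.20 NODE_NAME=etcd3

[kube-master]
EOF
)
[ $((etcd_count % 2)) -eq 1 ] && echo "etcd member count OK: $etcd_count"
```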

# set up the whole k8s cluster
docker exec -it kubeasz easzctl setup

Verification

# if you get "kubectl: command not found", log out and ssh back in so the new environment variables take effect

$ kubectl version                   # check the cluster version
$ kubectl get componentstatus       # check scheduler/controller-manager/etcd component status
$ kubectl get node                  # check that nodes are Ready
$ kubectl get pod --all-namespaces  # check pod status; the network plugin, coredns, metrics-server, etc. are installed by default
$ kubectl get svc --all-namespaces  # check cluster service status
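Right after setup, components may take a while to come up, so the checks above can fail transiently. A hypothetical `retry` helper that polls a command until it succeeds, demonstrated with a trivial command:

```shell
# Poll a command until it succeeds or the attempts run out.
retry() {
  tries="$1"; shift
  n=0
  while [ "$n" -lt "$tries" ]; do
    "$@" && return 0
    n=$((n + 1))
    sleep 1
  done
  return 1
}

# On the deploy node one might wait for node readiness like:
#   retry 30 sh -c 'kubectl get node | grep -q " Ready"'
retry 3 true && echo "retry helper OK"
```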

Cleanup

# tear down the cluster
docker exec -it kubeasz easzctl destroy

# further cleanup: remove docker and all downloaded files from the deploy node
# stop the running containers
cd /opt && ./easzup -C
# remove container images
docker system prune -a
# stop the docker service
systemctl stop docker
# delete the downloaded files
rm -rf /etc/ansible /etc/docker /opt/kube
# delete docker runtime files
umount /var/run/docker/netns/default
umount /var/lib/docker/overlay
rm -rf /var/lib/docker /var/run/docker
# finally, reboot the node