Deploying a Kubernetes 1.13 Cluster on CentOS from Binaries

1. Overview

Kubernetes 1.13 has been released, the fourth and final release of 2018. It is one of the shortest release cycles to date (ten weeks after the previous version) and focuses on the stability and extensibility of Kubernetes, with three major features around storage and cluster lifecycle reaching general availability.

The headline features of Kubernetes 1.13 are: simplified cluster management with kubeadm, the Container Storage Interface (CSI), and CoreDNS as the default DNS server.

Simplified cluster management with kubeadm

Most people who work with Kubernetes regularly have used kubeadm at some point. It is the key tool for managing the cluster lifecycle, covering everything from creation through configuration to upgrades. With the 1.13 release, kubeadm graduates to GA and is officially generally available. kubeadm handles the bootstrapping of production clusters on existing hardware and configures the core Kubernetes components following best practices, providing a secure yet simple join flow for new nodes and supporting easy upgrades.

The most notable part of this GA release is the graduation of its advanced features, in particular pluggability and configurability. kubeadm aims to provide a toolbox for administrators and higher-level automation systems alike, and this release is a major step in that direction.

Container Storage Interface (CSI)

The Container Storage Interface was first introduced as an alpha feature in 1.9, went to beta in 1.10, and is now GA. With CSI, the Kubernetes volume layer becomes truly extensible: third-party storage vendors can write code that interoperates with Kubernetes without touching any Kubernetes core code. The specification itself has also reached 1.0.

With CSI now stable, plugin authors can develop out-of-tree storage plugins at their own pace; see the CSI documentation for details.

CoreDNS becomes the default DNS server for Kubernetes

In 1.11 the team announced that CoreDNS had reached general availability for DNS-based service discovery. In 1.13, CoreDNS officially replaces kube-dns as the default DNS server in Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that offers a backward-compatible and extensible integration with Kubernetes. Because CoreDNS runs as a single executable in a single process, it has fewer moving parts than its predecessor, and it supports a wide range of use cases through custom DNS entries. Being written in Go, it also benefits from strong memory safety.

CoreDNS is now the recommended DNS solution for Kubernetes 1.13 and later. The project has already switched its common test infrastructure to use CoreDNS by default, and users are encouraged to switch as well. KubeDNS will continue to be supported for at least one more release, but now is the time to start planning the migration. Many OSS installer tools, including kubeadm since 1.11, have already made the switch.

1.1 Installation environment preparation

Deployment node specifications

IP address      Hostname   CPU   Memory   Disk
192.168.4.100   master     1C    1G       40G
192.168.4.21    node       1C    1G       40G
192.168.4.56    node1      1C    1G       40G

Kubernetes installation package download

Link: https://pan.baidu.com/s/1wO6T7byhaJYBuu2JlhZvkQ
Extraction code: pm9u

Deployment network notes

1.2 Architecture diagrams

Kubernetes architecture diagram

Flannel network architecture diagram

  • After data leaves the source container, it is forwarded by the host's docker0 virtual interface to the flannel0 virtual interface. This is a point-to-point virtual device, and the flanneld service listens on its other end.
  • Flannel maintains a routing table of the cluster nodes in etcd; its contents are covered in the configuration section later.
  • The flanneld service on the source host wraps the original payload in UDP and, based on its routing table, delivers it to the flanneld service on the destination node. There the packet is unwrapped, passed into the destination node's flannel0 interface, forwarded to the destination host's docker0 interface, and finally routed by docker0 to the target container, just like local container-to-container traffic (see the inspection sketch below).
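
The routes and subnet leases mentioned above can be inspected once Flannel is running (section 2.3). The commands below are a hedged sketch; the etcdctl TLS flags and paths assume the layout used later in this document.

# On any node, list the per-node Pod subnets and the routes Flannel installs
ip route | grep 172.18.            # one route per remote node via flannel.1 (or flannel0 in UDP mode)
cd /k8s/etcd/ssl/
/k8s/etcd/bin/etcdctl \
  --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.4.100:2379" \
  ls /coreos.com/network/subnets   # the subnet leases that back the routing table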

1.3 Kubernetes workflow


Description of each cluster component:

Master node:
The master node consists mainly of four components: APIServer, scheduler, controller-manager, and etcd.

APIServer: kube-apiserver exposes the RESTful Kubernetes API and is the unified entry point for all management operations. Every create, read, update, or delete of a resource goes through the APIServer before being persisted to etcd. As shown in the diagram, kubectl (the client tool shipped with Kubernetes, which internally just calls the Kubernetes API) talks directly to the APIServer.

Scheduler: kube-scheduler assigns Pods to suitable Nodes. Treated as a black box, its input is a Pod plus a list of candidate Nodes, and its output is a binding of that Pod to one Node. Kubernetes ships with default scheduling algorithms and also exposes an interface, so users can define their own scheduling logic.

Controller manager: if the APIServer does the front-of-house work, kube-controller-manager takes care of the back-of-house. Each resource has a corresponding controller, and the controller manager is responsible for running them. For example, when a Pod is created through the APIServer, the APIServer's job is done once the object exists; the controllers then drive the Pod toward its desired state.

etcd: etcd is a highly available key-value store. Kubernetes uses it to persist the state of every resource, which is what makes the RESTful API possible. (See the hedged inspection sketch below.)
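
As an illustration of how cluster state lands in etcd, the following hedged sketch lists a few of the keys the apiserver writes under /registry. It assumes the etcd cluster and certificates created in section 2, and it uses the etcd v3 API, which Kubernetes itself uses for storage.

ETCDCTL_API=3 /k8s/etcd/bin/etcdctl \
  --cacert=/k8s/etcd/ssl/ca.pem \
  --cert=/k8s/etcd/ssl/server.pem \
  --key=/k8s/etcd/ssl/server-key.pem \
  --endpoints=https://192.168.4.100:2379 \
  get /registry/ --prefix --keys-only | head   # e.g. /registry/pods/..., /registry/services/...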

Node (worker) nodes:
Each Node runs two main components: kubelet and kube-proxy (plus the Docker container runtime).

kube-proxy: this component implements service discovery and reverse proxying in Kubernetes. It supports forwarding TCP and UDP connections and by default distributes client traffic to the backend Pods of a Service using a round-robin algorithm. For service discovery, kube-proxy relies on etcd's watch mechanism (via the APIServer) to track changes to Service and Endpoint objects and maintains a Service-to-Endpoint mapping, so changes to backend Pod IPs are transparent to clients. kube-proxy also supports session affinity. (See the rule-inspection sketch below.)
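
Once kube-proxy is running (section 2.5), the rules it programs can be inspected on any node. This is a hedged sketch: the second command is only relevant if kube-proxy runs in ipvs mode and the ipvsadm tool happens to be installed.

iptables -t nat -S KUBE-SERVICES | head   # iptables mode: one chain per Service
ipvsadm -Ln | head                        # ipvs mode: virtual servers and their backend Pods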

kubelet: the kubelet is the master's agent on each Node and the most important component on the worker side. It maintains and manages all containers on that Node, although containers not created through Kubernetes are left alone. In essence, it is responsible for reconciling the actual state of Pods with their desired state.

2. Kubernetes Installation and Configuration

2.1 Initialize the environment

2.1.1 Disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled

2.1.2 Disable swap

swapoff -a && sysctl -w vm.swappiness=0
vi /etc/fstab
#UUID=7bff6243-324c-4587-b550-55dc34018ebf swap                    swap    defaults        0 0
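
If you prefer a non-interactive edit, the following sed one-liner comments out the swap entry. It is a hedged sketch that assumes a standard fstab layout, so review the file afterwards.

sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab
grep swap /etc/fstab   # confirm the swap line is now commented out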

2.1.3 Set kernel parameters required by Docker

cat << EOF | tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf

2.1.4 Install Docker

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker

2.1.5 Create installation directories

mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p

2.1.6 Install and configure CFSSL

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2.1.7 Create the certificates

Create the etcd CA signing configuration

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

Create the etcd CA certificate signing request (CSR)

cat << EOF | tee ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen"
        }
    ]
}
EOF

Create the etcd server certificate CSR

cat << EOF | tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "192.168.4.100",
    "192.168.4.21",
    "192.168.4.56"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen"
        }
    ]
}
EOF

Generate the etcd CA certificate, CA private key, and server certificate

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
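
Optionally, verify what was issued before distributing the files. This is a hedged check that uses the cfssl-certinfo binary installed in 2.1.6.

ls ca*.pem server*.pem                            # expect ca.pem, ca-key.pem, server.pem, server-key.pem
cfssl-certinfo -cert server.pem | grep -A4 sans   # the SANs should list the three etcd node IPs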

Create the Kubernetes CA certificate

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Generate the API server certificate

cat << EOF | tee server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.4.100",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shenzhen",
            "ST": "Shenzhen",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Create the kube-proxy certificate

cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Shenzhen",
      "ST": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
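
At this point the working directory should contain the Kubernetes CA, API server, and kube-proxy key pairs. A quick hedged check, assuming all CSR files were generated in the same directory:

ls ca*.pem server*.pem kube-proxy*.pem
# ca.pem ca-key.pem server.pem server-key.pem kube-proxy.pem kube-proxy-key.pem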

 

2.1.8 SSH key authentication

# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:FQjjiRDp8IKGT+UDM+GbQLBzF3DqDJ+pKnMIcHGyO/o root@qas-k8s-master01
The key's randomart image is:
+---[RSA 2048]----+
|o.==o o. ..      |
|ooB+o+ o.  .     |
|B++@o o   .      |
|=X**o    .       |
|o=O. .  S        |
|..+              |
|oo .             |
|* .              |
|o+E              |
+----[SHA256]-----+

# Copy the SSH key to the target hosts to enable passwordless SSH login
# ssh-copy-id 192.168.4.21
# ssh-copy-id 192.168.4.56

 

2.2 Deploy etcd

Extract the installation files and create the etcd configuration

tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
vim /k8s/etcd/cfg/etcd   
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.100:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.100:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.100:2380,etcd02=https://192.168.4.21:2380,etcd03=https://192.168.4.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Create the etcd systemd unit file

vim /usr/lib/systemd/system/etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/k8s/etcd/cfg/etcd
ExecStart=/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/k8s/etcd/ssl/server.pem \
--key-file=/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/k8s/etcd/ssl/server.pem \
--peer-key-file=/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Copy the certificate files

cp ca*pem server*pem  /k8s/etcd/ssl

Start the etcd service

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

Copy the unit file and configuration to node 1 and node 2

cd /k8s/ 
scp -r etcd 192.168.4.21:/k8s/
scp -r etcd 192.168.4.56:/k8s/
scp /usr/lib/systemd/system/etcd.service  192.168.4.21:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service  192.168.4.56:/usr/lib/systemd/system/etcd.service 

#-- Node 1 (192.168.4.21)
vim /k8s/etcd/cfg/etcd 
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.21:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.21:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.21:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.21:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.100:2380,etcd02=https://192.168.4.21:2380,etcd03=https://192.168.4.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#-- Node 2 (192.168.4.56)
vim /k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.4.56:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.4.56:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.4.56:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.4.56:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.4.100:2380,etcd02=https://192.168.4.21:2380,etcd03=https://192.168.4.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Verify that the cluster is running properly

[root@master ~]# cd /k8s/etcd/bin/
[root@master bin]# ./etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.4.100:2379,\
> https://192.168.4.21:2379,\
> https://192.168.4.56:2379" cluster-health

member 2345cdd5020eb294 is healthy: got healthy result from https://192.168.4.100:2379
member 91d74712f79e544f is healthy: got healthy result from https://192.168.4.21:2379
member b313b7e8d0a528cc is healthy: got healthy result from https://192.168.4.56:2379
cluster is healthy


Note:
Start etcd on at least two nodes at the same time; a single member cannot bring the cluster up on its own (the service will hang in the activating state). A sketch for starting all three members together follows.
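
One way to start all members at roughly the same time is to fan the restart out over SSH from the master. This hedged sketch relies on the passwordless SSH set up in 2.1.8.

for ip in 192.168.4.21 192.168.4.56; do
  ssh $ip "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd" &
done
systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd
wait   # wait for the background SSH sessions to finish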

 

2.3 Deploy the Flannel network

Write the cluster Pod network configuration into etcd

cd /k8s/etcd/ssl/

/k8s/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://192.168.4.100:2379,\
https://192.168.4.21:2379,https://192.168.4.56:2379" \
set /coreos.com/network/config  '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'
  • flanneld at this version (v0.10.0) does not support the etcd v3 API, so the configuration key and network data are written with the etcd v2 API;
  • the Pod network written here (${CLUSTER_CIDR}) must be a /16 block and must match the --cluster-cidr parameter of kube-controller-manager. A verification sketch follows.
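
To confirm the key was written, read it back with the same etcd v2 flags (hedged example):

/k8s/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://192.168.4.100:2379" \
get /coreos.com/network/config
# { "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}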

Extract and install Flannel

tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/

Configure Flannel

vim /k8s/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

Create the flanneld systemd unit file

vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
  • the mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/subnet.env (the -d path above); Docker reads the environment variables in this file at startup to configure the docker0 bridge (see the sample contents below);
  • flanneld talks to other nodes over the interface of the system default route; on hosts with multiple interfaces (e.g. internal and public networks), the communication interface can be selected with the -iface parameter, for example -iface=eth0;
  • flanneld must run with root privileges.
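
After flanneld starts, /run/flannel/subnet.env should contain something like the following. The exact subnet differs per node, so treat this as an illustrative sketch rather than literal expected output.

cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.18.58.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.18.58.1/24 --ip-masq=false --mtu=1450"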

Configure Docker to use the Flannel-assigned subnet

vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target


Copy the flanneld and Docker systemd unit files and the Flannel configuration to all nodes

cd /k8s/
scp -r kubernetes 192.168.4.21:/k8s/
scp -r kubernetes 192.168.4.56:/k8s/
scp /k8s/kubernetes/cfg/flanneld 192.168.4.21:/k8s/kubernetes/cfg/flanneld
scp /k8s/kubernetes/cfg/flanneld 192.168.4.56:/k8s/kubernetes/cfg/flanneld
scp /usr/lib/systemd/system/docker.service  192.168.4.21:/usr/lib/systemd/system/docker.service 
scp /usr/lib/systemd/system/docker.service  192.168.4.56:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/flanneld.service  192.168.4.21:/usr/lib/systemd/system/flanneld.service 
scp /usr/lib/systemd/system/flanneld.service  192.168.4.56:/usr/lib/systemd/system/flanneld.service 

# Start the services (run on every node)
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

Verify that Flannel is working

[root@node ssl]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a5:99:6a brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.21/16 brd 192.168.255.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::93dc:dfaf:2ddf:1aa9/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:5a:29:34:85 brd ff:ff:ff:ff:ff:ff
    inet 172.18.58.1/24 brd 172.18.58.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 16:6e:22:47:d0:cd brd ff:ff:ff:ff:ff:ff
    inet 172.18.58.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
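
A simple cross-node check is to ping another node's docker0 (or flannel.1) address from a different host; the addresses come from each node's /run/flannel/subnet.env. A hedged example using the subnet shown above:

# From the master, reach the docker0 bridge of node 192.168.4.21
ping -c 3 172.18.58.1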

2.4 Deploy the master node

The Kubernetes master node runs the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager
    kube-scheduler and kube-controller-manager can run in clustered mode: leader election picks one active process while the other instances stay blocked on standby (see the hedged check below).
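
With a single master this is informational only, but once kubectl is configured (end of this section) the current leader can be read from the annotation that leader election keeps on the corresponding endpoints object. A hedged check:

kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity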

Extract the binaries and copy them to the master node

tar -xvf kubernetes-server-linux-amd64.tar.gz 
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/

Copy the certificates

cp *pem /k8s/kubernetes/ssl/

Deploy the kube-apiserver component

Create the TLS bootstrap token

[root@master ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
91af09d8720f467def95b65704862025

[root@master ~]# cat /k8s/kubernetes/cfg/token.csv 
91af09d8720f467def95b65704862025,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
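
The two steps above can be combined into a single hedged snippet; whatever value ends up in token.csv must also be used as BOOTSTRAP_TOKEN in the bootstrap.kubeconfig created in section 2.5.

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /k8s/kubernetes/cfg/token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
echo ${BOOTSTRAP_TOKEN}   # note it down for later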

Create the apiserver configuration file

vim /k8s/kubernetes/cfg/kube-apiserver 
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379 \
--bind-address=192.168.4.100 \
--secure-port=6443 \
--advertise-address=192.168.4.100 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem  \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"

Create the kube-apiserver systemd unit file

vim /usr/lib/systemd/system/kube-apiserver.service 

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

Check that the apiserver is running

[root@master ~]# ps -ef |grep kube-apiserver
root      90572 118543  0 10:27 pts/0    00:00:00 grep --color=auto kube-apiserver
root     119804      1  1 Feb26 ?        00:22:45 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.4.100:2379,https://192.168.4.21:2379,https://192.168.4.56:2379 --bind-address=192.168.4.100 --secure-port=6443 --advertise-address=192.168.4.100 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pem

 

Deploy kube-scheduler

Create the kube-scheduler configuration file

vim  /k8s/kubernetes/cfg/kube-scheduler 

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
  • --address: accept http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support serving https;
  • --kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
  • --leader-elect=true: cluster mode with leader election enabled; the elected leader does the work while the other instances block on standby (a quick health probe follows).
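
After the service is started (next step), a quick hedged health probe against the insecure port confirms the scheduler is answering:

curl -s http://127.0.0.1:10251/healthz
# expected output: ok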

Create the kube-scheduler systemd unit file

vim /usr/lib/systemd/system/kube-scheduler.service 

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-scheduler.service 
systemctl restart kube-scheduler.service

Check that kube-scheduler is running

[root@master ~]# ps -ef |grep kube-scheduler 
root       3591      1  0 Feb25 ?        00:16:17 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root      90724 118543  0 10:28 pts/0    00:00:00 grep --color=auto kube-scheduler
[root@master ~]# 
[root@master ~]# systemctl status kube-scheduler 
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-25 14:58:31 CST; 1 day 19h ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 3591 (kube-scheduler)
   Memory: 36.9M
   CGroup: /system.slice/kube-scheduler.service
           └─3591 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

Feb 27 10:22:54 master kube-scheduler[3591]: I0227 10:22:54.611139    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:23:01 master kube-scheduler[3591]: I0227 10:23:01.496338    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:23:02 master kube-scheduler[3591]: I0227 10:23:02.346595    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:23:19 master kube-scheduler[3591]: I0227 10:23:19.677905    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:26:36 master kube-scheduler[3591]: I0227 10:26:36.850715    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:27:21 master kube-scheduler[3591]: I0227 10:27:21.523891    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:27:22 master kube-scheduler[3591]: I0227 10:27:22.520733    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:28:12 master kube-scheduler[3591]: I0227 10:28:12.498729    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:28:33 master kube-scheduler[3591]: I0227 10:28:33.519011    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Feb 27 10:28:50 master kube-scheduler[3591]: I0227 10:28:50.573353    3591 reflector.go:357] k8s.io/client-go/informers/...ceived
Hint: Some lines were ellipsized, use -l to show in full.

 

Deploy kube-controller-manager

Create the kube-controller-manager configuration file

vim /k8s/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"

 

Create the kube-controller-manager systemd unit file

vim /usr/lib/systemd/system/kube-controller-manager.service 

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

Check that kube-controller-manager is running

[root@master ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-02-26 14:14:18 CST; 20h ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 120023 (kube-controller)
   Memory: 76.2M
   CGroup: /system.slice/kube-controller-manager.service
           └─120023 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elec...

Feb 27 10:31:30 master kube-controller-manager[120023]: I0227 10:31:30.722696  120023 node_lifecycle_controller.go:929] N...tamp.
Feb 27 10:31:31 master kube-controller-manager[120023]: I0227 10:31:31.088697  120023 gc_controller.go:144] GC'ing orphaned
Feb 27 10:31:31 master kube-controller-manager[120023]: I0227 10:31:31.094678  120023 gc_controller.go:173] GC'ing unsche...ting.
Feb 27 10:31:34 master kube-controller-manager[120023]: I0227 10:31:34.271634  120023 attach_detach_controller.go:634] pr...4.21"
Feb 27 10:31:35 master kube-controller-manager[120023]: I0227 10:31:35.723490  120023 node_lifecycle_controller.go:929] N...tamp.
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.377876  120023 attach_detach_controller.go:634] pr....100"
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.498005  120023 attach_detach_controller.go:634] pr...4.56"
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.500915  120023 cronjob_controller.go:111] Found 0 jobs
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.505005  120023 cronjob_controller.go:119] Found 0 cronjobs
Feb 27 10:31:36 master kube-controller-manager[120023]: I0227 10:31:36.505021  120023 cronjob_controller.go:122] Found 0 groups
Hint: Some lines were ellipsized, use -l to show in full.
[root@master ~]# 
[root@master ~]# ps -ef|grep  kube-controller-manager
root      90967 118543  0 10:31 pts/0    00:00:00 grep --color=auto kube-controller-manager
root     120023      1  0 Feb26 ?        00:08:42 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem --cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem --root-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem

 

Add the executable path /k8s/kubernetes/bin to the PATH variable

vim /etc/profile
PATH=/k8s/kubernetes/bin:$PATH:$HOME/bin

# Reload the profile
source /etc/profile

Check the master cluster status

[root@master ~]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok                  
componentstatus/scheduler            Healthy   ok                  
componentstatus/etcd-0               Healthy   {"health":"true"}   
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-1               Healthy   {"health":"true"}

 

2.5 Deploy the worker (node) nodes

Kubernetes worker nodes run the following components:

  • docker (already deployed above)
  • kubelet
  • kube-proxy

Deploy the kubelet component

  • kubelet runs on every worker node; it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs;
  • on startup, kubelet automatically registers node information with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage;
  • for security, this document only opens the secure port that accepts https requests, authenticating and authorizing them and rejecting unauthorized access (e.g. from apiserver or heapster).

Copy the kubelet and kube-proxy binaries to the node nodes

cp kubelet kube-proxy /k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.4.21:/k8s/kubernetes/bin/
scp kubelet kube-proxy 192.168.4.56:/k8s/kubernetes/bin/

Create the kubelet bootstrap kubeconfig file (on the master node)

# On the master node
cd /k8s/kubernetes/ssl/

# Edit and run this script (a usage sketch follows the script)
vim environment.sh
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=91af09d8720f467def95b65704862025
KUBE_APISERVER="https://192.168.4.100:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
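
Running the script produces the two kubeconfig files in the current directory. A hedged usage example:

sh environment.sh
ls bootstrap.kubeconfig kube-proxy.kubeconfig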

Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig files to all nodes (from the master node)

cp bootstrap.kubeconfig kube-proxy.kubeconfig /k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.4.21:/k8s/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.4.56:/k8s/kubernetes/cfg/

Create the kubelet parameter configuration file and copy it to all nodes

Create the kubelet configuration template file

# Node 1 (192.168.4.21)
vim /k8s/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.4.21
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true


# Node 2 (192.168.4.56)
vim /k8s/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.4.56
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

Create the kubelet startup options file

# Node 1
vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.21 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

# Node 2
vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.56 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Create the kubelet systemd unit file (all nodes)

vim /usr/lib/systemd/system/kubelet.service 

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role (run once, on the master)

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Start the service (all nodes)

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

Approve the kubelet CSR requests

能够手动或自动 approve CSR 请求。推荐使用自动的方式,由于从 v1.8 版本开始,能够自动轮转approve csr 后生成的证书。
手动 approve CSR 请求
查看 CSR 列表:

# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs   39m    kubelet-bootstrap   Pending
node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s   5m5s   kubelet-bootstrap   Pending

# kubectl certificate approve node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs
certificatesigningrequest.certificates.k8s.io/node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs 

# kubectl certificate approve node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s  
certificatesigningrequest.certificates.k8s.io/node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s approved
# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs   41m     kubelet-bootstrap   Approved,Issued
node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s   7m32s   kubelet-bootstrap   Approved,Issued
  • Requesting User: the user that submitted the CSR; kube-apiserver authenticates and authorizes it;
  • Subject: the certificate information being requested;
  • the certificate CN is system:node:kube-node2 and the Organization is system:nodes; kube-apiserver's Node authorization mode grants the corresponding permissions to certificates of this form.
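
As mentioned above, approval can also be automated. The sketch below is one hedged way to do it for this setup: it lets CSRs from the kubelet-bootstrap user be approved automatically and lets nodes renew their own client certificates. The binding names are arbitrary; the referenced ClusterRoles ship with Kubernetes.

kubectl create clusterrolebinding auto-approve-csrs-for-bootstrap \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --user=kubelet-bootstrap
kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes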

Check the cluster status

[root@master ssl]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.4.100   Ready             43h   v1.13.0
192.168.4.21    Ready             20h   v1.13.0
192.168.4.56    Ready             20h   v1.13.0

Deploy the kube-proxy component

kube-proxy runs on every node; it watches the apiserver for changes to Services and Endpoints and creates forwarding rules to load-balance traffic across Service backends.

Create the kube-proxy configuration file

vim /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.4.100 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
  • bindAddress: the listen address;
  • clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver;
  • clusterCIDR: kube-proxy uses --cluster-cidr to distinguish cluster-internal traffic from external traffic; it only applies SNAT to requests for Service IPs when --cluster-cidr or --masquerade-all is set;
  • hostnameOverride: must match the value used by the kubelet, otherwise kube-proxy will not find its Node after startup and will not create any ipvs rules;
  • mode: the proxy mode to use, e.g. ipvs (see the sketch below).
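
The bullets above describe fields of a KubeProxyConfiguration file rather than the command-line flags used in this deployment. For reference, here is a hedged equivalent of the flag-based setup as a config file; the file name and its use via --config are an assumption, not something configured elsewhere in this document.

cat << EOF | tee /k8s/kubernetes/cfg/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  kubeconfig: /k8s/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: 192.168.4.100
clusterCIDR: 10.0.0.0/24
mode: ipvs
EOF
# kube-proxy would then be started with --config=/k8s/kubernetes/cfg/kube-proxy-config.yml
# instead of the KUBE_PROXY_OPTS flags above.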

Create the kube-proxy systemd unit file

vim /usr/lib/systemd/system/kube-proxy.service 

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
[root@node ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-02-25 15:38:16 CST; 1 day 19h ago
 Main PID: 2887 (kube-proxy)
   Memory: 8.2M
   CGroup: /system.slice/kube-proxy.service
           ‣ 2887 /k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.4.100 --cluster-cidr=10....

Feb 27 11:06:44 node kube-proxy[2887]: I0227 11:06:44.625875    2887 config.go:141] Calling handler.OnEndpointsUpdate

Cluster status

Label the master and node nodes

kubectl label node 192.168.4.100  node-role.kubernetes.io/master='master'
kubectl label node 192.168.4.21  node-role.kubernetes.io/node='node'
kubectl label node 192.168.4.56  node-role.kubernetes.io/node='node'
[root@master ~]# kubectl get node,cs
NAME                 STATUS   ROLES    AGE   VERSION
node/192.168.4.100   Ready    master   43h   v1.13.0
node/192.168.4.21    Ready    node     20h   v1.13.0
node/192.168.4.56    Ready    node     20h   v1.13.0

NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok                  
componentstatus/scheduler            Healthy   ok                  
componentstatus/etcd-1               Healthy   {"health":"true"}   
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-0               Healthy   {"health":"true"}