The k8s-001 node was removed from the cluster for some reason; now it needs to rejoin as a control plane, but the join keeps failing.

Cluster version: 1.13.1
Topology: 3 control-plane nodes, 2 worker nodes
Simply re-running the join usually fails with the following error:
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Checking Etcd cluster health
error syncing endpoints with etc: dial tcp 10.0.3.4:2379: connect: connection refused
This happens because the control plane 10.0.3.4 (k8s-001) has already been deleted, yet its entry still lingers in the kubeadm-config ConfigMap:
root@k8s-002:/home# kubectl get configmaps -n kube-system kubeadm-config -oyaml
...
ClusterStatus: |
  apiEndpoints:
    k8s-001:
      advertiseAddress: 10.0.3.4
      bindPort: 6443
    k8s-002:
      advertiseAddress: 10.0.3.5
      bindPort: 6443
    k8s-003:
      advertiseAddress: 10.0.3.6
      bindPort: 6443
  apiVersion: kubeadm.k8s.io/v1beta1
  kind: ClusterStatus
...
As the ClusterStatus shows, k8s-001 is still listed. When kubeadm rejoins the cluster, it checks etcd health on every endpoint recorded here, so k8s-001 must be deleted from this ConfigMap first:
root@k8s-002:/home# kubectl edit configmaps -n kube-system kubeadm-config
Delete the following k8s-001 entry and save:

k8s-001:
  advertiseAddress: 10.0.3.4
  bindPort: 6443
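If you'd rather not edit interactively, the same three-line block can be stripped with GNU sed's `addr,+N` range. A minimal dry-run sketch over a sample of the ClusterStatus text (feeding the result back into the ConfigMap is left out):

```shell
# Sample of the ClusterStatus text as stored in the kubeadm-config ConfigMap
status='apiEndpoints:
  k8s-001:
    advertiseAddress: 10.0.3.4
    bindPort: 6443
  k8s-002:
    advertiseAddress: 10.0.3.5
    bindPort: 6443'

# GNU sed: delete the "k8s-001:" line plus the two indented lines under it
cleaned=$(printf '%s\n' "$status" | sed '/k8s-001:/,+2d')
printf '%s\n' "$cleaned"
```

Note that `addr,+N` is a GNU sed extension; on BSD sed you would have to spell out the range differently.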
In a kubeadm-built cluster where etcd was not deployed manually (i.e. kubeadm set up a stacked etcd), every control plane runs its own etcd instance. When the k8s-001 node was deleted, the etcd cluster did not automatically remove that node's member, so it has to be removed by hand.
First, inspect the etcd cluster membership.

Set up the API version and a convenience alias:
root@k8s-002:/home# export ETCDCTL_API=3
root@k8s-002:/home# alias etcdctl='etcdctl --endpoints=https://10.0.3.5:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key'
List the etcd cluster members:
root@k8s-002:/home# etcdctl member list
57b3a6dc282908df, started, k8s-003, https://10.0.3.6:2380, https://10.0.3.6:2379
58bfa292d53697d0, started, k8s-001, https://10.0.3.4:2380, https://10.0.3.4:2379
f38fd5735de92e88, started, k8s-002, https://10.0.3.5:2380, https://10.0.3.5:2379
The cluster looks healthy, but k8s-001 no longer actually exists. Attempting to join at this point fails with:
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Checking Etcd cluster health
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-001" as an annotation
error creating local etcd static pod manifest file: etcdserver: unhealthy cluster
Remove the dead member (k8s-001):
root@k8s-002:/home# etcdctl member remove 58bfa292d53697d0
Member 58bfa292d53697d0 removed from cluster f06e01da83f7000d
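When scripting this cleanup, the member ID can be pulled out of the `etcdctl member list` output by node name instead of copying it by hand. A sketch using awk over a captured sample of the comma-separated listing:

```shell
# Captured `etcdctl member list` output (fields: ID, status, name, peer URL, client URL)
members='57b3a6dc282908df, started, k8s-003, https://10.0.3.6:2380, https://10.0.3.6:2379
58bfa292d53697d0, started, k8s-001, https://10.0.3.4:2380, https://10.0.3.4:2379
f38fd5735de92e88, started, k8s-002, https://10.0.3.5:2380, https://10.0.3.5:2379'

# Grab the ID of the dead node by name, ready to pass to `etcdctl member remove`
dead_id=$(printf '%s\n' "$members" | awk -F', ' '$3 == "k8s-001" {print $1}')
echo "$dead_id"   # -> 58bfa292d53697d0
```

In a live cluster you would pipe `etcdctl member list` straight into the awk filter rather than using a captured string.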
root@k8s-002:/home# etcdctl member list
57b3a6dc282908df, started, k8s-003, https://10.0.3.6:2380, https://10.0.3.6:2379
f38fd5735de92e88, started, k8s-002, https://10.0.3.5:2380, https://10.0.3.5:2379
After k8s-001 rejoins, everything is back to normal:
root@k8s-002:/home# kubectl get pod --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   calico-node-4956t                 1/1     Running   0          128m
kube-system   calico-node-hkcmq                 1/1     Running   0          5h58m
kube-system   calico-node-lsqsg                 1/1     Running   0          5h58m
kube-system   calico-node-q2zpt                 1/1     Running   0          5h58m
kube-system   calico-node-qdg49                 1/1     Running   0          5h58m
kube-system   coredns-89cc84847-sl2s5           1/1     Running   0          6h3m
kube-system   coredns-89cc84847-x57kv           1/1     Running   0          6h3m
kube-system   etcd-k8s-001                      1/1     Running   0          39m
kube-system   etcd-k8s-002                      1/1     Running   1          3h8m
kube-system   etcd-k8s-003                      1/1     Running   0          3h7m
kube-system   kube-apiserver-k8s-001            1/1     Running   0          128m
kube-system   kube-apiserver-k8s-002            1/1     Running   1          6h1m
kube-system   kube-apiserver-k8s-003            1/1     Running   2          6h
kube-system   kube-controller-manager-k8s-001   1/1     Running   0          128m
kube-system   kube-controller-manager-k8s-002   1/1     Running   1          6h1m
kube-system   kube-controller-manager-k8s-003   1/1     Running   0          6h
kube-system   kube-proxy-5stnn                  1/1     Running   0          5h59m
kube-system   kube-proxy-92vtd                  1/1     Running   0          6h1m
kube-system   kube-proxy-sz998                  1/1     Running   0          5h59m
kube-system   kube-proxy-wp2jx                  1/1     Running   0          6h
kube-system   kube-proxy-xl5nn                  1/1     Running   0          128m
kube-system   kube-scheduler-k8s-001            1/1     Running   0          128m
kube-system   kube-scheduler-k8s-002            1/1     Running   0          6h1m
kube-system   kube-scheduler-k8s-003            1/1     Running   1          6h
root@k8s-002:/home# etcdctl member list
57b3a6dc282908df, started, k8s-003, https://10.0.3.6:2380, https://10.0.3.6:2379
f38fd5735de92e88, started, k8s-002, https://10.0.3.5:2380, https://10.0.3.5:2379
fc790bd58a364c97, started, k8s-001, https://10.0.3.4:2380, https://10.0.3.4:2379
Each time kubeadm join fails on k8s-001, run kubeadm reset to reset the node state. After the reset, before it can rejoin as a control plane, the shared certificates must be copied from the /etc/kubernetes/pki directory of a healthy control-plane node over to k8s-001.
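The copy itself can be scripted. A dry-run sketch that prints an scp command for the usual set of shared certificates and keys a joining control plane needs under kubeadm's default paths (drop the `echo` to actually copy; `NEW_MASTER` is a placeholder for k8s-001's address):

```shell
NEW_MASTER=10.0.3.4   # placeholder: the rejoining control-plane node
# Shared certs/keys a control plane needs (kubeadm default locations)
certs='ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key etcd/ca.crt etcd/ca.key'

for f in $certs; do
  # dry run: remove the echo to actually copy the files
  echo scp "/etc/kubernetes/pki/$f" "root@$NEW_MASTER:/etc/kubernetes/pki/$f"
done
```

The target directories (/etc/kubernetes/pki and /etc/kubernetes/pki/etcd) must exist on the new node before copying.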
Print the kubeadm join command:
root@master:~# kubeadm token create --print-join-command
kubeadm join your.k8s.domain:6443 --token xxxxxx.xxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Join the cluster as a worker node:
kubeadm join your.k8s.domain:6443 --token xxxxxx.xxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Join the cluster as a control plane:
kubeadm join your.k8s.domain:6443 --token xxxxxx.xxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --experimental-control-plane
Note that in 1.15+ the --experimental-control-plane flag has been replaced by --control-plane.