I rebooted three virtual machines and found that two nodes stayed stuck in the NotReady state, so I went digging. In the end the kubelet on the worker nodes had gone into an abnormal state, and a restart fixed it. Below is a repost of the troubleshooting approach.
Last night I ran a stress test against the K8S environment with 50 concurrent streams. When I checked the monitoring in the morning, the whole system seemed to have gone down after 8 PM: the master node and one worker node had turned NotReady. The main troubleshooting steps were as follows:
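(A step zero before the per-component checks: confirming the NotReady state itself from the master. These are standard kubectl commands, my addition rather than part of the original log:)
kubectl get nodes
kubectl describe node <node-name>    # the Conditions section usually names the reason for NotReady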
1. Check the kubelet status on the master
systemctl status kubelet (status normal)
2. Check the kube-proxy status on the master
systemctl status kube-proxy (status normal)
3. Check the kube-apiserver status on the master
systemctl status kube-apiserver (status normal)
4. Check the kube-scheduler status on the master
systemctl status kube-scheduler (status normal)
5. Check the etcd status on the master
systemctl status etcd (status normal)
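(Steps 1 through 5 can also be run in one shot; a small shell loop of my own, assuming all five components run as systemd services as they do in this setup:)
for svc in kubelet kube-proxy kube-apiserver kube-scheduler etcd; do
  echo "$svc: $(systemctl is-active "$svc")"
done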
6. Check the flannel status
On the kubernetes-dashboard I saw that flannel had died; its log was as follows:
Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-flannel-ds-amd64-sc7sr": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "process_linux.go:301: running exec setns process for init caused \"signal: broken pipe\"": unknown
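(The same failure is visible without the dashboard; a minimal sketch using standard kubectl commands, where the kube-system namespace and the app=flannel label are assumptions based on the usual flannel manifest, not the original post:)
kubectl -n kube-system get pods -l app=flannel -o wide
kubectl -n kube-system describe pod kube-flannel-ds-amd64-sc7sr    # the sandbox error above shows up under Events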
After analysis, the problem appears to be that flannel runs short of memory when network traffic is heavy, so the fix is to modify flannel's configuration and raise its memory allocation:
"resources": { "limits": { "cpu": "300m", "memory": "200Mi" }, "requests": { "cpu": "300m", "memory": "200Mi" } },
After the change, docker needs to be restarted and then the old flannel pods deleted; with that, the problem was solved (a command sketch follows the link below).
Original link: https://blog.csdn.net/Viogs/article/details/96114776
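(A minimal sketch of that final restart-and-delete step; the app=flannel label selector is my assumption from the usual flannel manifest:)
systemctl restart docker
kubectl -n kube-system delete pod -l app=flannel    # the DaemonSet recreates the pods with the new limits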