k8s 1.12.1 pitfalls and workarounds
gcr.io is blocked, so you have to pull the images from a mirror yourself and then retag them. Which images exactly? kubeadm config images list will show them.
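The pull-and-retag step can be sketched as a small loop. The mirror registry name (mirrorgcrio) and the image names below are placeholder assumptions, not from the original post; substitute your own mirror and the output of kubeadm config images list:

```shell
#!/bin/sh
# Sketch of the pull-and-retag workaround, assuming a hypothetical
# mirror registry "mirrorgcrio" that hosts copies of the gcr.io images.
# mirror_to_gcr prints the docker commands instead of running them;
# pipe its output to sh to execute for real.
mirror_to_gcr() {
  for img in "$@"; do
    echo "docker pull mirrorgcrio/$img"
    echo "docker tag mirrorgcrio/$img k8s.gcr.io/$img"
  done
}

# example image names (verify against `kubeadm config images list`):
mirror_to_gcr kube-apiserver:v1.12.1 pause:3.1
```

Printing the commands first makes it easy to review what will be pulled before running `mirror_to_gcr ... | sh`.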
My own builds are all at https://github.com/FingerLiu/... ; if you need them, you can also just use the script in that repo:
wget -O - https://raw.githubusercontent.com/FingerLiu/k8s.gcr.io/master/pull.sh | bash
When reinstalling after kubeadm reset, the .kube directory is not cleared, but the keys have been regenerated, so the old keys and secrets no longer match. The fix is to empty the .kube directory and copy /etc/kubernetes/admin.conf over.
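The reset step above can be sketched as a small helper; the paths in the usage comment are the kubeadm defaults, and copying /etc/kubernetes/admin.conf may require sudo:

```shell
#!/bin/sh
# reset_kubeconfig: wipe the stale kubeconfig directory left behind by a
# previous install and copy the freshly generated admin config into place.
reset_kubeconfig() {
  kube_dir="$1"    # e.g. "$HOME/.kube"
  admin_conf="$2"  # e.g. /etc/kubernetes/admin.conf
  rm -rf "$kube_dir"
  mkdir -p "$kube_dir"
  cp "$admin_conf" "$kube_dir/config"
}

# usage (on the master, possibly with sudo for the cp):
# reset_kubeconfig "$HOME/.kube" /etc/kubernetes/admin.conf
```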
Pods pending with "network not ready": install the matching version of flannel. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
By default k8s does not allow scheduling workloads onto the master node; to force-allow it, remove the taint: kubectl taint nodes --all node-role.kubernetes.io/master-
This is a bug in kubelet itself; it can be ignored.
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
That's because you don't have permission to deploy Tiller; add a service account for it:
kubectl create serviceaccount --namespace kube-system tiller
serviceaccount "tiller" created
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding "tiller-cluster-rule" created
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
deployment "tiller-deploy" patched
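For readability, the inline payload in the patch command above is the following merge patch, which points the Deployment's pod spec at the tiller service account:

```json
{
  "spec": {
    "template": {
      "spec": {
        "serviceAccount": "tiller"
      }
    }
  }
}
```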
Then run the commands below to check it:
helm list
helm repo update