kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated in step with every Kubernetes release, and with each release kubeadm adjusts some of its cluster-configuration practices, so experimenting with kubeadm is a good way to learn the latest official best practices for cluster configuration.
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.246 k8s-master
192.168.0.247 k8s-node1
192.168.0.248 k8s-node2
If the firewall is enabled on any of the hosts, the ports required by the various Kubernetes components must be opened; see the "Check required ports" section of Installing kubeadm. For simplicity, disable the firewall on each node here:
systemctl stop firewalld
systemctl disable firewalld
Disable SELinux:
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Run the following commands to apply the changes:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
Since IPVS has been merged into the mainline kernel, enabling IPVS mode for kube-proxy requires the following kernel modules to be loaded first:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
Run the following script on all Kubernetes nodes (node1 and node2):
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script above creates /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the kernel modules have been loaded correctly.
Next, make sure the ipset package is installed on each node (yum install ipset). To make it easier to inspect the IPVS proxy rules, it is also a good idea to install the management tool ipvsadm (yum install ipvsadm).
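For example, both packages can be installed in one step on each node:

yum install -y ipset ipvsadm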
If these prerequisites are not met, kube-proxy will fall back to iptables mode even if IPVS mode is enabled in its configuration.
Kubernetes has used the CRI (Container Runtime Interface) since version 1.6. The default container runtime is still Docker, via the dockershim CRI implementation built into the kubelet.
Install the Docker yum repository:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Check the available Docker versions:
yum list docker-ce.x86_64 --showduplicates | sort -r
docker-ce.x86_64            3:18.09.7-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.6-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.5-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.4-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.3-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.2-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.1-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64            18.06.3.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.2.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
Kubernetes 1.15 currently supports Docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09. Here we install Docker 18.09.7 on each node.
yum makecache fast
yum install -y --setopt=obsoletes=0 \
  docker-ce-18.09.7-3.el7
systemctl start docker
systemctl enable docker
Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:
iptables -nvL
Chain INPUT (policy ACCEPT 263 packets, 19209 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0  docker0  0.0.0.0/0            0.0.0.0/0
1.4 Change the Docker cgroup driver to systemd
According to the CRI installation documentation, on Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver makes the server nodes more stable under resource pressure. Therefore, change the Docker cgroup driver on each node to systemd.
Create or edit /etc/docker/daemon.json:
{ "exec-opts": ["native.cgroupdriver=systemd"] }
Restart Docker:
systemctl restart docker
docker info | grep Cgroup
Cgroup Driver: systemd
2.1 Install kubeadm and kubelet
Install kubeadm and kubelet on each node:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast
yum install -y kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2
...
Installed:
  kubeadm.x86_64 0:1.15.0-0    kubectl.x86_64 0:1.15.0-0    kubelet.x86_64 0:1.15.0-0

Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-4.el7          cri-tools.x86_64 0:1.12.0-0                    kubernetes-cni.x86_64 0:0.7.5-0
  libnetfilter_cthelper.x86_64 0:1.0.0-9.el7    libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7    libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
The install output shows that cri-tools, kubernetes-cni and socat were also installed as dependencies:
Since Kubernetes 1.14, the CNI dependency has been bumped to version 0.7.5.
socat is a dependency of the kubelet.
cri-tools is the command-line tool for the CRI (Container Runtime Interface).
Running kubelet --help shows that most of kubelet's original command-line flags are now DEPRECATED, for example:
......
--address 0.0.0.0   The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
......
Instead, the official recommendation is to use --config to point at a configuration file and set those options there; see Set Kubelet parameters via a config file for details. Kubernetes does this in order to support Dynamic Kubelet Configuration; see Reconfigure a Node's Kubelet in a Live Cluster.
The kubelet configuration file must be in JSON or YAML format; see the documentation above for details.
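As an illustration only (not a file used in this installation), a minimal kubelet configuration file could look like the following; the field values are assumptions chosen for the example:

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0        # replaces the deprecated --address flag
cgroupDriver: systemd   # matches the Docker cgroup driver configured earlier
failSwapOn: true        # refuse to start while swap is enabled (the default behavior)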
Since Kubernetes 1.8, the kubelet requires swap to be disabled; with the default configuration it will not start otherwise. Disable swap as follows:
swapoff -a
Edit /etc/fstab and comment out the automatic swap mount, then use free -m to confirm swap is off. Also tune the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:
vm.swappiness=0
Run the following command to apply the change:

sysctl -p /etc/sysctl.d/k8s.conf
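A minimal sketch for commenting out the swap entry in /etc/fstab non-interactively, assuming the active swap line is not already commented:

sed -ri '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab   # comment out any active swap mount
free -m                                          # confirm that Swap now shows 0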
Enable the kubelet service to start on boot on each node:
systemctl enable kubelet.service
Before initializing the master, make sure /etc/sysconfig/kubelet contains:
KUBELET_EXTRA_ARGS=--fail-swap-on=false
Initialize the master node:

kubeadm init \
  --apiserver-advertise-address=192.168.0.246 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.13.3 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
The key parts of the output are:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.246:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e
Run the following commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Finally, the output prints the command for joining nodes to the cluster: kubeadm join 192.168.0.246:6443 --token 4qcl2f.gtl3h8e5kjltuo0r --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e
Check the cluster status and confirm that every component is healthy:
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
Next, install the flannel network add-on:
mkdir -p ~/k8s/
cd ~/k8s
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Note that the flannel image referenced in kube-flannel.yml is version 0.11.0: quay.io/coreos/flannel:v0.11.0-amd64.
If the image pull fails, pull it manually on every node:
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
Use kubectl get pod --all-namespaces -o wide to make sure all Pods are in the Running state.
kubectl get pod -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-dr8lf        1/1     Running   0          52m
coredns-5c98db65d4-lp8dg        1/1     Running   0          52m
etcd-node1                      1/1     Running   0          51m
kube-apiserver-node1            1/1     Running   0          51m
kube-controller-manager-node1   1/1     Running   0          51m
kube-flannel-ds-amd64-mm296     1/1     Running   0          44s
kube-proxy-kchkf                1/1     Running   0          52m
kube-scheduler-node1            1/1     Running   0          51m
Next, add the node hosts to the Kubernetes cluster. On each node, run:
kubeadm join 192.168.0.246:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e
The nodes join the cluster without trouble. Now run the following on the master to list the nodes in the cluster:
kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   57m   v1.15.2
k8s-node1    Ready    <none>   11s   v1.15.2
k8s-node2    Ready    <none>   11s   v1.15.2
Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":
kubectl edit cm kube-proxy -n kube-system
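After the edit, the relevant fragment of config.conf in the ConfigMap should look roughly like this (only the mode field is changed; everything else keeps its existing value):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
...
mode: "ipvs"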
Then restart the kube-proxy pods on each node:
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-7fsrg   1/1     Running   0          3s
kube-proxy-k8vhm   1/1     Running   0          9s
kubectl logs kube-proxy-7fsrg -n kube-system
I0703 04:42:33.308289       1 server_others.go:170] Using ipvs Proxier.
W0703 04:42:33.309074       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I0703 04:42:33.309831       1 server.go:534] Version: v1.15.0
I0703 04:42:33.320088       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0703 04:42:33.320365       1 config.go:96] Starting endpoints config controller
I0703 04:42:33.320393       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0703 04:42:33.320455       1 config.go:187] Starting service config controller
I0703 04:42:33.320470       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0703 04:42:33.420899       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0703 04:42:33.420969       1 controller_utils.go:1036] Caches are synced for service config controller
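Since ipvsadm was installed earlier, the IPVS rules created by kube-proxy can also be inspected directly on any node:

ipvsadm -ln   # list IPVS virtual services and their real-server backends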
More and more companies and teams are adopting Helm, the Kubernetes package manager, and we will use Helm here as well to install common Kubernetes components.
Helm consists of the helm command-line client and the server-side tiller component, and installing it is straightforward. Download the helm command-line tool to /usr/local/bin on the master node (node1); version 2.14.1 is used here:
curl -O https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
tar -zxvf helm-v2.14.1-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/
To install the server-side tiller, this machine must also have kubectl and a kubeconfig file configured, so that kubectl can reach the apiserver and work normally. The master node here already has kubectl configured.
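For example, a quick check that kubectl is working on this machine:

kubectl get nodes   # should list the cluster nodes without errors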
Because the Kubernetes APIServer has RBAC access control enabled, we need to create the service account tiller that tiller will use and grant it an appropriate role; see Role-based Access Control in the Helm documentation for details. For simplicity, bind the built-in cluster-admin ClusterRole to it directly. Create helm-rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
kubectl create -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
Next, deploy tiller with helm:
helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
By default, tiller is deployed in the kube-system namespace of the cluster:
kubectl get pod -n kube-system -l app=helm
NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-c4fd4cd68-dwkhv   1/1     Running   0          83s
helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
If the tiller image pull fails, pull it manually on all nodes:
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1 gcr.io/kubernetes-helm/tiller:v2.14.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1
Finally, on the master, switch the helm chart repository to the mirror provided by Azure (an Alibaba Cloud mirror can also be used):
helm repo add stable http://mirror.azure.cn/kubernetes/charts
"stable" has been added to your repositories
helm repo list
NAME    URL
stable  http://mirror.azure.cn/kubernetes/charts
local   http://127.0.0.1:8879/charts
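After switching the repository, refresh the local chart index so that the helm install commands below use the mirror:

helm repo update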
3.3 Deploy the dashboard with Helm
kubernetes-dashboard.yaml:
image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
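The hosts list is truncated in the values file above. A complete file would look roughly like the following, where k8s-dashboard.example.com is a placeholder hostname, not a value from the original article:

image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s-dashboard.example.com   # placeholder hostname, replace with your own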
helm install stable/kubernetes-dashboard \
-n kubernetes-dashboard \
--namespace kube-system \
-f kubernetes-dashboard.yaml
kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-pkm2s kubernetes.io/service-account-token 3 3m7s
kubectl describe -n kube-system secret/kubernetes-dashboard-token-pkm2s
Name: kubernetes-dashboard-token-pkm2s
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard
kubernetes.io/service-account.uid: 2f0781dd-156a-11e9-b0f0-080027bb7c43
Type: kubernetes.io/service-account-token
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1wa20ycyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjJmMDc4MWRkLTE1NmEtMTFlOS1iMGYwLTA4MDAyN2JiN2M0MyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.24ad6ZgZMxdydpwlmYAiMxZ9VSIN7dDR7Q6-RLW0qC81ajXoQKHAyrEGpIonfld3gqbE0xO8nisskpmlkQra72-9X6sBPoByqIKyTsO83BQlME2sfOJemWD0HqzwSCjvSQa0x-bUlq9HgH2vEXzpFuSS6Svi7RbfzLXlEuggNoC4MfA4E2hF1OXml8iAKx-49y1BQQe5FGWyCyBSi1TD-ZpVs44H5gIvsGK2kcvi0JT4oHXtWjjQBKLIWL7xxyRCSE4HmUZT2StIHnOwlX7IEIB0oBX4mPg2_xNGnqwcu-8OERU9IoqAAE2cZa0v3b5O2LMcJPrcxrVOukvRIumA
Use the token above to log in on the dashboard login page.
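If no ingress hostname is reachable yet, one way to test the dashboard is a local port-forward. This is a sketch only; it assumes the chart created a Service named kubernetes-dashboard listening on port 443:

kubectl -n kube-system port-forward svc/kubernetes-dashboard 8443:443
# then open https://localhost:8443 in a browser and paste the token above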
3.4 Deploy metrics-server with Helm
Heapster's GitHub page (https://github.com/kubernetes/heapster) shows that Heapster has been DEPRECATED; see the heapster deprecation timeline there. Heapster has been removed from the various Kubernetes install scripts since Kubernetes 1.12.
Kubernetes now recommends metrics-server instead. Here we also deploy metrics-server with Helm.
metrics-server.yaml:
args:
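The args list is truncated above. A commonly used set of values for this kind of lab setup is sketched below; these are real metrics-server flags, but what the original file contained is an assumption:

args:
  - --logtostderr
  - --kubelet-insecure-tls                        # skip kubelet TLS verification (lab clusters)
  - --kubelet-preferred-address-types=InternalIP  # reach kubelets via their node InternalIP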
helm install stable/metrics-server \
-n metrics-server \
--namespace kube-system \
-f metrics-server.yaml
The following commands retrieve basic metric information about the cluster nodes:
If the image pull fails, pull it manually:
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5 gcr.io/google_containers/metrics-server-amd64:v0.3.5
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5
kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node1   650m         32%    1276Mi          73%
node2   73m          3%     527Mi           30%
kubectl top pod -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)
coredns-5c98db65d4-dr8lf                8m           7Mi
coredns-5c98db65d4-lp8dg                6m           8Mi
etcd-node1                              44m          46Mi
kube-apiserver-node1                    74m          295Mi
kube-controller-manager-node1           35m          50Mi
kube-flannel-ds-amd64-7lwm9             2m           8Mi
kube-flannel-ds-amd64-mm296             5m           9Mi
kube-proxy-7fsrg                        1m           11Mi
kube-proxy-k8vhm                        3m           11Mi
kube-scheduler-node1                    8m           15Mi
kubernetes-dashboard-848b8dd798-c4sc2   2m           14Mi
metrics-server-8456fb6676-fwh2t         10m          19Mi
tiller-deploy-7bf78cdbf7-9q94c          1m           16Mi
Unfortunately, the Kubernetes Dashboard does not yet support metrics-server, so if metrics-server replaces Heapster, the dashboard can no longer graph Pod memory and CPU usage. (In practice this is not very important: we already monitor each Pod in the cluster with Prometheus and Grafana, so viewing Pod memory and CPU in the dashboard matters little.) There are many discussions about this on the Dashboard GitHub, such as https://github.com/kubernetes/dashboard/issues/2986, and Dashboard plans to support metrics-server at some point in the future. Since metrics-server and the metrics pipeline are clearly the future direction for Kubernetes monitoring, metrics-server is still the recommended choice.
For reference, here is the full set of manual image pulls used in this article, pulling each image from a domestic mirror and re-tagging it with its original name:

docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.0
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.0 k8s.gcr.io/kube-apiserver:v1.15.0
docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.0
docker pull registry.aliyuncs.com/google_containers/pause:3.1
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi registry.aliyuncs.com/google_containers/pause:3.1
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0 k8s.gcr.io/kube-controller-manager:v1.15.0
docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.0
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.0 k8s.gcr.io/kube-scheduler:v1.15.0
docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.0
docker pull registry.aliyuncs.com/google_containers/coredns:1.3.1
docker tag registry.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi registry.aliyuncs.com/google_containers/coredns:1.3.1
docker pull registry.aliyuncs.com/google_containers/etcd:3.3.10
docker tag registry.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker rmi registry.aliyuncs.com/google_containers/etcd:3.3.10
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1 gcr.io/kubernetes-helm/tiller:v2.14.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.15.0
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.0 k8s.gcr.io/kube-proxy:v1.15.0
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.15.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5 k8s.gcr.io/metrics-server-amd64:v0.3.5
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.5