Before upgrading, you need to understand the relationship between the versions involved.
Start by checking the command-line help:
$ kubeadm upgrade -h
Upgrade your cluster smoothly to a newer version with this command.

Usage:
  kubeadm upgrade [flags]
  kubeadm upgrade [command]

Available Commands:
  apply       Upgrade your Kubernetes cluster to the specified version.
  diff        Show what differences would be applied to existing static pod manifests. See also: kubeadm upgrade apply --dry-run
  node        Upgrade commands for a node in the cluster. Currently only supports upgrading the configuration, not the kubelet itself.
  plan        Check which versions are available to upgrade to and validate whether your current cluster is upgradeable. To skip the internet check, pass in the optional [version] parameter.
Of these, the node subcommand in turn supports the following subcommands and flags:
$ kubeadm upgrade node -h
Upgrade commands for a node in the cluster. Currently only supports upgrading the configuration, not the kubelet itself.

Usage:
  kubeadm upgrade node [flags]
  kubeadm upgrade node [command]

Available Commands:
  config                       Downloads the kubelet configuration from the cluster ConfigMap kubelet-config-1.X, where X is the minor version of the kubelet.
  experimental-control-plane   Upgrades the control plane instance deployed on this node. IMPORTANT. This command should be executed after executing `kubeadm upgrade apply` on another control plane instance

Flags:
  -h, --help   help for node

Global Flags:
      --log-file string   If non-empty, use this log file
      --rootfs string     [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers      If true, avoid header prefixes in the log messages
  -v, --v Level           number for the log level verbosity
Notes on the environment for this walkthrough:
The cluster in this environment was created with kubeadm and is running version 1.13.1, so this exercise upgrades it to 1.14.0.
First, work on the first control-plane node, i.e. the primary control plane:
1. Confirm the cluster version before the upgrade:
root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
2. Find the versions available to upgrade to:
apt update
apt-cache policy kubeadm
# find the latest 1.14 version in the list
# it should look like 1.14.x-00, where x is the latest patch
     1.14.0-00 500
        500 http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
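If several 1.14 patch releases are listed, picking the newest one can be scripted; a minimal sketch using `sort -V` (the version list below is an invented example, not real repository output):

```shell
# Sort candidate package versions and take the newest 1.14.x patch.
# The list here is a hard-coded example for illustration.
versions="1.14.0-00
1.14.2-00
1.14.1-00"
latest=$(printf '%s\n' "$versions" | sort -V | tail -n 1)
echo "$latest"
```

`sort -V` orders version strings numerically by component, so `tail -n 1` yields the newest patch in the list.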
3. Upgrade kubeadm to 1.14.0 first
# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm kubelet && \
apt-get update && apt-get install -y kubeadm=1.14.0-00 && \
apt-mark hold kubeadm
When upgrading kubeadm to 1.14 as above, Ubuntu may automatically upgrade kubelet to the latest available version (1.16.0 at the time), so upgrade kubelet explicitly at the same time:
apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00
If this does happen and leaves kubeadm and kubelet at mismatched versions, the later cluster-upgrade steps will fail. In that case, remove kubeadm and kubelet:
apt-get remove kubelet kubeadm
Then reinstall the intended versions:
apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00
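After reinstalling, kubeadm and kubelet should report the same version before you proceed. A sketch of that check; the two version strings are hard-coded examples here, but on a real node you would fill them from `kubeadm version -o short` and `kubelet --version`:

```shell
# Compare the kubeadm and kubelet versions before continuing.
# Hard-coded example values; populate from the real binaries in practice.
kubeadm_ver="v1.14.0"
kubelet_ver="v1.14.0"
if [ "$kubeadm_ver" = "$kubelet_ver" ]; then
  echo "versions match: $kubeadm_ver"
else
  echo "mismatch: kubeadm=$kubeadm_ver kubelet=$kubelet_ver" >&2
fi
```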
Confirm kubeadm is now at the expected version:
root@k8s-master:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:51:21Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master:~#
4. Run the upgrade plan command to check whether the cluster can be upgraded and which versions it can be upgraded to.
kubeadm upgrade plan
The output looks like this:
root@k8s-master:~# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.14.0

Awesome, you're up-to-date! Enjoy!
This tells you the cluster can be upgraded.
5. Upgrade the control-plane components, including etcd.
root@k8s-master:~# kubeadm upgrade apply v1.14.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.14.0"
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.14.0
## after entering y to confirm, the upgrade begins
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
...
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.14.0"...
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
Static pod: kube-controller-manager-k8s-master hash: 31a4d945c251e62ac94e215494184514
Static pod: kube-scheduler-k8s-master hash: fefab66bc5a8a35b1f328ff4f74a8477
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests696355120"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master hash: b5fdfbe9ab5d3cc91000d2734dd669ca
...
Static pod: kube-apiserver-k8s-master hash: bb799a8d323c1577bf9e10ede7914b30
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master hash: 31a4d945c251e62ac94e215494184514
Static pod: kube-controller-manager-k8s-master hash: 54146492ed90bfa147f56609eee8005a
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-10-03-20-30-46/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master hash: fefab66bc5a8a35b1f328ff4f74a8477
Static pod: kube-scheduler-k8s-master hash: 58272442e226c838b193bbba4c44091e
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.3.1.20]
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
root@k8s-master:~#
The last two lines show the cluster upgrade succeeded.
kubeadm upgrade apply checks that the cluster is healthy and upgradeable, upgrades the control-plane static Pods (including etcd), and writes the new kubelet configuration ConfigMap. Note that up to and including v1.16, kubeadm upgrade apply must be run on the primary control-plane node.
6. After it finishes, verify the cluster version:
root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Note that while the kubectl client is still at 1.13.1, the server-side control plane has been upgraded to 1.14.0.
The master components are running normally:
root@k8s-master:~# kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
At this point the master components on the first control-plane node are upgraded. A control-plane node typically also runs kubelet and kubectl, so those need upgrading too.
7. Upgrade the CNI plugin.

This step is optional; check whether your CNI provider has an upgrade available.
8. Upgrade kubelet and kubectl on this control-plane node
Now kubelet can be upgraded; doing so does not affect running workload Pods.
8.1. Upgrade kubelet and kubectl
# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.14.0-00 kubectl=1.14.0-00 && \
apt-mark hold kubelet kubectl
8.2. Restart kubelet:
sudo systemctl restart kubelet
9. Check the kubectl version; it now matches expectations.
root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-master:~#
The first control-plane node is now fully upgraded.
10. Upgrade the other control-plane nodes.
On the other control-plane nodes, follow the same procedure as on the first node, but use:
sudo kubeadm upgrade node experimental-control-plane
instead of:
sudo kubeadm upgrade apply
Running sudo kubeadm upgrade plan again is unnecessary.
kubeadm upgrade node experimental-control-plane reads the cluster configuration and upgrades the control-plane components deployed on that node, much as apply did on the first node.
Now upgrade the components on the worker nodes: kubeadm, kubelet, and kube-proxy.
Do this one node at a time so cluster access is not affected.
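The one-node-at-a-time order can be sketched as a loop; here each step is merely printed so the sequence is visible (node names are examples, and you would run the real commands instead of the echos):

```shell
# Per-node upgrade order: drain, upgrade, uncordon, then the next node.
# Commands are echoed for illustration only; node names are examples.
NODES="k8s-node01"
for NODE in $NODES; do
  echo "kubectl drain $NODE --ignore-daemonsets"
  echo "  ... upgrade kubeadm/kubelet on $NODE ..."
  echo "kubectl uncordon $NODE"
done
```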
1. Mark the node as unschedulable (maintenance).
The worker node is still at the original 1.13:
root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   292d   v1.14.0
k8s-node01   Ready    node     292d   v1.13.1
Before upgrading the node, mark it unschedulable and evict all its Pods:
kubectl drain $NODE --ignore-daemonsets
2. Upgrade kubeadm and kubelet
Install kubeadm and kubelet on each worker node in the same way, since kubeadm is used to upgrade the kubelet configuration.
# replace x in 1.14.x-00 with the latest patch version
apt-mark unhold kubeadm kubelet && \
apt-get update && apt-get install -y kubeadm=1.14.0-00 kubelet=1.14.0-00 && \
apt-mark hold kubeadm kubelet
3. Upgrade the kubelet configuration file
$ kubeadm upgrade node config --kubelet-version v1.14.0
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
root@k8s-master:~#
4. Restart kubelet
$ sudo systemctl restart kubelet
5. Finally, mark the node schedulable again to bring it back into service
kubectl uncordon $NODE
The node upgrade is now complete; kubelet and kube-proxy are at the expected version v1.14.0:
root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   292d   v1.14.0
k8s-node01   Ready    node     292d   v1.14.0
The STATUS column should show Ready for every node, and the version numbers are updated.
With that, the whole upgrade procedure is complete.
If kubeadm upgrade fails and cannot roll back (for example because of an unexpected shutdown during execution), you can simply run kubeadm upgrade again. The command is idempotent and eventually converges the actual state to the state you declared.
To recover from a bad state without changing the version the cluster is running, run:

kubeadm upgrade apply --force
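Because the command is idempotent, a simple retry loop is safe. A sketch of that control flow; the real kubeadm call is stubbed with a function that fails once, purely to make the loop observable:

```shell
# Retry an idempotent upgrade command until it succeeds.
# run_upgrade is a stub standing in for `kubeadm upgrade apply ...`:
# it fails on the first attempt and succeeds on the second.
attempt=0
run_upgrade() {
  attempt=$((attempt + 1))
  [ "$attempt" -ge 2 ]
}
until run_upgrade; do
  echo "kubeadm upgrade failed, retrying..."
done
echo "succeeded on attempt $attempt"
```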
See the official upgrade documentation for more details.
Upgrading from 1.14.0 to 1.15.0 follows essentially the same procedure as the 1.13 to 1.14.0 upgrade above; only the upgrade commands differ slightly.
1. Query the available versions and install kubeadm at the target version v1.15.0
apt-cache policy kubeadm
apt-mark unhold kubeadm kubelet
apt-get install -y kubeadm=1.15.0-00
kubeadm is now at the expected version:
root@k8s-master:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
2. Run the upgrade plan
Starting with v1.15, kubeadm renews certificates during a control-plane upgrade: the kubeadm upgrade shipped in v1.15 automatically renews the certificates it manages on that node. If you do not want automatic certificate renewal, add the --certificate-renewal=false flag.
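For reference, the full command with renewal disabled looks like the following. It is composed as a string here only so the example can be shown end to end; on a real control plane you would run the command directly:

```shell
# Upgrade to v1.15.0 without auto-renewing kubeadm-managed certificates.
# Composed as a string for illustration; run it directly in practice.
cmd="kubeadm upgrade apply v1.15.0 --certificate-renewal=false"
echo "$cmd"
```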
Run the plan:
kubeadm upgrade plan
You should see output like the following:
root@k8s-master:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0
I1005 20:45:04.474363   38108 version.go:248] remote version is much newer: v1.16.1; falling back to: stable-1.15
[upgrade/versions] Latest stable version: v1.15.4
[upgrade/versions] Latest version in the v1.14 series: v1.14.7

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.14.0   v1.14.7
            1 x v1.15.0   v1.14.7

Upgrade to the latest version in the v1.14 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.0   v1.14.7
Controller Manager   v1.14.0   v1.14.7
Scheduler            v1.14.0   v1.14.7
Kube Proxy           v1.14.0   v1.14.7
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.14.7

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.14.0   v1.15.4
            1 x v1.15.0   v1.15.4

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.14.0   v1.15.4
Controller Manager   v1.14.0   v1.15.4
Scheduler            v1.14.0   v1.15.4
Kube Proxy           v1.14.0   v1.15.4
CoreDNS              1.3.1     1.3.1
Etcd                 3.3.10    3.3.10

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.15.4

Note: Before you can perform this upgrade, you have to update kubeadm to v1.15.4.

_____________________________________________________________________
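The note at the end of the plan reflects kubeadm's version constraint: it can only take the cluster up to its own version. A sketch of that check, with hard-coded example minor versions:

```shell
# kubeadm cannot apply a target version newer than itself.
# Example minor versions; in practice parse them from `kubeadm version`
# and from the version passed to `kubeadm upgrade apply`.
kubeadm_minor=15
target_minor=15
if [ "$target_minor" -le "$kubeadm_minor" ]; then
  echo "target version allowed"
else
  echo "install a newer kubeadm first" >&2
fi
```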
3. Upgrade the control plane
Following the plan's guidance, upgrade the control plane:
kubeadm upgrade apply v1.15.0
Since the installed kubeadm is v1.15.0, the cluster can only be taken to v1.15.0.
It outputs:
root@k8s-master:~# kubeadm upgrade apply v1.15.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.15.0"
[upgrade/versions] Cluster version: v1.14.0
[upgrade/versions] kubeadm version: v1.15.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
...
## pulling images
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
...
## images for all components pulled
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
...
## all certificates are renewed automatically, as below
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests353124264"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
...
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.15.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
4. Verify the successful upgrade.
The upgrade succeeded; query the core component versions again:
root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Check the node versions:
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   295d   v1.14.0
k8s-node01   Ready    node     295d   v1.14.0
5. Upgrade kubelet and kubectl on this control-plane node
The core control-plane components are now at v1.15.0; next, upgrade kubelet and kubectl on this node. This does not affect running workload Pods.
# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.0-00 kubectl=1.15.0-00 && \
apt-mark hold kubelet kubectl
6. Restart kubelet:
sudo systemctl restart kubelet
7. Verify that the kubelet and kubectl versions match expectations.
root@k8s-master:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Check the node versions:
root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   295d   v1.15.0
k8s-node01   Ready    node     295d   v1.14.0
The commands for upgrading the three components on the other control-plane nodes differ slightly:

1. Upgrade the other control-plane components, but using the following command:
$ sudo kubeadm upgrade node
2. Then upgrade kubelet and kubectl.
# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \
apt-mark hold kubelet kubectl
3. Restart kubelet
$ sudo systemctl restart kubelet
Upgrading the worker nodes is the same as before, so it is only summarized here.
Run the following on every worker node.
1. Upgrade kubeadm:
# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.15.x-00 && \
apt-mark hold kubeadm
Check the kubeadm version:
root@k8s-node01:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
2. Cordon the node (maintenance mode):
kubectl cordon $NODE
3. Update the kubelet configuration file
$ sudo kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
4. Upgrade kubelet and kubectl.
# replace x in 1.15.x-00 with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.15.x-00 kubectl=1.15.x-00 && \
apt-mark hold kubelet kubectl
5. Restart kubelet
sudo systemctl restart kubelet
kube-proxy is upgraded and restarted automatically at this point.
6. Uncordon the node
kubectl uncordon $NODE
The node upgrade is complete.
root@k8s-master:~# kubectl get node
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   Ready      master   295d   v1.15.0
k8s-node01   NotReady   node     295d   v1.15.0
In this upgrade flow, both the other control-plane nodes and the worker nodes use kubeadm upgrade node.
When run on another control-plane node, kubeadm upgrade node upgrades the control-plane components there; when run on a worker node, it only downloads and applies the new kubelet configuration.
Upgrading from 1.15.x to 1.16.x uses exactly the same commands as the 1.14.x to 1.15.x upgrade above, so it is not repeated here.