In previous articles we walked through installing Kubernetes with yum and from binaries. In this article we will deploy a cluster with kubeadm, the officially recommended tool.
kubeadm is the official tool for quickly installing a Kubernetes cluster. It is updated in step with every Kubernetes release, and its cluster-configuration practices are adjusted along the way, so experimenting with kubeadm is also a good way to learn the latest upstream best practices for cluster configuration.
Software versions used in this deployment:

| Software | Version |
|---|---|
| kubernetes | v1.12.2 |
| CentOS 7.5 | CentOS Linux release 7.5.1804 |
| Docker | v18.06 |
| flannel | 0.10.0 |
The node and network plan is as follows:

| IP | Role | Hostname |
|---|---|---|
| 172.18.8.200 | k8s master | master.wzlinux.com |
| 172.18.8.201 | k8s node01 | node01.wzlinux.com |
| 172.18.8.202 | k8s node02 | node02.wzlinux.com |
Disable the firewall.
```
systemctl stop firewalld
systemctl disable firewalld
```
Configure /etc/hosts and add the following entries.
```
172.18.8.200 master.wzlinux.com master
172.18.8.201 node01.wzlinux.com node01
172.18.8.202 node02.wzlinux.com node02
```
Disable SELinux.
```
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
setenforce 0
```
Disable swap.
```
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
```
Configure the bridge/forwarding kernel parameters.
```
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```
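On a minimal CentOS install these sysctl keys may only appear once the br_netfilter module is loaded; the following quick check is an extra step that is not in the original procedure:

```
# Assumption: br_netfilter may not be loaded yet on a minimal install
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load it on every boot

# Verify that the settings took effect
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```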
Configure the Aliyun mirror of the Kubernetes yum repository (useful for hosts in mainland China).
```
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Both the master and the nodes need a container runtime, so we install Docker ahead of time.
Add the Docker CE repository (via the Aliyun mirror).
```
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -P /etc/yum.repos.d/
```
Check which Docker versions are currently available in the repository.
```
[root@master ~]# yum list docker-ce.x86_64 --showduplicates | sort -r
Loaded plugins: fastestmirror
Available Packages
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
 * extras: mirrors.aliyun.com
docker-ce.x86_64        3:18.09.0-3.el7             docker-ce-stable
docker-ce.x86_64        18.06.1.ce-3.el7            docker-ce-stable
docker-ce.x86_64        18.06.0.ce-3.el7            docker-ce-stable
docker-ce.x86_64        18.03.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        18.03.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.12.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.12.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.09.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.09.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.06.2.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.06.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.06.0.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.03.3.ce-1.el7            docker-ce-stable
docker-ce.x86_64        17.03.2.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.03.1.ce-1.el7.centos     docker-ce-stable
docker-ce.x86_64        17.03.0.ce-1.el7.centos     docker-ce-stable
 * base: mirrors.aliyun.com
```
Per the official recommendation, we install Docker v18.06.
```
yum install docker-ce-18.06.1.ce -y
```
Configure a registry mirror (accelerator) for image pulls.
```
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://hdi5v8p1.mirror.aliyuncs.com"]
}
EOF
```
Start Docker.
```
systemctl daemon-reload
systemctl enable docker
systemctl start docker
```
Install kubelet, kubeadm, and kubectl on all nodes, and enable the kubelet service:

```
yum install kubelet kubeadm kubectl -y
systemctl enable kubelet && systemctl start kubelet
```
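The command above installs whatever the repository currently ships as latest, which may drift ahead of the images prepared below. To match this walkthrough exactly, pinning the versions is an option (package naming assumed from the yum repo; adjust to what `yum list --showduplicates kubeadm` reports):

```
# Optional: pin kubelet/kubeadm/kubectl to the version used in this article
yum install -y kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2
systemctl enable kubelet && systemctl start kubelet
```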
Load the ipvs kernel modules so that kube-proxy on the nodes can use ipvs proxy rules.
```
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
```
Also add them to /etc/rc.local so they are loaded at boot.
```
cat <<EOF >> /etc/rc.local
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
EOF
```
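On CentOS 7, /etc/rc.local is only executed at boot when it has the execute bit, so as a small extra step (not in the original procedure) make it executable:

```
chmod +x /etc/rc.d/rc.local
```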
Because the Google registry (k8s.gcr.io) is not reachable from mainland China, the workaround is to pull the images from another registry and re-tag them. Make sure the versions match the kubeadm version you installed; here we use v1.12.2. Running the following shell script does the job.
```bash
#!/bin/bash
kube_version=:v1.12.2
kube_images=(kube-proxy kube-scheduler kube-controller-manager kube-apiserver)
addon_images=(etcd-amd64:3.2.24 coredns:1.2.2 pause-amd64:3.1)

for imageName in ${kube_images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
  docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version k8s.gcr.io/$imageName$kube_version
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
done

for imageName in ${addon_images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

docker tag k8s.gcr.io/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker image rm k8s.gcr.io/etcd-amd64:3.2.24
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker image rm k8s.gcr.io/pause-amd64:3.1
```
If you are not sure which image versions the script should pull, you can run `kubeadm init` first, read the required versions from its error messages, and then fetch exactly those images. If kubeadm has been upgraded, simply download the images for the new version.
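If you would rather not guess at all, kubeadm can print the exact image list it expects for a given release; a quick check (the subcommand is available from kubeadm v1.11 on):

```
# List the images and tags kubeadm will look for
kubeadm config images list --kubernetes-version v1.12.2
```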
Run the script and the required images are pulled locally. Here we rely on a mirror repository that someone else maintains; you could of course host your own private registry instead.
```
[root@master ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.12.2   15e9da1ca195   4 weeks ago     96.5MB
k8s.gcr.io/kube-apiserver            v1.12.2   51a9c329b7c5   4 weeks ago     194MB
k8s.gcr.io/kube-controller-manager   v1.12.2   15548c720a70   4 weeks ago     164MB
k8s.gcr.io/kube-scheduler            v1.12.2   d6d57c76136c   4 weeks ago     58.3MB
k8s.gcr.io/etcd                      3.2.24    3cab8e1b9802   2 months ago    220MB
k8s.gcr.io/coredns                   1.2.2     367cdc8433a4   3 months ago    39.2MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   11 months ago   742kB
```
Initialize the master node with kubeadm init; the Kubernetes version must be specified explicitly.
```
kubeadm init --kubernetes-version=v1.12.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
```
```
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master.wzlinux.com localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master.wzlinux.com localhost] and IPs [172.18.8.200 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master.wzlinux.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.18.8.200]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 20.005448 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master.wzlinux.com as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master.wzlinux.com as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master.wzlinux.com" as an annotation
[bootstraptoken] using token: 3mfpdm.atgk908eq1imgwqp
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.18.8.200:6443 --token 3mfpdm.atgk908eq1imgwqp --discovery-token-ca-cert-hash sha256:ff67ead9f43931f08e67873ba00695cd4b997f87dace5255ff45fc386b08941d
```
Once the master has initialized, follow the hints in the output to configure kubectl access:
```
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
```
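As an optional sanity check before installing the network add-on, confirm that kubectl can reach the API server and that the control-plane components report healthy:

```
kubectl get nodes              # the master shows NotReady until a pod network is installed
kubectl get componentstatuses  # scheduler, controller-manager and etcd should be Healthy
```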
A pod network add-on is required so that pods can communicate with each other. The network must be deployed before any applications (and before CoreDNS will come up), and kubeadm only supports CNI-based networks. There are many network plugins to choose from, such as Calico, Canal, Flannel, Romana, and Weave Net. Since we passed `--pod-network-cidr=10.244.0.0/16` to kubeadm init, we use flannel here.
```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
```
Check whether everything started correctly; pulling the flannel image takes a while, so allow some extra time.
```
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-ptzmh                     1/1     Running   0          22m
kube-system   coredns-576cbf47c7-q78r9                     1/1     Running   0          22m
kube-system   etcd-master.wzlinux.com                      1/1     Running   0          21m
kube-system   kube-apiserver-master.wzlinux.com            1/1     Running   0          22m
kube-system   kube-controller-manager-master.wzlinux.com   1/1     Running   0          22m
kube-system   kube-flannel-ds-amd64-vqtzq                  1/1     Running   0          5m54s
kube-system   kube-proxy-ld262                             1/1     Running   0          22m
kube-system   kube-scheduler-master.wzlinux.com            1/1     Running   0          22m
```
Troubleshooting tips:

- `docker logs <container-id>` — check a container's startup log, especially for containers that keep getting re-created.
- `kubectl --namespace=kube-system describe pod POD-NAME` — inspect a pod that is stuck in an error state.
- `kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME}` — read the concrete error from a specific container.

The worker nodes likewise need to pull images such as kube-proxy and pause; they need far fewer images than the master. The following script handles it.
```bash
#!/bin/bash
kube_version=:v1.12.2
coredns_version=1.2.2
pause_version=3.1

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version k8s.gcr.io/kube-proxy$kube_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version k8s.gcr.io/pause:$pause_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version k8s.gcr.io/coredns:$coredns_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version
```
Check the downloaded images.
```
[root@node01 ~]# docker images
REPOSITORY              TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy   v1.12.2   15e9da1ca195   4 weeks ago     96.5MB
k8s.gcr.io/pause        3.1       da86e6ba6ca1   11 months ago   742kB
```
When the master initialized successfully, the end of its output included a `kubeadm join` command; that is what we use to add the worker nodes.
```
kubeadm join 172.18.8.200:6443 --token 3mfpdm.atgk908eq1imgwqp --discovery-token-ca-cert-hash sha256:ff67ead9f43931f08e67873ba00695cd4b997f87dace5255ff45fc386b08941d
```
```
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "172.18.8.200:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.18.8.200:6443"
[discovery] Requesting info from "https://172.18.8.200:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.18.8.200:6443"
[discovery] Successfully established connection with API Server "172.18.8.200:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01.wzlinux.com" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
Note: if the join command reports that the token has expired, run `kubeadm token create` on the master to generate a new one. If you have forgotten the token, you can look it up with `kubeadm token list`.
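To get both a fresh token and the complete join command in one step, this also works:

```
# Print a ready-to-paste kubeadm join command with a newly created token
kubeadm token create --print-join-command
```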
After running the join command on the nodes, check the node list on the master.
```
[root@master ~]# kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
master.wzlinux.com   Ready    master   64m   v1.12.2
node01.wzlinux.com   Ready    <none>   32m   v1.12.2
node02.wzlinux.com   Ready    <none>   15m   v1.12.2
```
You can copy the master's kubeconfig to the nodes so that kubectl can be used from them as well.
```
scp /etc/kubernetes/admin.conf 172.18.8.201:/root/.kube/config
```
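This assumes /root/.kube already exists on the node; if it does not, create it first, for example:

```
ssh 172.18.8.201 "mkdir -p /root/.kube"
scp /etc/kubernetes/admin.conf 172.18.8.201:/root/.kube/config
```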
Create a few pods to try it out.
```
[root@master ~]# kubectl run nginx --image=nginx --replicas=3
[root@master ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE
nginx-dbddb74b8-7qnsl   1/1     Running   0          27s   10.244.2.2   node02.wzlinux.com   <none>
nginx-dbddb74b8-ck4l9   1/1     Running   0          27s   10.244.1.2   node01.wzlinux.com   <none>
nginx-dbddb74b8-rpc2r   1/1     Running   0          27s   10.244.1.3   node01.wzlinux.com   <none>
```
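To confirm the pods are actually reachable across nodes, you can additionally expose the deployment as a ClusterIP Service and curl it from the master; this is an optional test, not part of the original steps:

```
# Expose the nginx deployment and request its cluster IP
kubectl expose deployment nginx --port=80 --target-port=80
CLUSTER_IP=$(kubectl get svc nginx -o jsonpath='{.spec.clusterIP}')
curl -s -o /dev/null -w "%{http_code}\n" http://$CLUSTER_IP/   # expect 200
```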
The complete architecture is illustrated in the diagram below:
To better understand the Kubernetes architecture, let's deploy an application and watch how the components work together.
```
kubectl run httpd-app --image=httpd --replicas=2
```
Check the deployed application.
```
[root@master ~]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE
httpd-app-66cb7d499b-gskrg   1/1     Running   0          59s   10.244.1.2   node01.wzlinux.com   <none>
httpd-app-66cb7d499b-km5t8   1/1     Running   0          59s   10.244.2.2   node02.wzlinux.com   <none>
```
Kubernetes created the Deployment httpd-app with two replica pods, running on node01 and node02 respectively.
The overall flow of this deployment is roughly: kubectl sends the request to the API Server, the controller manager creates the Deployment and its pods, the scheduler assigns the pods to nodes, and the kubelet on each node pulls the image and runs its pod. Two additional notes:

- The application's configuration and current state are stored in etcd; when you run `kubectl get pod`, the API Server reads that data from etcd.
- flannel assigns an IP to every pod. Because no Service has been created yet, kube-proxy is not involved at this point.

A quick way to observe this hand-off is shown below.
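One simple way to watch the hand-off is to read the events recorded on one of the pods; scheduling, image pulling, and container start all show up there (the pod name below is just an example taken from the output above):

```
# Show the event trail for one pod: Scheduled -> Pulling -> Created -> Started
kubectl describe pod httpd-app-66cb7d499b-gskrg | sed -n '/Events:/,$p'
```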
All good. At this point the cluster is fully deployed and ready for your applications.
kube-proxy support for ipvs was added in Kubernetes 1.8 and reached GA in Kubernetes 1.11.
The iptables mode is hard to troubleshoot, its performance drops noticeably as the number of rules grows, and rules can even get lost; by comparison, ipvs is much more stable.
The default installation uses iptables mode, so we need to change the configuration to enable ipvs.
Make sure the ipvs kernel modules are loaded:

```
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
```
Edit the kube-proxy ConfigMap:

```
kubectl edit configmap kube-proxy -n kube-system
```
Find the following section and set mode to "ipvs".
```yaml
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
```
The mode field is empty by default, which means iptables mode; change it to ipvs. The scheduler field is also empty by default, which means the round-robin (rr) load-balancing algorithm is used.
Then delete the existing kube-proxy pod so it is re-created with the new configuration (replace kube-proxy-xxx with the actual pod name):

```
kubectl delete pod kube-proxy-xxx -n kube-system
```
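Deleting the pods one by one works, but you can also delete them all by label so the DaemonSet recreates every replica with the new configuration (assuming the standard k8s-app=kube-proxy label):

```
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
```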
Check the log of the new kube-proxy pod to confirm that the ipvs proxier is in use:

```
[root@master ~]# kubectl logs kube-proxy-t4t8j -n kube-system
I1211 03:43:01.297068       1 server_others.go:189] Using ipvs Proxier.
W1211 03:43:01.297549       1 proxier.go:365] IPVS scheduler not specified, use rr by default
I1211 03:43:01.297698       1 server_others.go:216] Tearing down inactive rules.
I1211 03:43:01.355516       1 server.go:464] Version: v1.13.0
I1211 03:43:01.366922       1 conntrack.go:52] Setting nf_conntrack_max to 196608
I1211 03:43:01.367294       1 config.go:102] Starting endpoints config controller
I1211 03:43:01.367304       1 config.go:202] Starting service config controller
I1211 03:43:01.367327       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1211 03:43:01.367343       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1211 03:43:01.467475       1 controller_utils.go:1034] Caches are synced for service config controller
I1211 03:43:01.467485       1 controller_utils.go:1034] Caches are synced for endpoints config controller
```
Use ipvsadm to inspect the ipvs rules; if the command is not available, install it with yum.
```
yum install -y ipvsadm
```
```
[root@master ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 172.18.8.200:6443            Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0
  -> 10.244.0.5:53                Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0
  -> 10.244.0.5:53                Masq    1      0          0
```
For reference, these are the kubeconfig files that kubeadm generated under /etc/kubernetes/. The base64-encoded key material would take up too much space, so it is replaced with the placeholder <key data omitted> below.
admin.conf (the kubernetes-admin user):

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key data omitted>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <key data omitted>
    client-key-data: <key data omitted>
```
controller-manager.conf (the system:kube-controller-manager user):

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key data omitted>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: <key data omitted>
    client-key-data: <key data omitted>
```
kubelet.conf (the system:node:master.wzlinux.com user):

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key data omitted>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:master.wzlinux.com
  name: system:node:master.wzlinux.com@kubernetes
current-context: system:node:master.wzlinux.com@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:master.wzlinux.com
  user:
    client-certificate-data: <key data omitted>
    client-key-data: <key data omitted>
```
scheduler.conf (the system:kube-scheduler user):

```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key data omitted>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-scheduler
  name: system:kube-scheduler@kubernetes
current-context: system:kube-scheduler@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-scheduler
  user:
    client-certificate-data: <key data omitted>
    client-key-data: <key data omitted>
```
Reference: https://kubernetes.io/docs/setup/independent/install-kubeadm/