(1) Deploying a Kubernetes Cluster with kubeadm

Kubernetes can be installed in roughly the following ways:

  1. Direct yum install: `yum install kubernetes -y`. This is the fastest, but the packaged version is old.
  2. Download from GitHub and deploy with Ansible, which requires some familiarity with Ansible.
  3. Install every component one by one on each machine (Master/Node). Use with caution: it is very complicated, error-prone, and may not work in the end.
  4. Use an installation tool such as kubeadm, which is the method described below.

Environment:

Master:10.0.0.100
Node01:10.0.0.101
Node02:10.0.0.102

Hosts resolution:

10.0.0.100 master1.rsq.com master1
10.0.0.101 node01.rsq.com node01
10.0.0.102 node02.rsq.com node02
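These entries can be appended to /etc/hosts on each of the three machines; a minimal sketch:

```shell
# Append the cluster name-resolution entries to /etc/hosts (run on every host).
cat >> /etc/hosts <<'EOF'
10.0.0.100 master1.rsq.com master1
10.0.0.101 node01.rsq.com node01
10.0.0.102 node02.rsq.com node02
EOF
```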

Time synchronization:

ntpdate ntp1.aliyun.com

It is best to run the synchronization as a cron job:
crontab -e
*/20 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1

1 Installing kubeadm on the Master

Rough installation order:

  1. master, nodes: install kubelet, kubeadm, docker
  2. master: install kubectl, run kubeadm init
  3. nodes: kubeadm join

Configure the Docker and Kubernetes yum repositories on both the master and the node machines:

[root@master1 ~]# cd /etc/yum.repos.d/
[root@master1 yum.repos.d]# wget -q https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master1 yum.repos.d]# vim kubernetes.repo 
[Kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

Import the GPG keys; the nodes need to do this as well:

[root@master1 ~]# wget -q https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
[root@master1 ~]# wget -q https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@master1 ~]# rpm --import yum-key.gpg 
[root@master1 ~]# rpm --import rpm-package-key.gpg
[root@master1 ~]# scp yum-key.gpg rpm-package-key.gpg node01:/root
[root@master1 ~]# scp yum-key.gpg rpm-package-key.gpg node02:/root
[root@master1 ~]# scp /etc/yum.repos.d/kubernetes.repo node01:/etc/yum.repos.d/
[root@master1 ~]# scp /etc/yum.repos.d/kubernetes.repo node02:/etc/yum.repos.d/

Install docker-ce, kubelet, kubeadm and kubectl on the master:

[root@master1 ~]# yum install docker-ce kubelet kubeadm kubectl -y
[root@master1 ~]# rpm -ql kubelet
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/etc/systemd/system/kubelet.service
/usr/bin/kubelet

Enable the services at boot:

[root@master1 ~]# systemctl enable kubelet
[root@master1 ~]# systemctl enable docker
[root@master1 ~]# systemctl start docker

Turn on the following kernel parameters, otherwise later steps will report errors:

[root@master1 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables 
[root@master1 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@master1 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
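The echo commands above take effect immediately but are lost on reboot. A persistent variant (a sketch; the file name k8s.conf is an arbitrary choice) writes the same settings under /etc/sysctl.d:

```shell
# Persist the kernel parameters so they survive a reboot.
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system    # re-apply every file under /etc/sysctl.d
```

Note that the net.bridge.* keys only exist once the br_netfilter module is loaded.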

List the images kubeadm requires:

[root@master1 ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.13.0
k8s.gcr.io/kube-controller-manager:v1.13.0
k8s.gcr.io/kube-scheduler:v1.13.0
k8s.gcr.io/kube-proxy:v1.13.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6

1.1 Downloading the images kubeadm requires

There are two ways to download the images kubeadm requires.
(1) Pull the images through a proxy (note: the proxy address failed in recent testing)

[root@master1 ~]# vim /usr/lib/systemd/system/docker.service
# add under the [Service] section:
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8,10.0.0.0/16"
[root@master1 ~]# systemctl daemon-reload
[root@master1 ~]# systemctl start docker
[root@master1 ~]# docker info
......
HTTPS Proxy: http://www.ik8s.io:10080
No Proxy: 127.0.0.0/8,10.0.0.0/16
......

# The following error appeared during later testing
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.12.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: proxyconnect tcp: dial tcp 172.96.236.117:10080: connect: connection refused

(2) Since the Google registry cannot be reached (and the www.ik8s.io proxy is down), download from my Aliyun repository instead

  • A recommended relay is docker.io/mirrorgooglecontainers (it carries many versions), but it is missing coredns, which has to be pulled from coredns/coredns:<tag> instead (not demonstrated here)
  • Reference blog: kubeadm unable to pull images (see references below)
# Build the docker pull commands and execute them
[root@master1 ~]# kubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/rsq_kubeadm#g' | sh -x

# Retag to the k8s.gcr.io names kubeadm expects
[root@master1 ~]# docker images |grep rsq_kubeadm |awk '{print "docker tag",$1":"$2,$1":"$2}' |sed -e 's#registry.cn-hangzhou.aliyuncs.com/rsq_kubeadm#k8s.gcr.io#2' |sh -x

# Remove the mirror-tagged images
[root@master1 ~]# docker images |grep rsq_kubeadm |awk '{print "docker rmi -f", $1":"$2}' |sh -x
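Those pipelines are easier to trust after a dry run. Feeding one sample image name through the same sed/awk stages shows the docker commands they generate, without touching Docker at all (drop the trailing `| sh -x` from the real one-liners to get the same preview):

```shell
# Dry-run the two pipelines above on a single sample image name.
mirror=registry.cn-hangzhou.aliyuncs.com/rsq_kubeadm

# What the pull pipeline generates for one line of "kubeadm config images list":
pull_cmd=$(echo 'k8s.gcr.io/kube-apiserver:v1.13.0' |
  sed -e 's/^/docker pull /' -e "s#k8s.gcr.io#${mirror}#g")
echo "$pull_cmd"
# → docker pull registry.cn-hangzhou.aliyuncs.com/rsq_kubeadm/kube-apiserver:v1.13.0

# What the tag pipeline generates for one line of "docker images" output
# (column 1 = repository, column 2 = tag); the "#2" flag makes sed replace
# only the second occurrence, i.e. the target name of "docker tag":
tag_cmd=$(echo "${mirror}/kube-apiserver v1.13.0" |
  awk '{print "docker tag",$1":"$2,$1":"$2}' |
  sed -e "s#${mirror}#k8s.gcr.io#2")
echo "$tag_cmd"
# → docker tag registry.cn-hangzhou.aliyuncs.com/rsq_kubeadm/kube-apiserver:v1.13.0 k8s.gcr.io/kube-apiserver:v1.13.0
```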

# Allow kubelet to run even though swap is enabled
[root@master1 ~]# vim /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

kubeadm initialization; adding --token-ttl=0 makes the bootstrap token never expire, so it can be used permanently:

[root@master1 ~]# kubeadm init --kubernetes-version=v1.13.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --token-ttl=0 --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1.rsq.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.100]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master1.rsq.com localhost] and IPs [10.0.0.100 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master1.rsq.com localhost] and IPs [10.0.0.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.001730 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master1.rsq.com" as an annotation
[mark-control-plane] Marking the node master1.rsq.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master1.rsq.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: qxl5b3.5b78nwu3gm1r4u6o
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.0.0.100:6443 --token awwl45.ejyr39p0cgwgo53c --discovery-token-ca-cert-hash sha256:17f6dc5827bf00b1bdc2ea5333c918fff881bd17627949043551ad1a8201798a

Pay particular attention to the information at the end of the init output above: the kubeconfig setup commands, the pod-network deployment step, and the kubeadm join command for the nodes.

Follow those prompts; since we are already root, sudo is unnecessary:

[root@master1 ~]# mkdir -p $HOME/.kube
[root@master1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"} 
[root@master1 ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   

# Check the cluster nodes; at this point the node status is NotReady
[root@master1 ~]#  kubectl get nodes
NAME              STATUS     ROLES    AGE     VERSION
master1.rsq.com   NotReady   master   4m49s   v1.13.0

1.2 Installing the flannel network add-on

Reference: https://github.com/coreos/flannel#deploying-flannel-manually
On Kubernetes v1.7+ a single command is enough, though it takes a little while to complete:

[root@master1 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

[root@master1 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.13.0             8fa56d18961f        7 days ago          80.2MB
k8s.gcr.io/kube-scheduler            v1.13.0             9508b7d8008d        7 days ago          79.6MB
k8s.gcr.io/kube-apiserver            v1.13.0             f1ff9b7e3d6e        7 days ago          181MB
k8s.gcr.io/kube-controller-manager   v1.13.0             d82530ead066        7 days ago          146MB
k8s.gcr.io/coredns                   1.2.6               f59dcacceff4        5 weeks ago         40MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        2 months ago        220MB
quay.io/coreos/flannel               v0.10.0-amd64       f0fad859c909        10 months ago       44.6MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        11 months ago       742kB

Check the cluster nodes again:

# The node status has changed from NotReady to Ready
[root@master1 ~]# kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
master1.rsq.com   Ready    master   22m   v1.13.0

If the download fails, the image can be pulled from the Aliyun mirror instead:

[root@master1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/rsq_kubeadm/flannel:v0.10.0-amd64
[root@master1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/rsq_kubeadm/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
[root@master1 ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/rsq_kubeadm/flannel:v0.10.0-amd64

List the pods running in the kube-system namespace; if no namespace is given, kubectl uses the default namespace:

[root@master1 ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-jl278                  1/1     Running   0          24m
coredns-86c58d9df4-r684d                  1/1     Running   0          24m
etcd-master1.rsq.com                      1/1     Running   1          23m
kube-apiserver-master1.rsq.com            1/1     Running   1          23m
kube-controller-manager-master1.rsq.com   1/1     Running   1          23m
kube-flannel-ds-amd64-r2dh7               1/1     Running   0          4m36s
kube-proxy-g6jgk                          1/1     Running   1          24m
kube-scheduler-master1.rsq.com            1/1     Running   1          23m

List the namespaces:

[root@master1 ~]# kubectl get ns
NAME          STATUS   AGE
default       Active   25m
kube-public   Active   25m
kube-system   Active   25m

2 Node configuration

Both nodes get the same treatment.

# Import the GPG keys copied over from the master
[root@node01 ~]# rpm --import yum-key.gpg 
[root@node01 ~]# rpm --import rpm-package-key.gpg

# Install the packages; kubelet need not be started now, it starts once the node joins the cluster
[root@node01 ~]# yum install docker-ce kubelet kubeadm kubectl -y
[root@node01 ~]# systemctl enable docker kubelet
[root@node01 ~]# systemctl start docker
[root@node01 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
# kubeadm join also preflight-checks the bridge setting:
[root@node01 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@node01 ~]# vim /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

Join the node to the cluster:

[root@node01 ~]# kubeadm join 10.0.0.100:6443 --token awwl45.ejyr39p0cgwgo53c --discovery-token-ca-cert-hash sha256:17f6dc5827bf00b1bdc2ea5333c918fff881bd17627949043551ad1a8201798a --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
[discovery] Trying to connect to API Server "10.0.0.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.0.100:6443"
[discovery] Requesting info from "https://10.0.0.100:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.0.0.100:6443"
[discovery] Successfully established connection with API Server "10.0.0.100:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01.rsq.com" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
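If the join command from kubeadm init was lost, both of its secrets can be recovered on the master: `kubeadm token list` (or `kubeadm token create --print-join-command`) for the token, and the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA certificate. A sketch of the standard openssl recipe (on the master the certificate lives at /etc/kubernetes/pki/ca.crt):

```shell
# Recompute the value for --discovery-token-ca-cert-hash from a CA certificate.
ca_cert_hash() {
    openssl x509 -pubkey -in "$1" |                  # extract the public key (PEM)
      openssl rsa -pubin -outform der 2>/dev/null |  # convert it to DER
      openssl dgst -sha256 -hex |                    # SHA-256 digest
      sed 's/^.* //'                                 # keep only the hex digest
}
# On the master:
#   echo "sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)"
```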

Download the images the node needs:

[root@node01 ~]# cat pull.sh 
#!/bin/bash
for image in kube-proxy:v1.13.0 pause:3.1
do
	docker pull registry.cn-hangzhou.aliyuncs.com/rsq_kubeadm/$image && 
	docker tag registry.cn-hangzhou.aliyuncs.com/rsq_kubeadm/$image k8s.gcr.io/$image && 
	docker rmi -f registry.cn-hangzhou.aliyuncs.com/rsq_kubeadm/$image
done
[root@node01 ~]# chmod +x pull.sh && ./pull.sh
[root@node01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/rsq_kubeadm/flannel:v0.10.0-amd64
[root@node01 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/rsq_kubeadm/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
[root@node01 ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/rsq_kubeadm/flannel:v0.10.0-amd64
[root@node01 ~]# docker images
REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy    v1.13.0             8fa56d18961f        8 days ago          80.2MB
quay.io/coreos/flannel   v0.10.0-amd64       f0fad859c909        10 months ago       44.6MB
k8s.gcr.io/pause         3.1                 da86e6ba6ca1        11 months ago       742kB

Check the nodes from the master:

[root@master1 ~]# kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
master1.rsq.com   Ready    master   16h   v1.13.0
node01.rsq.com    Ready    <none>   16h   v1.13.0
node02.rsq.com    Ready    <none>   39m   v1.13.0

3 K8s pitfall notes

See the "k8s pitfall notes" post in the reference list below.

4 Deployment takeaways

  • It is easy to stumble into pitfalls during installation, and they can be troublesome to resolve. The main trouble I ran into was with master init and node join; when that happens, the quickest way out is kubeadm reset, which rolls everything back in one step
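A minimal sketch of that rollback, run on the machine where init or join failed (kubeadm reset itself warns that it does not clean up iptables/IPVS state, so that part is manual):

```shell
# Roll back a failed kubeadm init/join on this machine.
kubeadm reset
# kubeadm reset does not flush iptables rules; clean up manually if needed:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```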

Reference blogs:

1. Deploying a Kubernetes cluster with kubeadm
2. kubeadm unable to pull images
3. k8s pitfall notes
4. Viewing the token on the k8s master
5. Fixing CoreDNS stuck in the creating state