There are several ways to set up a Kubernetes cluster. In an earlier article, 《利用K8S技术栈打造我的私有云(连载之:K8S集群搭建)》, I used the binary installation method. That approach is good for understanding how a k8s cluster fits together, but it is rather tedious. kubeadm, by contrast, is the tool officially provided by Kubernetes for quickly deploying a cluster; it has matured considerably, is easy to get started with, and makes the whole operation much simpler, so this article walks through it in detail.
Note: this article was first published on my personal blog, CodeSheep·程序羊; you are welcome to visit.
This article deploys a three-node Kubernetes cluster with one master and two workers. The overall node plan is shown in the table below:
Hostname | IP | Role
---|---|---
k8s-master | 192.168.39.79 | k8s master node
k8s-node-1 | 192.168.39.77 | k8s worker node
k8s-node-2 | 192.168.39.78 | k8s worker node
The software versions on each node are:
- OS: CentOS-7.4-64Bit
- Docker: 1.13.1
- Kubernetes: v1.13.1
All nodes need the following components installed:

- Docker: needs no further introduction
- kubelet: runs on every Node; responsible for starting containers and Pods
- kubeadm: responsible for bootstrapping the cluster
- kubectl: the k8s command-line tool, used to deploy and manage applications and to create, read, update and delete all kinds of resources

Disable the firewall on all nodes:

systemctl disable firewalld.service
systemctl stop firewalld.service
Disable SELinux:

setenforce 0
vi /etc/selinux/config
SELINUX=disabled
Disable swap:

swapoff -a
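swapoff -a only lasts until the next reboot. If you want swap to stay off, you can also comment out the swap entry in /etc/fstab; a minimal sketch, assuming the default fstab layout:

# comment out any swap line so it is not re-enabled on reboot (idempotent)
sed -i '/ swap / s/^#*/#/' /etc/fstab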
Set the hostname, running the corresponding command on each machine:

hostnamectl --static set-hostname k8s-master
hostnamectl --static set-hostname k8s-node-1
hostnamectl --static set-hostname k8s-node-2
Edit the /etc/hosts file and add the following entries:
192.168.39.79 k8s-master
192.168.39.77 k8s-node-1
192.168.39.78 k8s-node-2
Installing Docker itself is not going to be repeated here!
Add the Kubernetes yum repository (using the Aliyun mirror):

cat>>/etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF
Then install kubelet, kubeadm and kubectl, and enable the kubelet service:

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
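Since this article targets v1.13.1, you may want to pin the package versions instead of taking whatever is newest in the repository; a sketch, assuming the Aliyun mirror still carries the 1.13.1 packages:

# install the specific version so the packages match the images pulled below
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1
systemctl enable kubelet && systemctl start kubelet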
To cope with unreliable network access from within China, we have to pull the required images from reachable mirrors in advance and re-tag them to the names kubeadm expects:
docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.6
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
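The pull/tag/rmi sequence above is repetitive, so it can also be wrapped in a small script; a minimal sketch that covers the same image list used above:

#!/usr/bin/env bash
# pull each image from a reachable mirror, re-tag it to the name kubeadm expects,
# then remove the mirror tag
set -e
images=(
  "mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1"
  "mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1"
  "mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1"
  "mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1"
  "mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1"
  "mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24"
  "coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6"
  "registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64"
)
for pair in "${images[@]}"; do
  src=${pair%% *}   # mirror image
  dst=${pair##* }   # target name expected by kubeadm
  docker pull "$src"
  docker tag "$src" "$dst"
  docker rmi "$src"
done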
Then run the following command on the Master node to initialize the k8s cluster:
kubeadm init --kubernetes-version=v1.13.1 --apiserver-advertise-address 192.168.39.79 --pod-network-cidr=10.244.0.0/16
- --kubernetes-version: specifies the k8s version
- --apiserver-advertise-address: specifies which of the Master's network interfaces to use for communication; if omitted, kubeadm automatically picks the interface that has the default gateway
- --pod-network-cidr: specifies the Pod network range; the value to use depends on the chosen network add-on, and this article uses the classic flannel scheme

After running the command, the console prints a detailed log of the cluster initialization process (shown below).
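Incidentally, the log below was captured from a run driven by a config file rather than flags (kubeadm init --config kubeadm-config.yaml), and the warning about the unknown field "\u00a0 podSubnet" just means that file apparently contained a stray non-breaking space before podSubnet. The flags above map onto such a config file roughly as follows; a sketch, assuming the v1beta1 kubeadm API that ships with 1.13:

# write an equivalent kubeadm config file and use it for init
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.79
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
networking:
  podSubnet: 10.244.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml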
[root@localhost ~]# kubeadm init --config kubeadm-config.yaml
W1224 11:01:25.408209 10137 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "\u00a0 podSubnet"
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.39.79]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.005638 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 26uprk.t7vpbwxojest0tvq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.39.79:6443 --token 26uprk.t7vpbwxojest0tvq --discovery-token-ca-cert-hash sha256:028727c0c21f22dd29d119b080dcbebb37f5545e7da1968800140ffe225b0123
[root@localhost ~]#
On the Master, run the following as root to configure kubectl:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG
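To quickly confirm that kubectl can actually reach the new cluster, any read-only command will do, for example:

kubectl cluster-info
kubectl get componentstatuses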
Installing a Pod network is a prerequisite for Pods to be able to communicate with each other. k8s supports many network add-ons; here we again choose the classic flannel scheme.

First make sure bridged traffic is visible to iptables:

sysctl net.bridge.bridge-nf-call-iptables=1
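That sysctl call does not survive a reboot; if you want it persisted, a small sketch assuming the standard /etc/sysctl.d layout:

# persist the setting and reload all sysctl configuration files
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system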
Then apply the flannel manifest:

kubectl apply -f kube-flannel.yaml

The kube-flannel.yaml file is available here.
Once the Pod network is installed, run the following command to check whether the CoreDNS Pods are up and running; once they are, you can move on to the next steps:
kubectl get pods --all-namespaces -o wide
At this point we can also see that the master node is already Ready: kubectl get nodes
On each of the two Slave nodes, run the following command to join it to the k8s cluster that is now up on the Master:
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
If you have forgotten the token, you can retrieve it on the Master with:
kubeadm token list
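If the token has expired, or you also need the --discovery-token-ca-cert-hash value, the following helps; a sketch, assuming the default kubeadm certificate path:

# print a complete, ready-to-copy join command (creates a fresh token)
kubeadm token create --print-join-command

# or compute the CA cert hash by hand
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'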
The output of the kubeadm join command above looks like this:
[root@localhost ~]# kubeadm join 192.168.39.79:6443 --token yndddp.oamgloerxuune80q --discovery-token-ca-cert-hash sha256:7a45c40b5302aba7d8b9cbd3afc6d25c6bb8536dd6317aebcd2909b0427677c8
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.39.79:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.39.79:6443"
[discovery] Requesting info from "https://192.168.39.79:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.39.79:6443"
[discovery] Successfully established connection with API Server "192.168.39.79:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Back on the Master, check the node status:

kubectl get nodes
And check that all Pods across all namespaces are running normally:

kubectl get pods --all-namespaces -o wide
Good, the cluster is now running normally. Next, let's look at how to tear the cluster down properly.
First handle each of the nodes:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
Once the nodes have been removed, you can reset the cluster by running:
kubeadm reset
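kubeadm reset does not flush iptables or IPVS rules, so a little extra cleanup on each node may be needed; a sketch of the commonly suggested commands:

# flush iptables rules left behind by kube-proxy / flannel
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# if kube-proxy was running in IPVS mode, also clear the IPVS tables
ipvsadm -C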
Just as we would pair Elasticsearch with a visual management tool, it is worth giving the k8s cluster one as well, to make the cluster easier to manage. So next we install kubernetes-dashboard v1.10.0 for visual cluster management. As before, pull the image from a mirror and re-tag it first:
docker pull registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
docker tag registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker image rm registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
Then deploy the dashboard:

kubectl create -f dashboard.yaml

The dashboard.yaml file is available here.
Check that the dashboard Pod has started correctly:

kubectl get pods --namespace=kube-system
[root@k8s-master ~]# kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
coredns-86c58d9df4-4rds2 1/1 Running 0 81m
coredns-86c58d9df4-rhtgq 1/1 Running 0 81m
etcd-k8s-master 1/1 Running 0 80m
kube-apiserver-k8s-master 1/1 Running 0 80m
kube-controller-manager-k8s-master 1/1 Running 0 80m
kube-flannel-ds-amd64-8qzpx 1/1 Running 0 78m
kube-flannel-ds-amd64-jvp59 1/1 Running 0 77m
kube-flannel-ds-amd64-wztbk 1/1 Running 0 78m
kube-proxy-crr7k 1/1 Running 0 81m
kube-proxy-gk5vf 1/1 Running 0 78m
kube-proxy-ktr27 1/1 Running 0 77m
kube-scheduler-k8s-master 1/1 Running 0 80m
kubernetes-dashboard-79ff88449c-v2jnc 1/1 Running 0 21s
Check the dashboard Service and the NodePort it is exposed on:

kubectl get service --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 5h38m
kubernetes-dashboard NodePort 10.99.242.186 <none> 443:31234/TCP 14
Generate a private key and a certificate signing request for the dashboard:

openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
rm dashboard.pass.key
openssl req -new -key dashboard.key -out dashboard.csr   # if prompted for input, just press Enter all the way through
Then issue the self-signed certificate:

openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Place dashboard.key and dashboard.crt under the path /home/share/certs; this path is referenced in the dashboard-user-role.yaml file we are about to use.
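For completeness, putting them there is just (assuming the files were generated in the current directory):

mkdir -p /home/share/certs
cp dashboard.key dashboard.crt /home/share/certs/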
Now create the dashboard user and role:

kubectl create -f dashboard-user-role.yaml

The dashboard-user-role.yaml file is available here.
Retrieve the login token:

kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
[root@k8s-master ~]# kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
Name: admin-token-9d4vl
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin
kubernetes.io/service-account.uid: a320b00f-07ed-11e9-93f2-000c2978f207
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi05ZDR2bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImEzMjBiMDBmLTA3ZWQtMTFlOS05M2YyLTAwMGMyOTc4ZjIwNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.WbaHx-BfZEd0SvJwA9V_vGUe8jPMUHjKlkT7MWJ4JcQldRFY8Tdpv5GKCY25JsvT_GM3ob303r0yE6vjQdKna7EfQNO_Wb2j1Yu5UvZnWw52HhNudHNOVL_fFRKxkSVjAILA_C_HvW6aw6TG5h7zHARgl71I0LpW1VESeHeThipQ-pkt-Dr1jWcpPgE39cwxSgi-5qY4ssbyYBc2aPYLsqJibmE-KUhwmyOheF4Lxpg7E3SQEczsig2HjXpNtJizCu0kPyiR4qbbsusulH-kdgjhmD9_XWP9k0BzgutXWteV8Iqe4-uuRGHZAxgutCvaL5qENv4OAlaArlZqSgkNWw
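As an alternative to the grep/awk pipeline, the token can also be pulled out directly with jsonpath; a sketch, assuming the ServiceAccount created by dashboard-user-role.yaml is named admin (as the output above suggests):

# look up the admin ServiceAccount's secret and decode its token
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get sa admin -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d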
Now that the token has been generated, open a browser, visit the dashboard over the NodePort shown above (https://<node-ip>:31234), and enter the token to log in to the cluster management page:
My abilities are limited, so if there are any mistakes or inaccuracies, please point them out; I am happy to learn and exchange ideas together!
You can long-press or scan the little heart below to subscribe to the author's WeChat public account CodeSheep and get more practical, understandable and reproducible original articles.