kubeadm is the official cluster deployment tool for Kubernetes. kubeadm runs the apiserver, scheduler, controller-manager, and etcd on the master node, and kube-proxy on the worker nodes, all as Pods, so kubelet and docker must be installed on both the master and the nodes.
1. Preliminary preparation
Host preparation:
k8s1 master 192.168.4.35 CentOS7.6 4C8G
k8s2 node1 192.168.4.36 CentOS7.6 4C8G
k8s3 node2 192.168.4.37 CentOS7.6 4C8G
Edit the hosts file and add the host entries:
vi /etc/hosts
192.168.4.35 k8s1
192.168.4.36 k8s2
192.168.4.37 k8s3
Disable the firewall: systemctl disable firewalld && systemctl stop firewalld
Command completion:
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
2. Environment preparation
Set up the Kubernetes yum repository
vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Set up the Docker yum repository
wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cp docker-ce.repo /etc/yum.repos.d/
Install docker, kubeadm, kubectl, and kubelet: yum install -y kubelet kubeadm kubectl docker-ce
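The repositories may already carry a newer Kubernetes release than the v1.13.4 used throughout this guide. If so, the packages can be pinned explicitly; a minimal sketch, assuming 1.13.4 builds are still available in the configured repository:
# optional: pin kubelet/kubeadm/kubectl to the 1.13.4 release used in this guide
yum install -y kubelet-1.13.4 kubeadm-1.13.4 kubectl-1.13.4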
Enable the services at boot and start them
systemctl enable kubelet docker
systemctl start kubelet docker
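Until kubeadm init (or kubeadm join) has generated its configuration, kubelet keeps restarting; that is expected at this point. For reference, its state can be checked like this:
# kubelet crash-loops until the node is initialized or joined; this is normal here
systemctl status kubelet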
Check which container image versions this release needs: kubeadm config images list
The output looks like this:
~# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.13.4
k8s.gcr.io/kube-controller-manager:v1.13.4
k8s.gcr.io/kube-scheduler:v1.13.4
k8s.gcr.io/kube-proxy:v1.13.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6
3. Pull the container images
The original Kubernetes images are hosted on gcr.io and cannot be downloaded directly. The script below pulls mirrored copies from Alibaba Cloud; run it on all hosts. Reference: https://www.520mwx.com/view/37277
echo "" echo "==========================================================" echo "Pull Kubernetes v1.13.4 Images from aliyuncs.com ......" echo "==========================================================" echo "" MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings ## 拉取镜像 docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.13.4 docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.13.4 docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.13.4 docker pull ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.13.4 docker pull ${MY_REGISTRY}/k8s-gcr-io-etcd:3.2.24 docker pull ${MY_REGISTRY}/k8s-gcr-io-pause:3.1 docker pull ${MY_REGISTRY}/k8s-gcr-io-coredns:1.2.6 ## 添加Tag docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.13.4 k8s.gcr.io/kube-apiserver:v1.13.4 docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.13.4 k8s.gcr.io/kube-scheduler:v1.13.4 docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.13.4 k8s.gcr.io/kube-controller-manager:v1.13.4 docker tag ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.13.4 k8s.gcr.io/kube-proxy:v1.13.4 docker tag ${MY_REGISTRY}/k8s-gcr-io-etcd:3.2.24 k8s.gcr.io/etcd:3.2.24 docker tag ${MY_REGISTRY}/k8s-gcr-io-pause:3.1 k8s.gcr.io/pause:3.1 docker tag ${MY_REGISTRY}/k8s-gcr-io-coredns:1.2.6 k8s.gcr.io/coredns:1.2.6 ##删除镜像 docker rmi ${MY_REGISTRY}/k8s-gcr-io-kube-apiserver:v1.13.4 docker rmi ${MY_REGISTRY}/k8s-gcr-io-kube-controller-manager:v1.13.4 docker rmi ${MY_REGISTRY}/k8s-gcr-io-kube-scheduler:v1.13.4 docker rmi ${MY_REGISTRY}/k8s-gcr-io-kube-proxy:v1.13.4 docker rmi ${MY_REGISTRY}/k8s-gcr-io-etcd:3.2.24 docker rmi ${MY_REGISTRY}/k8s-gcr-io-pause:3.1 docker rmi ${MY_REGISTRY}/k8s-gcr-io-coredns:1.2.6 echo "" echo "==========================================================" echo "Pull Kubernetes v1.13.4 Images FINISHED." echo "into registry.cn-hangzhou.aliyuncs.com/openthings, " echo "==========================================================" echo ""
Save this as a shell script and then execute it.
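For example, assuming the script was saved as pull-k8s-images.sh (the file name is arbitrary), it could be run and the result verified on each host like this:
# hypothetical file name; run on every host
bash pull-k8s-images.sh
# confirm the images now carry the k8s.gcr.io names that kubeadm expects
docker images | grep k8s.gcr.io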
4. Install the Kubernetes cluster
Initialization
# Initialize with version 1.13.4 and a pod network CIDR of 10.244.0.0/16 (the flannel default used later):
kubeadm init --kubernetes-version=v1.13.4 --pod-network-cidr=10.244.0.0/16
# Note: CoreDNS is built in now, so the --feature-gates CoreDNS=true flag is no longer needed
If initialization fails, you can run kubeadm reset to reset the node and then re-run the command above.
On completion it prints output like the following:
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.4.35:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:f79b68fb698c92b9336474eb3bf184e847f967dc58a6296911892662b98b1315
Then configure the current user's environment:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
If you skip this step, kubectl reports an x509 certificate error.
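For the root user only, pointing KUBECONFIG at the admin config is an alternative to copying it; a minimal sketch:
# root-only alternative to copying admin.conf into ~/.kube
export KUBECONFIG=/etc/kubernetes/admin.conf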
Register the worker nodes with the master (run on each node): kubeadm join 192.168.4.35:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:f79b68fb698c92b9336474eb3bf184e847f967dc58a6296911892662b98b1315
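If the bootstrap token has expired (it is only valid for 24 hours by default), a fresh join command can be generated on the master:
# prints a new kubeadm join command with a newly created token
kubeadm token create --print-join-command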
Check the node information on the master; node1 and node2 have now joined the cluster: kubectl get nodes
Because the flannel component is still missing, every STATUS shows NotReady.
Install the flannel component
docker pull registry.cn-hangzhou.aliyuncs.com/gaven_k8s/flannel:v0.11.0-amd64
docker tag registry.cn-hangzhou.aliyuncs.com/gaven_k8s/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi registry.cn-hangzhou.aliyuncs.com/gaven_k8s/flannel:v0.11.0-amd64
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
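To confirm that flannel has started on every node, the DaemonSet pods in kube-system can be listed; the label selector below assumes the labels used in that kube-flannel.yml manifest:
# flannel runs as a DaemonSet in the kube-system namespace
kubectl get pods -n kube-system -l app=flannel -o wide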
You can then use kubectl version to check the status and kubectl cluster-info to see the service addresses.
Check the cluster state on the master: kubectl get nodes -o wide
Verify that every STATUS is Ready.
Check the running containers on the master: kubectl get pods --all-namespaces -o wide
Verify that every STATUS is Running.
If any container is not in the Running state, use the following command to view its events: kubectl describe pod kube-flannel-ds-amd64-XXXXX -n kube-system
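Besides the events, the container logs are often the quickest way to find the root cause; a sketch using the same placeholder pod name:
# inspect the logs of the failing pod (replace the placeholder with the real pod name)
kubectl logs kube-flannel-ds-amd64-XXXXX -n kube-system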
5. Node check
Each worker node needs the images for the matching versions above, as well as the matching kubelet version installed.
Check the version: ~$ kubectl version
6. Install the dashboard graphical management platform
Deploy the dashboard application resources
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
docker rmi mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
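Before exposing the service, it is worth confirming that the dashboard pod is running; a simple check, assuming the manifest above deployed it into kube-system:
# the recommended manifest places the dashboard in the kube-system namespace
kubectl get pods -n kube-system | grep kubernetes-dashboard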
Change the service to the NodePort type so the dashboard can also be reached from outside the cluster:
kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kube-system
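After patching, the assigned node port can be read from the service, and the dashboard is then reachable over HTTPS on any node IP at that port:
# look up the node port assigned to the dashboard service
kubectl get svc kubernetes-dashboard -n kube-system
# then browse to https://<node-ip>:<node-port>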
Log in using token authentication
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding cluster-dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl get secrets -n kube-system
kubectl describe secret dashboard-admin-token-rb2xh -n kube-system
The suffix of dashboard-admin-token-xxxxx differs from one installation to another. -n kube-system specifies the namespace; leaving it out produces an error. Then copy the token value and use it to log in.
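To avoid looking up the secret name by hand, the token can also be extracted in one step; a sketch that assumes the dashboard-admin service account created above:
# print the token of the dashboard-admin service account in one step
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin-token | awk '{print $1}')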