I first ran into Kubernetes in early 2018, when I studied how to install a cluster with kubeadm and wrote an article on installing a Kubernetes cluster offline. My understanding was limited back then; looking at it now, that article has quite a few holes and the procedure was overly tedious. So this is a rewrite using a more convenient online installation method to get a Kubernetes cluster up quickly. The online install leans on Alibaba's mirrors, for which they deserve a round of thanks. I hope this article helps whoever needs it.
This installation uses three Ubuntu Server 16.04 virtual machines:
IP | Hostname | Role |
---|---|---|
10.0.3.4 | k8s-001 | control plane |
10.0.3.5 | k8s-002 | worker |
10.0.3.6 | k8s-003 | worker |
We will install the fairly recent version 1.14.3.
Run the following commands on all nodes.
Disable swap:
```
swapoff -a
```
Also comment out the swap entry in the config file, so swap doesn't come back automatically after a reboot:
```
sed -i 's/^\([^#].*swap.*$\)/#\1/' /etc/fstab
```
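To double-check that swap is really off before moving on (a quick sanity check, not part of the original instructions), look at the Swap row of `free`; it should be all zeros:

```bash
free -h
# After swapoff -a, the output should end with a line like:
# Swap:            0B          0B          0B
```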
Installation normally follows Google's official documentation.
For 99% of readers that step will fail because the Google repositories are blocked, so we can turn to Alibaba for help instead.
Go to Alibaba's OPSX mirror site, press Ctrl+F and search for `kubernetes`, then click the Help link at the end of the entry to see the instructions, which I paste here directly:
```
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
# apt-get install -y kubelet kubeadm kubectl   # do not run this line yet; see below
```
Note: do not run that last command as-is. It would install the latest version by default, while we want to pin version 1.14.3, so run this instead:
```
apt-get install -y kubelet=1.14.3-00 kubeadm=1.14.3-00 kubectl=1.14.3-00
```
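As an optional precaution (not part of the mirror's instructions), you can hold these packages at the pinned version so a routine `apt-get upgrade` doesn't unexpectedly move the cluster to a newer release:

```bash
apt-mark hold kubelet kubeadm kubectl
```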
If you are not sure which versions the repository carries, you can query it with the following command; the second column is the version:
```
root@k8s-001:/home# apt-cache madison kubeadm
kubeadm | 1.15.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.14.3-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.14.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
...
```
On k8s-001, create the kubeadm configuration file:
```
cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.3
apiServerCertSANs:
- k8s-001
- 10.0.3.4
- myk8s.cluster
controlPlaneEndpoint: "10.0.3.4:6443"
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: "10.244.0.0/16"
EOF
```
Notes on this configuration file:

- `apiServerCertSANs` adds extra subject alternative names (the hostname, the IP, and a custom domain) to the API server certificate.
- `controlPlaneEndpoint` is the address the other nodes use to reach the control plane.
- `imageRepository` switches the image source from the blocked gcr.io registry to Alibaba's mirror.
- `podSubnet` is the pod network CIDR; it must match what the network plugin expects.
Then initialize the control plane on k8s-001:
```
kubeadm init --config kubeadm-config.yaml
```
The `--config` flag tells kubeadm to initialize the cluster using the configuration file created above.
Barring surprises, you will see the success message:
```
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.0.3.4:6443 --token xxxxxx.xxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx \
    --experimental-control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.3.4:6443 --token xxxxxx.xxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```
As prompted, run these three commands on k8s-001:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
We use the kubectl client to talk to the Kubernetes cluster, and that connection must be authenticated. The three commands above simply place the credentials file at the default path `$HOME/.kube/config`, where kubectl reads it.
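If you only need a quick one-off session as root, pointing kubectl at the admin config through an environment variable works just as well:

```bash
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
```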
Now we can run kubectl to inspect the cluster:
```
root@k8s-001:/home# kubectl get pod --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-d5947d4b-k69nd            0/1     Pending   0          4m34s
kube-system   coredns-d5947d4b-ll6hx            0/1     Pending   0          4m34s
kube-system   etcd-k8s-001                      1/1     Running   0          3m44s
kube-system   kube-apiserver-k8s-001            1/1     Running   0          3m47s
kube-system   kube-controller-manager-k8s-001   1/1     Running   0          4m1s
kube-system   kube-proxy-p9jgp                  1/1     Running   0          4m34s
kube-system   kube-scheduler-k8s-001            1/1     Running   0          3m48s
```
The output shows the coredns pods stuck in Pending. That is expected: coredns depends on a network plugin, and no network plugin has been installed yet.
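If you want to confirm that this is the reason, describe one of the coredns pods (they carry the `k8s-app=kube-dns` label) and look at the events at the end of the output; the exact wording varies, but it should say the pod cannot be scheduled because the nodes' network is not ready:

```bash
kubectl -n kube-system describe pod -l k8s-app=kube-dns
```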
Following the fourth hint in the init output (joining worker nodes), run the following on k8s-002 and k8s-003:
```
kubeadm join 10.0.3.4:6443 --token xxxxxx.xxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```
Note that the token in the join command has a limited lifetime (24 hours by default); once it expires, joining with it fails and you need to generate a fresh join command:
```
root@k8s-001:~# kubeadm token create --print-join-command
kubeadm join 10.0.3.4:6443 --token xxxxxx.xxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```
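You can also check which tokens currently exist and how long they have left:

```bash
kubeadm token list
# The TTL column shows how long each token remains valid.
```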
On success, the output looks like this:
```
...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
Now we can view the cluster state from k8s-001:
```
root@k8s-001:/home# kubectl get node
NAME      STATUS     ROLES    AGE   VERSION
k8s-001   NotReady   master   13m   v1.14.3
k8s-002   NotReady   <none>   76s   v1.14.3
k8s-003   NotReady   <none>   56s   v1.14.3
```
```
root@k8s-001:/home/kubernetes/init# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE    IP         NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-d5947d4b-k69nd            0/1     Pending   0          14m    <none>     <none>    <none>           <none>
kube-system   coredns-d5947d4b-ll6hx            0/1     Pending   0          14m    <none>     <none>    <none>           <none>
kube-system   etcd-k8s-001                      1/1     Running   0          13m    10.0.3.4   k8s-001   <none>           <none>
kube-system   kube-apiserver-k8s-001            1/1     Running   0          13m    10.0.3.4   k8s-001   <none>           <none>
kube-system   kube-controller-manager-k8s-001   1/1     Running   0          13m    10.0.3.4   k8s-001   <none>           <none>
kube-system   kube-proxy-g4p5x                  1/1     Running   0          2m     10.0.3.5   k8s-002   <none>           <none>
kube-system   kube-proxy-p9jgp                  1/1     Running   0          14m    10.0.3.4   k8s-001   <none>           <none>
kube-system   kube-proxy-z9cpd                  1/1     Running   0          100s   10.0.3.6   k8s-003   <none>           <none>
kube-system   kube-scheduler-k8s-001            1/1     Running   0          13m    10.0.3.4   k8s-001   <none>           <none>
```
All nodes still report NotReady for the same reason the coredns pods are Pending: no network plugin. There are many network plugins to choose from, flannel for example; pick whichever you like. Here we use Calico:
```
root@k8s-001:/home# kubectl apply -f https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
configmap/calico-config created
service/calico-typha created
deployment.apps/calico-typha created
poddisruptionbudget.policy/calico-typha created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
```
Once the network plugin is installed, check the pods again; the coredns status is now Running:
```
root@k8s-001:/home# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE     IP            NODE      NOMINATED NODE   READINESS GATES
kube-system   calico-node-gf6j4                 1/1     Running   0          39s     10.0.3.4      k8s-001   <none>           <none>
kube-system   calico-node-l4w9n                 1/1     Running   0          39s     10.0.3.5      k8s-002   <none>           <none>
kube-system   calico-node-rtcnl                 1/1     Running   0          39s     10.0.3.6      k8s-003   <none>           <none>
kube-system   coredns-d5947d4b-k69nd            1/1     Running   0          17m     10.244.0.10   k8s-001   <none>           <none>
kube-system   coredns-d5947d4b-ll6hx            1/1     Running   0          17m     10.244.1.6    k8s-002   <none>           <none>
kube-system   etcd-k8s-001                      1/1     Running   0          16m     10.0.3.4      k8s-001   <none>           <none>
kube-system   kube-apiserver-k8s-001            1/1     Running   0          16m     10.0.3.4      k8s-001   <none>           <none>
kube-system   kube-controller-manager-k8s-001   1/1     Running   0          16m     10.0.3.4      k8s-001   <none>           <none>
kube-system   kube-proxy-g4p5x                  1/1     Running   0          5m5s    10.0.3.5      k8s-002   <none>           <none>
kube-system   kube-proxy-p9jgp                  1/1     Running   0          17m     10.0.3.4      k8s-001   <none>           <none>
kube-system   kube-proxy-z9cpd                  1/1     Running   0          4m45s   10.0.3.6      k8s-003   <none>           <none>
kube-system   kube-scheduler-k8s-001            1/1     Running   0          16m     10.0.3.4      k8s-001   <none>           <none>
```
We plan to run the ingress controller with a single replica, pinned to node k8s-003 and using that node's host network. (A single replica is not a highly available setup; HA could be covered in a follow-up post.)
Download the ingress manifest; we need to modify it:
```
wget -O ingress.yaml https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
```
Edit the downloaded ingress.yaml:
```
...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      nodeName: k8s-003    # added: schedule this pod only on k8s-003
      hostNetwork: true    # added: use the host network of k8s-003
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
...
```
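A note on this design: `nodeName` bypasses the scheduler and hard-codes the target node in the manifest. A slightly more flexible variant (a sketch, not what this post does) is to label the node and use a `nodeSelector` in the pod spec, so the ingress can be moved by relabeling nodes instead of editing the Deployment:

```bash
# Hypothetical alternative to nodeName: label the target node, then put
#   nodeSelector:
#     ingress: "true"
# in the Deployment's pod spec instead of nodeName.
kubectl label node k8s-003 ingress=true
```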
Images on quay.io can also be blocked in places, or very slow to download, so a mirror helps here too. Replace `image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0` with `image: quay.mirrors.ustc.edu.cn/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1`. (I dropped to version 0.24.1 because 0.25.0 kept failing its health probes in my environment.)
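If you prefer to script the substitution rather than edit the file by hand, a `sed` one-liner works, assuming the image line appears exactly as in the manifest above:

```bash
sed -i 's#quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0#quay.mirrors.ustc.edu.cn/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1#' ingress.yaml
```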
Apply the modified manifest:
```
root@k8s-001:/home# kubectl apply -f ingress.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
```
Wait a moment, then check the pod status:
```
root@k8s-001:/home/kubernetes/init# kubectl get pod -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP         NODE      NOMINATED NODE   READINESS GATES
nginx-ingress-controller-6558b88448-mv7cz   1/1     Running   0          2m57s   10.0.3.6   k8s-003   <none>           <none>
```
The cluster is now fully installed. Let's deploy a simple nginx application to test it.
Create a manifest for the nginx application, nginx.yaml, defining three resources: a Deployment, a Service, and an Ingress.
```
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  rules:
  - host: your.local.domain
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
        path: /
```
Deploy it with:
```
root@k8s-001:/home# kubectl apply -f nginx.yaml
deployment.apps/nginx created
service/nginx created
ingress.extensions/nginx created
```
Check the pod status:
```
root@k8s-001:/home# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-8cc98cb56-knszf   1/1     Running   0          12s
```
Test with curl against the ingress node (k8s-003, 10.0.3.6), setting the Host header to match the Ingress rule:
```
root@k8s-001:/home# curl -H "Host: your.local.domain" 10.0.3.6
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
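To test with the real domain name instead of overriding the Host header, you can map the placeholder domain to k8s-003's IP in `/etc/hosts` on the machine you are testing from (`your.local.domain` is just the sample host from the Ingress rule above):

```bash
echo "10.0.3.6 your.local.domain" >> /etc/hosts
curl http://your.local.domain
```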
And we're done!