Setting Up a Kubernetes Cluster Environment on CentOS 7

1. Preparing the Server Environment

192.168.247.128 : k8s-master, etcd, registry

192.168.247.129 : k8s-nodeA

192.168.247.130 : k8s-nodeB

Note: to install the lsb_release command, run: yum install redhat-lsb -y

All three machines should be configured on the same network.

Install the same version of Docker on all three machines:

[root@localhost ~]# docker -v
Docker version 1.12.6, build 85d7426/1.12.6

Change the hostname on each of the three machines:

On the master, run:

[root@localhost ~]#  hostnamectl --static set-hostname  k8s-master

On nodeA, run:

[root@localhost ~]# hostnamectl --static set-hostname  k8s-nodeA

On nodeB, run:

[root@localhost ~]# hostnamectl --static set-hostname  k8s-nodeB

Configure hosts on all three machines by running the following command to update the hosts file:

echo '192.168.247.128  k8s-master
192.168.247.128   etcd
192.168.247.128   registry
192.168.247.129   k8s-nodeA
192.168.247.130   k8s-nodeB' >> /etc/hosts

Stop the firewall on all three machines. On each machine, run:

[root@localhost ~]# systemctl stop firewalld

Note: firewall-related commands:

Check firewall status: systemctl status firewalld

Stop the firewall: systemctl stop firewalld

Start the firewall: systemctl start firewalld
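Note that stopping firewalld only lasts until the next reboot; to keep it off permanently you would also disable the unit, for example:

```
# Stop firewalld now and prevent it from starting again at boot
systemctl stop firewalld
systemctl disable firewalld
```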

(Screenshots of the firewall status before and after stopping are omitted.)

2. Installing etcd

Kubernetes depends on etcd at runtime, so etcd must be installed first. Install it with yum:

On k8s-master, run:

yum install etcd -y

After installation, edit the configuration file. For a yum-installed etcd, the default configuration file is /etc/etcd/etcd.conf.

Modify the following three parameter values:
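The exact values are not reproduced here; an assumed configuration, consistent with the etcdctl health checks below (which reach etcd at the hostname etcd on ports 2379 and 4001), would be:

```
# /etc/etcd/etcd.conf (assumed values)
ETCD_NAME=master
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
```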

Run the following commands to start etcd and verify that it is working correctly:

[root@localhost ~]# systemctl start etcd

[root@localhost ~]# etcdctl set developer xiejunbo
xiejunbo
[root@localhost ~]# etcdctl get developer
xiejunbo
[root@localhost ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
[root@localhost ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy

This shows that etcd is healthy and ready for use.

3. Deploying k8s-master

Install Docker:

yum install docker

Edit the Docker configuration file: vi /etc/sysconfig/docker
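The exact change was not preserved; in a setup like this one, where a registry host is defined in /etc/hosts, a common (assumed) edit is to allow plain-HTTP pulls from that local registry:

```
# /etc/sysconfig/docker (assumed change): trust the local registry over HTTP
OPTIONS='--selinux-enabled --insecure-registry registry:5000'
```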

Set Docker to start on boot, then start the Docker service:

[root@localhost ~]# chkconfig docker on

[root@localhost ~]# service docker start

Install Kubernetes:

Install Kubernetes via yum: yum install kubernetes

Once Kubernetes is installed, configure and start it:

The following components need to run on the Kubernetes master:

1.kubernetes api server

2.kubernetes controller manager

3.kubernetes scheduler

The corresponding configuration files need to be modified:

/etc/kubernetes/apiserver: modify four parameters:
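The four values themselves were not preserved; assumed values matching this topology (apiserver reachable at http://k8s-master:8080, etcd at http://etcd:2379) would be, with ServiceAccount removed from the default admission-control list:

```
# /etc/kubernetes/apiserver (assumed values for this topology)
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
```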

/etc/kubernetes/config: modify one parameter:
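The parameter in question is presumably KUBE_MASTER, which points the other components at the apiserver:

```
# /etc/kubernetes/config (assumed): address of the master's apiserver
KUBE_MASTER="--master=http://k8s-master:8080"
```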

Once the files are modified, start the services and enable them to start on boot:

[root@localhost ~]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@localhost ~]# systemctl start kube-apiserver.service
[root@localhost ~]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@localhost ~]# systemctl start kube-controller-manager.service
[root@localhost ~]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@localhost ~]# systemctl start kube-scheduler.service

4. Deploying k8s-node

1. Install Docker (same as on the master, omitted here)

2. Install Kubernetes on the nodeA node: yum install kubernetes

Configure and start Kubernetes:

The following components need to run on each k8s-node:

1.kubelet

2.kubernetes proxy

Two configuration files need to be modified accordingly:

Modify the kube_master address parameter in /etc/kubernetes/config:
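On the node this is presumably the same KUBE_MASTER setting used on the master:

```
# /etc/kubernetes/config on the node (assumed)
KUBE_MASTER="--master=http://k8s-master:8080"
```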

Modify three parameters in /etc/kubernetes/kubelet:
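The three values were not preserved; assumed values consistent with the node names and apiserver address used elsewhere in this walkthrough would be:

```
# /etc/kubernetes/kubelet on k8s-nodeA (assumed values)
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=k8s-nodeA"
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
```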

Once the files are modified, start the services and enable them to start on boot:

[root@localhost ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@localhost ~]# systemctl start kubelet.service
[root@localhost ~]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@localhost ~]# systemctl start kube-proxy.service

 

After the node starts, check on the master that its status is normal:

[root@localhost ~]# kubectl -s http://k8s-master:8080 get node
NAME        STATUS    AGE
k8s-nodea   Ready     2m
[root@localhost ~]# kubectl get nodes
NAME        STATUS    AGE
k8s-nodea   Ready     7m

 

On node k8s-nodeB, repeat the nodeA steps and install Kubernetes the same way:

After Kubernetes is installed, modify the configuration the same way as on k8s-nodeA:

Once configured, start the services and enable them to start on boot:

[root@localhost ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@localhost ~]# systemctl start kubelet.service
[root@localhost ~]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@localhost ~]# systemctl start kube-proxy.service

Check the cluster's nodes and their status on the master:
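The original output is not reproduced here; with both nodes registered, the result should look roughly like this (ages are illustrative):

```
[root@localhost ~]# kubectl get nodes
NAME        STATUS    AGE
k8s-nodea   Ready     15m
k8s-nodeb   Ready     1m
```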

5. Creating the Flannel Network

Install Flannel on k8s-master, k8s-nodeA, and k8s-nodeB. On each machine, run:

yum install flannel

After installation, edit the configuration file /etc/sysconfig/flanneld on k8s-master, k8s-nodeA, and k8s-nodeB:
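The edited values are not shown; given the etcd endpoint and the key created below, they would presumably be:

```
# /etc/sysconfig/flanneld (assumed, identical on all three machines)
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"
```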

On k8s-master, create the etcd key that Flannel uses for its configuration:

[root@localhost ~]# etcdctl mk /atomic.io/network/config '{"Network":"192.0.0.0/16"}'
{"Network":"192.0.0.0/16"}

Important: Flannel stores its configuration in etcd so that multiple Flannel instances stay consistent, which is why the key above must be created in etcd. (The key '/atomic.io/network/config' corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld; if they do not match, Flannel will fail to start.)

After starting Flannel, restart Docker and the Kubernetes services in order:

On the master, run:

systemctl enable flanneld.service 
systemctl start flanneld.service 
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

On each node, run:

systemctl enable flanneld.service 
systemctl start flanneld.service 
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service

Installation and configuration are complete.

===========================================================

Check the K8S version:
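The original output is not reproduced here; the version can be checked with, for example (output varies with the installed packages):

```
[root@localhost ~]# kubectl version
[root@localhost ~]# kubelet --version
```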

 

Congratulations! The K8S cluster environment is up and running. Time to start building!

============================================================
