1. Set a permanent hostname, then log back in
$ sudo hostnamectl set-hostname master
$ sudo hostnamectl set-hostname node1
$ sudo hostnamectl set-hostname node2
2. Edit the /etc/hosts file and add the hostname-to-IP mappings:
$ vim /etc/hosts
192.168.10.103 master
192.168.10.104 node1
192.168.10.105 node2
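To confirm the mappings took effect, a quick optional check (not one of the original steps) is to ping each name once from any of the machines:

# each name should answer from the address configured above
$ ping -c 1 master
$ ping -c 1 node1
$ ping -c 1 node2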
Synchronize the system time:

$ yum -y install ntpdate
$ sudo ntpdate cn.pool.ntp.org
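ntpdate is a one-shot sync; if periodic syncing is wanted, one option (an assumption, not a step from the original write-up) is a small cron entry:

# hypothetical cron file; re-syncs against cn.pool.ntp.org once an hour
$ echo '0 * * * * root /usr/sbin/ntpdate cn.pool.ntp.org' | sudo tee /etc/cron.d/ntpdate-sync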
Disable the firewall on every machine:
① Stop the service and keep it from starting at boot
$ sudo systemctl stop firewalld
$ sudo systemctl disable firewalld
② Flush the firewall rules
$ sudo iptables -F && sudo iptables -X && sudo iptables -F -t nat && sudo iptables -X -t nat
$ sudo iptables -P FORWARD ACCEPT
1. If a swap partition is enabled, kubelet will fail to start (this can be ignored by setting the --fail-swap-on parameter to false), so swap must be turned off on every machine:
$ sudo swapoff -a
2. To keep the swap partition from being mounted again at boot, comment out the corresponding entry in /etc/fstab:
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
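A quick way to confirm swap is really off (optional; not part of the original steps):

# the Swap totals should all read 0 after swapoff -a
$ free -m | grep -i swap
# no swap devices should be listed
$ swapon -s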
1. Disable SELinux, otherwise mounting directories later in K8S may fail with Permission denied:
$ sudo setenforce 0
2. Edit the configuration file so the change persists:
$ vim /etc/selinux/config
SELINUX=disabled
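If editing the file by hand is inconvenient, an equivalent non-interactive one-liner (an assumption, not from the original) is:

# rewrites SELINUX=enforcing to SELINUX=disabled in place
$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# reports Permissive now, and Disabled after the next reboot
$ getenforce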
The following operations must be performed on all three servers!
(1) Add the docker-ce repository
[root@master ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
(2) Point the docker-ce repository at the mirror
[root@master ~]# sed -i 's@download.docker.com@mirrors.tuna.tsinghua.edu.cn/docker-ce@g' /etc/yum.repos.d/docker-ce.repo
(3) Add the Kubernetes repository:

[root@node2 ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# vim kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1
(4) Refresh the cache and verify the repositories:

[root@master yum.repos.d]# yum clean all
[root@master yum.repos.d]# yum repolist
repo id                   repo name                                        status
base                      base                                              9,363
docker-ce-stable/x86_64   Docker CE Stable - x86_64                            20
epel/x86_64               Extra Packages for Enterprise Linux 7 - x86_64   12,663
kubernetes                Kubernetes Repo                                     246
repolist: 22,292
(1) Install
[root@master ~]# yum -y install docker-ce-17.03.2.ce        # install the stable 17.03.2 release
[root@master ~]# yum -y install kubeadm-1.11.1 kubelet-1.11.1 kubectl-1.11.1
(2) Error while installing docker (may appear in a VM; skip this step if no error occurs)
Error: Package: docker-ce-18.03.1.ce-1.el7.centos.x86_64 (docker-ce-stable)
Requires: container-selinux >= 2.9
Cause: the installed docker-ce-selinux version is too old.
Fix: download the matching docker-ce-selinux package from https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/x86_64/stable/Packages/ and install it.
[root@master ~]# yum -y install https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm
Installing docker again now succeeds:
[root@master ~]# yum -y install docker-ce-17.03.2.ce
(1) Add a registry mirror (accelerator) to the configuration file
[root@master ~]# mkdir -p /etc/docker
[root@master ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
(2) Start the service
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker.service
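Whether the mirror is actually in use can be checked with docker info (a rough sketch; the exact wording of the output depends on the docker version):

# the configured mirror should be listed under "Registry Mirrors"
[root@master ~]# docker info | grep -A 1 -i 'registry mirrors'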
(3) Enable the kernel's bridge traffic filtering through iptables. It is usually already on by default; if it is not, enable it yourself (see the sketch after the check below).
[root@node1 ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
[root@node1 ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
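If either value reads 0, a minimal sketch for turning the settings on (the file name /etc/sysctl.d/k8s.conf is illustrative, not from the original) is:

# load the bridge netfilter module, persist the sysctls, and apply them
[root@node1 ~]# modprobe br_netfilter
[root@node1 ~]# cat <<'EOF' > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
[root@node1 ~]# sysctl --system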
(1) Edit the configuration file (the ipvs proxy mode set here also expects the ip_vs kernel modules to be loaded; see the sketch after this block)
[root@master ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
KUBE_PROXY_MODE=ipvs
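Loading the ipvs-related kernel modules is not an explicit step in the original write-up; a minimal sketch (the module list and file name are assumptions) looks like:

# load the modules kube-proxy's ipvs mode relies on
[root@master ~]# for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
# and have them loaded again at boot (illustrative file name)
[root@master ~]# printf 'ip_vs\nip_vs_rr\nip_vs_wrr\nip_vs_sh\nnf_conntrack_ipv4\n' > /etc/modules-load.d/ipvs.conf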
(2) Enable it to start at boot first
[root@master ~]# systemctl enable kubelet.service
Because the K8S cluster has not been initialized yet, the kubelet service will not start successfully; once the initialization below is finished, it will come up.
Perform all of the following operations on the master server.
(1) Initialize with kubeadm init (this performs a lot of work, so it takes a while)
[root@master ~]# kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
Note: kubeadm needs to pull the required images, and pulling them requires "***"; so you can first pull the kube-proxy, kube-scheduler, kube-apiserver, kube-controller-manager, etcd and pause images from Docker Hub or another registry, and add --ignore-preflight-errors=all to ignore all the errors.
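One common way to do that pre-pull, sketched here under the assumption that a public mirror such as mirrorgooglecontainers on Docker Hub carries these tags (the mirror name is an assumption, not something from the original):

# pull from a hypothetical mirror, then re-tag to the k8s.gcr.io names kubeadm expects
MIRROR=mirrorgooglecontainers
for img in kube-proxy-amd64:v1.11.1 kube-scheduler-amd64:v1.11.1 \
           kube-apiserver-amd64:v1.11.1 kube-controller-manager-amd64:v1.11.1 \
           etcd-amd64:3.2.18 pause:3.1; do
    docker pull ${MIRROR}/${img}
    docker tag ${MIRROR}/${img} k8s.gcr.io/${img}
    docker rmi ${MIRROR}/${img}
done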
(2) Download the images
I have exported the images I downloaded and put them on my netdisk; if you need them, tip the price of a cup of coffee and message me privately; I will reply quickly.
[root@master ~]# docker image load -i kube-apiserver-amd64.tar.gz
[root@master ~]# docker image load -i kube-proxy-amd64.tar.gz
[root@master ~]# docker image load -i kube-controller-manager-amd64.tar.gz
[root@master ~]# docker image load -i kube-scheduler-amd64.tar.gz
[root@master ~]# docker image load -i etcd-amd64.tar.gz
[root@master ~]# docker image load -i pause.tar.gz
(3) After the init command succeeds, create the .kube directory
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
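An optional quick sanity check that the copied kubeconfig works (not part of the original steps):

# should print the master API server URL
[root@master ~]# kubectl cluster-info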
(1) The required images have been pulled
[root@master ~]# docker image ls
REPOSITORY                                 TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy-amd64                v1.11.1   d5c25579d0ff   6 months ago    97.8 MB
k8s.gcr.io/kube-scheduler-amd64            v1.11.1   272b3a60cd68   6 months ago    56.8 MB
k8s.gcr.io/kube-apiserver-amd64            v1.11.1   816332bd9d11   6 months ago    187 MB
k8s.gcr.io/kube-controller-manager-amd64   v1.11.1   52096ee87d0e   6 months ago    155 MB
k8s.gcr.io/etcd-amd64                      3.2.18    b8df3b177be2   9 months ago    219 MB
k8s.gcr.io/pause                           3.1       da86e6ba6ca1   13 months ago   742 kB
(2) Port 6443 of kube-apiserver is listening
[root@master ~]# ss -nutlp
tcp   LISTEN   0   128   :::6443   :::*   users:(("kube-apiserver",pid=1609,fd=3))
(3) Query cluster information with kubectl
Query the component status:
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
Query the cluster nodes (flannel has not been deployed yet, so the node shows NotReady):
[root@master ~]# kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master    NotReady   master    13m       v1.11.1
Query the namespaces (the defaults):
[root@master ~]# kubectl get ns
NAME          STATUS    AGE
default       Active    13m
kube-public   Active    13m
kube-system   Active    13m
(1) Apply the flannel deployment file from GitHub directly with kubectl
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
(2) The downloaded flannel images are now visible
[root@master ~]# docker image ls | grep flannel
quay.io/coreos/flannel   v0.10.0-amd64   f0fad859c909   12 months ago   44.6 MB
quay.io/coreos/flannel   v0.9.1          2b736d06ca4c   14 months ago   51.3 MB
(3) Verification
① The master node is now Ready
[root@master ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    14m       v1.11.1
② Query the pods under the kube-system namespace
[root@master ~]# kubectl get pods -n kube-system | grep flannel      # -n specifies the namespace
NAME                          READY     STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-4wck2   1/1       Running   0          1m
Perform all of the following operations on the two node servers.
(1) Join the nodes to the cluster; the command below is the one printed in the hints after the master finished initializing
[root@node1 ~]# kubeadm join 192.168.10.103:6443 --token t56pjr.cm898tj09xm9pkqz --discovery-token-ca-cert-hash sha256:3ffe1c840e8a4b334fc2cc3d976b0e3635410e52e3653bb39585b8b557f81bc4 --ignore-preflight-errors=Swap
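If that join command was not saved (tokens also expire after a while), it can be regenerated on the master; a minimal sketch, assuming this kubeadm version supports the flag:

# run on the master; prints a fresh "kubeadm join ..." line with a new token and the CA cert hash
[root@master ~]# kubeadm token create --print-join-command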
(2) If the worker nodes cannot "***", just upload two images from local storage; again, the images from my netdisk
[root@node1 ~]# docker image load -i kube-proxy-amd64.tar.gz
[root@node1 ~]# docker image load -i pause.tar.gz
(1) Query the images on the two nodes
[root@node1 ~]# docker image ls
REPOSITORY                    TAG             IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy-amd64   v1.11.1         d5c25579d0ff   6 weeks ago    97.8 MB
quay.io/coreos/flannel        v0.10.0-amd64   f0fad859c909   7 months ago   44.6 MB
k8s.gcr.io/pause              3.1             da86e6ba6ca1   8 months ago   742 kB
(2) Once the two worker nodes have downloaded the images and finished joining, verify again from the master
[root@master ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    28m       v1.11.1
node1     Ready     <none>    7m        v1.11.1
node2     Ready     <none>    2m        v1.11.1
(3) On the master, query the kube-system pods that run on the nodes
[root@master ~]# kubectl get pods -n kube-system -o wide | grep node
kube-flannel-ds-amd64-fcm9x   1/1   Running   15   91d    192.168.130.105   node2
kube-flannel-ds-amd64-hzkp7   1/1   Running   17   91d    192.168.130.104   node1
kube-proxy-f2kkn              1/1   Running   34   139d   192.168.130.104   node1
kube-proxy-kkqln              1/1   Running   35   139d   192.168.130.105   node2
At this point, the kubernetes cluster has been set up; kubeadm completed all of these operations for us in the background. If you want to build a kubernetes cluster entirely by hand…