Kubernetes persistent storage comes in two flavors: static and dynamic provisioning. With static provisioning, commonly backed by hostPath local storage, NFS, GlusterFS and so on, the storage volumes (PVs) must be created in advance, and workloads then obtain storage space through Kubernetes PVCs. With dynamic provisioning, a GlusterFS cluster and Heketi are deployed up front; the two work together so that simply creating a PVC is enough to obtain storage space dynamically, with no need to create the underlying storage volumes or PVs by hand.
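For contrast, a minimal sketch of the static approach (the file name, hostPath path and size below are made up for illustration): the PV has to be written and applied by hand before any PVC can bind to it.
# cat pv-static-example.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-static-example
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/pv-static-example"
# kubectl apply -f pv-static-example.yaml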
Prepare three virtual machines, each with 2 CPU cores and 4 GB of RAM, running a highly available Kubernetes cluster installed with kubeadm:
172.30.0.74 k8smaster1 hostname: k8smaster1
172.30.0.82 k8smaster2 hostname: k8smaster2
172.30.0.90 k8snode hostname: k8snode
Creating the GlusterFS cluster
Configure the GlusterFS yum repository
# CentOS-Gluster-4.1.repo
#
# Please see http://wiki.centos.org/SpecialInterestGroup/Storage for more
# information
[centos-gluster41]
name=CentOS-$releasever - Gluster 4.1 (Long Term Maintenance)
baseurl=http://mirror.centos.org/centos-7/7/storage/x86_64/gluster-4.1/
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
[centos-gluster41-test]
name=CentOS-$releasever - Gluster 4.1 Testing (Long Term Maintenance)
baseurl=http://buildlogs.centos.org/centos/$releasever/storage/$basearch/gluster-4.1/
gpgcheck=0
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
Install GlusterFS
# yum install glusterfs-server
Start the services:
# systemctl start glusterd
# systemctl start glusterfsd
Install and start GlusterFS on every node of the cluster.
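Optionally, enable glusterd to start on boot and confirm that it is running (a small sanity check):
# systemctl enable glusterd
# systemctl status glusterd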
Using GlusterFS
# gluster peer probe k8smaster2
# gluster peer probe k8snode
Add the GlusterFS nodes to the trusted pool (run the probes from k8smaster1).
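The peer status can be checked on any node; the other nodes should show as connected:
# gluster peer status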
Set up the GlusterFS volume:
# mkdir -p /data/brick1/gv2
Run this on all three machines.
Create a replicated volume:
# gluster volume create gv2 replica 2 172.30.0.74:/data/brick1/gv2 172.30.0.82:/data/brick1/gv2 force
Start the GlusterFS volume:
# gluster volume start gv2
# gluster volume info
Check the volume status.
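As an optional sanity check, the volume can be mounted with the GlusterFS FUSE client from any node (the mount point /mnt/gv2 is just an example) and unmounted again afterwards:
# mkdir -p /mnt/gv2
# mount -t glusterfs 172.30.0.74:/gv2 /mnt/gv2
# df -h /mnt/gv2
# umount /mnt/gv2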
The GlusterFS cluster is now set up.
Configuring Heketi
Heketi provides a RESTful management interface that can be used to manage the lifecycle of GlusterFS volumes. With Heketi, GlusterFS volumes can be provisioned dynamically just as with OpenStack Manila, Kubernetes and OpenShift. Heketi dynamically selects bricks across the cluster to build the requested volumes, ensuring that data replicas are spread across different failure domains. Heketi also supports any number of GlusterFS clusters, so the cloud servers it serves are not limited to a single GlusterFS cluster.
Download the release package
# wget https://github.com/heketi/heketi/releases/download/v5.0.1/heketi-v5.0.1.linux.amd64.tar.gz
# tar xf heketi-v5.0.1.linux.amd64.tar.gz
# ln -s /root/heketi/heketi /bin/heketi
# ln -s /root/heketi/heketi-cli /bin/heketi-cli
Modify the Heketi configuration file
Edit the Heketi configuration file /etc/heketi/heketi.json as follows:
......
# Change the port to avoid port conflicts
"port": "18080",
......
# Enable authentication
"use_auth": true,
......
# Change the admin user's key to adminkey
"key": "adminkey"
......
# Change the executor to ssh and configure the SSH credentials. Heketi must be able to log in to every machine in the cluster via passwordless SSH; use ssh-copy-id to copy the public key to each GlusterFS server.
"executor": "ssh",
"sshexec": {
"keyfile": "/root/.ssh/id_rsa",
"user": "root",
"port": "22",
"fstab": "/etc/fstab"
},
......
# Location of the Heketi database file
"db": "/var/lib/heketi/heketi.db"
......
# Adjust the log level
"loglevel" : "warning"
PS: Heketi has three executors: mock, ssh and kubernetes. mock is recommended for test environments, ssh for production, and kubernetes only when GlusterFS itself is deployed as containers on Kubernetes. Since GlusterFS and Heketi are deployed independently here, the ssh executor is used.
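A sketch of the passwordless SSH setup mentioned above, run on the machine that will host Heketi (assuming no key pair exists yet and that password login to the other nodes is still possible):
# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
# ssh-copy-id root@k8smaster1
# ssh-copy-id root@k8smaster2
# ssh-copy-id root@k8snode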
Start Heketi:
nohup heketi --config=/etc/heketi/heketi.json &
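Heketi exposes a simple /hello health endpoint; a quick way to confirm the service is up (using the 28080 port configured above):
# curl http://172.30.0.74:28080/hello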
Adding GlusterFS to Heketi
Create a cluster
# heketi-cli --user admin --server http://172.30.0.74:28080 --secret adminkey --json cluster create
{"id":"7e320f3f04068c0564eb92e865263bd4","nodes":[],"volumes":[]}
Use the returned cluster id to add the three nodes to the cluster:
# heketi-cli --user admin --server http://172.30.0.73:28080 --secret adminkey --json node add --cluster "7e320f3f04068c0564eb92e865263b" --management-host-name 172.30.0.73 --storage-host-name 172.30.0.74 --zone 1
# heketi-cli --user admin --server http://172.30.0.73:28080 --secret adminkey --json node add --cluster "7e320f3f04068c0564eb92e865263b" --management-host-name 172.30.0.81 --storage-host-name 172.30.0.82 --zone 1
# heketi-cli --user admin --server http://172.30.0.73:28080 --secret adminkey --json node add --cluster "7e320f3f04068c0564eb92e865263b" --management-host-name 172.30.0.89 --storage-host-name 172.30.0.90 --zone 1
Create a logical volume on each of the three nodes to serve as Heketi's device; this makes later expansion easier. Note that Heketi only supports raw partitions or raw disks, so there is no need to create a filesystem on them.
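A sketch of how such a logical volume could be created on each node, assuming a spare disk at /dev/vdb (the disk name is only an example):
# pvcreate /dev/vdb
# vgcreate myvg /dev/vdb
# lvcreate -n mylv -l 100%FREE myvg
With the logical volume in place on every node, register it with Heketi as a device, using the node ids returned by the node add commands above: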
# heketi-cli --user admin --secret adminkey --json device add --name="/dev/myvg/mylv" --server http://172.30.0.74:28080 --node "78850cf6d4a44964b1fdf09970feb0"
# heketi-cli --user admin --secret adminkey --json device add --name="/dev/myvg/mylv" --server http://172.30.0.74:28080 --node "560f238695f64479298429c062dc4c"
# heketi-cli --user admin --secret adminkey --json device add --name="/dev/myvg/mylv" --server http://172.30.0.74:28080 --node "4e3e965421d26e6858d18e6ccaf19f"
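To confirm that all nodes and devices were registered, the topology can be inspected with the same credentials:
# heketi-cli --user admin --secret adminkey --server http://172.30.0.74:28080 topology info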
Production configuration
The steps above show how to manually create a cluster, add nodes to it, and add devices one by one. In a real production setup, all of this can be done directly from a configuration file.
Create a file /etc/heketi/topology-sample.json with the following content:
{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.75.175"
                            ],
                            "storage": [
                                "192.168.75.175"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vda2"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.75.176"
                            ],
                            "storage": [
                                "192.168.75.176"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vda2"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.75.177"
                            ],
                            "storage": [
                                "192.168.75.177"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vda2"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.75.178"
                            ],
                            "storage": [
                                "192.168.75.178"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vda2"
                    ]
                }
            ]
        }
    ]
}
Load the topology:
# heketi-cli --user admin --secret adminkey --server http://172.30.0.74:28080 topology load --json=/etc/heketi/topology-sample.json
At this point the preparatory work for GlusterFS and Heketi is complete.
Create a Kubernetes StorageClass so that Kubernetes calls Heketi to create the underlying PVs
Create the StorageClass
[root@consolefan-1 yaml]# cat glusterfs/storageclass-glusterfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://172.30.0.74:28080"
  clusterid: "7e320f3f04068c0564eb92e865263b"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "adminkey"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:2"
# kubectl apply -f storageclass-glusterfs.yaml
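A quick check that the StorageClass was registered:
# kubectl get storageclass glusterfs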
Create a PVC to verify dynamic provisioning:
[root@consolefan-1 yaml]# cat glusterfs/pvc-glusterfs.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-test1
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "glusterfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
# kubectl apply -f pvc-glusterfs.yaml
Provisioning succeeded.
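If dynamic provisioning works, the claim should become Bound within a short time and a matching PV should appear automatically:
# kubectl get pvc glusterfs-test1
# kubectl get pv
If the claim stays Pending, kubectl describe pvc glusterfs-test1 usually shows the error returned by Heketi.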