Files written inside a container are ephemeral: if the program in the container crashes, the kubelet restarts the container and everything written inside it is lost, so persistent storage is a critical concern for stateful applications. In addition, a Pod often contains several containers that need to share data, Pods can be scaled out to multiple replicas, and a Pod may be rescheduled to another node after a failure. To let a Pod reach the same persistent storage no matter which node it lands on, and to share data between containers, Kubernetes defines the Volume abstraction. Essentially, a Volume is just a directory, possibly containing some data, that the containers in a Pod can access; how that directory comes into being is determined by the Volume type used.
Kubernetes supports many Volume types (https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes).
If you run on a public cloud, you can pick the types backed by your cloud provider's services, for example (a brief usage sketch follows the list):
awsElasticBlockStore
azureDisk
azureFile
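As an illustration, a Pod can mount an AWS EBS volume directly through awsElasticBlockStore. A minimal sketch, assuming an EBS volume already exists in the same availability zone as the node; the volume ID and mount path below are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: ebs-volume
  volumes:
  - name: ebs-volume
    awsElasticBlockStore:
      volumeID: "vol-0123456789abcdef0"   # hypothetical EBS volume ID
      fsType: ext4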
The following sections cover some commonly used types.
emptyDir
emptyDir is generally used for temporary files, such as stream files generated while handling an upload. All containers in the Pod can read and write the volume. If the Pod is removed, the data is deleted; a container crash does not remove the Pod, so the data survives container restarts. To use a Volume, declare its type under .spec.volumes in the Pod, then mount it through .spec.containers.volumeMounts.
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
hostPath
A hostPath Volume mounts a directory or file from the host node's filesystem into the Pod, letting containers use the host's fast local filesystem for storage. The drawback is that Pods are scheduled dynamically across nodes: once a Pod has written files to local disk through hostPath on one node, it cannot reach those files when it is later scheduled onto a different node.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - image: test-container
    name: test-name
    volumeMounts:
    - name: test-volume
      mountPath: /cache
  volumes:
  - name: test-volume
    hostPath:
      path: /data
nfs
The hostPath and emptyDir volumes used above may be cleaned up by the kubelet and cannot be "migrated" to other nodes, so they are not truly persistent. An NFS (Network File System) server has to be set up separately and is then shared into the Pod. Unlike emptyDir, whose contents are deleted when the Pod is removed, the contents of an NFS volume are preserved and can be reused the next time the Pod runs. It is a reasonable choice for workloads with modest IO and network requirements.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test--nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  flexVolume:
    driver: "k8s/nfs"
    fsType: "nfs"
    options:
      server: "192.168.10.100"   # NFS server address
      path: "/"
cephfs
Ceph is a distributed storage system that dates back to 2004, originally a project to build a next-generation high-performance distributed file system. As with NFS, the storage cluster must be set up in advance; you can also use Rook (which supports Ceph), currently a CNCF incubating project.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.16.153.105:6789
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-user
  userSecretNamespace: default
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
glusterfs
GlusterFS is an open-source distributed file system with strong horizontal scalability; it can scale out to several petabytes of storage and serve thousands of clients.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
Prepare three servers
172.23.216.48 gfs_node_1
172.23.216.49 gfs_node_2
172.23.216.50 gfs_node_3
Install GlusterFS
$ yum install centos-release-gluster
$ yum install glusterfs-server
$ systemctl start glusterd.service
$ systemctl enable glusterd.service
$ systemctl status glusterd.service
Create the storage directory
$ mkdir /opt/gfs_data
Add the peer nodes
$ gluster peer probe node2
$ gluster peer probe node3
Check the peers
$ gluster peer status
Number of Peers: 2

Hostname: 172.23.216.49
Uuid: 4dcfad42-e327-4a79-8a5a-a55dc92982ba
State: Peer in Cluster (Connected)

Hostname: 172.23.216.50
Uuid: 84e90bcf-af22-4cac-a6b1-e3e0d87d7eb4
State: Peer in Cluster (Connected)
Create the data volume (the distributed mode below is for testing only; do not use it in production)
# Replicated mode
$ gluster volume create k8s-volume replica 3 transport tcp gfs_node_1:/opt/gfs_data gfs_node_2:/opt/gfs_data gfs_node_3:/opt/gfs_data force

# Distributed mode (the default)
$ gluster volume create k8s-volume transport tcp 172.23.216.48:/opt/gfs_data 172.23.216.49:/opt/gfs_data 172.23.216.50:/opt/gfs_data force
Note: GlusterFS supports other volume layouts as well (see, for example, the article "CentOS7安装GlusterFS"); a hedged sketch of one alternative layout follows.
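For illustration only (not part of the original setup), a distributed-replicated volume spreads replica sets across bricks; consecutive bricks on the command line form a replica set. A minimal sketch, with the extra brick directories being hypothetical:

# Distributed-replicated: 4 bricks with replica 2 gives 2 replica sets
$ gluster volume create k8s-volume-dr replica 2 transport tcp \
    gfs_node_1:/opt/gfs_data_dr gfs_node_2:/opt/gfs_data_dr \
    gfs_node_3:/opt/gfs_data_dr gfs_node_1:/opt/gfs_data_dr2 force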
Start the volume
$ gluster volume start k8s-volume
$ gluster volume info
Volume Name: k8s-volume
Type: Distribute
Volume ID: 1203a7ab-45c5-49f0-a920-cbbe8968fefa
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 172.23.216.48:/opt/gfs_data
Brick2: 172.23.216.49:/opt/gfs_data
Brick3: 172.23.216.50:/opt/gfs_data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Useful commands
# Add/remove a server to/from the storage pool
$ gluster peer probe <SERVER>
$ gluster peer detach <SERVER>
$ gluster peer status

# Create/start/stop/delete a volume
$ gluster volume create <VOLNAME> [stripe <COUNT> | replica <COUNT>] [transport <tcp | rdma | tcp,rdma>] <BRICK>...
$ gluster volume start <VOLNAME>
$ gluster volume stop <VOLNAME>
$ gluster volume delete <VOLNAME>
# Note: a volume must be stopped before it can be deleted.

# Inspect volumes
$ gluster volume list
$ gluster volume info [all]
$ gluster volume status [all]
$ gluster volume status <VOLNAME> [detail | clients | mem | inode | fd]

# Check this node's file systems:
$ df -h [<MOUNTPOINT>]
# Check this node's disks:
$ fdisk -l
To declare a Volume in a Pod, add a spec.volumes field to the Pod and define the concrete Volume type inside it. See the official example: https://github.com/kubernetes/examples/tree/master/staging/volumes/glusterfs
Modified as follows:
glusterfs-endpoints.json
{ "kind": "Endpoints", "apiVersion": "v1", "metadata": { "name": "glusterfs-cluster" }, "subsets": [ { "addresses": [ { "ip": "172.23.216.48" } ], "ports": [ { "port": 1000 } ] }, { "addresses": [ { "ip": "172.23.216.49" } ], "ports": [ { "port": 1000 } ] }, { "addresses": [ { "ip": "172.23.216.50" } ], "ports": [ { "port": 1000 } ] } ] }
glusterfs-service.json
{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "glusterfs-cluster" }, "spec": { "ports": [ {"port": 1000} ] } }
glusterfs-pod.json (create a test Pod)
{ "apiVersion": "v1", "kind": "Pod", "metadata": { "name": "glusterfs" }, "spec": { "containers": [ { "name": "glusterfs", "image": "nginx", "volumeMounts": [ { "mountPath": "/mnt/glusterfs", "name": "glusterfsvol" } ] } ], "volumes": [ { "name": "glusterfsvol", "glusterfs": { "endpoints": "glusterfs-cluster", "path": "k8s-volume", "readOnly": true } } ] } }
Apply them in order
$ kubectl apply -f glusterfs-endpoints.json
$ kubectl get ep
$ kubectl apply -f glusterfs-service.json
$ kubectl get svc
# Check the test Pod
$ kubectl apply -f glusterfs-pod.json
$ kubectl get pods
$ kubectl describe pods/glusterfs
$ kubectl exec glusterfs -- mount | grep gluster
Kubernetes also provides a pair of API objects, Persistent Volume Claim (PVC) and Persistent Volume (PV), which greatly lower the barrier to declaring and consuming persistent volumes. A PV is a piece of storage in the cluster that an administrator has provisioned; typically the cluster admin creates the Endpoints, Service and PV objects. A PVC is defined by the developer: the Pod mounts the PVC, and the PVC requests a given amount of storage and an access mode from a PV, without caring which storage technology backs the volume.
Define the PV
$ vi glusterfs-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-dev-volume
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "k8s-volume"
    readOnly: false
Apply it
$ kubectl apply -f glusterfs-pv.yaml
$ kubectl get pv
Define the PVC
$ cat glusterfs-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
Apply it
$ kubectl apply -f glusterfs-pvc.yaml
$ kubectl get pvc
Note: access modes
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany  – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
You can inspect the storage volumes in the Kubernetes Dashboard, or from the command line as sketched below.
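A quick command-line check (object names taken from the manifests above):

$ kubectl get pv gluster-dev-volume
$ kubectl get pvc glusterfs-nginx
$ kubectl describe pv gluster-dev-volume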
Test the data volume
$ wget https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/deployment.yaml
$ vi deployment.yaml

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - name: gluster-dev-volume
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: gluster-dev-volume
        persistentVolumeClaim:
          claimName: glusterfs-nginx
Apply it
$ kubectl apply -f deployment.yaml
$ kubectl describe deployment nginx-deployment
$ kubectl get pods -l app=nginx
$ kubectl get pods -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5c689d88bb-7rx7d   1/1     Running   0          2d21h
nginx-deployment-5c689d88bb-hfqzm   1/1     Running   0          2d21h
nginx-deployment-5c689d88bb-tlwmn   1/1     Running   0          2d21h
Create a file
$ kubectl exec -it nginx-deployment-5c689d88bb-7rx7d -- touch /usr/share/nginx/html/index.html
Finally, check whether the file shows up on the GlusterFS volume to verify that everything works; one way to check is sketched below.
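A minimal check, assuming you run it on one of the GlusterFS nodes (with the distributed layout above, the file lands on exactly one of the bricks); the mount point path is hypothetical:

# Look for the file directly in the brick directory on each node
$ ls /opt/gfs_data/
# index.html should appear on one of the bricks

# Or mount the volume itself and list it
$ mkdir -p /mnt/k8s-volume
$ mount -t glusterfs 172.23.216.48:/k8s-volume /mnt/k8s-volume
$ ls /mnt/k8s-volume/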
When PV and PVC were introduced above, the PV objects were created by operations staff while PVCs may be defined by developers. In a large production environment there may be a huge number of PVCs, which would force operators to pre-create thousands of PVs by hand; that is clearly tedious, so Kubernetes also provides a mechanism for creating PVs automatically, called Dynamic Provisioning. At its core is an object named StorageClass: administrators define StorageClasses to describe the classes of storage they offer, and PVs are provisioned automatically from the parameters in the StorageClass. For example, you could define two StorageClasses, slow and fast, with slow backed by sc1 (spinning disks) and fast backed by gp2 (SSD), and applications choose whichever matches their performance needs (a hedged sketch follows). Below we create a StorageClass that talks to Heketi and provisions GlusterFS volumes on demand; the StorageClass itself is still created by the cluster administrator, and multiple PVCs can share a single StorageClass.
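A minimal sketch of such a slow/fast pair, assuming an AWS environment with the in-tree kubernetes.io/aws-ebs provisioner; the class names and the sc1/gp2 volume types follow the example in the text, everything else is illustrative:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: sc1          # cold HDD volume type
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2          # general-purpose SSD volume type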
Note: not every storage backend supports Dynamic Provisioning. The official documentation lists the in-tree storage plugins that support it by default; support for other backends can be added through third-party provisioners such as kubernetes-incubator/external-storage.
Configure Heketi
GlusterFS is an open-source distributed file system, and Heketi adds a REST-style API on top of it; together they give Kubernetes the ability to provision storage volumes automatically. Following the official persistent-volume-provisioning example, we set up Heketi as a RESTful service that manages the GlusterFS cluster and exposes an API for Kubernetes to call.
$ yum install epel-release
$ yum install heketi heketi-client
Check the version
$ heketi --version
Heketi 7.0.0

$ heketi --help
Heketi is a restful volume management server

Usage:
  heketi [flags]
  heketi [command]

Examples:
heketi --config=/config/file/path/

Available Commands:
  db          heketi db management
  help        Help about any command

Flags:
      --config string   Configuration file
  -h, --help            help for heketi
  -v, --version         Show version

Use "heketi [command] --help" for more information about a command.
Edit the Heketi configuration file
vi /etc/heketi/heketi.json

# Change the port; the default is 8080 (already taken on this server)
"port": "8000",
......
# Enable authentication
"use_auth": true,
......
# Change the admin user's key
"key": "testtoken"
......
# Configure the SSH executor and key for passwordless login to the cluster nodes
"executor": "ssh",
"sshexec": {
  "keyfile": "/etc/heketi/heketi_key",
  "user": "root",
  "port": "22",
  "fstab": "/etc/fstab"
},
......
# Location of the Heketi database file
"db": "/var/lib/heketi/heketi.db"
......
# Log level
"loglevel" : "info"
Configure the SSH key
# Generate an RSA key pair
ssh-keygen -t rsa -q -f /etc/heketi/heketi_key -N ''
chmod 700 /etc/heketi/heketi_key.pub

# Copy the public key to the three GlusterFS servers (Heketi can also be deployed separately)
ssh-copy-id -i /etc/heketi/heketi_key.pub root@172.23.216.48
ssh-copy-id -i /etc/heketi/heketi_key.pub root@172.23.216.49
ssh-copy-id -i /etc/heketi/heketi_key.pub root@172.23.216.50

# Verify that the key allows SSH access to the GlusterFS nodes
ssh -i /etc/heketi/heketi_key root@172.23.216.49
Start Heketi
$ nohup heketi --config=/etc/heketi/heketi.json &
nohup: ignoring input and appending output to ‘nohup.out’

$ cat nohup.out
Heketi 7.0.0
[heketi] INFO 2018/11/09 15:50:36 Loaded ssh executor
[heketi] INFO 2018/11/09 15:50:36 GlusterFS Application Loaded
[heketi] INFO 2018/11/09 15:50:36 Started Node Health Cache Monitor
Authorization loaded
Listening on port 8000
Test the Heketi server

$ curl http://localhost:8000/hello
Hello from Heketi
Heketi requires a raw block device on each GlusterFS node; devices that already carry a file system are not supported. A typical layout, which you can inspect with fdisk -l, looks like this:
System disk: /dev/vda
Data disk:   /dev/vdb
Cloud disk:  /dev/vdc
Check the disks

$ fdisk -l
Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000d3387

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      411647      204800   83  Linux
/dev/sda2          411648     8800255     4194304   82  Linux swap / Solaris
/dev/sda3         8800256   104857599    48028672   83  Linux

$ df -lh
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/sda3       46G   5.1G  41G    11%   /
devtmpfs        3.9G  0     3.9G   0%    /dev
tmpfs           3.9G  0     3.9G   0%    /dev/shm
tmpfs           3.9G  172M  3.7G   5%    /run
tmpfs           3.9G  0     3.9G   0%    /sys/fs/cgroup
/dev/sda1       197M  167M  30M    85%   /boot
overlay         46G   5.1G  41G    11%   /var/lib/docker/containers/1c3c53802122a9ce7e3044e83f22934bb700baeda1bedc249558e9a068e892a7/mounts/shm
overlay         46G   5.1G  41G    11%   /var/lib/docker/overlay2/bbda116e3a230e59710afd2d9ec92817d65d71b82ccebf4d71bfc589c3605b75/merged
tmpfs           3.9G  12K   3.9G   1%    /var/lib/kubelet/pods/fb62839c-dc19-11e8-90ea-0050569f4a19/volumes/kubernetes.io~secret/coredns-token-v245h
tmpfs           3.9G  12K   3.9G   1%    /var/lib/kubelet/pods/fb638fce-dc19-11e8-90ea-0050569f4a19/volumes/kubernetes.io~secret/coredns-token-v245h
overlay         46G   5.1G  41G    11%   /var/lib/docker/overlay2/a85cbca8be37d9e00565d83350721091105b74e1609d399a0bb1bb91a2c56e09/merged
shm             64M   0     64M    0%
tmpfs           783M  0     783M   0%    /run/user/0
vdb             3.9G  0     3.9G   0%    /mnt/disks/vdb
vdc             3.9G  0     3.9G   0%    /mnt/disks/vdc
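If a data disk has previously been formatted, Heketi will refuse to add it. One way to clear it (a sketch, assuming /dev/vdb is the disk you intend to hand to Heketi; wipefs is destructive, so only run it on a disk whose data you do not need):

# Show block devices and any existing filesystem signatures
$ lsblk -f

# DESTRUCTIVE: remove filesystem/partition signatures so Heketi sees a raw device
$ wipefs -a /dev/vdb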
Configure topology-sample.json
{ "clusters": [ { "nodes": [ { "node": { "hostnames": { "manage": [ "172.23.216.48" ], "storage": [ "172.23.216.48" ] }, "zone": 1 }, "devices": [ "/dev/vdb" ] }, { "node": { "hostnames": { "manage": [ "172.23.216.49" ], "storage": [ "172.23.216.49" ] }, "zone": 1 }, "devices": [ "/dev/vdb" ] }, { "node": { "hostnames": { "manage": [ "172.23.216.50" ], "storage": [ "172.23.216.50" ] }, "zone": 1 }, "devices": [ "/dev/vdb" ] } ] } ] }
Load the topology (add the nodes)
$ heketi-cli --server http://localhost:8000 --user admin --secret "testtoken" topology load --json=topology-sample.json
Creating cluster ... ID: c2834ba9a3b5b6975150ad396b5ed7ca
    Allowing file volumes on cluster.
    Allowing block volumes on cluster.
    Creating node 172.23.216.48 ... ID: 8c5cbad748520b529ea20f5296921928
        Adding device /dev/vdb ... Unable to add device: Device /dev/vdb not found.
    Found node 172.23.216.49 on cluster c13ecf0a70808a3dc8abcd8de908c1ea
        Adding device /dev/vdb ... Unable to add device: Device /dev/vdb not found.
    Found node 172.23.216.50 on cluster c13ecf0a70808a3dc8abcd8de908c1ea
        Adding device /dev/vdb ... Unable to add device: Device /dev/vdb not found.
Other commands
# Create the cluster
heketi-cli --server http://localhost:8000 --user admin --secret "testtoken" topology load --json=topology-sample.json
# Create a volume
heketi-cli --server http://localhost:8000 --user admin --secret "testtoken" volume create --size=3 --replica=2
# List the nodes
heketi-cli --server http://localhost:8000 --user admin --secret "testtoken" node list
!!! Since there was no spare disk to attach in this environment, the device step could not be completed here; refer to other articles for that part.
Create the StorageClass
vi glusterfs-storageclass.yaml

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-sc
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://172.23.216.48:8000"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "testtoken"
  volumetype: "replicate:2"
In the manifest above, provisioner: kubernetes.io/glusterfs is the name of the built-in storage plugin; each storage backend has its own provisioner name.
Apply it
$ kubectl apply -f glusterfs-storageclass.yaml
$ kubectl get sc
NAME           PROVISIONER               AGE
glusterfs-sc   kubernetes.io/glusterfs   59s
The example above, taken from the Gluster GitHub repository, passes the key via restuserkey; the Kubernetes documentation recommends storing the key in a Secret instead:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "630372ccdc720a92c681fb928f27b53f"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"
  volumeoptions: "client.ssl on, server.ssl on"
  volumenameprefix: "dept-dev"
  snapfactor: "10"
---
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: bXlwYXNzd29yZA==
type: kubernetes.io/glusterfs
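Instead of writing the Secret manifest by hand, it can also be created from the command line; a sketch (not part of the original walkthrough), where the key value testtoken matches the admin key configured in heketi.json earlier:

$ kubectl create secret generic heketi-secret \
    --type="kubernetes.io/glusterfs" \
    --from-literal=key='testtoken' \
    --namespace=default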
Create a PVC

vi glusterfs-mysql-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-mysql-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: glusterfs-sc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Apply it
$ kubectl apply -f glusterfs-mysql-pvc.yaml
persistentvolumeclaim/glusterfs-mysql-pvc created
With Dynamic Provisioning in place, operators only need to create a limited number of StorageClass objects in the cluster, which act as templates for all kinds of PVs. When a developer submits a PVC that references a StorageClass, Kubernetes provisions the matching PV automatically. A hedged example of consuming the claim above follows.
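For instance, a Pod could mount the dynamically provisioned volume through the claim created above. A minimal sketch: the claimName comes from the PVC above, while the mysql:5.7 image and the root password are assumptions for illustration only:

apiVersion: v1
kind: Pod
metadata:
  name: mysql-glusterfs
spec:
  containers:
  - name: mysql
    image: mysql:5.7              # assumed image/tag
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "changeme"           # assumed password, for illustration only
    volumeMounts:
    - name: mysql-data
      mountPath: /var/lib/mysql   # MySQL data directory backed by the provisioned volume
  volumes:
  - name: mysql-data
    persistentVolumeClaim:
      claimName: glusterfs-mysql-pvc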
In practice there is also a special class of workloads, such as running a database in a container (frequent writes, primary/replica synchronization), with demanding IO and network requirements, where users want Kubernetes to use a local disk directory on the host directly, without depending on a remote storage service, to provide a "persistent" container Volume. The benefit is obvious: because the Volume sits on a local disk, ideally an SSD, read/write performance is much better than with most remote storage. The drawback compared to distributed storage is that once the data is damaged there is no built-in backup or recovery, so it must be backed up elsewhere on a schedule.
There are two ways to meet this requirement; the Local Persistent Volume approach is demonstrated below.
Simulate two data disks (vdb and vdc)
$ mkdir /mnt/disks
$ for vol in vdb vdc; do
    mkdir /mnt/disks/$vol
    mount -t tmpfs $vol /mnt/disks/$vol
done
vi local-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/vdb
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kubernetes-node-1   # pin the PV to a specific node
Check it
$ kubectl create -f local-pv.yaml
persistentvolume/example-pv created

$ kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
example-pv   2Gi        RWO            Delete           Available           local-storage            12s

$ kubectl describe pv example-pv
Name:              example-pv
Labels:            <none>
Annotations:       <none>
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-storage
Status:            Available
Claim:
Reclaim Policy:    Delete
Access Modes:      RWO
Capacity:          2Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [kubernetes-node-1]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /mnt/disks/vdb
Events:    <none>
The local field in the PV above marks it as a Local Persistent Volume, and the path field points to the local disk path that backs it; the nodeAffinity section means that any Pod that wants to use this PV must run on the kubernetes-node-1 node.
Create the StorageClass
vi local-sc.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Apply it
$ kubectl create -f local-sc.yaml
$ kubectl get sc
NAME PROVISIONER AGE
local-storage kubernetes.io/no-provisioner 6s
Create the PVC (declaring storageClassName: local-storage)
vi local-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage   # reference the StorageClass
Apply it
$ kubectl apply -f local-pvc.yaml
persistentvolumeclaim/example-local-claim created

$ kubectl get pvc
NAME                  STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS    AGE
example-local-claim   Bound    example-pv   2Gi        RWO            local-storage   2s
Check the binding
The output above shows that the PV and PVC are now Bound. There are two worker nodes in my cluster, but the simulated disk exists only on node-1, so the Pod has to run on that specific node; we do this by labeling the node so that Kubernetes schedules the Pod onto it.
Label the node
$ kubectl label nodes kubernetes-node-1 zone=node-1

$ kubectl get nodes --show-labels
NAME                STATUS   ROLES    AGE   VERSION   LABELS
kubernetes-master   Ready    master   10d   v1.12.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=kubernetes-master,node-role.kubernetes.io/master=
kubernetes-node-1   Ready    <none>   10d   v1.12.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=kubernetes-node-1,zone=node-1
kubernetes-node-2   Ready    <none>   10d   v1.12.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=kubernetes-node-2
Deploy Nginx to test
vi nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      #nodeSelector:
      #  zone: node-1
      nodeName: kubernetes-node-1   # schedule onto kubernetes-node-1
      containers:
      - name: nginx-pv-container
        image: nginx:1.10.3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: example-pv-storage
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: example-pv-storage
        persistentVolumeClaim:
          claimName: example-local-claim
Apply it
$ kubectl create -f nginx-deployment.yaml

$ kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP          NODE                NOMINATED NODE
nginx-deployment-56bc98977b-jqg44   1/1     Running   0          104s   10.40.0.4   kubernetes-node-1   <none>
nginx-deployment-56bc98977b-tbkxr   1/1     Running   0          56s    10.40.0.5   kubernetes-node-1   <none>
Create a file
$ kubectl exec -it nginx-deployment-56bc98977b-jqg44 -- /bin/sh
# cd /usr/share/nginx/html
# touch test.html
[root@kubernetes-node-1 vdb]# ll
total 0
-rw-r--r--. 1 root root 0 Nov 10 02:17 test.html
REFER:
https://kubernetes.io/docs/concepts/storage/volumes/
https://github.com/kubernetes/examples/tree/master/staging/volumes
https://www.ibm.com/developerworks/cn/opensource/os-cn-glusterfs-docker-volume/index.html
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
https://docs.gluster.org/en/latest/
https://jimmysong.io/posts/kubernetes-with-glusterfs/