In Kubernetes, a volume's lifecycle is tied to the Pod that uses it: when the Pod is deleted, its ephemeral volumes (such as emptyDir) are deleted along with it. Kubernetes volumes extend Docker's volume concept, and Kubernetes adapts to a wide range of storage systems: local storage (emptyDir, hostPath), network storage (NFS, GlusterFS, PV/PVC, etc.), and cloud storage (GCE PD, Azure Disk, OpenStack Cinder, AWS EBS, vSphere Volume, etc.).
1. Local storage
1) emptyDir
An emptyDir volume is created on demand and deleted together with its Pod, which makes it a good fit for scratch space or a cache; all containers in the same Pod can share an emptyDir volume.
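Beyond the default (backed by the node's disk), an emptyDir can be backed by RAM and capped in size. A minimal sketch of the volumes stanza; both fields are part of the standard emptyDir API:

volumes:
- name: cache
  emptyDir:
    medium: Memory     # back the volume with tmpfs (RAM) instead of node disk
    sizeLimit: 256Mi   # the pod is evicted if usage exceeds this limit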
Example 1
[root@docker79 volume]# vim pod-vol-demo.yaml
[root@docker79 volume]# cat pod-vol-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    inspiry.com/author: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /data/web/html/
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /data/
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 7200"
  volumes:
  - name: html
    emptyDir: {}
[root@docker79 volume]# kubectl apply -f pod-vol-demo.yaml
pod/pod-demo created
[root@docker79 volume]# kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
pod-demo   2/2       Running   0          11s
[root@docker79 volume]# kubectl exec -it pod-demo -c busybox -- /bin/sh
/ # ls
bin   data  dev   etc   home  proc  root  sys   tmp   usr   var
/ # ls /data
/ # echo $(date) >> /data/index.html
/ #
[root@docker79 volume]# kubectl exec -it pod-demo -c myapp -- /bin/sh
/ # ls
bin    dev    home   media  proc   run    srv    tmp    var
data   etc    lib    mnt    root   sbin   sys    usr
/ # ls /data/web/html/index.html
/data/web/html/index.html
/ # cat /data/web/html/index.html
Wed Sep 5 02:21:51 UTC 2018
/ #
[root@docker79 volume]# kubectl delete -f pod-vol-demo.yaml
pod "pod-demo" deleted
Example 2
[root@docker79 volume]# vim pod-vol-demo.yaml
[root@docker79 volume]# cat pod-vol-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    inspiry.com/author: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /data/
    command:
    - "/bin/sh"
    - "-c"
    - "while true; do echo $$(date) >> /data/index.html ; sleep 2; done"
  volumes:
  - name: html
    emptyDir: {}
[root@docker79 volume]# kubectl apply -f pod-vol-demo.yaml
pod/pod-demo created
[root@docker79 volume]# kubectl get pods -o wide
NAME       READY     STATUS    RESTARTS   AGE       IP            NODE       NOMINATED NODE
pod-demo   2/2       Running   0          23s       10.244.2.49   docker78   <none>
[root@docker79 volume]# curl http://10.244.2.49
Wed Sep 5 02:43:32 UTC 2018
Wed Sep 5 02:43:34 UTC 2018
......
2) hostPath
A hostPath volume mounts a file or directory from the host node's filesystem into the container, tying the data to whichever node the Pod runs on.
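The type field tells the kubelet how to treat the host path before mounting. A short recap of the common values (all from the standard hostPath API):

volumes:
- name: html
  hostPath:
    path: /data/pod/volume1/
    type: DirectoryOrCreate   # create the directory if it does not exist
    # Other common values:
    #   Directory    - the path must already exist as a directory
    #   File         - the path must already exist as a file
    #   FileOrCreate - create an empty file if absent
    #   Socket       - the path must be an existing UNIX socket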
Example
[root@docker79 volume]# cat pod-vol-hostpath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    hostPath:
      path: /data/pod/volume1/
      type: DirectoryOrCreate
[root@docker79 volume]#
[root@docker79 ~]# ssh docker78
Last login: Tue Sep  4 14:56:29 2018 from docker79
[root@docker78 ~]# mkdir -p /data/pod/volume1
[root@docker78 ~]# echo 78 > /data/pod/volume1/index.html
[root@docker78 ~]# logout
Connection to docker78 closed.
[root@docker79 ~]# ssh docker77
Last login: Tue Aug 28 15:04:15 2018 from docker79
[root@docker77 ~]# mkdir -p /data/pod/volume1
[root@docker77 ~]# echo 77 > /data/pod/volume1/index.html
[root@docker77 ~]# logout
Connection to docker77 closed.
[root@docker79 ~]# cd manifests/volume/
[root@docker79 volume]# kubectl apply -f pod-vol-hostpath.yaml
pod/pod-vol-hostpath created
[root@docker79 volume]# kubectl get pods -o wide
NAME               READY     STATUS    RESTARTS   AGE       IP            NODE       NOMINATED NODE
pod-demo           2/2       Running   0          26m       10.244.2.49   docker78   <none>
pod-vol-hostpath   1/1       Running   0          11s       10.244.2.50   docker78   <none>
[root@docker79 volume]#
[root@docker79 volume]# curl http://10.244.2.50
78
[root@docker79 volume]#
2. Network storage
1) nfs
Pod (Container) ------> NFS storage
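Each worker node mounts the export itself, so the NFS client utilities must be installed on every node; on CentOS (which the hostnames in this walkthrough suggest) that would be:

yum install -y nfs-utils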
[root@docker ~]# echo nfs > /data/volumes/index.html
[root@docker ~]#
[root@docker ~]# ip add show ens192
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:c7:42:5b brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.223/24 brd 192.168.20.255 scope global noprefixroute ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::bdac:2fd7:290b:aba/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[root@docker ~]# cat /etc/exports
/data/volumes 192.168.20.0/24(rw,no_root_squash)
[root@docker ~]#
[root@docker79 volume]# vim pod-vol-nfs.yaml
[root@docker79 volume]# cat pod-vol-nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    nfs:
      path: /data/volumes
      server: 192.168.20.223
[root@docker79 volume]# kubectl apply -f pod-vol-nfs.yaml
pod/pod-vol-nfs created
[root@docker79 volume]# kubectl get pods -o wide
NAME               READY     STATUS    RESTARTS   AGE       IP            NODE       NOMINATED NODE
pod-demo           2/2       Running   0          2h        10.244.2.49   docker78   <none>
pod-vol-hostpath   1/1       Running   0          1h        10.244.2.50   docker78   <none>
pod-vol-nfs        1/1       Running   0          6s        10.244.2.51   docker78   <none>
[root@docker79 volume]# curl http://10.244.2.51
nfs
[root@docker79 volume]#
2) persistentVolumeClaim
A PersistentVolumeClaim (PVC) is a way of consuming storage in which scattered storage-layer resources are first abstracted as PersistentVolumes (PVs), claims (PVCs) are then created against those PVs, and the PVC is finally mounted into a container in a Pod for storing data. The underlying storage can be NFS, iSCSI, Ceph, GlusterFS, and so on.
Lifecycle of a PV and PVC
Provisioning: persistent storage is supplied by a storage system outside the cluster or by a public-cloud storage offering.
Static provisioning: an administrator creates a number of PVs by hand for PVCs to use.
Dynamic provisioning: a PV matching a specific PVC is created on the fly and bound to it.
Binding: a user creates a PVC specifying the resources and access modes needed. The PVC stays unbound until a suitable PV is found.
Using: the user mounts the PVC in a Pod just like any other volume.
Releasing: the user deletes the PVC to give the storage back; the PV enters the "Released" state. Because it still holds the previous data, that data must be handled according to the reclaim policy before the storage can be used by another PVC.
Reclaiming: a PV can carry one of three reclaim policies: Retain, Recycle, or Delete (declared as shown in the sketch after this list).
Retain: the data is kept for manual handling.
Delete: the PV and the associated external storage resource are deleted; requires plugin support.
Recycle: a scrub is performed, after which the PV can be claimed again; requires plugin support.
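The policy is declared on the PV itself via persistentVolumeReclaimPolicy; the default for manually created PVs is Retain. A minimal sketch (the PV name is hypothetical; note that Recycle is deprecated in newer Kubernetes releases):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-retain-demo                    # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain   # Retain | Recycle | Delete
  nfs:
    path: /data/volumes/v4
    server: 192.168.20.223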
PV phases (a quick way to check them follows the list)
Available – the volume is not yet bound to any PVC.
Bound – the volume is bound to a PVC.
Released – the PVC has been deleted, and the volume is released but not yet reclaimed by the cluster.
Failed – automatic reclamation of the volume failed.
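The phase shows up in the STATUS column of kubectl get pv; a single PV's phase can also be read directly with a JSONPath query (pv01 here is the PV created later in this walkthrough):

kubectl get pv
kubectl get pv pv01 -o jsonpath='{.status.phase}'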
PV access modes
ReadWriteOnce (RWO) – read/write, mountable by a single node.
ReadOnlyMany (ROX) – read-only, mountable by many nodes.
ReadWriteMany (RWX) – read/write, mountable by many nodes.
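A PV advertises the modes it supports, and a PVC requests the modes it needs; a claim can only bind to a PV whose modes cover the request. The manifest syntax, as used in the PV definitions later in this section:

spec:
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]   # shown by kubectl as RWX,RWO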
Procedure
This article demonstrates with NFS, as follows:
Pod (Container) ----> PVC ----> PV ----> NFS storage
First, prepare the underlying storage resources.
[root@docker ~]# cat /etc/exports
/data/volumes/v1 192.168.20.0/24(rw,no_root_squash)
/data/volumes/v2 192.168.20.0/24(rw,no_root_squash)
/data/volumes/v3 192.168.20.0/24(rw,no_root_squash)
/data/volumes/v4 192.168.20.0/24(rw,no_root_squash)
/data/volumes/v5 192.168.20.0/24(rw,no_root_squash)
[root@docker ~]# exportfs -rv
exporting 192.168.20.0/24:/data/volumes/v5
exporting 192.168.20.0/24:/data/volumes/v4
exporting 192.168.20.0/24:/data/volumes/v3
exporting 192.168.20.0/24:/data/volumes/v2
exporting 192.168.20.0/24:/data/volumes/v1
[root@docker ~]#
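The exported directories are assumed to already exist on the NFS server; if they do not, create them first:

mkdir -p /data/volumes/v{1..5}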
Then create PersistentVolumes (PVs) on top of those storage resources.
[root@docker79 volume]# vim pv-vol-demo.yaml
[root@docker79 volume]# cat pv-vol-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
  labels:
    name: pv01
spec:
  nfs:
    path: /data/volumes/v1
    server: 192.168.20.223
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02
  labels:
    name: pv02
spec:
  nfs:
    path: /data/volumes/v2
    server: 192.168.20.223
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03
  labels:
    name: pv03
spec:
  nfs:
    path: /data/volumes/v3
    server: 192.168.20.223
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 15Gi
[root@k8s-master-dev volumes]# kubectl apply -f pv-vol-demo.yaml
persistentvolume/pv01 created
persistentvolume/pv02 created
persistentvolume/pv03 created
[root@k8s-master-dev volumes]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
pv01      5Gi        RWO,RWX        Retain           Available                                      13s
pv02      10Gi       RWO,RWX        Retain           Available                                      13s
pv03      15Gi       RWO,RWX        Retain           Available                                      13s
[root@k8s-master-dev volumes]#
Finally, create a PersistentVolumeClaim (PVC) and mount it into the container in the Pod.
[root@k8s-master-dev volumes]# vim pod-pvc-vol-demo.yaml
[root@k8s-master-dev volumes]# cat pod-pvc-vol-demo.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 6Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc-vol
  namespace: default
spec:
  containers:
  - name: myapp
    image: nginx:1.15-alpine
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc
[root@k8s-master-dev volumes]# kubectl apply -f pod-pvc-vol-demo.yaml
persistentvolumeclaim/mypvc created
pod/pod-pvc-vol created
[root@k8s-master-dev volumes]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON    AGE
pv01      5Gi        RWO,RWX        Retain           Available                                            5m
pv02      10Gi       RWO,RWX        Retain           Bound       default/mypvc                            5m
pv03      15Gi       RWO,RWX        Retain           Available                                            5m
[root@k8s-master-dev volumes]# kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc     Bound     pv02      10Gi       RWO,RWX                       13s
[root@k8s-master-dev volumes]#
Because the PVC asks for 6Gi, it automatically binds to pv02 (10Gi), the smallest available PV that satisfies both the size and the access-mode request. In the example above the PVs had to be created by hand before the PVC could be created and mounted; this is static provisioning. For dynamic provisioning (StorageClass), see: http://docs.kubernetes.org.cn/803.html
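As a taste of dynamic provisioning, a minimal sketch follows. The provisioner name example.com/nfs and the object names are hypothetical; a real cluster needs a matching provisioner actually deployed (for NFS, typically an external NFS client provisioner). The PVC simply names the StorageClass instead of relying on pre-created PVs:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic              # hypothetical name
provisioner: example.com/nfs     # hypothetical; must match a deployed provisioner
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-dynamic
  namespace: default
spec:
  storageClassName: nfs-dynamic  # selects the class above
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 6Gi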