Starting from an environment of "create a PV, create a PVC bound to the PV, create a Pod that mounts the PVC", this article runs a series of deletion experiments and records and analyzes the resulting state of each resource.
The experiment creates one PV, one PVC bound to that PV, and one Pod that mounts the PVC, plus two small scripts to quickly create and tear down the environment. The scripts are shown below.
Note that when you create a PV, Kubernetes does not check whether the configured server actually exists, whether that server actually exposes a usable NFS export, or whether the declared storage capacity is really available. In my experience the PV definition is just a declaration; the system performs no validation against it. PVC binding likewise only filters PVs by the requested size and access modes and picks the one that satisfies the claim at the lowest cost.
[root@k8s-master pv]# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/data/disk1"
    server: 192.168.20.47
    readOnly: false
[root@k8s-master pv]# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
[root@k8s-master pv]# cat test-pvc-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pvc
  labels:
    name: test-nfs-pvc
spec:
  containers:
  - name: test-nfs-pvc
    image: registry:5000/back_demon:1.0
    ports:
    - name: backdemon
      containerPort: 80
    command:
    - /run.sh
    volumeMounts:
    - name: nfs-vol
      mountPath: /home/laizy/test/nfs-pvc
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: nfs-pvc
[root@k8s-master pv]# cat start.sh
#!/bin/bash
kubectl create -f nfs-pv.yaml
kubectl create -f pvc.yaml
kubectl create -f test-pvc-pod.yaml
[root@k8s-master pv]# cat remove.sh
#!/bin/bash
kubectl delete pod test-nfs-pvc
kubectl delete persistentvolumeclaim nfs-pvc
kubectl delete persistentvolume pv0001
[root@k8s-master pv]#
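As a quick illustration of the "no validation" point above, here is a minimal sketch (not from the original experiment; the name pv-bogus, the server address 10.0.0.99, the export path and the 500Gi capacity are all made up): a PV that points at a non-existent NFS server with a capacity far larger than any real disk is still accepted by the API server and reported as Available.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-bogus           # hypothetical PV, only to show that nothing is validated
spec:
  capacity:
    storage: 500Gi         # far larger than any disk actually present
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/no/such/export"
    server: 10.0.0.99      # no NFS server listens at this address

The problem only surfaces later, when a Pod actually tries to mount the volume and the kubelet fails to reach the NFS server.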
Create the PV, create the PVC bound to the PV, create the Pod mounting the PVC, then delete the PV. The PVC status changes from Bound to Lost, yet the volume inside the Pod is still usable and its data is not deleted.
[root@k8s-master pv]# ./start.sh
persistentvolume "pv0001" created
persistentvolumeclaim "nfs-pvc" created
pod "test-nfs-pvc" created
[root@k8s-master pv]# kubectl get persistentvolume
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
pv0001    5Gi        RWX           Recycle         Bound     default/nfs-pvc             15s
[root@k8s-master pv]# kubectl get persistentvolumeclaim
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfs-pvc   Bound     pv0001    5Gi        RWX           18s
[root@k8s-master pv]# kubectl get pod test-nfs-pvc
NAME           READY     STATUS    RESTARTS   AGE
test-nfs-pvc   1/1       Running   0          39s
[root@k8s-master pv]# kubectl delete persistentvolume pv0001
persistentvolume "pv0001" deleted
[root@k8s-master pv]# kubectl get persistentvolume
No resources found.
[root@k8s-master pv]# kubectl get persistentvolumeclaim
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfs-pvc   Lost      pv0001    0                        1m
[root@k8s-master pv]# kubectl get pod test-nfs-pvc
NAME           READY     STATUS    RESTARTS   AGE
test-nfs-pvc   1/1       Running   0          1m
[root@k8s-master pv]# kubectl exec -ti test-nfs-pvc /bin/bash
[root@test-nfs-pvc /]# cd /home/laizy/test/nfs-pvc/
[root@test-nfs-pvc nfs-pvc]# ls
2.out
[root@test-nfs-pvc nfs-pvc]# exit
exit
[root@k8s-master pv]#
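The same can be confirmed from the NFS server side (a sketch, using the server address and export path declared in nfs-pv.yaml; the file listed is simply whatever was written into the volume earlier):

ssh 192.168.20.47 ls /data/disk1    # the exported directory still contains its files after the PV object is deleted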
If a PV's reclaim policy is set to Recycle, then when its PVC is deleted the system (controller-manager) launches a recycler Pod to scrub the contents of the volume. Each volume type has its own recycler Pod with its own specific logic. Taking NFS as an example, the configuration and the recycler Pod definition are as follows:
[root@k8s-master ~]# cat /etc/kubernetes/controller-manager    # restart the controller-manager after changing this file
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--pv-recycler-pod-template-filepath-nfs=/etc/kubernetes/recycler.yaml"
[root@k8s-master ~]# cat /etc/kubernetes/recycler.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pv-recycler-
  namespace: default
spec:
  restartPolicy: Never
  volumes:
  - name: vol
    hostPath:
      path: /any/path/it/will/be/replaced
  containers:
  - name: pv-recycler
    image: "docker.io/busybox"
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
    volumeMounts:
    - name: vol
      mountPath: /scrub
[root@k8s-master ~]#
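The scrub command in recycler.yaml deletes everything under /scrub, including dot-files, and then fails the Pod (exit 1) if anything is left behind. The same pattern can be tried locally (a sketch; /tmp/scrub-test is just a throwaway directory used for the demonstration):

mkdir -p /tmp/scrub-test && touch /tmp/scrub-test/1.out /tmp/scrub-test/.hidden
sh -c 'test -e /tmp/scrub-test && rm -rf /tmp/scrub-test/..?* /tmp/scrub-test/.[!.]* /tmp/scrub-test/* && test -z "$(ls -A /tmp/scrub-test)" || exit 1'
echo $?    # 0 means the directory was emptied; any leftover file would make the command exit 1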
Without the configuration above, the default recycler Pod uses the busybox image on gcr.io, which for various reasons cannot be pulled from inside mainland China. Even having the gcr busybox image cached on your node is not enough; you also need the image pull policy set to IfNotPresent, otherwise kubelet keeps asking gcr.io whether a newer version of the image exists, which again ends in an image-pull error. So you must configure the controller-manager as shown above and point it at whatever busybox image you can actually pull.
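In practice this means two checks on a cluster that cannot reach gcr.io (a sketch, assuming Docker is the container runtime on the nodes):

# On every node, make a busybox image available locally so IfNotPresent can use the cached copy
docker pull docker.io/busybox

# On the master, confirm the controller-manager is actually running with the custom recycler template
ps aux | grep kube-controller-manager | grep pv-recycler-pod-template-filepath-nfs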
Create the PV, create the PVC bound to the PV, create the Pod mounting the PVC, then delete the PVC. The PV status changes from Bound to Available; the system (controller-manager) runs the persistent-volume recycler Pod (recycler-for-pv0001), which empties the PV that backed the PVC. The volume inside the Pod is still mounted and usable, but the data in it has been deleted.
[root@k8s-master pv]# ./start.sh
persistentvolume "pv0001" created
persistentvolumeclaim "nfs-pvc" created
pod "test-nfs-pvc" created
[root@k8s-master pv]# kubectl get persistentvolume
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
pv0001    5Gi        RWX           Recycle         Bound     default/nfs-pvc             11s
[root@k8s-master pv]# kubectl get persistentvolumeclaim
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfs-pvc   Bound     pv0001    5Gi        RWX           14s
[root@k8s-master pv]# kubectl get pod
NAME           READY     STATUS    RESTARTS   AGE
test-nfs-pvc   1/1       Running   0          19s
[root@k8s-master pv]# kubectl exec -ti test-nfs-pvc /bin/bash
[root@test-nfs-pvc /]# touch /home/laizy/test/nfs-pvc/1.out
[root@test-nfs-pvc /]# cd /home/laizy/test/nfs-pvc/
[root@test-nfs-pvc nfs-pvc]# ls
1.out
[root@test-nfs-pvc nfs-pvc]# exit
exit
[root@k8s-master pv]# kubectl delete persistentvolumeclaim nfs-pvc
persistentvolumeclaim "nfs-pvc" deleted
[root@k8s-master pv]# kubectl get pod
NAME                  READY     STATUS              RESTARTS   AGE
recycler-for-pv0001   0/1       ContainerCreating   0          1s
test-nfs-pvc          1/1       Running             0          1m
[root@k8s-master pv]# kubectl get persistentvolume
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
pv0001    5Gi        RWX           Recycle         Available                       1m
[root@k8s-master pv]# kubectl get persistentvolumeclaim
No resources found.
[root@k8s-master pv]# kubectl get pod test-nfs-pvc
NAME           READY     STATUS    RESTARTS   AGE
test-nfs-pvc   1/1       Running   0          2m
[root@k8s-master pv]# kubectl exec -ti test-nfs-pvc /bin/bash
[root@test-nfs-pvc /]# ls /home/laizy/test/nfs-pvc/
[root@test-nfs-pvc /]
Create the PV, create the PVC bound to the PV, create the Pod mounting the PVC, then delete the Pod. The PV and PVC statuses do not change, and the NFS data behind the Pod's volume is not deleted.
[root@k8s-master pv]# ./start.sh
persistentvolume "pv0001" created
persistentvolumeclaim "nfs-pvc" created
pod "test-nfs-pvc" created
[root@k8s-master pv]# kubectl get persistentvolume
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
pv0001    5Gi        RWX           Recycle         Bound     default/nfs-pvc             11s
[root@k8s-master pv]# kubectl get persistentvolumeclaim
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfs-pvc   Bound     pv0001    5Gi        RWX           27s
[root@k8s-master pv]# kubectl get pod
NAME           READY     STATUS    RESTARTS   AGE
test-nfs-pvc   1/1       Running   0          36s
[root@k8s-master pv]# kubectl exec -ti test-nfs-pvc /bin/bash
[root@test-nfs-pvc /]# cat /home/laizy/test/nfs-pvc/1.out
123456
[root@test-nfs-pvc /]# exit
exit
[root@k8s-master pv]# kubectl delete pod test-nfs-pvc
pod "test-nfs-pvc" deleted
[root@k8s-master pv]# kubectl get persistentvolume
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM             REASON    AGE
pv0001    5Gi        RWX           Recycle         Bound     default/nfs-pvc             8m
[root@k8s-master pv]# kubectl get persistentvolumeclaim
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfs-pvc   Bound     pv0001    5Gi        RWX           8m
[root@k8s-master pv]# ssh 192.168.20.47    # log in to the remote NFS server
root@192.168.20.47's password:
Last failed login: Mon Mar 27 14:37:19 CST 2017 from :0 on :0
There was 1 failed login attempt since the last successful login.
Last login: Mon Mar 20 10:49:18 2017
[root@localhost ~]# cat /data/disk1/1.out
123456
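Because the PVC is still Bound after the Pod is deleted, a new Pod can be created against the same claim and will find the existing data (a sketch reusing the manifest above):

kubectl create -f test-pvc-pod.yaml
kubectl exec -ti test-nfs-pvc -- cat /home/laizy/test/nfs-pvc/1.out    # should print 123456 again; the data survived the Pod deletion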