Pods themselves are stateless, so stateful applications need to persist their data.
1: Mount the data onto the host. However, after a restart the pod may be scheduled to a different node; the data is not lost, but the pod may no longer be able to find it:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    name: busybox
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    volumeMounts:
    - mountPath: /busybox-data
      name: data
  volumes:
  - hostPath:
      path: /tmp/data
    name: data
```
2: Mount external storage, such as NFS:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: web01
  template:
    metadata:
      name: nginx
      labels:
        app: web01
    spec:
      containers:
      - name: nginx
        image: reg.docker.tb/harbor/nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          readOnly: false
          name: nginx-data
      volumes:
      - name: nginx-data
        nfs:
          server: 10.0.10.31
          path: "/data/www-data"
```
The approaches above are the simple ones: the storage is defined concretely, inline in the workload (deployment) spec. This causes several problems:
1: Access control: any pod can touch any path.
2: Disk size: there is no way to put a capacity limit on a given piece of storage.
3: If the NFS server URL changes, every configuration that references it has to be modified.
To solve these problems, the PV/PVC concept was introduced.
First create a volume, a PersistentVolume (PV). A PV does not belong to any namespace, and it can constrain capacity and access (read/write) modes:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
  labels:
    app: "my-nfs"
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/data/disk1"
    server: 192.168.20.47
    readOnly: false
```
Then create a PVC in the corresponding namespace:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      app: "my-nfs"
```
Then run kubectl apply to create the PV and the PVC, as sketched below.
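A minimal sketch, assuming the two manifests above are saved as pv.yaml and pvc.yaml (the file names are illustrative):

```bash
# Create the PersistentVolume and the PersistentVolumeClaim
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml

# Verify the claim has bound to the volume:
# both should report STATUS "Bound"
kubectl get pv pv0001
kubectl get pvc nfs-pvc
```

Once the PVC reports Bound, pods in its namespace can reference it by name.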
Finally, reference the PVC from the application:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pvc
  labels:
    name: test-nfs-pvc
spec:
  containers:
  - name: test-nfs-pvc
    image: registry:5000/back_demon:1.0
    ports:
    - name: backdemon
      containerPort: 80
    command:
    - /run.sh
    volumeMounts:
    - name: nfs-vol
      mountPath: /home/laizy/test/nfs-pvc
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: nfs-pvc
```
This makes it easy to confine each PVC to its own subdirectory, and if the NFS server is ever migrated, only the server address in the PV needs to change.
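For instance, a second PV could expose another subdirectory of the same NFS server under a different label, so that each PVC binds only to its own directory. This is a sketch; the name pv0002, the path /data/disk2, and the label my-nfs-2 are made up for illustration:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0002           # hypothetical second volume
  labels:
    app: "my-nfs-2"      # distinct label, so only a PVC whose selector matches binds here
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/data/disk2"  # a different subdirectory of the same export
    server: 192.168.20.47
```

A PVC selecting app: "my-nfs-2" then gets exactly this directory and nothing else; if the NFS server moves, updating spec.nfs.server in each PV is the only change required.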