Advanced Kubernetes: PersistentVolume Static Provisioning with NFS Network Storage
NFS is a long-established technology, and for single-server storage it is still very mainstream. Its biggest drawback is that there is no clustered version: making it highly available is quite painful, and the file system itself cannot do it, so for large-scale deployments you should choose a distributed storage system instead. NFS is simply a network file server: after installing it you share a directory, and other servers can mount that directory locally. Files written into the directory on one machine are visible on the remote server, giving you shared storage. It is generally used for shared data. For example, with multiple web servers you need to keep their data consistent, so you mount the NFS share into each web server's document root and keep the site's files on the NFS server. That way every web server reads the same directory and serves the same content, so all nodes provide a consistent application.
We'll dedicate one machine as the NFS server. Let's first set up an NFS server to store our web document root:
[root@nfs ~]# yum install nfs-utils -y
Export a directory so that other servers can mount it:
[root@nfs ~]# mkdir /opt/k8s
[root@nfs ~]# vim /etc/exports
/opt/k8s 192.168.30.0/24(rw,no_root_squash)
This grants the 192.168.30.0/24 subnet read-write access to the share (no_root_squash keeps root privileges for root users on the clients). Then start the NFS service:

[root@nfs ~]# systemctl start nfs
Pick a node and test the mount. Any server that wants to use the share must install the NFS client:
[root@k8s-node2 ~]# yum install nfs-utils -y
[root@k8s-node2 ~]# mount -t nfs 192.168.30.27:/opt/k8s /mnt
[root@k8s-node2 ~]# cd /mnt
[root@k8s-node2 mnt]# df -h
192.168.30.27:/opt/k8s   36G  5.8G   30G  17% /mnt
[root@k8s-node2 mnt]# touch a.txt
Check on the server side: the data has been shared over:
[root@nfs ~]# cd /opt/k8s/
[root@nfs k8s]# ls
a.txt
Likewise, deleting a file on the NFS server removes it from every client.
Next, how do we use this from Kubernetes?
We'll put the web content under this directory:
[root@nfs k8s]# mkdir wwwroot
[root@k8s-master demo]# vim nfs.yaml

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nfs
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        nfs:
          server: 192.168.30.27
          path: /opt/k8s/wwwroot
[root@k8s-master demo]# kubectl create -f nfs.yaml
[root@k8s-master demo]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
mypod                               1/1     Running   0          6h6m
mypod2                              1/1     Running   0          6h
nginx-5ddcc6cb74-lplxl              1/1     Running   0          6h43m
nginx-deployment-744d977b46-8q97k   1/1     Running   0          48s
nginx-deployment-744d977b46-ftjfk   1/1     Running   0          48s
nginx-deployment-744d977b46-nksph   1/1     Running   0          48s
web-67fcf9bf8-mrlhd                 1/1     Running   0          103m
Exec into a container and check the mounts to confirm the NFS share is mounted:
[root@k8s-master demo]# kubectl exec -it nginx-deployment-744d977b46-8q97k bash
root@nginx-deployment-744d977b46-8q97k:/# df -h
Filesystem                       Size  Used  Avail Use% Mounted on
overlay                           17G  5.6G   12G  33% /
tmpfs                             64M     0   64M   0% /dev
tmpfs                            2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/centos-root           17G  5.6G   12G  33% /etc/hosts
shm                               64M     0   64M   0% /dev/shm
192.168.30.27:/opt/k8s/wwwroot    36G  5.8G   30G  17% /usr/share/nginx/html
tmpfs                            2.0G   12K  2.0G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                            2.0G     0  2.0G   0% /proc/acpi
tmpfs                            2.0G     0  2.0G   0% /proc/scsi
tmpfs                            2.0G     0  2.0G   0% /sys/firmware
Write data into the web directory from inside the pod, then check that it also shows up in the NFS server's directory:
root@nginx-deployment-744d977b46-8q97k:/# cd /usr/share/nginx/html/
root@nginx-deployment-744d977b46-8q97k:/usr/share/nginx/html# ls
root@nginx-deployment-744d977b46-8q97k:/usr/share/nginx/html# echo "hello world" > index.html
root@nginx-deployment-744d977b46-8q97k:/usr/share/nginx/html# cat index.html
hello world
Verify on the NFS server:
[root@nfs k8s]# cd wwwroot/
[root@nfs wwwroot]# ls
index.html
[root@nfs wwwroot]# cat index.html
hello world
To orchestrate storage, Kubernetes provides the PersistentVolume (PV) and PersistentVolumeClaim (PVC) resources, which handle storage orchestration for containers.
• PersistentVolume (PV): an abstraction over the creation and consumption of storage resources, so that storage can be managed as a cluster resource.
PVs are usually the operations team's concern; they are used to manage external storage.
• Static: PVs are created in advance, for example a 100G PV and a 200G PV, so that whoever needs storage can claim one. A PVC binds to a PV by matching against what was created: the PV's capacity, access modes, name, and so on.
• Dynamic: PVs are provisioned automatically on demand.
• PersistentVolumeClaim (PVC): lets users consume storage without caring about the underlying Volume implementation details.
The user only has to state the capacity they need. For example, if a developer is deploying a service that needs 10G, they define a PVC requesting 10G and don't have to worry about anything else.
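Conceptually, the static binding between a claim and the pre-created PVs can be sketched like this. This is only a toy simulation of the size and access-mode matching (the real controller also considers storage classes, label selectors, and more); the PV names and sizes mirror the ones used later in this walkthrough:

```python
# Toy sketch of static PV binding: a claim binds to an Available PV
# whose capacity and access modes satisfy the request. This ignores
# storage classes and selectors that the real controller also checks.

def bind(claim, volumes):
    candidates = [
        v for v in volumes
        if v["status"] == "Available"
        and v["capacity_gi"] >= claim["request_gi"]
        and set(claim["access_modes"]) <= set(v["access_modes"])
    ]
    if not candidates:
        return None
    # Prefer the smallest PV that satisfies the claim, to avoid waste.
    chosen = min(candidates, key=lambda v: v["capacity_gi"])
    chosen["status"] = "Bound"
    return chosen["name"]

pvs = [
    {"name": "zhaocheng", "capacity_gi": 5,
     "access_modes": ["ReadWriteMany"], "status": "Available"},
    {"name": "zhaochengcheng", "capacity_gi": 10,
     "access_modes": ["ReadWriteMany"], "status": "Available"},
]
pvc = {"request_gi": 5, "access_modes": ["ReadWriteMany"]}

print(bind(pvc, pvs))  # prints: zhaocheng
```

With a 5Gi and a 10Gi PV available, a 5Gi claim takes the 5Gi PV; a 6Gi claim would instead bind the 10Gi one.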
PersistentVolume Static Provisioning
First, create a container application:
[root@k8s-master ~]# cd demo/
[root@k8s-master demo]# mkdir storage
[root@k8s-master demo]# cd storage/
[root@k8s-master storage]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html
  volumes:
  - name: www
    persistentVolumeClaim:
      claimName: my-pvc
Now the claim manifest. The claimName in the Pod must match the PVC's name exactly; the two files are usually kept together:
[root@k8s-master storage]# vim pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Next it's the operations team's turn: create the PV in advance.
[root@k8s-master storage]# vim pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zhaocheng
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /opt/k8s/zhaocheng
    server: 192.168.30.27
Create the PV in advance, along with its backing directory on the NFS server:
[root@k8s-master storage]# kubectl create -f pv.yaml
persistentvolume/zhaocheng created
[root@k8s-master storage]# kubectl get pv
zhaocheng   5Gi   RWX   Retain   Available   5s
Let's create a second PV. Create the backing directories on the NFS server ahead of time, and change the name:
[root@localhost ~]# cd /opt/k8s/
[root@localhost k8s]# mkdir zhaocheng zhaochengcheng
[root@k8s-master storage]# vim pv2.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zhaochengcheng
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /opt/k8s/zhaochengcheng
    server: 192.168.30.27
[root@k8s-master storage]# kubectl get pv
zhaocheng        5Gi    RWX   Retain   Available   13s
zhaochengcheng   10Gi   RWX   Retain   Available   4s
Now create our Pod and PVC; here I've put them in a single file:
[root@k8s-master storage]# vim pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html
  volumes:
  - name: www
    persistentVolumeClaim:
      claimName: my-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

[root@k8s-master storage]# kubectl create -f pod.yaml
The claim will be matched against our static PVs according to its request: we created one 5G PV and one 10G PV, and binding is based on the requested size.
[root@k8s-master storage]# kubectl get pod,pvc
pod/my-pod                 1/1   Running   0   13s
pod/nfs-744d977b46-dh9xj   1/1   Running   0   12m
pod/nfs-744d977b46-kcx6h   1/1   Running   0   12m
pod/nfs-744d977b46-wqhc6   1/1   Running   0   12m

persistentvolumeclaim/my-pvc   Bound   zhaocheng   5Gi   RWX   13s
Exec into the container and check the size of the mounted storage (note that df shows the size of the whole NFS export; the PV's 5Gi capacity is used for matching, not enforced as a quota by NFS):
[root@k8s-master storage]# kubectl exec -it pod/my-pod bash
root@my-pod:/# df -Th
Filesystem                        Type     Size  Used  Avail Use% Mounted on
overlay                           overlay   17G  4.9G   13G  29% /
tmpfs                             tmpfs     64M     0   64M   0% /dev
tmpfs                             tmpfs    2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/centos-root           xfs       17G  4.9G   13G  29% /etc/hosts
shm                               tmpfs     64M     0   64M   0% /dev/shm
192.168.30.27:/opt/k8s/zhaocheng  nfs4      36G  5.8G   30G  17% /usr/share/nginx/html
tmpfs                             tmpfs    2.0G   12K  2.0G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                             tmpfs    2.0G     0  2.0G   0% /proc/acpi
tmpfs                             tmpfs    2.0G     0  2.0G   0% /proc/scsi
tmpfs                             tmpfs    2.0G     0  2.0G   0% /sys/firmware
Create a test page:
root@my-pod:/# cd /usr/share/nginx/html/
root@my-pod:/usr/share/nginx/html# ls
root@my-pod:/usr/share/nginx/html# echo "5G ready" > index.html
root@my-pod:/usr/share/nginx/html# cat index.html
5G ready
Check on our NFS server:
[root@localhost ~]# cd /opt/k8s/
[root@localhost k8s]# ls
wwwroot  zhaocheng  zhaochengcheng
[root@localhost k8s]# cd zhaocheng
[root@localhost zhaocheng]# cat index.html
5G ready
The claim has bound to our 5G PV:
[root@k8s-master storage]# kubectl get pv
zhaocheng        5Gi    RWX   Retain   Bound       default/my-pvc   8m52s
zhaochengcheng   10Gi   RWX   Retain   Available                    7m51s