A Volume is a shared directory in a Pod that can be accessed by multiple containers. The Volume concept in Kubernetes is similar in purpose to Docker's volumes, but the two are not equivalent. First, a Kubernetes Volume is defined at the Pod level and then mounted by the containers in that Pod at specific paths. Second, a Kubernetes Volume has the same lifecycle as its Pod but is independent of the container lifecycle: when a container terminates or restarts, the data in the Volume is not lost. Finally, Volumes support many backend types, such as advanced distributed filesystems like GlusterFS and Ceph.
An emptyDir Volume is created when a Pod is assigned to a node. As the name suggests, its initial content is empty, and there is no need to specify a corresponding directory or file on the host, because Kubernetes allocates the directory automatically. When the Pod is removed from the node, the data in the emptyDir is permanently deleted. Typical uses of emptyDir include: scratch space for temporary work such as sort/merge operations; a checkpoint directory that lets a long-running task recover after a container crash; and a directory shared between containers in the same Pod, where one container writes files that another reads or serves.
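For reference, the volume stanza itself is tiny. A minimal sketch of the emptyDir definition, including the optional memory-backed (tmpfs) variant; the volume names here are illustrative:

  volumes:
  - name: share           # referenced by each container's volumeMounts
    emptyDir: {}          # default: backed by the node's disk
  # memory-backed variant (tmpfs), counted against the container's memory limit:
  # - name: cache
  #   emptyDir:
  #     medium: Memory
  #     sizeLimit: 128Mi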
Using emptyDir is fairly simple. In most cases we first declare a Volume on the Pod, then reference it in the containers and mount it at some directory inside each container. For example, we define a Pod with two containers, one running nginx and one running busybox, and declare a shared volume on the Pod whose content both containers can see. The topology is as follows:
Pay attention to the manifest below: the shared volume name must be the same in the volumes definition and in each container's volumeMounts.
[root@master ~]# cat test.yaml
apiVersion: v1
kind: Service
metadata:
  name: serivce-mynginx
  namespace: default
spec:
  type: NodePort
  selector:
    app: mynginx
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 30080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - name: mynginx
        image: lizhaoqwe/nginx:v1
        volumeMounts:
        - mountPath: /usr/share/nginx/html/
          name: share
        ports:
        - name: nginx
          containerPort: 80
      - name: busybox
        image: busybox
        command:
        - "/bin/sh"
        - "-c"
        - "sleep 4444"
        volumeMounts:
        - mountPath: /data/
          name: share
      volumes:
      - name: share
        emptyDir: {}
Create the Pod:
[root@master ~]# kubectl create -f test.yaml
Check the Pod:
[root@master ~]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
deploy-5cd657dd46-sx287   2/2     Running   0          2m1s
Check the Service:
[root@master ~]# kubectl get svc
NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes        ClusterIP   10.96.0.1      <none>        443/TCP        6d10h
serivce-mynginx   NodePort    10.99.110.43   <none>        80:30080/TCP   2m27s
Enter the busybox container and create an index.html:
[root@master ~]# kubectl exec -it deploy-5cd657dd46-sx287 -c busybox -- /bin/sh
Inside the container:
/data # cd /data
/data # echo "fengzi" > index.html
Open a browser and verify via the NodePort (port 30080).
Now check inside the nginx container whether the index.html file is there:
[root@master ~]# kubectl exec -it deploy-5cd657dd46-sx287 -c nginx -- /bin/sh
Inside the container:
# cd /usr/share/nginx/html
# ls -ltr
total 4
-rw-r--r-- 1 root root 7 Sep  9 17:06 index.html
OK, the file we wrote in the busybox container is visible to nginx!
hostPath mounts a file or directory from the host node's filesystem into the Pod. It is typically used for: container applications whose log files need to be kept permanently on the host; containers that need access to the Docker engine's internal data by mounting the host's /var/lib/docker; and monitoring agents such as cAdvisor that need to read the host's /sys.
When using this type of volume, keep the following in mind: Pods that use the same hostPath on different nodes may see different directory contents, because the underlying host directories differ; Kubernetes resource quotas cannot manage or account for the space consumed by hostPath volumes; and files created on the host are usually owned by root, so a container process that does not run as root may need its permissions adjusted before it can write to them.
Architecture diagram of a hostPath volume:
Now let's define a hostPath volume and see the effect:
[root@master ~]# cat test.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy
  namespace: default
spec:
  selector:
    app: mynginx
  type: NodePort
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 31111
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      name: web
      labels:
        app: mynginx
    spec:
      containers:
      - name: mycontainer
        image: lizhaoqwe/nginx:v1
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: persistent-storage
        ports:
        - containerPort: 80
      volumes:
      - name: persistent-storage
        hostPath:
          type: DirectoryOrCreate
          path: /mydata
Pay attention to the type field under hostPath; we can look at the help information:
[root@master data]# kubectl explain deploy.spec.template.spec.volumes.hostPath.type
KIND:     Deployment
VERSION:  extensions/v1beta1

FIELD:    type <string>

DESCRIPTION:
     Type for HostPath Volume Defaults to "" More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath
The help output does not say much, but it points to a reference page; open that page.
The page lists the possible values for hostPath type: "" (the default, no check is performed), DirectoryOrCreate, Directory, FileOrCreate, File, Socket, CharDevice and BlockDevice. Here we use DirectoryOrCreate, which creates the directory on the host if it does not already exist.
Apply the YAML file:
[root@master ~]# kubectl create -f test.yaml
service/nginx-deploy created
deployment.apps/mydeploy created
Then go to both nodes and check whether the /mydata directory exists.
The mydata directory has been created on both nodes; next, write something into it.
With something written on both nodes, we can now verify.
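For example (the node names, node IP and sample text below are illustrative, not the exact content from the original screenshots):

  # on node1
  [root@node1 ~]# echo "hello from node1" > /mydata/index.html
  # on node2
  [root@node2 ~]# echo "hello from node2" > /mydata/index.html
  # from any machine that can reach the nodes, hit the NodePort
  [root@master ~]# curl http://192.168.254.12:31111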
Access works fine, and requests are load balanced across the two Pods, so the response alternates between the two pages.
NFS should already be familiar, so I won't explain what NFS is here; I'll only show how to mount an NFS filesystem in a Kubernetes cluster.
Architecture diagram of a volume backed by NFS:
Start another virtual machine outside the cluster and install the nfs-utils package.
Note: nfs-utils must also be installed on every node in the cluster, otherwise the mount will fail!
[root@master mnt]# yum install nfs-utils
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.cn99.com
 * extras: mirrors.cn99.com
 * updates: mirrors.cn99.com
Package 1:nfs-utils-1.3.0-0.61.el7.x86_64 already installed and latest version
Nothing to do
Edit /etc/exports and add the following:
[root@localhost share]# vim /etc/exports
/share 192.168.254.0/24(insecure,rw,no_root_squash)
Restart the NFS service:
[root@localhost share]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service
Write an index.html file with some content into /share:
[root@localhost share]# echo "nfs server" > /share/index.html
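Optionally, verify from a cluster node that the export is visible; a quick check, assuming nfs-utils is already installed there (output illustrative):

  [root@master ~]# showmount -e 192.168.254.11
  Export list for 192.168.254.11:
  /share 192.168.254.0/24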
On the master node of the Kubernetes cluster, create a YAML file with the following content:
[root@master ~]# cat test.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy
  namespace: default
spec:
  selector:
    app: mynginx
  type: NodePort
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 31111
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      name: web
      labels:
        app: mynginx
    spec:
      containers:
      - name: mycontainer
        image: lizhaoqwe/nginx:v1
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nfs
        ports:
        - containerPort: 80
      volumes:
      - name: nfs
        nfs:
          server: 192.168.254.11   # NFS server address
          path: /share             # shared directory on the NFS server
Apply the YAML file:
[root@master ~]# kubectl create -f test.yaml
service/nginx-deploy created
deployment.apps/mydeploy created
Verify by accessing the NodePort in a browser; the page returns the "nfs server" content we wrote on the NFS server.
OK, no problem!!!
The volumes discussed so far are defined on the Pod and are part of the compute resource, whereas network storage is actually an entity that exists independently of compute resources. For example, with virtual machines we usually define the network storage first, carve a disk out of it and attach that disk to the VM. PersistentVolume (PV) and the associated PersistentVolumeClaim (PVC) play a similar role in Kubernetes.
A PV can be understood as a piece of storage in the Kubernetes cluster backed by some network storage. It is similar to a Volume, but with a few differences: a PV is backed by network storage and does not belong to any particular node, although it can be accessed from every node; a PV is not defined on a Pod, but independently of Pods; and a PV has its own lifecycle, managed by the cluster, and is consumed by Pods through a PVC.
On the NFS server, create the NFS export mappings and restart the service:
[root@localhost ~]# cat /etc/exports
/share_v1 192.168.254.0/24(insecure,rw,no_root_squash)
/share_v2 192.168.254.0/24(insecure,rw,no_root_squash)
/share_v3 192.168.254.0/24(insecure,rw,no_root_squash)
/share_v4 192.168.254.0/24(insecure,rw,no_root_squash)
/share_v5 192.168.254.0/24(insecure,rw,no_root_squash)
[root@localhost ~]# service nfs restart
Create the corresponding directories on the NFS server:
[root@localhost /]# mkdir /share_v{1,2,3,4,5}
On the master node of the Kubernetes cluster, create the PVs. Here I create 5 PVs, one for each of the 5 directories exported by the NFS server.
[root@master ~]# cat createpv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  nfs:                        # storage type
    path: /share_v1           # directory on the NFS server to mount
    server: 192.168.254.11    # NFS server address; a resolvable domain name also works
  accessModes:                # access modes:
                              #   ReadWriteMany: read-write, can be mounted by many nodes
                              #   ReadWriteOnce: read-write, can be mounted by a single node only
                              #   ReadOnlyMany:  read-only, can be mounted by many nodes
  - ReadWriteMany
  - ReadWriteOnce
  capacity:                   # storage capacity
    storage: 10Gi             # this PV provides 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02
spec:
  nfs:
    path: /share_v2
    server: 192.168.254.11
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 20Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03
spec:
  nfs:
    path: /share_v3
    server: 192.168.254.11
  accessModes:
  - ReadWriteMany
  - ReadWriteOnce
  capacity:
    storage: 30Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv04
spec:
  nfs:
    path: /share_v4
    server: 192.168.254.11
  accessModes:
  - ReadWriteMany
  - ReadWriteOnce
  capacity:
    storage: 40Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv05
spec:
  nfs:
    path: /share_v5
    server: 192.168.254.11
  accessModes:
  - ReadWriteMany
  - ReadWriteOnce
  capacity:
    storage: 50Gi
Apply the YAML file:
[root@master ~]# kubectl create -f createpv.yaml
persistentvolume/pv01 created
persistentvolume/pv02 created
persistentvolume/pv03 created
persistentvolume/pv04 created
persistentvolume/pv05 created
Check the PVs:
[root@master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv01   10Gi       RWO,RWX        Retain           Available                                   5m10s
pv02   20Gi       RWX            Retain           Available                                   5m10s
pv03   30Gi       RWO,RWX        Retain           Available                                   5m9s
pv04   40Gi       RWO,RWX        Retain           Available                                   5m9s
pv05   50Gi       RWO,RWX        Retain           Available                                   5m9s
A quick explanation of the columns:
ACCESS MODES:
RWO: ReadWriteOnce
RWX:ReadWriteMany
ROX:ReadOnlyMany
RECLAIM POLICY:
Retain: keep the PV and its data after the PVC that used it is released; the PV will not be bound by another PVC until it is reclaimed manually
Recycle: keep the PV but wipe its data
Delete: delete the PV released by the PVC, together with the backend storage volume
STATUS:
Available: idle, not yet bound to any PVC
Bound: bound to a PVC
Released: the bound PVC has been deleted, but the resource has not yet been reclaimed by the cluster
Failed: automatic reclamation of the PV failed
CLAIM:
Which PVC the PV is bound to, in the format NAMESPACE/PVC_NAME
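For completeness, the reclaim policy of a manually created PV defaults to Retain and can be set explicitly with the persistentVolumeReclaimPolicy field. A minimal sketch (the PV name below is illustrative):

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv-example
  spec:
    persistentVolumeReclaimPolicy: Retain   # or Recycle / Delete, depending on the backend
    accessModes:
    - ReadWriteMany
    capacity:
      storage: 10Gi
    nfs:
      path: /share_v1
      server: 192.168.254.11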
With the PVs in place, we can now create a PVC:
[root@master ~]# cat test.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy
  namespace: default
spec:
  selector:
    app: mynginx
  type: NodePort
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 31111
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      name: web
      labels:
        app: mynginx
    spec:
      containers:
      - name: mycontainer
        image: nginx
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: html
        ports:
        - containerPort: 80
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: mypvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Apply the YAML file:
[root@master ~]# kubectl create -f test.yaml
service/nginx-deploy created
deployment.apps/mydeploy created
persistentvolumeclaim/mypvc created
Check the PVs again; the PVC is now shown as bound to pv02:
[root@master ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv01   10Gi       RWO,RWX        Retain           Available                                           22m
pv02   20Gi       RWX            Retain           Bound       default/mypvc                           22m
pv03   30Gi       RWO,RWX        Retain           Available                                           22m
pv04   40Gi       RWO,RWX        Retain           Available                                           22m
pv05   50Gi       RWO,RWX        Retain           Available                                           22m
Check the PVC:
[root@master ~]# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv02     20Gi       RWX                           113s
On the NFS server, go to the corresponding directory and run the following:
[root@localhost share_v1]# echo 'test pvc' > index.html
Then open a browser and check the page.
OK, no problem!
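A side note on the Retain policy used above: if the PVC is deleted later, the PV keeps its data, but it moves to the Released state and is not re-bound automatically. A rough check might look like this (output illustrative):

  [root@master ~]# kubectl delete pvc mypvc
  persistentvolumeclaim "mypvc" deleted
  [root@master ~]# kubectl get pv pv02
  NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM           STORAGECLASS   REASON   AGE
  pv02   20Gi       RWX            Retain           Released   default/mypvc                           30m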
A best practice for application deployment is to separate configuration from the program itself, so that the application can be reused more easily and different configurations can enable more flexible behavior. Once an application is packaged as a container image, configuration can be injected at container creation time through environment variables or mounted files. In a large container cluster, however, applying different configurations to many containers becomes very complex, so Kubernetes 1.2 introduced a unified application configuration management mechanism: ConfigMap.
Typical ways for containers to consume a ConfigMap are: injecting entries as environment variables inside the container; using entries as arguments for the container's startup command (via those environment variables); and mounting the ConfigMap as a volume, so its entries appear as files inside the container.
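Besides the kubectl create configmap command used below, a ConfigMap can also be defined declaratively in a manifest. A minimal sketch that mirrors the example that follows:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: nginx-var
    namespace: default
  data:
    nginx_port: "80"            # values are strings, so numbers are quoted
    nginx_server: 192.168.254.13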
For example, create a ConfigMap with two entries: nginx_port=80 and nginx_server=192.168.254.13.
[root@master ~]# kubectl create configmap nginx-var --from-literal=nginx_port=80 --from-literal=nginx_server=192.168.254.13
configmap/nginx-var created
Check the ConfigMap:
[root@master ~]# kubectl get cm
NAME        DATA   AGE
nginx-var   2      5s
[root@master ~]# kubectl describe cm nginx-var
Name:         nginx-var
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
nginx_port:
----
80
nginx_server:
----
192.168.254.13
Events:  <none>
Then create a Pod and inject these two entries as environment variables:
[root@master ~]# cat test2.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-nginx
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 30080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: web
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: nginx
          containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/
        env:
        - name: TEST_PORT
          valueFrom:
            configMapKeyRef:
              name: nginx-var
              key: nginx_port
        - name: TEST_HOST
          valueFrom:
            configMapKeyRef:
              name: nginx-var
              key: nginx_server
      volumes:
      - name: html
        emptyDir: {}
Apply the manifest:
[root@master ~]# kubectl create -f test2.yaml
service/service-nginx created
Check the Pods:
[root@master ~]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
mydeploy-d975ff774-fzv7g   1/1     Running   0          19s
mydeploy-d975ff774-nmmqt   1/1     Running   0          19s
Enter a container and check the environment variables:
[root@master ~]# kubectl exec -it mydeploy-d975ff774-fzv7g -- /bin/sh
# printenv
SERVICE_NGINX_PORT_80_TCP_PORT=80
KUBERNETES_PORT=tcp://10.96.0.1:443
SERVICE_NGINX_PORT_80_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT=443
HOSTNAME=mydeploy-d975ff774-fzv7g
SERVICE_NGINX_SERVICE_PORT_NGINX=80
HOME=/root
PKG_RELEASE=1~buster
SERVICE_NGINX_PORT_80_TCP=tcp://10.99.184.186:80
TEST_HOST=192.168.254.13
TEST_PORT=80
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
NGINX_VERSION=1.17.3
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
NJS_VERSION=0.3.5
KUBERNETES_PORT_443_TCP_PROTO=tcp
SERVICE_NGINX_SERVICE_HOST=10.99.184.186
SERVICE_NGINX_PORT=tcp://10.99.184.186:80
SERVICE_NGINX_SERVICE_PORT=80
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
SERVICE_NGINX_PORT_80_TCP_ADDR=10.99.184.186
Notice that the ConfigMap entries (TEST_HOST and TEST_PORT) have been injected into the container's environment.
Keep in mind that with this environment-variable injection, modifying the ConfigMap after the Pod has started has no effect on the running Pod; if the ConfigMap is mounted as a volume instead, updates do propagate in near real time.
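As an aside, if you want every key of a ConfigMap injected as an environment variable without listing them one by one, the container spec also supports envFrom. A minimal sketch, a fragment of the containers section using the same nginx-var ConfigMap (key names become the variable names as-is):

      containers:
      - name: nginx
        image: nginx
        envFrom:
        - configMapRef:
            name: nginx-var    # injects nginx_port and nginx_server as env vars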
As mentioned above, a ConfigMap injected as environment variables does not update a running Pod when the ConfigMap changes. If you want the configuration inside the Pod to follow the ConfigMap in real time, mount it as a volume:
[root@master ~]# cat test2.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-nginx
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 30080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: web
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: nginx
          containerPort: 80
        volumeMounts:
        - name: html-config
          mountPath: /nginx/vars/
          readOnly: true
      volumes:
      - name: html-config
        configMap:
          name: nginx-var
Apply the YAML file:
[root@master ~]# kubectl create -f test2.yaml
service/service-nginx created
deployment.apps/mydeploy created
Check the Pods:
[root@master ~]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-6f6b6c8d9d-pfzjs   1/1     Running   0          90s
mydeploy-6f6b6c8d9d-r9rz4   1/1     Running   0          90s
Enter the container:
[root@master ~]# kubectl exec -it mydeploy-6f6b6c8d9d-pfzjs -- /bin/bash
Inside the container, look at the files that correspond to the ConfigMap entries:
root@mydeploy-6f6b6c8d9d-pfzjs:/# cd /nginx/vars
root@mydeploy-6f6b6c8d9d-pfzjs:/nginx/vars# ls
nginx_port  nginx_server
root@mydeploy-6f6b6c8d9d-pfzjs:/nginx/vars# cat nginx_port
80
root@mydeploy-6f6b6c8d9d-pfzjs:/nginx/vars#
Modify the ConfigMap, changing the port from 80 to 8080:
[root@master ~]# kubectl edit cm nginx-var
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  nginx_port: "8080"
  nginx_server: 192.168.254.13
kind: ConfigMap
metadata:
  creationTimestamp: "2019-09-13T14:22:20Z"
  name: nginx-var
  namespace: default
  resourceVersion: "248779"
  selfLink: /api/v1/namespaces/default/configmaps/nginx-var
  uid: dfce8730-f028-4c57-b497-89b8f1854630
Wait a moment after the edit and check the file inside the container again; the value has been updated to 8080:
root@mydeploy-6f6b6c8d9d-pfzjs:/nginx/vars# cat nginx_port
8080
root@mydeploy-6f6b6c8d9d-pfzjs:/nginx/vars#
Here we use an nginx configuration file as an example: prepare the nginx config file on the host, create a ConfigMap from it, and finally inject it into the container through the ConfigMap.
Create the nginx configuration file:
[root@master ~]# vim www.conf
server {
    server_name 192.168.254.13;
    listen 80;
    root /data/web/html/;
}
Create the ConfigMap:
[root@master ~]# kubectl create configmap nginx-config --from-file=/root/www.conf
configmap/nginx-config created
Check the ConfigMaps:
[root@master ~]# kubectl get cm
NAME           DATA   AGE
nginx-config   1      3m3s
nginx-var      2      63m
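To see the full file stored under the www.conf key, you can dump the ConfigMap as YAML (output abridged and illustrative):

  [root@master ~]# kubectl get cm nginx-config -o yaml
  apiVersion: v1
  data:
    www.conf: |
      server {
          server_name 192.168.254.13;
          listen 80;
          root /data/web/html/;
      }
  kind: ConfigMap
  ...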
Create the Pod and mount the ConfigMap as a volume:
[root@master ~]# cat test2.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-nginx
  namespace: default
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: nginx
    port: 80
    targetPort: 80
    nodePort: 30080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: web
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: nginx
          containerPort: 80
        volumeMounts:
        - name: html-config
          mountPath: /etc/nginx/conf.d/
          readOnly: true
      volumes:
      - name: html-config
        configMap:
          name: nginx-config
Start the container so that it loads the configuration from the ConfigMap at startup:
[root@master ~]# kubectl create -f test2.yaml
service/service-nginx created
deployment.apps/mydeploy created
Check the Pod:
[root@master ~]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
mydeploy-fd46f76d6-jkq52   1/1     Running   0          22s   10.244.1.46   node1   <none>           <none>
Access the web page served by the container: port 80 works, while port 8888 is not reachable:
[root@master ~]# curl 10.244.1.46
this is test web
[root@master ~]# curl 10.244.1.46:8888
curl: (7) Failed connect to 10.244.1.46:8888; Connection refused
Next, modify the ConfigMap, changing port 80 to 8888:
[root@master ~]# kubectl edit cm nginx-config
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  www.conf: |
    server {
        server_name 192.168.254.13;
        listen 8888;
        root /data/web/html/;
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2019-09-13T15:22:22Z"
  name: nginx-config
  namespace: default
  resourceVersion: "252615"
  selfLink: /api/v1/namespaces/default/configmaps/nginx-config
  uid: f1881f87-5a91-4b8e-ab39-11a2f45733c2
Enter the container and look at the configuration file; it has indeed been updated:
root@mydeploy-fd46f76d6-jkq52:/usr/bin# cat /etc/nginx/conf.d/www.conf
server {
    server_name 192.168.254.13;
    listen 8888;
    root /data/web/html/;
}
Test access again and it still fails. The configuration file has been updated, but nginx has not reloaded it yet. For now we reload it manually; later this could be automated with a small script (see the sketch at the end of this section).
[root@master ~]# curl 10.244.1.46
this is test web
[root@master ~]# curl 10.244.1.46:8888
curl: (7) Failed connect to 10.244.1.46:8888; Connection refused
Reload the configuration manually inside the container:
root@mydeploy-fd46f76d6-jkq52:/usr/bin# nginx -s reload
2019/09/13 16:04:12 [notice] 34#34: signal process started
Test again: port 80 is no longer reachable, while the new port 8888 now works:
[root@master ~]# curl 10.244.1.46
curl: (7) Failed connect to 10.244.1.46:80; Connection refused
[root@master ~]# curl 10.244.1.46:8888
this is test web
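As mentioned above, the manual reload could be automated with a small script. A minimal sketch, assuming the inotify-tools package is available in the container image (it is not part of the stock nginx image), run as a sidecar or background process:

  #!/bin/sh
  # watch the mounted ConfigMap directory and reload nginx whenever it changes
  while inotifywait -e modify,create,delete,move /etc/nginx/conf.d/; do
      nginx -s reload
  done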
The end!!