In Kubernetes, pod workloads fall into two categories: stateful (data-oriented containers) and stateless (service-oriented containers).
Data in Kubernetes is usually persisted by mounting volumes. In a multi-node cluster, the best approach is a shared storage system; the key requirements are that it can be accessed remotely and supports concurrent reads and writes. Kubernetes supports NFS, GlusterFS, Ceph, and others; see `kubectl explain pod.spec` for the details. A typical stateless service simply keeps its files under a fixed directory on the storage system, and the pod mounts that directory when it is created. Some services cannot use this model, however. In a Redis cluster or Elasticsearch, for example, each instance stores its own data separately, and when a pod restarts its IP and hostname have changed, so an ordinary volume is not suitable for stateful services.
You can create a StatefulSet resource instead of a ReplicaSet to run such pods. StatefulSets are tailored to applications in which every instance is an irreplaceable individual with a stable name and state.
Comparing StatefulSets with ReplicaSets and ReplicationControllers
Pods managed by an RS or RC are like cattle: they are stateless, and at any time any of them can be replaced by a brand-new pod. Stateful pods need a different approach: when a stateful pod dies, its instance must be recreated on another node, and the new instance must have the same name, network identity, and state as the one it replaces. That is exactly how a StatefulSet manages its pods.
A StatefulSet guarantees that its pods keep their identity and state after being rescheduled, and it makes scaling up and down straightforward. Like an RS, a StatefulSet specifies a desired replica count, which determines how many of these "pets" run at the same time, and its pods are created from a pod template. Unlike an RS, the pods a StatefulSet creates are not identical copies: each pod can own an independent set of data volumes (its persistent state), and pod names follow a fixed pattern instead of each new pod getting a random name.
Providing a stable network identity
Pods created by a StatefulSet are named with an ordinal index starting from zero. The index shows up in the pod's name and hostname, and likewise in the fixed storage assigned to each pod.
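Together with a governing headless Service (one named `headless-svc` is created later in this walkthrough), each pod also gets a stable DNS record of the form `<pod>.<service>.<namespace>.svc.cluster.local`. A quick way to check this from inside the cluster, sketched with a throwaway busybox pod (the pod name and image tag here are just illustrative):

```bash
# resolve the stable DNS name of the first StatefulSet pod
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup nginx-0.headless-svc.default.svc.cluster.local
```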
Creating a StatefulSet service backed by NFS storage
First create PersistentVolumes on top of NFS. nfs-utils must be installed on the master and all worker nodes, otherwise the NFS volumes cannot be mounted.
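On CentOS/RHEL nodes (which this cluster appears to use), the package can be installed like this; Debian/Ubuntu nodes would need `nfs-common` via apt instead:

```bash
# install the NFS client tools on every master and worker node
yum install -y nfs-utils
```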
List the directories exported by the NFS server:
```
[root@k8s-3 ~]# showmount -e 192.168.191.50
Export list for 192.168.191.50:
/data/nfs/04 192.168.191.0/24
/data/nfs/03 192.168.191.0/24
/data/nfs/02 192.168.191.0/24
/data/nfs/01 192.168.191.0/24
/data/nfs    192.168.191.0/24
```
Create one PV per NFS export:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02
  labels:
    app: pv02
spec:
  storageClassName: nfs
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 2Mi
  nfs:
    path: /data/nfs/02
    server: zy.nfs.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03
  labels:
    app: pv03
spec:
  storageClassName: nfs
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 2Mi
  nfs:
    path: /data/nfs/03
    server: zy.nfs.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv04
  labels:
    app: pv04
spec:
  storageClassName: nfs
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 2Mi
  nfs:
    path: /data/nfs/04
    server: zy.nfs.com
```
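Apply the manifest (assuming it is saved as pv.yaml; the filename is not given in the original):

```bash
kubectl apply -f pv.yaml
```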
Check the PVs:
```
[root@k8s-3 ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv02   2Mi        RWX            Retain           Available           nfs                     5h27m
pv03   2Mi        RWX            Retain           Available           nfs                     5h27m
pv04   2Mi        RWX            Retain           Available           nfs                     5h27m
```
* ACCESS MODES: how the volume may be mounted
  - ReadWriteOnce: the volume can be mounted read-write by a single node
  - ReadOnlyMany: the volume can be mounted read-only by many nodes
  - ReadWriteMany: the volume can be mounted read-write by many nodes
* RECLAIM POLICY: what happens to the PV when its PVC is deleted (it can also be changed after creation; see the sketch after this list)
  - Retain: the PV is kept after the PVC is deleted and the data is not lost
  - Delete: the PV is deleted automatically after the PVC is deleted
* STORAGECLASS: a user-defined class name; a PVC is bound only to a PV whose storageClassName matches its own
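If a PV's reclaim policy needs to change after creation, it can be patched in place; a sketch using pv02 from above:

```bash
# switch pv02 from Retain to Delete
kubectl patch pv pv02 -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```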
Creating the headless Service
```
[root@k8s-3 statefulset]# cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
spec:
  clusterIP: None
  selector:
    app: sfs
  ports:
  - name: http
    port: 80
    protocol: TCP
```
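Apply it (this step is implied between listing the file and the `kubectl get svc` below):

```bash
kubectl apply -f svc.yaml
```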
Check the headless Service; note that CLUSTER-IP is None:
```
[root@k8s-3 ~]# kubectl get svc
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
headless-svc   ClusterIP   None         <none>        80/TCP    11m
```
Creating the StatefulSet
```
[root@k8s-3 statefulset]# cat statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  serviceName: headless-svc   # must reference the governing headless Service created above
  replicas: 2
  selector:
    matchLabels:
      app: sfs
  template:
    metadata:
      name: sfs
      labels:
        app: sfs
    spec:
      containers:
      - name: sfs
        image: nginx:latest
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteMany"]
      storageClassName: nfs
      resources:
        requests:
          storage: 2Mi
```
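Apply it:

```bash
kubectl apply -f statefulset.yaml
```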
Once it is applied, watch the pods being created and check the PVs and PVCs.
Pods are created one at a time, in order, and named <name>-0, <name>-1, <name>-2, ...
```
[root@k8s-3 ~]# kubectl get pod -w
NAME      READY   STATUS              RESTARTS   AGE
nginx-0   0/1     Pending             0          0s
nginx-0   0/1     Pending             0          0s
nginx-0   0/1     Pending             0          1s
nginx-0   0/1     ContainerCreating   0          1s
nginx-0   1/1     Running             0          22s
nginx-1   0/1     Pending             0          0s
nginx-1   0/1     Pending             0          0s
nginx-1   0/1     Pending             0          0s
nginx-1   0/1     ContainerCreating   0          0s
nginx-1   1/1     Running             0          25s

# final state
[root@k8s-3 ~]# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
nginx-0   1/1     Running   0          2m53s
nginx-1   1/1     Running   0          2m31s
```
PVC creation and binding to the PVs; each PVC is named <volumeClaimTemplate>-<pod>, e.g. www-nginx-0:
```
[root@k8s-3 ~]# kubectl get pvc -w
NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-nginx-0   Bound     pv04     2Mi        RWX            nfs            16s
www-nginx-1   Pending                                      nfs            0s
www-nginx-1   Pending   pv02     0                         nfs            0s
www-nginx-1   Bound     pv02     2Mi        RWX            nfs            0s
[root@k8s-3 ~]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                 STORAGECLASS   REASON   AGE
pv02   2Mi        RWX            Retain           Bound       default/www-nginx-1   nfs                     10m
pv03   2Mi        RWX            Retain           Available                         nfs                     10m
pv04   2Mi        RWX            Retain           Bound       default/www-nginx-0   nfs                     10m
```
Check the relationship between the headless Service and its backend pods:
```
[root@k8s-3 ~]# kubectl describe svc headless-svc
Name:              headless-svc
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"headless-svc","namespace":"default"},"spec":{"clusterIP":"None","...
Selector:          app=sfs
Type:              ClusterIP
IP:                None
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.50:80,10.244.3.86:80
Session Affinity:  None
Events:            <none>
```
Because a headless Service has no cluster IP, it cannot be reached from outside the cluster. To test with a browser on your own machine, you could add resolution entries for the endpoints (10.244.1.50:80 and 10.244.3.86:80 above) to the Windows hosts file. That is not configured here; instead, curl is run directly on a node:
```
[root@k8s3-1 ~]# curl 10.244.1.50:80
this is 02
[root@k8s3-1 ~]# curl 10.244.3.86:80
this is 04
```
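To see the stable identity in action, you can also delete one pod and watch the StatefulSet recreate it under the same name, rebinding the same PVC (a quick check; output not reproduced here):

```bash
kubectl delete pod nginx-0
kubectl get pod -w   # nginx-0 comes back with the same name and the same www-nginx-0 claim
```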
NFS export directory setup (this is how the index pages served above were written on the NFS server):
```
[root@zy nfs]# echo "this is 02" > 02/index.html
[root@zy nfs]# echo "this is 03" > 03/index.html
[root@zy nfs]# echo "this is 04" > 04/index.html
```
That concludes this StatefulSet service and storage test. Scaling a StatefulSet up or down can also be watched with `kubectl get pod -w`: scaling up adds a pod with ordinal <current max>+1, and scaling down deletes the pod with the highest ordinal first. That is not demonstrated here; verify it yourself if you are interested.
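A sketch of the scaling commands, using the nginx StatefulSet from this walkthrough:

```bash
kubectl scale statefulset nginx --replicas=3   # adds nginx-2
kubectl scale statefulset nginx --replicas=1   # removes nginx-2, then nginx-1
kubectl get pod -w                             # watch the ordinals come and go
```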