From earlier chapters we know that Pods created through a Deployment are stateless. If such a Pod has a Volume mounted and then dies, the Replication Controller starts another Pod to preserve availability; but because the Pod is stateless, its association with the old Volume is lost, and the newly created Pod cannot find the one it replaced. Users have no visibility into the underlying Pod being replaced, yet once it happens, the previously mounted storage volume can no longer be used.
To solve this problem, StatefulSet was introduced to preserve the state information of Pods.
StatefulSet is designed for stateful services (just as Deployments and ReplicaSets are designed for stateless ones). Its application scenarios include:
- Stable, dedicated persistent storage: a rescheduled Pod can still access the same data (implemented with PVCs).
- Stable network identity: a Pod's name and hostname remain the same after rescheduling (implemented with a headless Service).
- Ordered deployment and scaling: Pods are created in order, from 0 to N-1, each waiting for its predecessors to be Running and Ready.
- Ordered termination and deletion: Pods are removed in reverse order, from N-1 to 0.
From these application scenarios we can see that a StatefulSet consists of the following parts:
- A Headless Service, which generates resolvable DNS records identifying each Pod.
- volumeClaimTemplates, which give each Pod its own dedicated, fixed storage via static or dynamic PV provisioning.
- The StatefulSet itself, which manages the Pods.
In a Deployment, Pods have no fixed names: they get random suffixes and are unordered. A StatefulSet, by contrast, requires ordering, and every Pod's name must be fixed. When a node or Pod dies, the rebuilt Pod keeps the same identifier, and that identifier must never change. The Pod name serves as the Pod's unique identifier, so it has to be both stable and unique.
To keep these identifiers stable, a headless Service is needed so that DNS resolution reaches the Pods directly, and each Pod must be given a unique name.
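As a minimal sketch of the idea (the names here are hypothetical; the actual demo manifest appears later in this section), a headless Service is an ordinary Service whose clusterIP is explicitly set to None:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-svc        # hypothetical name
spec:
  clusterIP: None       # "None" is what makes the Service headless
  selector:
    app: demo           # hypothetical Pod label
  ports:
  - port: 80
    name: web
```

Because no cluster IP is allocated, a DNS query for the Service name returns the Pod IPs directly, and each Pod gets its own stable A record under the Service's domain.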
Most stateful replica sets use persistent storage. In a distributed system, for example, the data on each node differs, so each node needs its own dedicated storage. The volume defined in a Deployment's Pod template is shared: multiple Pods use the same volume. In a StatefulSet, however, no two Pods may use the same volume, so creating Pods purely from the Pod template is not enough. This is where volumeClaimTemplates comes in: when the StatefulSet creates a Pod, it automatically generates a PVC for it, which requests and binds a PV, giving the Pod its own dedicated volume. The relationship between Pod names, PVCs, and PVs is as follows:
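In text form, that relationship looks like this for the demo used later in this section:

```
Pod myapp-0  ->  PVC myappdata-myapp-0  ->  its own PV (pv002 in the demo below)
Pod myapp-1  ->  PVC myappdata-myapp-1  ->  its own PV (pv003)
Pod myapp-2  ->  PVC myappdata-myapp-2  ->  its own PV (pv004)
```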
A few things must be prepared before creating a StatefulSet, and it is worth noting that the creation order is critical:
1. Volume
2. Persistent Volume
3. Persistent Volume Claim
4. Service
5. StatefulSet
A Volume can be of many types, such as nfs or glusterfs; here we use ceph RBD to create them.
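The pv-demo.yaml used below is not reproduced in this section; purely as a hedged sketch, a single ceph RBD-backed PV could look roughly like this (the monitor address, pool, image, and Secret name are all placeholders for your own cluster):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-rbd-demo               # hypothetical name
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  rbd:
    monitors:
    - 192.168.100.10:6789         # placeholder ceph monitor address
    pool: kube                    # placeholder pool
    image: pv-rbd-demo            # placeholder RBD image
    user: admin
    secretRef:
      name: ceph-secret           # placeholder Secret holding the ceph keyring
    fsType: ext4
```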
Use kubectl explain to inspect the StatefulSet resource definition:

```
[root@k8s-master ~]# kubectl explain statefulset
KIND:     StatefulSet
VERSION:  apps/v1
DESCRIPTION:
     StatefulSet represents a set of pods with consistent identities. Identities
     are defined as:
      - Network: A single stable DNS and hostname.
      - Storage: As many VolumeClaims as requested.
     The StatefulSet guarantees that a given network identity will always map to
     the same storage identity.
FIELDS:
   apiVersion   <string>
   kind         <string>
   metadata     <Object>
   spec         <Object>
   status       <Object>

[root@k8s-master ~]# kubectl explain statefulset.spec
KIND:     StatefulSet
VERSION:  apps/v1
RESOURCE: spec <Object>
DESCRIPTION:
     Spec defines the desired identities of pods in this set.
     A StatefulSetSpec is the specification of a StatefulSet.
FIELDS:
   podManagementPolicy  <string>     # Pod management policy
   replicas             <integer>    # number of replicas
   revisionHistoryLimit <integer>    # revision history limit
   selector             <Object> -required-   # label selector, required
   serviceName          <string> -required-   # name of the governing Service, required
   template             <Object> -required-   # Pod template, required
   updateStrategy       <Object>     # update strategy
   volumeClaimTemplates <[]Object>   # volume claim templates, as a list of objects
```
As described above, a complete StatefulSet controller is composed of a Headless Service, a StatefulSet, and a volumeClaimTemplate, as defined in the resource manifest below:
```
[root@k8s-master mainfests]# vim stateful-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  labels:
    app: myapp-svc
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp-svc
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 2Gi
```
To walk through this example: a StatefulSet resource depends on a pre-existing Headless Service, so we first define a Headless Service named myapp-svc, which creates DNS records for each associated Pod. We then define a StatefulSet named myapp, which creates 3 Pod replicas from the Pod template and, via volumeClaimTemplates, requests a dedicated 2Gi volume for each Pod from the PVs created earlier.
First, clean up the Pods, Deployments, PVC, and PVs left over from earlier chapters:

```
[root@k8s-master mainfests]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON    AGE
pv001     1Gi        RWO,RWX        Retain           Available                                            23h
pv002     2Gi        RWO            Retain           Available                                            23h
pv003     2Gi        RWO,RWX        Retain           Bound       default/mypvc                            23h
pv004     4Gi        RWO,RWX        Retain           Available                                            23h
pv005     5Gi        RWO,RWX        Retain           Available                                            23h
[root@k8s-master mainfests]# kubectl delete pods pod-vol-pvc
pod "pod-vol-pvc" deleted
[root@k8s-master mainfests]# kubectl delete pods/pod-cm-3 pods/pod-secret-env pods/pod-vol-hostpath
pod "pod-cm-3" deleted
pod "pod-secret-env" deleted
pod "pod-vol-hostpath" deleted
[root@k8s-master mainfests]# kubectl delete deploy/myapp-backend-pod deploy/tomcat-deploy
deployment.extensions "myapp-backend-pod" deleted
deployment.extensions "tomcat-deploy" deleted
[root@k8s-master mainfests]# kubectl delete pvc mypvc
persistentvolumeclaim "mypvc" deleted
[root@k8s-master mainfests]# kubectl delete pv --all
persistentvolume "pv001" deleted
persistentvolume "pv002" deleted
persistentvolume "pv003" deleted
persistentvolume "pv004" deleted
persistentvolume "pv005" deleted
```
Then recreate five PVs from pv-demo.yaml:

```
[root@k8s-master ~]# cd mainfests/volumes
[root@k8s-master volumes]# vim pv-demo.yaml
[root@k8s-master volumes]# kubectl apply -f pv-demo.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@k8s-master volumes]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
pv001     1Gi        RWO,RWX        Retain           Available                                      5s
pv002     2Gi        RWO            Retain           Available                                      5s
pv003     2Gi        RWO,RWX        Retain           Available                                      5s
pv004     2Gi        RWO,RWX        Retain           Available                                      5s
pv005     2Gi        RWO,RWX        Retain           Available                                      5s
```
```
[root@k8s-master mainfests]# kubectl apply -f stateful-demo.yaml
service/myapp-svc created
statefulset.apps/myapp created
[root@k8s-master mainfests]# kubectl get svc    # check the headless Service myapp-svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   50d
myapp-svc    ClusterIP   None         <none>        80/TCP    38s
[root@k8s-master mainfests]# kubectl get sts    # check the StatefulSet
NAME      DESIRED   CURRENT   AGE
myapp     3         3         55s
[root@k8s-master mainfests]# kubectl get pvc    # check PVC bindings
NAME                STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound     pv002     2Gi        RWO                           1m
myappdata-myapp-1   Bound     pv003     2Gi        RWO,RWX                       1m
myappdata-myapp-2   Bound     pv004     2Gi        RWO,RWX                       1m
[root@k8s-master mainfests]# kubectl get pv     # check PV bindings
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                       STORAGECLASS   REASON    AGE
pv001     1Gi        RWO,RWX        Retain           Available                                                        6m
pv002     2Gi        RWO            Retain           Bound       default/myappdata-myapp-0                            6m
pv003     2Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-1                            6m
pv004     2Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-2                            6m
pv005     2Gi        RWO,RWX        Retain           Available                                                        6m
[root@k8s-master mainfests]# kubectl get pods   # check the Pods
NAME                     READY     STATUS    RESTARTS   AGE
myapp-0                  1/1       Running   0          2m
myapp-1                  1/1       Running   0          2m
myapp-2                  1/1       Running   0          2m
pod-vol-demo             2/2       Running   0          1d
redis-5b5d6fbbbd-q8ppz   1/1       Running   1          2d
```
Deletion starts with myapp-2: Pods are shut down in reverse ordinal order.
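This ordering is controlled by the podManagementPolicy field listed in the kubectl explain output above: the default, OrderedReady, enforces ordered creation and reverse-ordered termination, while Parallel (a sketch below; not used in this demo) launches and removes all Pods at once:

```yaml
spec:
  podManagementPolicy: Parallel   # default is OrderedReady; Parallel disables ordering
```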
```
[root@k8s-master mainfests]# kubectl delete -f stateful-demo.yaml
service "myapp-svc" deleted
statefulset.apps "myapp" deleted
[root@k8s-master ~]# kubectl get pods -w
NAME                     READY     STATUS    RESTARTS   AGE
filebeat-ds-hxgdx        1/1       Running   1          33d
filebeat-ds-s466l        1/1       Running   2          33d
myapp-0                  1/1       Running   0          3m
myapp-1                  1/1       Running   0          3m
myapp-2                  1/1       Running   0          3m
pod-vol-demo             2/2       Running   0          1d
redis-5b5d6fbbbd-q8ppz   1/1       Running   1          2d
myapp-0   1/1       Terminating   0         3m
myapp-2   1/1       Terminating   0         3m
myapp-1   1/1       Terminating   0         3m
myapp-1   0/1       Terminating   0         3m
myapp-0   0/1       Terminating   0         3m
myapp-2   0/1       Terminating   0         3m
myapp-1   0/1       Terminating   0         3m
myapp-1   0/1       Terminating   0         3m
myapp-0   0/1       Terminating   0         4m
myapp-0   0/1       Terminating   0         4m
myapp-2   0/1       Terminating   0         3m
myapp-2   0/1       Terminating   0         3m
```

The PVCs still exist at this point; when the Pods are recreated, each one re-binds its original PVC:

```
[root@k8s-master mainfests]# kubectl apply -f stateful-demo.yaml
service/myapp-svc created
statefulset.apps/myapp created
[root@k8s-master mainfests]# kubectl get pvc
NAME                STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound     pv002     2Gi        RWO                           5m
myappdata-myapp-1   Bound     pv003     2Gi        RWO,RWX                       5m
myappdata-myapp-2   Bound     pv004     2Gi        RWO,RWX                       5m
```
The RollingUpdate strategy implements automatic rolling updates for the Pods in a StatefulSet; RollingUpdate is the default value of .spec.updateStrategy.type. With it, the StatefulSet controller deletes and recreates each Pod in the set, proceeding in the same order as Pod termination (from the largest ordinal to the smallest) and updating one Pod at a time: it waits for the Pod being updated to become Running and Ready before updating its predecessor. The rolling update below therefore proceeds in the order 2, 1, 0.
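Expressed as a manifest fragment, the strategy described above corresponds to this piece of the StatefulSet spec (a sketch; partition defaults to 0 and is used later in this section for a canary-style update):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate        # the default strategy type
    rollingUpdate:
      partition: 0             # only Pods with ordinal >= partition are updated
```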
```
[root@k8s-master mainfests]# vim stateful-demo.yaml    # change the image version to v2
.....
        image: ikubernetes/myapp:v2
....
[root@k8s-master mainfests]# kubectl apply -f stateful-demo.yaml
service/myapp-svc unchanged
statefulset.apps/myapp configured
[root@k8s-master ~]# kubectl get pods -w    # watch the rolling update
NAME      READY     STATUS    RESTARTS   AGE
myapp-0   1/1       Running   0          36m
myapp-1   1/1       Running   0          36m
myapp-2   1/1       Running   0          36m
myapp-2   1/1       Terminating   0         36m
myapp-2   0/1       Terminating   0         36m
myapp-2   0/1       Terminating   0         36m
myapp-2   0/1       Terminating   0         36m
myapp-2   0/1       Pending   0         0s
myapp-2   0/1       Pending   0         0s
myapp-2   0/1       ContainerCreating   0         0s
myapp-2   1/1       Running   0         2s
myapp-1   1/1       Terminating   0         36m
myapp-1   0/1       Terminating   0         36m
myapp-1   0/1       Terminating   0         36m
myapp-1   0/1       Terminating   0         36m
myapp-1   0/1       Pending   0         0s
myapp-1   0/1       Pending   0         0s
myapp-1   0/1       ContainerCreating   0         0s
myapp-1   1/1       Running   0         1s
myapp-0   1/1       Terminating   0         37m
myapp-0   0/1       Terminating   0         37m
myapp-0   0/1       Terminating   0         37m
myapp-0   0/1       Terminating   0         37m
```
在建立的每个Pod中,每个pod本身的名称都是能够被解析的,以下:
```
[root@k8s-master ~]# kubectl get pods -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
myapp-0   1/1       Running   0          8m        10.244.1.62   k8s-node01
myapp-1   1/1       Running   0          8m        10.244.2.49   k8s-node02
myapp-2   1/1       Running   0          8m        10.244.1.61   k8s-node01
[root@k8s-master mainfests]# kubectl exec -it myapp-0 -- /bin/sh
/ # nslookup myapp-0.myapp-svc.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
Name:      myapp-0.myapp-svc.default.svc.cluster.local
Address 1: 10.244.1.62 myapp-0.myapp-svc.default.svc.cluster.local
/ # nslookup myapp-1.myapp-svc.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
Name:      myapp-1.myapp-svc.default.svc.cluster.local
Address 1: 10.244.2.49 myapp-1.myapp-svc.default.svc.cluster.local
/ # nslookup myapp-2.myapp-svc.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
Name:      myapp-2.myapp-svc.default.svc.cluster.local
Address 1: 10.244.1.61 myapp-2.myapp-svc.default.svc.cluster.local
```

From this output we can see that, inside a container, a Pod's name resolves to its IP. The resolved domain names follow this format:

pod_name.service_name.ns_name.svc.cluster.local
e.g. myapp-0.myapp-svc.default.svc.cluster.local
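Because these per-Pod records are stable, a client inside the cluster can target one specific replica by name. As a hedged illustration (assuming, as with the demo image used here, that the Pods serve HTTP on port 80):

```sh
# run from any Pod in the cluster: fetches the page served by replica myapp-1 specifically
wget -qO- http://myapp-1.myapp-svc.default.svc.cluster.local
```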
A StatefulSet can also be scaled out and back in; Pods are added in ascending ordinal order and removed in descending order:

```
[root@k8s-master mainfests]# kubectl scale sts myapp --replicas=4    # scale out to 4 replicas
statefulset.apps/myapp scaled
[root@k8s-master ~]# kubectl get pods -w    # watch the scale-out
NAME      READY     STATUS    RESTARTS   AGE
myapp-0   1/1       Running   0          23m
myapp-1   1/1       Running   0          23m
myapp-2   1/1       Running   0          23m
myapp-3   0/1       Pending   0         0s
myapp-3   0/1       Pending   0         0s
myapp-3   0/1       ContainerCreating   0         0s
myapp-3   1/1       Running   0         1s
[root@k8s-master mainfests]# kubectl get pv    # check PV bindings
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                       STORAGECLASS   REASON    AGE
pv001     1Gi        RWO,RWX        Retain           Available                                                        1h
pv002     2Gi        RWO            Retain           Bound       default/myappdata-myapp-0                            1h
pv003     2Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-1                            1h
pv004     2Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-2                            1h
pv005     2Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-3                            1h
[root@k8s-master mainfests]# kubectl patch sts myapp -p '{"spec":{"replicas":2}}'    # scale in with a patch
statefulset.apps/myapp patched
[root@k8s-master ~]# kubectl get pods -w    # watch the scale-in
NAME      READY     STATUS    RESTARTS   AGE
myapp-0   1/1       Running   0          25m
myapp-1   1/1       Running   0          25m
myapp-2   1/1       Running   0          25m
myapp-3   1/1       Running   0          1m
myapp-3   1/1       Terminating   0         2m
myapp-3   0/1       Terminating   0         2m
myapp-3   0/1       Terminating   0         2m
myapp-3   0/1       Terminating   0         2m
myapp-2   1/1       Terminating   0         26m
myapp-2   0/1       Terminating   0         26m
myapp-2   0/1       Terminating   0         27m
myapp-2   0/1       Terminating   0         27m
```
Now modify the update strategy to update by partition, with the partition value set to 2: only Pods of myapp with an ordinal greater than or equal to 2 will be updated. This works like a canary deployment.
[root@k8s-master mainfests]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}' statefulset.apps/myapp patched [root@k8s-master ~]# kubectl get sts myapp NAME DESIRED CURRENT AGE myapp 4 4 1h [root@k8s-master ~]# kubectl describe sts myapp Name: myapp Namespace: default CreationTimestamp: Wed, 10 Oct 2018 21:58:24 -0400 Selector: app=myapp-pod Labels: <none> Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"name":"myapp","namespace":"default"},"spec":{"replicas":3,"selector":{"match... Replicas: 4 desired | 4 total Update Strategy: RollingUpdate Partition: 2 ......
Upgrade the image version to v3. After the upgrade, comparing myapp-2 and myapp-1 shows they run different image versions, which achieves the canary-release effect.
```
[root@k8s-master mainfests]# kubectl set image sts/myapp myapp=ikubernetes/myapp:v3
statefulset.apps/myapp image updated
[root@k8s-master ~]# kubectl get sts -o wide
NAME      DESIRED   CURRENT   AGE       CONTAINERS   IMAGES
myapp     4         4         1h        myapp        ikubernetes/myapp:v3
[root@k8s-master ~]# kubectl get pods myapp-2 -o yaml |grep image
  - image: ikubernetes/myapp:v3
    imagePullPolicy: IfNotPresent
    image: ikubernetes/myapp:v3
    imageID: docker-pullable://ikubernetes/myapp@sha256:b8d74db2515d3c1391c78c5768272b9344428035ef6d72158fd9f6c4239b2c69
[root@k8s-master ~]# kubectl get pods myapp-1 -o yaml |grep image
  - image: ikubernetes/myapp:v2
    imagePullPolicy: IfNotPresent
    image: ikubernetes/myapp:v2
    imageID: docker-pullable://ikubernetes/myapp@sha256:85a2b81a62f09a414ea33b74fb8aa686ed9b168294b26b4c819df0be0712d358
```
To update the remaining Pods as well, simply set the partition value of the update strategy back to 0, as follows:
```
[root@k8s-master mainfests]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
statefulset.apps/myapp patched
[root@k8s-master ~]# kubectl get pods -w
NAME      READY     STATUS    RESTARTS   AGE
myapp-0   1/1       Running   0          58m
myapp-1   1/1       Running   0          58m
myapp-2   1/1       Running   0          13m
myapp-3   1/1       Running   0          13m
myapp-1   1/1       Terminating   0         58m
myapp-1   0/1       Terminating   0         58m
myapp-1   0/1       Terminating   0         58m
myapp-1   0/1       Terminating   0         58m
myapp-1   0/1       Pending   0         0s
myapp-1   0/1       Pending   0         0s
myapp-1   0/1       ContainerCreating   0         0s
myapp-1   1/1       Running   0         2s
myapp-0   1/1       Terminating   0         58m
myapp-0   0/1       Terminating   0         58m
myapp-0   0/1       Terminating   0         58m
myapp-0   0/1       Terminating   0         58m
myapp-0   0/1       Pending   0         0s
myapp-0   0/1       Pending   0         0s
myapp-0   0/1       ContainerCreating   0         0s
myapp-0   1/1       Running   0         2s
```