http://blog.itpub.net/28916011/viewspace-2215046/
Applications can be divided into stateful and stateless applications.
Stateless applications focus on the group: any one member can be replaced by another.
Stateful applications focus on the individual.
The nginx and myapp workloads we managed earlier with the Deployment controller are all stateless applications.
Applications such as MySQL, Redis and ZooKeeper are stateful; some of them additionally have master/slave roles and strict ordering requirements.
The StatefulSet controller can manage stateful applications, but doing so is quite cumbersome: you have to capture your operational procedures in scripts and inject them into the StatefulSet before it is really usable. Even though ready-made StatefulSet scripts for Redis, MySQL and the like can be found on the internet, it is still advisable not to migrate such stateful applications onto k8s lightly.
In k8s, a StatefulSet is mainly used to manage applications with the following characteristics:
a) each Pod has a stable and unique network identifier;
b) stable and persistent storage;
c) ordered, graceful deployment and scaling;
d) ordered, graceful termination and deletion;
e) ordered rolling updates, which should update the slave nodes first and the master last.
A StatefulSet is made up of three components:
a) a headless service (a Service with no cluster IP, i.e. clusterIP is None; a minimal sketch follows this list);
b) the StatefulSet controller itself;
c) volumeClaimTemplates (a PVC template, because each pod needs its own dedicated volume rather than a shared one).
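As a minimal sketch of component a): a headless Service is an ordinary Service whose clusterIP is explicitly set to None; the name and selector here match the full example later in this post:

apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
spec:
  clusterIP: None        # no cluster IP is allocated: this is what makes the Service headless
  selector:
    app: myapp-pod
  ports:
  - port: 80
    name: web

Because no cluster IP exists, a DNS query for the Service name resolves to the individual pod IPs, which is what gives each pod its stable network identity.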
[root@master ~]# kubectl explain sts    # sts is the short name for StatefulSet
Before creating anything, delete the leftover pods and services created in earlier exercises to avoid conflicts. You can also keep them, but some names in the YAML below would clash, so you would have to rename things yourself.
kubectl delete pods pod-vol-pvc
kubectl delete pod pod-cm-3
kubectl delete pods pod-secret-1
kubectl delete deploy myapp-deploy
kubectl delete deploy tomcat-deploy
kubectl delete pvc mypvc
kubectl delete pv --all
kubectl delete svc myapp
kubectl delete svc tomcat
Then recreate the PVs:
[root@master volumes]# cat pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: 172.16.100.64
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: 172.16.100.64
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: 172.16.100.64
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: 172.16.100.64
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: 172.16.100.64
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 9Gi
[root@master stateful]# cat stateful-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  labels:
    app: myapp-svc
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp-svc
  replicas: 2
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:    # PVC template: defines a dedicated volume for each pod and auto-creates the PVC in the pod's namespace
  - metadata:
      name: myappdata
    spec:
      accessModes: ["ReadWriteOnce"]
      #storageClassName: "gluster-dynamic"
      resources:
        requests:
          storage: 5Gi     # requested PVC size
[root@master stateful]# kubectl apply -f stateful-demo.yaml
service/myapp-svc unchanged
statefulset.apps/myapp created
[root@master stateful]# kubectl get svc
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
myapp-svc   ClusterIP   None         <none>        80/TCP    12m
As shown, myapp-svc is a headless service (its CLUSTER-IP is None).
[root@master stateful]# kubectl get sts
NAME    DESIRED   CURRENT   AGE
myapp   2         2         6m
[root@master stateful]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    2Gi        RWO                           3s
myappdata-myapp-1   Bound    pv003    1Gi        RWO,RWX                       1s
[root@master stateful]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                       STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                                       1d
pv002   2Gi        RWO            Retain           Bound       default/myappdata-myapp-0                           1d
pv003   1Gi        RWO,RWX        Retain           Bound       default/myappdata-myapp-1                           1d
pv004   1Gi        RWO,RWX        Retain           Bound       default/mypvc                                       1d
pv005   1Gi        RWO,RWX        Retain           Available
[root@master stateful]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          4m
myapp-1   1/1     Running   0          4m
[root@master stateful]# kubectl delete -f stateful-demo.yaml
service "myapp-svc" deleted
statefulset.apps "myapp" deleted
The delete above removes the pods and the Service, but the PVCs are not deleted, so the state can still be recovered.
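A hedged way to verify this (commands only; not captured from the original session): re-apply the same manifest, and the recreated pods should bind to the surviving PVCs, because PVC names are derived deterministically from the template name and the pod ordinal:

[root@master stateful]# kubectl get pvc                        # the two PVCs are still Bound after the delete
[root@master stateful]# kubectl apply -f stateful-demo.yaml    # myapp-0 and myapp-1 reattach to myappdata-myapp-0/1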
[root@master stateful]# kubectl exec -it myapp-0 -- /bin/sh
/ # nslookup myapp-0.myapp-svc.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name:      myapp-0.myapp-svc.default.svc.cluster.local
Address 1: 10.244.1.110 myapp-0.myapp-svc.default.svc.cluster.local
/ # nslookup myapp-1.myapp-svc.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name:      myapp-1.myapp-svc.default.svc.cluster.local
Address 1: 10.244.2.97 myapp-1.myapp-svc.default.svc.cluster.local
myapp-0.myapp-svc.default.svc.cluster.local
The format is: pod_name.service_name.namespace.svc.cluster.local
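Besides the per-pod names, querying the headless Service name itself should return the addresses of all the pods behind it. A sketch of the expected result, reusing the pod IPs from the session above (actual output may vary by DNS plugin and version):

/ # nslookup myapp-svc.default.svc.cluster.local
Name:      myapp-svc.default.svc.cluster.local
Address 1: 10.244.1.110 myapp-0.myapp-svc.default.svc.cluster.local
Address 2: 10.244.2.97 myapp-1.myapp-svc.default.svc.cluster.local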
Now scale the myapp StatefulSet to 5 pods:
[root@master stateful]# kubectl scale sts myapp --replicas=5
statefulset.apps/myapp scaled
[root@master stateful]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
client    0/1     Error     0          17d
myapp-0   1/1     Running   0          37m
myapp-1   1/1     Running   0          37m
myapp-2   1/1     Running   0          46s
myapp-3   1/1     Running   0          43s
myapp-4   0/1     Pending   0          41s
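Note the AGE column: the scale-up proceeds in ordinal order, myapp-2 first, then myapp-3, then myapp-4, each waiting for its predecessor to be Running and Ready. One way to observe this ordering live (a sketch) is to watch the pods while scaling:

[root@master stateful]# kubectl get pods -l app=myapp-pod -w    # -w streams changes as the pods come up one by one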
[root@master stateful]# kubectl get pvc
NAME                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound     pv002    2Gi        RWO                           52m
myappdata-myapp-1   Bound     pv003    1Gi        RWO,RWX                       52m
myappdata-myapp-2   Bound     pv005    1Gi        RWO,RWX                       2m
myappdata-myapp-3   Bound     pv001    1Gi        RWO,RWX                       2m
myappdata-myapp-4   Pending                                                     2m
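myappdata-myapp-4 stays Pending because no suitable PV is left to bind: with static provisioning, every new replica needs its own unbound PV. A hedged sketch of a sixth PV that would let the claim bind; pv006 and /data/volumes/v6 are made-up names for illustration and assume a matching export on the NFS server:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv006               # hypothetical PV, not part of the original pv-demo.yaml
  labels:
    name: pv006
spec:
  nfs:
    path: /data/volumes/v6  # assumes this directory is exported by the NFS server
    server: 172.16.100.64
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi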
Alternatively, you can scale up or down by patching the StatefulSet:
[root@master stateful]# kubectl patch sts myapp -p '{"spec":{"replicas":2}}'
statefulset.apps/myapp patched
Next, let's look at rolling updates.
[root@master stateful]# kubectl explain sts.spec.updateStrategy.rollingUpdate
The partition controls which pods a rolling update touches: only pods whose ordinal is greater than or equal to the partition value are updated. Suppose there are 5 pods (pod0 through pod4). If partition is set to 5, no pod has an ordinal >= 5, so none of the five is updated; if partition is 4, only pod4 is updated; if partition is 3, pod3 and pod4 are updated and the other pods are left untouched.
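The same partition can also be set declaratively in the manifest instead of with a patch; a sketch of the relevant fragment of the StatefulSet spec:

spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 4        # only pods with ordinal >= 4 are updated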
[root@master stateful]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}'
statefulset.apps/myapp patched
Note that 1.13 behaves differently from the video here: in the video's version (1.11?) the partition shows up as 4, while on 1.13, however it is set, it is displayed as a long string of digits instead of 4. The update itself still follows the strategy described above.
[root@master stateful]# kubectl describe sts myapp
Update Strategy:  RollingUpdate
  Partition:      4
Now upgrade myapp to the v2 image:
[root@master stateful]# kubectl set image sts/myapp myapp=ikubernetes/myapp:v2
statefulset.apps/myapp image updated
[root@master ~]# kubectl get sts -o wide
NAME    DESIRED   CURRENT   AGE   CONTAINERS   IMAGES
myapp   2         2         1h    myapp        ikubernetes/myapp:v2
[root@master ~]# kubectl get pods myapp-4 -o yaml
  containerStatuses:
  - containerID: docker://898714f2e5bf4f642e2a908e7da67eebf6d3074c89bbd0d798d191a2061a3115
    image: ikubernetes/myapp:v2
As you can see, pod myapp-4 is now running the v2 image.
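This makes partition-based updates a natural canary mechanism: only pods at or above the partition run v2. Once v2 has been verified, lowering the partition back to 0 should roll the new image out to all remaining pods, highest ordinal first (a sketch, not from the original session):

[root@master stateful]# kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
statefulset.apps/myapp patched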