Containers in k8s are usually managed by a Deployment, so a rolling update will in principle replace every pod; this is guaranteed by the Deployment resource itself. In real-world operations, however, we need canary (gray) releases to validate a service, i.e. releasing to only part of the nodes first. This seems to contradict how Deployments work, but every ops engineer knows how necessary canary releases are. So how do we solve this?
Best practice:
Define two separate Deployments, e.g. fop-gate and fop-gate-canary, whose pods use exactly the same image and configuration files. So what differs between them?
The answer: replicas (the canary fop-gate-canary has 1 replica, while fop-gate has 9).
cat deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
{{if eq .system.SERVICE "fop-gate-canary"}}
  name: fop-gate-canary
{{else if eq .system.SERVICE "fop-gate"}}
  name: fop-gate
{{end}}
  namespace: dora-apps
  labels:
    app: fop-gate
    team: dora
    type: basic
  annotations:
    log.qiniu.com/global.agent: "logexporter"
    log.qiniu.com/global.version: "v2"
spec:
{{if eq .system.SERVICE "fop-gate-canary"}}
  replicas: 1
{{else if eq .system.SERVICE "fop-gate"}}
  replicas: 9
{{end}}
  minReadySeconds: 30
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: fop-gate
        team: dora
        type: basic
    spec:
      terminationGracePeriodSeconds: 90
      containers:
      - name: fop-gate
        image: reg.qiniu.com/dora-apps/fop-gate:20190218210538-6-master
        ...........
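To make the template logic concrete, this is roughly what the file renders to when .system.SERVICE is "fop-gate-canary" (a hand-rendered sketch, assuming standard Go text/template semantics; unchanged fields elided):

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: fop-gate-canary   # first branch of the name template
  namespace: dora-apps
  ...
spec:
  replicas: 1             # the only material difference from fop-gate
  ...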
As we know, a Deployment automatically adds a "pod-template-hash" label to the pods it creates to tell them apart, so each Deployment manages only its own pods and there is no mix-up. The Endpoints list then contains the pods of both fop-gate and fop-gate-canary, and services calling fop-gate spread their requests across all 10 pods, so roughly 1/10 of the traffic exercises the canary.
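The post does not show service.yaml, but a minimal sketch of what such a shared Service could look like (the port numbers here are assumptions, not from the original config) makes the mechanism clear: the selector matches the labels that both Deployments stamp onto their pods, so both sets of pods land behind one Endpoints object:

apiVersion: v1
kind: Service
metadata:
  name: fop-gate
  namespace: dora-apps
spec:
  selector:            # no pod-template-hash here, so this matches
    app: fop-gate      # pods of BOTH fop-gate and fop-gate-canary,
    team: dora         # since the two Deployments use identical
    type: basic        # pod labels
  ports:
  - port: 80           # assumed
    targetPort: 8080   # assumed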
So how is the canary release actually done?
Best practice: create two separate pipelines; run the fop-gate-canary pipeline first for the canary, then the fop-gate pipeline for the full rollout (shown below is the pre-render configuration file; note that the pipelines differ):
"fop-gate": "templates": - "dora/jjh/fop-gate/configmap.yaml" - "dora/jjh/fop-gate/service.yaml" - "dora/jjh/fop-gate/deployment.yaml" - "dora/jjh/fop-gate/ingress.yaml" - "dora/jjh/fop-gate/ingress_debug.yaml" - "dora/jjh/fop-gate/log-applog-configmap.yaml" - "dora/jjh/fop-gate/log-auditlog-configmap.yaml" "pipeline": "569325e6-6d6e-45ca-b21e-24016a9ef326" "fop-gate-canary": "templates": - "dora/jjh/fop-gate/configmap.yaml" - "dora/jjh/fop-gate/service.yaml" - "dora/jjh/fop-gate/deployment.yaml" - "dora/jjh/fop-gate/ingress.yaml" - "dora/jjh/fop-gate/log-applog-configmap.yaml" - "dora/jjh/fop-gate/log-auditlog-configmap.yaml" "pipeline": "15f7dd6a-bd01-41bc-bac5-8266d63fc3a5"
Mind the release order: the canary first, then the full rollout.
Once the canary release is done, you can log into the pod to check the logs and watch the related Grafana dashboards for changes in TPS2XX and TPS5XX before deciding whether to continue releasing fop-gate; this is the whole point of the canary release.
➜  dora git:(daixuan) ✗ kubectl get pod -o wide | grep fop-gate
fop-gate-685d66768b-5v6q4          2/2   Running   0   15d     172.20.122.161   jjh304    <none>
fop-gate-685d66768b-69c6q          2/2   Running   0   4d21h   172.20.129.52    jjh1565   <none>
fop-gate-685d66768b-79fhd          2/2   Running   0   15d     172.20.210.227   jjh219    <none>
fop-gate-685d66768b-f68zq          2/2   Running   0   15d     172.20.177.98    jjh322    <none>
fop-gate-685d66768b-k5l9s          2/2   Running   0   15d     172.20.189.147   jjh1681   <none>
fop-gate-685d66768b-m5n55          2/2   Running   0   15d     172.20.73.78     jjh586    <none>
fop-gate-685d66768b-rr7t6          2/2   Running   0   15d     172.20.218.225   jjh302    <none>
fop-gate-685d66768b-tqvp7          2/2   Running   0   15d     172.20.221.15    jjh592    <none>
fop-gate-685d66768b-xnqn7          2/2   Running   0   15d     172.20.133.80    jjh589    <none>
fop-gate-canary-7cb6dc676f-62n24   2/2   Running   0   15d     172.20.208.28    jjh574    <none>
➜  dora git:(daixuan) ✗ kubectl exec -it fop-gate-canary-7cb6dc676f-62n24 -c fop-gate bash
root@fop-gate-canary-7cb6dc676f-62n24:/# cd app/auditlog/
root@fop-gate-canary-7cb6dc676f-62n24:/app/auditlog# tail -n5 144 | awk -F'\t' '{print $8}'
200
200
200
200
200
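Instead of eyeballing the last five entries, the same audit-log field can be turned into a quick status-code distribution; a small extension of the pipeline above (assuming, as in the transcript, that field $8 is the HTTP status code and 144 is the audit log file):

root@fop-gate-canary-7cb6dc676f-62n24:/app/auditlog# awk -F'\t' '{print $8}' 144 | sort | uniq -c | sort -rn
# a healthy canary should be dominated by 2XX codes; a growing
# 5XX bucket is a signal to hold off on releasing fop-gate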
In addition, Spinnaker releases support pause, resume and undo, all of which proved workable in our tests (the matching kubectl commands are sketched after this list):
pause: pauses a rollout (similar to kubectl rollout pause XXX)
resume: resumes a paused rollout (similar to kubectl rollout resume XXX)
undo: cancels a rollout (similar to kubectl rollout undo XXX)
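For reference, the equivalent kubectl operations against the Deployment defined earlier would look like this (a sketch; the namespace and name are taken from the deployment.yaml above):

# pause an in-flight rollout to inspect the new pods
kubectl rollout pause deployment/fop-gate -n dora-apps

# resume the paused rollout once it looks healthy
kubectl rollout resume deployment/fop-gate -n dora-apps

# roll back to the previous revision (fails on a paused rollout,
# the same restriction described for Spinnaker below)
kubectl rollout undo deployment/fop-gate -n dora-apps

# watch progress and inspect the revision history
kubectl rollout status deployment/fop-gate -n dora-apps
kubectl rollout history deployment/fop-gate -n dora-apps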
These Spinnaker features let you catch problems during a normal release and pause or resume in time. Note that Spinnaker can only undo a release that is actually in progress; a release sitting in the paused state cannot be undone, which matches kubectl's behavior.
We ran through one cycle of release, pause, resume and undo: the whole process produced 4 versions, one new version per action, because Spinnaker treats both a pause and a resume as a new release and bumps the version each time.
Summary: the best way to do canary releases in k8s is to define two separate Deployments managing the same kind of service and to create separate pipelines to manage their releases so they don't interfere with each other; during a normal release, Spinnaker's pause, resume and undo can additionally be used to control the rollout.