After an autonomous (unmanaged) Pod object is scheduled to a target worker node, the kubelet on that node monitors the liveness of its containers; if a container's main process crashes, the kubelet can restart the container automatically. The kubelet, however, has no way to detect container faults that do not crash the main process — for those it relies on the liveness probes defined in the Pod resource. And if the Pod is deleted, or the worker node itself fails (every worker node runs a kubelet, so when the kubelet is unavailable the Pod's health can no longer be guaranteed), a controller is needed to handle restarting and reconfiguring the affected containers.
Pod controllers are provided by the kube-controller-manager component on the master. Common controllers of this kind include:

ReplicationController: the original replica controller from early versions.
ReplicaSet: creates the specified number of Pod replicas on the user's behalf, ensures the replica count matches the desired state, and supports automatic scale-up and scale-down.
Deployment: works on top of ReplicaSet to manage stateless applications; currently the best controller for that purpose. It supports rolling updates and rollbacks, and provides declarative configuration.
DaemonSet: ensures that every node in the cluster runs exactly one copy of a specific Pod; commonly used for system-level background tasks, such as the ELK stack.
StatefulSet: manages stateful applications.
Job: runs a task and exits as soon as it completes; it is not restarted or rebuilt.
CronJob: controls periodic tasks that do not need to run continuously in the background.
One of the core functions of Kubernetes is to ensure that the current state (status) of each resource object matches the state the user desires (spec), continuously "reconciling" the current state toward the desired state to accomplish container application management. This is the job of kube-controller-manager. Once a concrete controller object has been created, each controller continuously monitors the current state of its resource objects through the interfaces provided by the API Server, and whenever the system state changes — due to failures, updates, or other causes — it tries to move the resources' current state toward, and converge on, the desired state.
List-Watch is one of the core mechanisms of Kubernetes. When the state of a resource object changes, the API Server writes the change to etcd and proactively notifies the relevant client programs through a level-triggered mechanism, ensuring that no event is missed. Controllers watch their target resource objects in real time through the API Server's watch interface and perform reconciliation, but they never interact directly with other controllers.
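The watch-and-reconcile pattern above can be sketched in a few lines of Python. This is purely illustrative — the function names and the one-step-at-a-time rule are invented for the example; a real controller watches the API Server and manipulates real objects:

```python
# Minimal sketch of a level-triggered reconcile loop (illustrative only).

def reconcile(desired: int, current: int) -> int:
    """Move the current state one step toward the desired state."""
    if current < desired:
        return current + 1   # create a missing replica
    if current > desired:
        return current - 1   # terminate a surplus replica
    return current           # already converged

def run_controller(desired: int, current: int) -> int:
    # Level-triggered: keep reconciling until status matches spec,
    # rather than reacting to a single edge event and stopping.
    while current != desired:
        current = reconcile(desired, current)
    return current

print(run_controller(desired=3, current=0))  # converges to 3
```

The key property this models is that the loop compares states, not events: even if an individual notification were lost, the next comparison of status against spec would still drive convergence.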
A Pod controller resource continuously monitors the Pods running in the cluster to ensure that the resources under its management strictly match the user's desired state — for example, that the replica count exactly matches the desired value. A Pod controller resource usually consists of at least three basic parts:

Label selector: matches and associates Pod resource objects, and counts the Pods under the controller's management accordingly.
Desired replica count: the number of Pod objects expected to be running in the cluster at all times.
Pod template: the template used to create new Pod resource objects.
ReplicaSet replaces the ReplicationController of earlier versions; its functionality is essentially the same as ReplicationController's.
Keeping the number of Pod objects exactly at the desired value: a ReplicaSet must ensure that the number of Pod replicas it runs exactly matches the desired value defined in its configuration; otherwise it automatically creates the missing replicas or terminates the surplus ones.
Keeping Pods running healthily: when it detects that a Pod under its management has become unavailable because its worker node failed, it asks the scheduler to create replacement replicas on other worker nodes.
Elastic scaling: the number of Pod objects can be scaled up or down dynamically through the ReplicaSet controller; when necessary, the HPA controller can also autoscale the Pod population.
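The "create what is missing, terminate what is surplus" decision can be sketched as follows. This is a simplified illustration with invented names, not the actual controller code — real controllers rank candidate victims by readiness, age, and other criteria before deleting:

```python
# Illustrative sketch of a ReplicaSet-style decision: compare the Pods
# the controller owns against the desired replica count.

def plan_actions(owned_pods: list, desired: int) -> dict:
    """Return how many Pods to create, or which surplus Pods to delete."""
    diff = desired - len(owned_pods)
    if diff > 0:
        return {"create": diff, "delete": []}
    # Pick surplus Pods from the end of the list; a real controller
    # ranks Pods (not ready first, youngest first, ...) before choosing.
    return {"create": 0, "delete": owned_pods[desired:]}

print(plan_actions(["myapp-bln4v", "myapp-bxpzt"], desired=4))
# {'create': 2, 'delete': []}
print(plan_actions(["a", "b", "c"], desired=2))
# {'create': 0, 'delete': ['c']}
```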
The spec field usually nests the following attribute fields:

replicas <integer>: the desired number of Pod replicas.
selector <Object>: the label selector this controller uses to match its Pod replicas; both the matchLabels and matchExpressions matching mechanisms are supported.
template <Object>: the Pod template used to define the Pods.
minReadySeconds <integer>: how long after startup a Pod must run before it is considered available; defaults to 0 seconds.
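As a rough illustration of how the two selector mechanisms evaluate — the operator names In, NotIn, Exists, and DoesNotExist come from the Kubernetes API, while the helper functions here are invented for the sketch:

```python
# Simplified semantics of matchLabels and matchExpressions (illustrative).

def match_labels(selector: dict, labels: dict) -> bool:
    # matchLabels: every key/value pair must be present on the Pod.
    return all(labels.get(k) == v for k, v in selector.items())

def match_expressions(exprs: list, labels: dict) -> bool:
    # matchExpressions: every expression must hold (they are ANDed).
    for e in exprs:
        key, op, values = e["key"], e["operator"], e.get("values", [])
        if op == "In":
            ok = labels.get(key) in values
        elif op == "NotIn":
            ok = labels.get(key) not in values
        elif op == "Exists":
            ok = key in labels
        elif op == "DoesNotExist":
            ok = key not in labels
        else:
            ok = False
        if not ok:
            return False
    return True

pod = {"app": "myapp", "release": "canary"}
print(match_labels({"app": "myapp", "release": "canary"}, pod))   # True
print(match_expressions([{"key": "release", "operator": "In",
                          "values": ["canary", "beta"]}], pod))   # True
```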
#(1) Inspect the ReplicaSet manifest schema from the command line
[root@k8s-master ~]# kubectl explain rs
[root@k8s-master ~]# kubectl explain rs.spec
[root@k8s-master ~]# kubectl explain rs.spec.template

#(2) Create a ReplicaSet example
[root@k8s-master ~]# vim manfests/rs-demo.yaml
apiVersion: apps/v1               # API version
kind: ReplicaSet                  # resource type: ReplicaSet
metadata:                         # metadata
  name: myapp
  namespace: default
spec:                             # ReplicaSet spec
  replicas: 2                     # desired replica count: 2
  selector:                       # label selector matching the Pods
    matchLabels:
      app: myapp
      release: canary
  template:                       # Pod template
    metadata:                     # Pod metadata
      name: myapp-pod             # Pod name
      labels:                     # Pod labels; must include the labels the selector above matches (extra labels are allowed)
        app: myapp
        release: canary
    spec:                         # Pod spec
      containers:                 # container definitions
      - name: myapp-containers    # container name
        image: ikubernetes/myapp:v1     # container image
        imagePullPolicy: IfNotPresent   # image pull policy
        ports:                    # exposed ports
        - name: http              # port name
          containerPort: 80

#(3) Create the Pods defined by the ReplicaSet
[root@k8s-master ~]# kubectl apply -f manfests/rs-demo.yaml
replicaset.apps/myapp created
[root@k8s-master ~]# kubectl get rs     # view the ReplicaSet just created
NAME    DESIRED   CURRENT   READY   AGE
myapp   2         2         2       3m23s
[root@k8s-master ~]# kubectl get pods   # Pod names are the ReplicaSet name plus a randomly generated suffix
NAME          READY   STATUS    RESTARTS   AGE
myapp-bln4v   1/1     Running   0          6s
myapp-bxpzt   1/1     Running   0          6s

#(4) Change the Pod replica count
[root@k8s-master ~]# kubectl edit rs myapp
replicas: 4
[root@k8s-master ~]# kubectl get rs -o wide
NAME    DESIRED   CURRENT   READY   AGE     CONTAINERS         IMAGES                 SELECTOR
myapp   4         4         4       2m50s   myapp-containers   ikubernetes/myapp:v2   app=myapp,release=canary
[root@k8s-master ~]# kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE     LABELS
myapp-8hkcr   1/1     Running   0          2m2s    app=myapp,release=canary
myapp-bln4v   1/1     Running   0          3m40s   app=myapp,release=canary
myapp-bxpzt   1/1     Running   0          3m40s   app=myapp,release=canary
myapp-ql2wk   1/1     Running   0          2m2s    app=myapp,release=canary
[root@k8s-master ~]# vim manfests/rs-demo.yaml
    spec:                         # Pod spec
      containers:                 # container definitions
      - name: myapp-containers    # container name
        image: ikubernetes/myapp:v2     # container image
        imagePullPolicy: IfNotPresent   # image pull policy
        ports:                    # exposed ports
        - name: http              # port name
          containerPort: 80
[root@k8s-master ~]# kubectl apply -f manfests/rs-demo.yaml   # apply to reload the new template
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image
Name          Image
myapp-bln4v   ikubernetes/myapp:v1
myapp-bxpzt   ikubernetes/myapp:v1
# Note: although the manifest was re-applied, the existing Pods still run the v1 image;
# only newly created Pods will use v2. To demonstrate, delete the existing Pods manually.
[root@k8s-master ~]# kubectl delete pods -l app=myapp   # delete the Pods labeled app=myapp
pod "myapp-bln4v" deleted
pod "myapp-bxpzt" deleted
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image
# The Pods recreated by the ReplicaSet now use the v2 image
Name          Image
myapp-mdn8j   ikubernetes/myapp:v2
myapp-v5bgr   ikubernetes/myapp:v2
Scaling up and down
[root@k8s-master ~]# kubectl get rs     # view the ReplicaSet
NAME    DESIRED   CURRENT   READY   AGE
myapp   2         2         2       154m
[root@k8s-master ~]# kubectl get pods   # view the Pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-mdn8j   1/1     Running   0          5m26s
myapp-v5bgr   1/1     Running   0          5m26s

# Scale up
[root@k8s-master ~]# kubectl scale replicasets myapp --replicas=5   # raise the myapp ReplicaSet's replica count to 5
replicaset.extensions/myapp scaled
[root@k8s-master ~]# kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
myapp   5         5         5       156m
[root@k8s-master ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-lrrp8   1/1     Running   0          8s
myapp-mbqf8   1/1     Running   0          8s
myapp-mdn8j   1/1     Running   0          6m48s
myapp-ttmf5   1/1     Running   0          8s
myapp-v5bgr   1/1     Running   0          6m48s

# Scale down
[root@k8s-master ~]# kubectl scale replicasets myapp --replicas=3
replicaset.extensions/myapp scaled
[root@k8s-master ~]# kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
myapp   3         3         3       159m
[root@k8s-master ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-mdn8j   1/1     Running   0          10m
myapp-ttmf5   1/1     Running   0          3m48s
myapp-v5bgr   1/1     Running   0          10m
[root@k8s-master ~]# kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
myapp   3         3         3       162m
[root@k8s-master ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-mdn8j   1/1     Running   0          12m
myapp-ttmf5   1/1     Running   0          6m18s
myapp-v5bgr   1/1     Running   0          12m
[root@k8s-master ~]# kubectl delete replicasets myapp --cascade=false
replicaset.extensions "myapp" deleted
[root@k8s-master ~]# kubectl get rs
No resources found.
[root@k8s-master ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-mdn8j   1/1     Running   0          13m
myapp-ttmf5   1/1     Running   0          7m
myapp-v5bgr   1/1     Running   0          13m
# As the example shows, deleting a ReplicaSet with --cascade=false does not delete the Pods it was managing.
Deployment (abbreviated deploy) is another controller implementation in Kubernetes. It builds on top of the ReplicaSet controller and provides declarative updates for Pods and ReplicaSets.
The Deployment controller's main responsibility is to keep Pod resources running healthily. It implements most of its functionality by delegating to ReplicaSet, while adding several features of its own:
Event and status inspection: the detailed progress and status of a Deployment's rollout can be inspected when needed.
Rollback: if problems are found after an upgrade, the application can be rolled back to the previous version, or to any version the user picks from the revision history.
Revision history: every operation on a Deployment object is recorded, for use by later rollbacks.
Pause and resume: every rollout can be paused and resumed at any time.
Multiple update strategies: one is Recreate, which completely stops and deletes the old Pods before replacing them with the new version; the other is RollingUpdate, which gradually replaces old Pods with the new version.
The core resources of a Deployment are similar to those of a ReplicaSet.
#(1) Inspect the Deployment manifest schema from the command line
[root@k8s-master ~]# kubectl explain deployment
[root@k8s-master ~]# kubectl explain deployment.spec
[root@k8s-master ~]# kubectl explain deployment.spec.template

#(2) Create a Deployment example
[root@k8s-master ~]# vim manfests/deploy-demo.yaml
apiVersion: apps/v1          # API version
kind: Deployment             # resource type: Deployment
metadata:                    # metadata
  name: deploy-demo          # Deployment controller name
  namespace: default         # namespace
spec:                        # Deployment spec
  replicas: 2                # desired replica count: 2
  selector:                  # label selector matching the Pods
    matchLabels:
      app: deploy-app
      release: canary
  template:                  # Pod template
    metadata:                # Pod metadata
      labels:                # Pod labels; must include the labels the selector above matches (extra labels are allowed)
        app: deploy-app
        release: canary
    spec:                    # Pod spec
      containers:            # container definitions
      - name: myapp          # container name
        image: ikubernetes/myapp:v1   # container image
        ports:               # exposed ports
        - name: http         # port name
          containerPort: 80

#(3) Create the Deployment object
[root@k8s-master ~]# kubectl apply -f manfests/deploy-demo.yaml
deployment.apps/deploy-demo created

#(4) View the resource objects
[root@k8s-master ~]# kubectl get deployment    # the Deployment object
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   2/2     2            2           10s
[root@k8s-master ~]# kubectl get replicaset    # the ReplicaSet object
NAME                     DESIRED   CURRENT   READY   AGE
deploy-demo-78c84d4449   2         2         2       20s
[root@k8s-master ~]# kubectl get pods          # the Pod objects
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-78c84d4449-22btc   1/1     Running   0          23s
deploy-demo-78c84d4449-5fn2k   1/1     Running   0          23s

# Note: as the output shows, the Deployment automatically creates its ReplicaSet and names it
# "[DEPLOYMENT-NAME]-[POD-TEMPLATE-HASH-VALUE]", where the hash is generated by the Deployment.
# Pod names take the ReplicaSet name as a prefix, followed by 5 random characters.
Updating an application through a ReplicaSet must be done manually in several steps, in a specific order — a process that is tedious and error-prone. With a Deployment, the user only specifies what should change in the Pod template (for example, the image version), and the remaining steps are carried out automatically. The same applies to changing the Pod replica count.

The Deployment controller supports two update strategies: rolling update (RollingUpdate) and recreate (Recreate). Rolling update is the default.

Rolling update (RollingUpdate): deletes part of the old-version Pods while creating part of the new-version Pods, upgrading the application in place. Its advantage is that the service provided by the application is not interrupted during the upgrade; however, while the update is in progress, different clients may receive responses served by different versions of the application.

Recreate: first deletes all existing Pod objects, then has the controller create new-version resource objects from the new template.

A Deployment's rolling update does not delete and create Pods within the same ReplicaSet controller object. Instead, the Pod count of a new ReplicaSet keeps growing while that of the old one shrinks, until the old controller owns no Pods and the new controller's replica count fully matches the desired value, as shown in the figure.
maxUnavailable: the maximum number of Pods by which the normally available replicas (old and new versions combined) may fall short of the desired count during an upgrade. Its value can be 0, a positive integer, or a percentage of the desired count; the default is 1, which means that if the desired count is 3, at least two Pod objects must be serving normally at any time during the upgrade.
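Together with its counterpart maxSurge (which appears below when the update strategy is patched), maxUnavailable bounds the total Pod count during a rolling update. The arithmetic can be sketched as follows — in Kubernetes, a percentage maxSurge is rounded up while a percentage maxUnavailable is rounded down; the helper names here are invented for the sketch:

```python
import math

# Sketch of the replica-count bounds during a rolling update.

def resolve(value, desired: int, round_up: bool) -> int:
    """Turn an absolute number or a "25%"-style string into a Pod count."""
    if isinstance(value, str) and value.endswith("%"):
        frac = int(value[:-1]) / 100 * desired
        return math.ceil(frac) if round_up else math.floor(frac)
    return int(value)

def rolling_update_bounds(desired: int, max_surge, max_unavailable):
    """Return (minimum available Pods, maximum total Pods) during the update."""
    upper = desired + resolve(max_surge, desired, round_up=True)
    lower = desired - resolve(max_unavailable, desired, round_up=False)
    return lower, upper

print(rolling_update_bounds(3, max_surge=0, max_unavailable=1))          # (2, 3)
print(rolling_update_bounds(5, max_surge=1, max_unavailable=0))          # (5, 6)
print(rolling_update_bounds(4, max_surge="25%", max_unavailable="25%"))  # (3, 5)
```

The first call matches the text's example (desired 3, default maxUnavailable 1: at least 2 Pods stay available); the second matches the maxSurge=1/maxUnavailable=0 patch used in the canary demonstration below.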
Note: to preserve the upgrade history, use the "--record" option on the command that creates the Deployment object.
# In terminal 1, trigger the upgrade
[root@k8s-master ~]# kubectl set image deployment/deploy-demo myapp=ikubernetes/myapp:v2
deployment.extensions/deploy-demo image updated

# Meanwhile, in terminal 2, watch the Pods being replaced
[root@k8s-master ~]# kubectl get pods -l app=deploy-app -w
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-78c84d4449-2rvxr   1/1     Running   0          33s
deploy-demo-78c84d4449-nd7rr   1/1     Running   0          33s
deploy-demo-7c66dbf45b-7k4xz   0/1     Pending   0          0s
deploy-demo-7c66dbf45b-7k4xz   0/1     Pending   0          0s
deploy-demo-7c66dbf45b-7k4xz   0/1     ContainerCreating   0   0s
deploy-demo-7c66dbf45b-7k4xz   1/1     Running   0          2s
deploy-demo-78c84d4449-2rvxr   1/1     Terminating   0      49s
deploy-demo-7c66dbf45b-r88qr   0/1     Pending   0          0s
deploy-demo-7c66dbf45b-r88qr   0/1     Pending   0          0s
deploy-demo-7c66dbf45b-r88qr   0/1     ContainerCreating   0   0s
deploy-demo-7c66dbf45b-r88qr   1/1     Running   0          1s
deploy-demo-78c84d4449-2rvxr   0/1     Terminating   0      50s
deploy-demo-78c84d4449-nd7rr   1/1     Terminating   0      51s
deploy-demo-78c84d4449-nd7rr   0/1     Terminating   0      51s
deploy-demo-78c84d4449-nd7rr   0/1     Terminating   0      57s
deploy-demo-78c84d4449-nd7rr   0/1     Terminating   0      57s
deploy-demo-78c84d4449-2rvxr   0/1     Terminating   0      60s
deploy-demo-78c84d4449-2rvxr   0/1     Terminating   0      60s

# And in terminal 3, watch the Deployment object itself change
[root@k8s-master ~]# kubectl get deployment deploy-demo -w
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   2/2     2            2           37s
deploy-demo   2/2     2            2           47s
deploy-demo   2/2     2            2           47s
deploy-demo   2/2     0            2           47s
deploy-demo   2/2     1            2           47s
deploy-demo   3/2     1            3           49s
deploy-demo   2/2     1            2           49s
deploy-demo   2/2     2            2           49s
deploy-demo   3/2     2            3           50s
deploy-demo   2/2     2            2           51s

# After the upgrade, check the ReplicaSets again: the old one is retained as a backup
# (scaled to 0) and the new one is now active
[root@k8s-master ~]# kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
deploy-demo-78c84d4449   0         0         0       4m41s
deploy-demo-7c66dbf45b   2         2         2       3m54s
# 1. Scale up with the kubectl scale command
[root@k8s-master ~]# kubectl scale deployment deploy-demo --replicas=3
deployment.extensions/deploy-demo scaled
[root@k8s-master ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-7c66dbf45b-7k4xz   1/1     Running   0          10m
deploy-demo-7c66dbf45b-gq2tw   1/1     Running   0          3s
deploy-demo-7c66dbf45b-r88qr   1/1     Running   0          10m

# 2. Scale up by editing the manifest directly
[root@k8s-master ~]# vim manfests/deploy-demo.yaml
spec:               # Deployment spec
  replicas: 4       # desired replica count: 4
[root@k8s-master ~]# kubectl apply -f manfests/deploy-demo.yaml
deployment.apps/deploy-demo configured
[root@k8s-master ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-78c84d4449-6rmnm   1/1     Running   0          61s
deploy-demo-78c84d4449-9xfp9   1/1     Running   0          58s
deploy-demo-78c84d4449-c2m6h   1/1     Running   0          61s
deploy-demo-78c84d4449-sfxps   1/1     Running   0          57s

# 3. Scale up by patching with kubectl patch
[root@k8s-master ~]# kubectl patch deployment deploy-demo -p '{"spec":{"replicas":5}}'
deployment.extensions/deploy-demo patched
[root@k8s-master ~]# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
deploy-demo-78c84d4449-6rmnm   1/1     Running   0          3m44s
deploy-demo-78c84d4449-9xfp9   1/1     Running   0          3m41s
deploy-demo-78c84d4449-c2m6h   1/1     Running   0          3m44s
deploy-demo-78c84d4449-sfxps   1/1     Running   0          3m40s
deploy-demo-78c84d4449-t7jxb   1/1     Running   0          3s
1) Allow the total Pod count to exceed the desired value by one
[root@k8s-master ~]# kubectl patch deployment deploy-demo -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
deployment.extensions/deploy-demo patched
2) Start the update, then pause it immediately after changing the container image version.
[root@k8s-master ~]# kubectl set image deployment/deploy-demo myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment deploy-demo
deployment.extensions/deploy-demo image updated
deployment.extensions/deploy-demo paused

# Check the state
[root@k8s-master ~]# kubectl get deployment    # the Deployment object
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   6/5     1            6           37m
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image   # Pod names and images
Name                           Image
deploy-demo-6bf8dbdc9f-fjnzn   ikubernetes/myapp:v3
deploy-demo-78c84d4449-6rmnm   ikubernetes/myapp:v1
deploy-demo-78c84d4449-9xfp9   ikubernetes/myapp:v1
deploy-demo-78c84d4449-c2m6h   ikubernetes/myapp:v1
deploy-demo-78c84d4449-sfxps   ikubernetes/myapp:v1
deploy-demo-78c84d4449-t7jxb   ikubernetes/myapp:v1
[root@k8s-master ~]# kubectl rollout status deployment/deploy-demo   # check rollout progress
Waiting for deployment "deploy-demo" rollout to finish: 1 out of 5 new replicas have been updated...

# As the output shows, there are now 6 Pods: the desired count defined earlier is 5,
# and one extra Pod — running the v3 image — has been added.

# Resume and finish the update
[root@k8s-master ~]# kubectl rollout resume deployment deploy-demo
deployment.extensions/deploy-demo resumed

# Check again
[root@k8s-master ~]# kubectl get deployment    # the Deployment object
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
deploy-demo   5/5     5            5           43m
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image   # Pod names and images
Name                           Image
deploy-demo-6bf8dbdc9f-2z6gt   ikubernetes/myapp:v3
deploy-demo-6bf8dbdc9f-f79q2   ikubernetes/myapp:v3
deploy-demo-6bf8dbdc9f-fjnzn   ikubernetes/myapp:v3
deploy-demo-6bf8dbdc9f-pjf4z   ikubernetes/myapp:v3
deploy-demo-6bf8dbdc9f-x7fnk   ikubernetes/myapp:v3
[root@k8s-master ~]# kubectl rollout status deployment/deploy-demo   # check rollout progress
Waiting for deployment "deploy-demo" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "deploy-demo" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 1 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 2 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 4 out of 5 new replicas have been updated...
Waiting for deployment "deploy-demo" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "deploy-demo" rollout to finish: 1 old replicas are pending termination...
deployment "deploy-demo" successfully rolled out
1) Roll back to the previous version
[root@k8s-master ~]# kubectl rollout undo deployment/deploy-demo
deployment.extensions/deploy-demo rolled back
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image
Name                           Image
deploy-demo-78c84d4449-2xspz   ikubernetes/myapp:v1
deploy-demo-78c84d4449-f8p46   ikubernetes/myapp:v1
deploy-demo-78c84d4449-mnmvc   ikubernetes/myapp:v1
deploy-demo-78c84d4449-tsl7r   ikubernetes/myapp:v1
deploy-demo-78c84d4449-xdt8j   ikubernetes/myapp:v1
2) Roll back to a specific revision
# View the rollout history with this command
[root@k8s-master ~]# kubectl rollout history deployment/deploy-demo
deployment.extensions/deploy-demo
REVISION  CHANGE-CAUSE
2         <none>
4         <none>
5         <none>

# Roll back to revision 2
[root@k8s-master ~]# kubectl rollout undo deployment/deploy-demo --to-revision=2
deployment.extensions/deploy-demo rolled back
[root@k8s-master ~]# kubectl get pods -o custom-columns=Name:metadata.name,Image:spec.containers[0].image
Name                           Image
deploy-demo-7c66dbf45b-42nj4   ikubernetes/myapp:v2
deploy-demo-7c66dbf45b-8zhf5   ikubernetes/myapp:v2
deploy-demo-7c66dbf45b-bxw7x   ikubernetes/myapp:v2
deploy-demo-7c66dbf45b-gmq8x   ikubernetes/myapp:v2
deploy-demo-7c66dbf45b-mrfdb   ikubernetes/myapp:v2
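The revision bookkeeping behind rollout history and rollout undo can be modeled roughly like this. It is a toy sketch with invented names — a real Deployment keeps its old ReplicaSets (identified by Pod-template hash) as the revision history, and rolling back re-promotes one of them as a new revision:

```python
# Toy model of Deployment revision history and `rollout undo` (illustrative).

class History:
    def __init__(self):
        self.revisions = {}   # revision number -> Pod template (a string here)
        self.counter = 0

    def record(self, template: str) -> int:
        """Record a new revision, as a rollout (or a rollback) would."""
        self.counter += 1
        self.revisions[self.counter] = template
        return self.counter

    def undo(self, to_revision=None) -> str:
        # With no revision given, roll back to the previous one.
        # Rolling back re-records the old template as a brand-new revision,
        # which is why revision numbers keep growing after an undo.
        target = to_revision or self.counter - 1
        template = self.revisions[target]
        self.record(template)
        return template

h = History()
for t in ["myapp:v1", "myapp:v2", "myapp:v3"]:
    h.record(t)
print(h.undo())               # myapp:v2  (previous version)
print(h.undo(to_revision=1))  # myapp:v1  (a specific revision)
```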
DaemonSet runs one copy of a specified Pod on every node in the cluster; worker nodes that join the cluster later automatically get the Pod created on them as well, and when a node is removed from the cluster, its Pod is reclaimed automatically without being rebuilt. Administrators can also use node selectors and node labels to run the Pod only on nodes with particular characteristics.
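The placement rule just described — one Pod per node, optionally filtered by node labels — can be sketched as follows (the helper is invented for illustration; the real scheduler also considers taints, tolerations, and resource pressure):

```python
# Illustrative sketch of DaemonSet placement: every node whose labels
# satisfy the nodeSelector should run exactly one copy of the Pod.

def daemonset_nodes(nodes: dict, node_selector: dict) -> list:
    """nodes maps node name -> its labels; return the nodes that run the Pod."""
    return sorted(
        name for name, labels in nodes.items()
        if all(labels.get(k) == v for k, v in node_selector.items())
    )

cluster = {
    "k8s-node1": {"disk": "ssd"},
    "k8s-node2": {"disk": "hdd"},
    "k8s-node3": {"disk": "ssd"},
}
print(daemonset_nodes(cluster, {}))               # empty selector: every node runs one copy
print(daemonset_nodes(cluster, {"disk": "ssd"}))  # only the SSD-labeled nodes
```

Adding a node to `cluster` adds it to the result with no other change, which mirrors how newly joined nodes automatically receive the DaemonSet's Pod.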
Use cases
Running a log-collection daemon on every node, such as fluentd or logstash.
Running a monitoring agent daemon on every node, such as Prometheus Node Exporter, collectd, Datadog agent, New Relic agent, or Ganglia gmond.
#(1) Define the manifest
[root@k8s-master ~]# vim manfests/daemonset-demo.yaml
apiVersion: apps/v1          # API version
kind: DaemonSet              # resource type: DaemonSet
metadata:                    # metadata
  name: daemset-nginx        # DaemonSet controller name
  namespace: default         # namespace
  labels:                    # DaemonSet labels
    app: daem-nginx
spec:                        # DaemonSet spec
  selector:                  # label selector matching the Pods
    matchLabels:
      app: daem-nginx        # note: must match the labels defined in the template below
  template:                  # Pod template
    metadata:                # Pod metadata
      name: nginx
      labels:                # Pod labels; must include the labels the selector above matches (extra labels are allowed)
        app: daem-nginx
    spec:                    # Pod spec
      containers:            # container definitions
      - name: nginx-pod      # container name
        image: nginx:1.12    # container image
        ports:               # exposed ports
        - name: http         # port name
          containerPort: 80  # port to expose

#(2) Create the DaemonSet defined above
[root@k8s-master ~]# kubectl apply -f manfests/daemonset-demo.yaml
daemonset.apps/daemset-nginx created

#(3) Verify
[root@k8s-master ~]# kubectl get pods -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
daemset-nginx-7s474   1/1     Running   0          80s   10.244.1.61   k8s-node1   <none>           <none>
daemset-nginx-kxpl2   1/1     Running   0          94s   10.244.2.58   k8s-node2   <none>           <none>
[root@k8s-master ~]# kubectl describe daemonset/daemset-nginx
......
Name:           daemset-nginx
Selector:       app=daem-nginx
Node-Selector:  <none>
......
Desired Number of Nodes Scheduled: 2
Current Number of Nodes Scheduled: 2
Number of Nodes Scheduled with Up-to-date Pods: 2
Number of Nodes Scheduled with Available Pods: 2
Number of Nodes Misscheduled: 0
Pods Status:  2 Running / 0 Waiting / 0 Succeeded / 0 Failed
......
Note

DaemonSet has also supported an update mechanism since Kubernetes 1.6; the relevant configuration is nested under the kubectl explain daemonset.spec.updateStrategy field. It supports two strategies, RollingUpdate and OnDelete (update on deletion), with rolling update as the default.
#(1) Check the current image version
[root@k8s-master ~]# kubectl get pods -l app=daem-nginx -o custom-columns=NAME:metadata.name,NODE:spec.nodeName,Image:spec.containers[0].image
NAME                  NODE        Image
daemset-nginx-7s474   k8s-node1   nginx:1.12
daemset-nginx-kxpl2   k8s-node2   nginx:1.12

#(2) Update
[root@k8s-master ~]# kubectl set image daemonset/daemset-nginx nginx-pod=nginx:1.14
[root@k8s-master ~]# kubectl get pods -l app=daem-nginx -o custom-columns=NAME:metadata.name,NODE:spec.nodeName,Image:spec.containers[0].image   # check again
NAME                  NODE        Image
daemset-nginx-74c95   k8s-node2   nginx:1.14
daemset-nginx-nz6n9   k8s-node1   nginx:1.14

#(3) View the details
[root@k8s-master ~]# kubectl describe daemonset daemset-nginx
......
Events:
  Type    Reason            Age   From                  Message
  ----    ------            ----  ----                  -------
  Normal  SuccessfulCreate  49m   daemonset-controller  Created pod: daemset-nginx-6kzg6
  Normal  SuccessfulCreate  49m   daemonset-controller  Created pod: daemset-nginx-jjnc2
  Normal  SuccessfulDelete  40m   daemonset-controller  Deleted pod: daemset-nginx-jjnc2
  Normal  SuccessfulCreate  40m   daemonset-controller  Created pod: daemset-nginx-kxpl2
  Normal  SuccessfulDelete  40m   daemonset-controller  Deleted pod: daemset-nginx-6kzg6
  Normal  SuccessfulCreate  40m   daemonset-controller  Created pod: daemset-nginx-7s474
  Normal  SuccessfulDelete  15s   daemonset-controller  Deleted pod: daemset-nginx-7s474
  Normal  SuccessfulCreate  8s    daemonset-controller  Created pod: daemset-nginx-nz6n9
  Normal  SuccessfulDelete  5s    daemonset-controller  Deleted pod: daemset-nginx-kxpl2
The DaemonSet controller's rolling update can also be paced with the minReadySeconds field; when necessary it can be paused and resumed, and it supports rollback as well.