A pod created from a YAML manifest is not rebuilt after we delete it manually, because it is an autonomous pod, not one owned by a controller. The pods we previously started with `run` were managed by a controller: after a delete, the controller rebuilds an identical new pod, and it strictly keeps the number of pods it owns in line with the count the user desires.
Deleting controller-managed pods directly is discouraged; instead, change the replica count the controller manages to reach the state we want.
A pod controller is essentially a middle layer that manages pods on our behalf and keeps every pod resource in the state we desire. For example, if a container inside a pod fails, the controller tries to restart it; if restarts keep failing, it re-orchestrates and re-deploys based on its internal policy.
If the number of pods falls below the user's target, new pods are created; surplus pods are terminated.
"Controller" is a generic term; the actual controller resources come in several types:
1: ReplicaSet: creates the specified number of pod replicas for the user and keeps the replica count matching the user's desired number at all times. ReplicaSet also supports scaling up and down, and it has replaced ReplicationController.
A ReplicaSet has three core components:
1: the desired pod replica count
2: a label selector
3: a pod template
Powerful as ReplicaSet is, we should not use it directly; even Kubernetes recommends that users skip ReplicaSet and work with Deployment instead.
Deployment (also a controller, but Deployment does not replace ReplicaSet to control pods directly: it controls a ReplicaSet, and the ReplicaSet in turn controls the pods, so Deployment is built on top of ReplicaSet, not on top of pods. Beyond the functions a ReplicaSet itself provides, Deployment adds powerful features such as rolling updates and rollback, plus declarative configuration. Declarative configuration lets us define objects by declaring the desired logic, making it convenient to change, at any time, the target desired state recorded on the apiserver.)
Deployment is currently one of the best controllers available.
Deployment is meant for managing stateless applications: cases where we care about the group and have no need to care about individuals are exactly where Deployment is needed.
How controllers manage pods:
1: The number of pods can exceed the number of nodes; pods and nodes are not matched one-to-one. When there are more pods than nodes, the extra pods are spread across nodes by scheduling policy, so one node may end up with 5 pods and another with 3. For some services, however, running multiple identical pods on one node is entirely unnecessary, for example ELK log collection or monitoring agents: a single pod per node is enough to collect the logs produced by all pods on that node, and any extra pods just burn resources.
Deployment cannot handle this case well: we want exactly one log-collection pod per node, and when a pod dies we want it rebuilt precisely on the node where it died. For that we need another controller, DaemonSet.
DaemonSet:
Ensures that every node in the cluster runs exactly one copy of a specific pod. This not only avoids the problem above, but also guarantees that when a new node joins the cluster, one such pod runs on it too. The number of pods this controller manages is therefore determined directly by the size of the cluster. A pod template and a label selector are, of course, still required.
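As a sketch of the idea, a minimal DaemonSet manifest for the log-collection scenario above might look like this (the image name and labels here are illustrative assumptions, not taken from the text):

```yaml
# One log-collector pod per node; a new node joining the cluster
# automatically gets one copy. Image and label names are hypothetical.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
  namespace: default
spec:
  selector:                  # the label selector is still required
    matchLabels:
      app: log-collector
  template:                  # and so is the pod template
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine   # hypothetical image tag
```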
Job
Job is for tasks that should run once to completion and then exit, with no need to stay running in the background, for example a database backup, which should exit as soon as the backup finishes. There are special cases, though: if the MySQL connection limit is hit or MySQL goes down, the pod the Job controls must still finish its assigned task before it may end; if it exits midway, the Job rebuilds it until the task is done. Job is suited to one-off tasks.
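A minimal Job manifest for the backup example might be sketched like this (the busybox command only stands in for a real backup script; all names here are assumptions):

```yaml
# One-off task: runs to completion, then the pod exits and is not restarted.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-backup
spec:
  backoffLimit: 4                # how many retries before the Job gives up
  template:
    spec:
      containers:
      - name: backup
        image: busybox
        command: ["/bin/sh", "-c", "echo backup done"]   # stand-in for a backup script
      restartPolicy: OnFailure   # rebuild/retry until the task completes
```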
CronJob:
CronJob is similar to Job in function, but suited to periodic scheduled tasks. With periodic tasks we must consider what should happen when the next scheduled run arrives while the previous run has not yet finished.
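The overlap question above is handled by the `concurrencyPolicy` field. A minimal sketch (names and schedule are illustrative; `batch/v1beta1` is the API version used by clusters of roughly the era described here, newer clusters use `batch/v1`):

```yaml
# concurrencyPolicy decides what happens when runs overlap:
#   Allow (default) lets runs overlap, Forbid skips the new run,
#   Replace kills the still-running old job and starts the new one.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"          # every day at 02:00
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: busybox
            command: ["/bin/sh", "-c", "echo backup done"]
          restartPolicy: OnFailure
```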
StatefulSet
StatefulSet is suited to managing stateful applications, where we care about the individuals. For example, if one Redis instance in a Redis cluster we created dies, a freshly started pod cannot simply replace it, because the data stored in the old Redis may have been taken down with it.
StatefulSet manages each pod individually: every pod has its own unique identity and its own data set, and after a failure a lot of initialization work is needed before a new pod can join. Rebuilding these stateful, data-bearing applications after a failure gets messy, because rebuild and replication setup for Redis is completely different from MySQL's. That means those procedures have to be written as scripts placed into the StatefulSet template, which requires a great deal of manual validation, since once the controller loads the template everything runs automatically, and one slip can lose data.
Whether on Kubernetes or in a direct deployment, every stateful application faces this hard problem: after a failure, how do we guarantee the data is not lost and quickly bring up a replacement that carries on from the previous data? Even if we have solved it for a directly deployed application, porting it to Kubernetes puts us in a different situation again.
Kubernetes also supported a special resource type, TPR (ThirdPartyResource), which was replaced by CRD (CustomResourceDefinition) as of version 1.8. Its main purpose is custom resources: a target resource can be managed with its own distinctive management logic, and that logic is then poured into an Operator. But this gets difficult enough that, so far, not many pod resources are supported in this form.
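A minimal CRD sketch, to make the idea concrete (the `example.com` group and the `Redis` kind are hypothetical; `apiextensions.k8s.io/v1beta1` is the version of that era, newer clusters use `apiextensions.k8s.io/v1`):

```yaml
# Registers a brand-new resource type "Redis"; an Operator would then
# watch these objects and apply its custom management logic to them.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: redises.example.com    # must be <plural>.<group>
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: redises
    singular: redis
    kind: Redis
```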
To make usage simpler still, Kubernetes later provided a tool called Helm, which works much like yum on CentOS: we only define things such as where the storage volume lives and how much memory to use, then install directly. Helm already supports many mainstream applications, but those packages often do not fit our particular environment, which is why Helm adoption is still not that wide.
We can inspect the fields with kubectl explain rc (rc is the shorthand for ReplicationController; the shorthand for ReplicaSet is rs):
[root@www kubeadm]# kubectl explain rc        # shows the top-level fields
KIND:     ReplicationController
VERSION:  v1

DESCRIPTION:
     ReplicationController represents the configuration of a replication
     controller.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

   kind         <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds

   metadata     <Object>
     If the Labels of a ReplicationController are empty, they are defaulted to
     be the same as the Pod(s) that the replication controller manages.
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

   spec         <Object>
     Spec defines the specification of the desired behavior of the replication
     controller. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

   status       <Object>
     Status is the most recently observed status of the replication
     controller. This data may be out of date by some window of time.
     Populated by the system. Read-only. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

spec:

[root@www kubeadm]# kubectl explain rc.spec
KIND:     ReplicationController
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
     Spec defines the specification of the desired behavior of the replication
     controller. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status

     ReplicationControllerSpec is the specification of a replication
     controller.

FIELDS:
   minReadySeconds      <integer>
     Minimum number of seconds for which a newly created pod should be ready
     without any of its container crashing, for it to be considered available.
     Defaults to 0 (pod will be considered available as soon as it is ready)

   replicas     <integer>
     Replicas is the number of desired replicas. This is a pointer to
     distinguish between explicit zero and unspecified. Defaults to 1. More
     info:
     https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#what-is-a-replicationcontroller

   selector     <map[string]string>
     Selector is a label query over pods that should match the Replicas count.
     If Selector is empty, it is defaulted to the labels present on the Pod
     template. Label keys and values that must match in order to be controlled
     by this replication controller, if empty defaulted to labels on Pod
     template. More info:
     https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors

   template     <Object>
     Template is the object that describes the pod that will be created if
     insufficient replicas are detected. This takes precedence over a
     TemplateRef. More info:
     https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template

[root@www kubeadm]#
The main things to define in a ReplicaSet spec are:
1: the replica count,
2: the label selector,
3: the pod template
Example:
apiVersion: apps/v1
kind: ReplicaSet                # the resource type being used is ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2                   # create two pod replicas
  selector:                     # which kind of label selector to use
    matchLabels:                # matching on multiple labels is an AND relationship; it requires the matchLabels field
      app: myapp                # multiple labels may be listed
      release: public-survey    # declaring two labels means selection requires a pod to carry both
                                # (label values may not contain spaces, so "Public survey" is written as public-survey)
  template:                     # define the pod template
    metadata:                   # the template has two fields, metadata and spec, used exactly as in a resource of kind Pod
      name: myapp-pod
      labels:                   # note: these labels must include both matchLabels labels above; more is allowed, fewer is not.
                                # Otherwise the controller creates a pod, finds it does not match, creates another,
                                # and round and round until the environment is flooded with pods.
        app: myapp
        release: public-survey
        time: current
    spec:
      containers:
      - name: myapp-test
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@www TestYaml]# cat pp.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-containers
        image: ikubernetes/myapp:v1
[root@www TestYaml]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-7ttch   1/1     Running   0          3m31s
myapp-8w2f2   1/1     Running   0          3m31s
# the controller automatically appends a random suffix to the name we defined in the yaml file
[root@www TestYaml]# kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
myapp   2         2         2       3m35s
[root@www TestYaml]# kubectl describe pods myapp-7ttch
Name:               myapp-7ttch
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               www.kubernetes.node1.com/192.168.181.140
Start Time:         Sun, 07 Jul 2019 16:07:42 +0800
Labels:             app=myapp
Annotations:        <none>
Status:             Running
IP:                 10.244.1.27
Controlled By:      ReplicaSet/myapp
Containers:
  myapp-containers:
    Container ID:   docker://17288f7aed7f62a983c35cabfd061a22f94c8e315da475fcfe4b276d49b22e33
    Image:          ikubernetes/myapp:v1
    Image ID:       docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sun, 07 Jul 2019 16:07:45 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-h5ddf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-h5ddf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-h5ddf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                               Message
  ----    ------     ----  ----                               -------
  Normal  Scheduled  16m   default-scheduler                  Successfully assigned default/myapp-7ttch to www.kubernetes.node1.com
  Normal  Pulled     16m   kubelet, www.kubernetes.node1.com  Container image "ikubernetes/myapp:v1" already present on machine
  Normal  Created    16m   kubelet, www.kubernetes.node1.com  Created container myapp-containers
  Normal  Started    16m   kubelet, www.kubernetes.node1.com  Started container myapp-containers
[root@www TestYaml]# kubectl delete pods myapp-7ttch
pod "myapp-7ttch" deleted
# the moment we delete the 7ttch pod, the controller immediately creates a new pod with the n8lt4 suffix
[root@www ~]# kubectl get pods -w
NAME          READY   STATUS              RESTARTS   AGE
myapp-7ttch   1/1     Running             0          18m
myapp-8w2f2   1/1     Running             0          18m
myapp-7ttch   1/1     Terminating         0          18m
myapp-n8lt4   0/1     Pending             0          0s
myapp-n8lt4   0/1     Pending             0          0s
myapp-n8lt4   0/1     ContainerCreating   0          0s
myapp-7ttch   0/1     Terminating         0          18m
myapp-n8lt4   1/1     Running             0          2s
myapp-7ttch   0/1     Terminating         0          18m
myapp-7ttch   0/1     Terminating         0          18m
# what does the controller do about the replica count if we create a new pod whose label is the same as myapp's?
[root@www ~]# kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE     LABELS
myapp-8w2f2   1/1     Running   0          26m     app=myapp
myapp-n8lt4   1/1     Running   0          7m53s   app=myapp
[root@www ~]#
[root@www TestYaml]# kubectl create -f pod-test.yaml
pod/myapp created
[root@www TestYaml]# kubectl get pods --show-labels
NAME          READY   STATUS              RESTARTS   AGE   LABELS
myapp         0/1     ContainerCreating   0          2s    <none>
myapp-8w2f2   1/1     Running             1          41m   app=myapp
myapp-n8lt4   1/1     Running             0          22m   app=myapp,time=july
mypod-g7rgq   1/1     Running             0          10m   app=mypod,time=july
mypod-z86bg   1/1     Running             0          10m   app=mypod,time=july
[root@www TestYaml]# kubectl label pods myapp app=myapp   # give the newly created pod the myapp label
pod/myapp labeled
[root@www TestYaml]# kubectl get pods --show-labels
NAME          READY   STATUS        RESTARTS   AGE   LABELS
myapp         0/1     Terminating   1          53s   app=myapp
myapp-8w2f2   1/1     Running       1          42m   app=myapp
myapp-n8lt4   1/1     Running       0          23m   app=myapp,time=july
mypod-g7rgq   1/1     Running       0          11m   app=mypod,time=july
mypod-z86bg   1/1     Running       0          11m   app=mypod,time=july
[root@www TestYaml]# kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
myapp-8w2f2   1/1     Running   1          42m   app=myapp
myapp-n8lt4   1/1     Running   0          23m   app=myapp,time=july
mypod-g7rgq   1/1     Running   0          11m   app=mypod,time=july
mypod-z86bg   1/1     Running   0          11m   app=mypod,time=july
# as soon as a pod's labels match the ones the controller defines, it may be "mistakenly" killed as a surplus replica
One trait of ReplicaSet is that it cares only about the group, not individuals: it controls pod resources strictly by the pod count and the labels defined inside it. So when defining a ReplicaSet, make the selector conditions specific enough to avoid the situation above.
When using pods created by a ReplicaSet, keep in mind that once a pod dies, the replacement pod the controller starts will certainly have a different address. So we add a Service layer in front, giving the Service the same labels as the ReplicaSet so its label selector tracks the backend pods; this avoids access interruptions caused by address changes.
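A minimal Service sketch for the ReplicaSet above (the Service name is an assumption; the selector matches the `app: myapp` pod label used in the example):

```yaml
# Clients hit the stable Service address; the selector keeps tracking
# whichever pods currently carry the app=myapp label, even after rebuilds.
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  namespace: default
spec:
  selector:
    app: myapp        # same label the ReplicaSet selects on
  ports:
  - name: http
    port: 80
    targetPort: 80
```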
Manually scaling a ReplicaSet up or down on the fly is also simple.
[root@www TestYaml]# kubectl edit rs myapp   # edit opens myapp's template; just change the replicas value
.....
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
........
replicaset.extensions/myapp edited
[root@www TestYaml]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myapp-6d4nd   1/1     Running   0          10s
myapp-8w2f2   1/1     Running   1          73m
myapp-c85dt   1/1     Running   0          10s
myapp-n8lt4   1/1     Running   0          54m
myapp-prdmq   1/1     Running   0          10s
mypod-g7rgq   1/1     Running   0          42m
mypod-z86bg   1/1     Running   0          42m
[root@www TestYaml]# curl 10.244.2.8
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@www TestYaml]# kubectl edit rs myapp
.......
    spec:
      containers:
      - image: ikubernetes/myapp:v2     # upgrade to the v2 version
        imagePullPolicy: IfNotPresent
.......
replicaset.extensions/myapp edited
NAME    DESIRED   CURRENT   READY   AGE   CONTAINERS         IMAGES                 SELECTOR
myapp   3         3         3       79m   myapp-containers   ikubernetes/myapp:v2   app=myapp
# the image version is now v2
[root@www TestYaml]# curl 10.244.2.8
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
# yet the response is still the v1 version: the pods have kept running and were never rebuilt,
# and only rebuilt pod resources will be on v2
[root@www TestYaml]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE                       NOMINATED NODE   READINESS GATES
myapp-6d4nd   1/1     Running   0          10m   10.244.1.30   www.kubernetes.node1.com   <none>           <none>
myapp-8w2f2   1/1     Running   1          83m   10.244.2.8    www.kubernetes.node2.com   <none>           <none>
myapp-n8lt4   1/1     Running   0          64m   10.244.1.28   www.kubernetes.node1.com   <none>           <none>
mypod-g7rgq   1/1     Running   0          52m   10.244.1.29   www.kubernetes.node1.com   <none>           <none>
mypod-z86bg   1/1     Running   0          52m   10.244.2.9    www.kubernetes.node2.com   <none>           <none>
[root@www TestYaml]# curl 10.244.1.30     # myapp-6d4nd is still on v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@www TestYaml]# kubectl delete pods myapp-6d4nd   # delete this pod so it gets rebuilt
pod "myapp-6d4nd" deleted
[root@www TestYaml]# kubectl get pods -o wide          # the rebuilt pod is myapp-bsdlk
NAME          READY   STATUS    RESTARTS   AGE   IP            NODE                       NOMINATED NODE   READINESS GATES
myapp-8w2f2   1/1     Running   1          83m   10.244.2.8    www.kubernetes.node2.com   <none>           <none>
myapp-bsdlk   1/1     Running   0          17s   10.244.2.16   www.kubernetes.node2.com   <none>           <none>
myapp-n8lt4   1/1     Running   0          65m   10.244.1.28   www.kubernetes.node1.com   <none>           <none>
mypod-g7rgq   1/1     Running   0          52m   10.244.1.29   www.kubernetes.node1.com   <none>           <none>
mypod-z86bg   1/1     Running   0          52m   10.244.2.9    www.kubernetes.node2.com   <none>           <none>
[root@www TestYaml]# curl 10.244.2.16     # hitting the rebuilt pod's address now shows v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@www TestYaml]# curl 10.244.2.8      # pods not yet rebuilt are still on v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@www TestYaml]# kubectl delete rs myapp mypod
replicaset.extensions "myapp" deleted
replicaset.extensions "mypod" deleted
The benefit is a smooth transition during a version update: we keep a buffer period, and once users hitting v2 report no problems, we quickly update the remaining v1 pods and roll out v2 via scripts or similar. This is a canary release.
As shown in the figure:
For important pods, a canary may not be a good update strategy. We can use a blue-green release instead: create a new set of pod resources with an identical template and a similar label selector. But in that case we must consider the access address, so the Service needs to be associated with both the old and the new pods at the same time.
还能够经过deployment来关联至后端的多个service上,service在关联pods资源,例如pods资源副本是3个,此时关闭一个pods资源,同时新建一个版本是v2的pods资源,这个pods资源对应的service是一个新的service资源,这个时候用户的请求一部分请求会被
deployment引导至新service资源后端的v2版本上,而后在中止一个v1版本的pods资源同时建立v2版本的资源,直到把全部的pods资源更新完毕。docker
By default a Deployment retains at most 10 historical ReplicaSet resources (for rollback); this number can of course be adjusted manually.
Deployment also provides declarative update configuration: instead of using create to build pods, we use apply for declarative updates. Pods created this way no longer need edit to change the related pod template; with patch we can modify the internals of the resource directly from the pure command line.
With Deployment we can also control the pace and logic of an update.
Suppose the ReplicaSet on the servers controls 5 pods and those 5 are exactly enough to satisfy the user traffic. The delete-one-then-rebuild-one approach above is then not advisable: deletion and creation take time, and during that window the excess user requests can overwhelm the other pods and crash them.
In that case we take another approach: we allow a few temporary extra pods during the rolling update. We control exactly how many pods may exceed the defined replica count, and how many may fall below it. If we allow at most 1 extra, the update starts a new pod first, then deletes an old one, starts another new one, deletes another old one, and so on.
If there are many pods and one-by-one is too slow, we can bring up several new ones at a time, for example create 5 new and delete 5 old; this is how we control the update granularity.
The "at most N below the defined count" form works the other way around: delete an old pod first, then create a new one; subtract first, add later.
What about at most 1 extra and at most 1 unavailable? With a base of 5, the minimum is 4 and the maximum is 6, so the update can add 1 and delete 2, then add 2 and delete 2, and so on.
With a base of 5, none allowed to be unavailable, and up to 5 extra allowed, we create 5 new pods at once and then delete the 5 old ones; that is essentially blue-green deployment.
These update modes default to rolling update.
All of these update modes must take readiness and liveness state into account, to avoid deleting the old pod while the newly added one is not yet ready.
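The arithmetic above maps directly onto two fields of the Deployment spec. A sketch of the "at most one extra, none unavailable" policy:

```yaml
# Fragment of a Deployment spec (not a complete manifest):
# with replicas=5, maxSurge=1, maxUnavailable=0, the pod count may
# reach 6 during the update but never drops below 5 ready pods.
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # absolute number; a percentage like "20%" also works
      maxUnavailable: 0
```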
We have described many update styles that rely on Deployment; these are the main fields used under Deployment:
[root@www TestYaml]# kubectl explain deploy    # deploy is the shorthand for Deployment
KIND:     Deployment
VERSION:  extensions/v1beta1

DESCRIPTION:
     DEPRECATED - This group version of Deployment is deprecated by
     apps/v1beta2/Deployment. See the release notes for more information.
     Deployment enables declarative updates for Pods and ReplicaSets.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

   kind         <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object metadata.

   spec         <Object>
     Specification of the desired behavior of the Deployment.

   status       <Object>
     Most recently observed status of the Deployment.

# The top-level field names are the same as ReplicaSet's. Note that the VERSION: extensions/v1beta1
# group is special: the docs Kubernetes ships lag behind the actual release, and Deployment has since
# moved to another group; apps/v1beta2/Deployment belongs to the apps group.

[root@www TestYaml]# kubectl explain deploy.spec    # the spec fields differ little from ReplicaSet's
KIND:     Deployment
VERSION:  extensions/v1beta1

RESOURCE: spec <Object>

DESCRIPTION:
     Specification of the desired behavior of the Deployment.

     DeploymentSpec is the specification of the desired behavior of the
     Deployment.

FIELDS:
   minReadySeconds      <integer>
     Minimum number of seconds for which a newly created pod should be ready
     without any of its container crashing, for it to be considered available.
     Defaults to 0 (pod will be considered available as soon as it is ready)

   paused       <boolean>
     Indicates that the deployment is paused and will not be processed by the
     deployment controller.

   progressDeadlineSeconds      <integer>
     The maximum time in seconds for a deployment to make progress before it
     is considered to be failed. The deployment controller will continue to
     process failed deployments and a condition with a
     ProgressDeadlineExceeded reason will be surfaced in the deployment
     status. Note that progress will not be estimated during the time a
     deployment is paused. This is set to the max value of int32 (i.e.
     2147483647) by default, which means "no deadline".

   replicas     <integer>
     Number of desired pods. This is a pointer to distinguish between explicit
     zero and not specified. Defaults to 1.

   revisionHistoryLimit <integer>
     The number of old ReplicaSets to retain to allow rollback. This is a
     pointer to distinguish between explicit zero and not specified. This is
     set to the max value of int32 (i.e. 2147483647) by default, which means
     "retaining all old RelicaSets".

   rollbackTo   <Object>
     DEPRECATED. The config this deployment is rolling back to. Will be
     cleared after rollback is done.

   selector     <Object>
     Label selector for pods. Existing ReplicaSets whose pods are selected by
     this will be the ones affected by this deployment.

   strategy     <Object>
     The deployment strategy to use to replace existing pods with new ones.

   template     <Object> -required-
     Template describes the pods that will be created.

# Besides the fields shared with ReplicaSet, there are several important extras; strategy defines the update strategy.

The update strategies supported by strategy:
[root@www TestYaml]# kubectl explain deploy.spec.strategy
KIND:     Deployment
VERSION:  extensions/v1beta1

RESOURCE: strategy <Object>

DESCRIPTION:
     The deployment strategy to use to replace existing pods with new ones.

     DeploymentStrategy describes how to replace existing pods with new ones.

FIELDS:
   rollingUpdate        <Object>
     Rolling update config params. Present only if DeploymentStrategyType =
     RollingUpdate.

   type <string>
     Type of deployment. Can be "Recreate" or "RollingUpdate". Default is
     RollingUpdate.

1: Recreate (recreate-style update, a delete-1-create-1 strategy; rollingUpdate has no effect with this type)
2: RollingUpdate (rolling update; if the type is RollingUpdate, the rollingUpdate object below can also be set)

rollingUpdate (its main job is to define the update granularity):
[root@www TestYaml]# kubectl explain deploy.spec.strategy.rollingUpdate
KIND:     Deployment
VERSION:  extensions/v1beta1

RESOURCE: rollingUpdate <Object>

DESCRIPTION:
     Rolling update config params. Present only if DeploymentStrategyType =
     RollingUpdate.

     Spec to control the desired behavior of rolling update.

FIELDS:
   maxSurge     <string>
     # during an update, at most how many pods may exceed the previously defined target replica count
     The maximum number of pods that can be scheduled above the desired number
     of pods. Value can be an absolute number (ex: 5) or a percentage of
     desired pods (ex: 10%). This can not be 0 if MaxUnavailable is 0.
     Absolute number is calculated from percentage by rounding up. By default,
     a value of 1 is used. Example: when this is set to 30%, the new RC can be
     scaled up immediately when the rolling update starts, such that the total
     number of old and new pods do not exceed 130% of desired pods. Once old
     pods have been killed, new RC can be scaled up further, ensuring that
     total number of pods running at any time during the update is at most
     130% of desired pods.
     # maxSurge takes two kinds of values: an absolute number (ex: 5) or a percentage of desired pods (ex: 10%)

   maxUnavailable       <string>
     # at most how many pods may be unavailable
     The maximum number of pods that can be unavailable during the update.
     Value can be an absolute number (ex: 5) or a percentage of desired pods
     (ex: 10%). Absolute number is calculated from percentage by rounding
     down. This can not be 0 if MaxSurge is 0. By default, a fixed value of 1
     is used. Example: when this is set to 30%, the old RC can be scaled down
     to 70% of desired pods immediately when the rolling update starts. Once
     new pods are ready, old RC can be scaled down further, followed by
     scaling up the new RC, ensuring that the total number of pods available
     at all times during the update is at least 70% of desired pods.

# If both fields were set to 0, no update could ever proceed, so at most one of them may be 0 while the other is a concrete number.

revisionHistoryLimit (how many historical versions to keep after rolling updates, for easy rollback):
[root@www TestYaml]# kubectl explain deploy.spec.revisionHistoryLimit
KIND:     Deployment
VERSION:  extensions/v1beta1

FIELD:    revisionHistoryLimit <integer>

DESCRIPTION:
     The number of old ReplicaSets to retain to allow rollback. This is a
     pointer to distinguish between explicit zero and not specified. This is
     set to the max value of int32 (i.e. 2147483647) by default, which means
     "retaining all old RelicaSets".
# the usual default is 10

paused (pause: if we do not want the rollout to start immediately after a change, paused can hold it for a while; the default is not to pause):
[root@www TestYaml]# kubectl explain deploy.spec.paused
KIND:     Deployment
VERSION:  extensions/v1beta1

FIELD:    paused <boolean>

DESCRIPTION:
     Indicates that the deployment is paused and will not be processed by the
     deployment controller.

template (the Deployment drives the ReplicaSet to create pods automatically):
[root@www TestYaml]# kubectl explain deploy.spec.template
KIND:     Deployment
VERSION:  extensions/v1beta1

RESOURCE: template <Object>

DESCRIPTION:
     Template describes the pods that will be created.

     PodTemplateSpec describes the data a pod should have when created from a
     template

FIELDS:
   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

   spec         <Object>
     Specification of the desired behavior of the pod. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
[root@www TestYaml]# cat deploy.test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mydeploy
      release: Internal-measurement
  template:
    metadata:
      labels:
        app: mydeploy
        release: Internal-measurement
    spec:
      containers:
      - name: myapp-containers
        image: ikubernetes/myapp:v1
[root@www TestYaml]# kubectl apply -f deploy.test.yaml
# this time we create the pod resources declaratively with apply instead of create
deployment.apps/mydeploy created
[root@www TestYaml]# kubectl get deploy
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
mydeploy   2/2     2            2           2m
[root@www TestYaml]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-74b7786d9b-kq88g   1/1     Running   0          2m4s
mydeploy-74b7786d9b-mp2mb   1/1     Running   0          2m4s
[root@www TestYaml]# kubectl get rs
# creating the Deployment automatically created an rs; the naming alone shows the relationship between deployment, rs, and pods
NAME                  DESIRED   CURRENT   READY   AGE
mydeploy-74b7786d9b   2         2         2       2m40s
[root@www TestYaml]#
# the Deployment is named mydeploy, the rs is mydeploy-74b7786d9b (note the random-looking string: it is the hash of the template),
# and a pod is mydeploy-74b7786d9b-kq88g; clearly the rs and pod resources are created automatically under the Deployment's control
Scaling a Deployment differs from scaling an rs: we simply modify the yaml template and declare it with apply to achieve the scaling.
[root@www TestYaml]# cat deploy.test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  namespace: default
spec:
  replicas: 3          # raised directly to three
  selector:
    matchLabels:
      app: mydeploy
      release: Internal-measurement
  template:
    metadata:
      labels:
        app: mydeploy
        release: Internal-measurement
    spec:
      containers:
      - name: myapp-containers
        image: ikubernetes/myapp:v1
[root@www TestYaml]# kubectl apply -f deploy.test.yaml
[root@www TestYaml]# kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-74b7786d9b-4bcln   1/1     Running   0          7s     # a new pod was added directly
mydeploy-74b7786d9b-kq88g   1/1     Running   0          13m
mydeploy-74b7786d9b-mp2mb   1/1     Running   0          13m
[root@www TestYaml]# kubectl get deploy
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
mydeploy   3/3     3            3           14m
[root@www TestYaml]# kubectl get rs
NAME                  DESIRED   CURRENT   READY   AGE
mydeploy-74b7786d9b   3         3         3       14m
# the deployment and rs status counts update accordingly
# after we change the template, apply declares the change, which is stored via the apiserver into etcd,
# and the downstream nodes are then notified to make the corresponding changes
[root@www TestYaml]# kubectl describe deploy mydeploy
Name:                   mydeploy
Namespace:              default
CreationTimestamp:      Sun, 07 Jul 2019 21:31:01 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
                        # every change is recorded in the Annotations, maintained automatically
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"mydeploy","namespace":"default"},"spec":{"replicas":3,"se...
Selector:               app=mydeploy,release=Internal-measurement
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate        # the default update strategy is rolling update
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge   # both the max and min here are 25%
Pod Template:
  Labels:  app=mydeploy
           release=Internal-measurement
  Containers:
   myapp-containers:
    Image:        ikubernetes/myapp:v1
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   mydeploy-74b7786d9b (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  17m    deployment-controller  Scaled up replica set mydeploy-74b7786d9b to 2
  Normal  ScalingReplicaSet  3m42s  deployment-controller  Scaled up replica set mydeploy-74b7786d9b to 3
# updating a Deployment is just as simple: a pure image update can be done directly with the set image
# subcommand, or by editing the configuration file
[root@www TestYaml]# cat deploy.test.yaml
.......
    spec:
      containers:
      - name: myapp-containers
        image: ikubernetes/myapp:v2     # upgrade to the v2 version
[root@www TestYaml]# kubectl apply -f deploy.test.yaml
deployment.apps/mydeploy configured
[root@www ~]# kubectl get pods -w
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-74b7786d9b-8jjvv   1/1     Running   0          82s
mydeploy-74b7786d9b-mp84r   1/1     Running   0          84s
mydeploy-74b7786d9b-qdzc5   1/1     Running   0          86s
mydeploy-6fbdd45d4c-kbcmh   0/1     Pending             0   0s   # the update logic: add one first...
mydeploy-6fbdd45d4c-kbcmh   0/1     Pending             0   0s
mydeploy-6fbdd45d4c-kbcmh   0/1     ContainerCreating   0   0s   # ...then terminate one, looping until done
mydeploy-6fbdd45d4c-kbcmh   1/1     Running             0   1s
mydeploy-74b7786d9b-8jjvv   1/1     Terminating         0   99s
mydeploy-6fbdd45d4c-qqgb8   0/1     Pending             0   0s
mydeploy-6fbdd45d4c-qqgb8   0/1     Pending             0   0s
mydeploy-6fbdd45d4c-qqgb8   0/1     ContainerCreating   0   0s
mydeploy-74b7786d9b-8jjvv   0/1     Terminating         0   100s
mydeploy-6fbdd45d4c-qqgb8   1/1     Running             0   1s
mydeploy-74b7786d9b-mp84r   1/1     Terminating         0   102s
mydeploy-6fbdd45d4c-ng99s   0/1     Pending             0   0s
mydeploy-6fbdd45d4c-ng99s   0/1     Pending             0   0s
mydeploy-6fbdd45d4c-ng99s   0/1     ContainerCreating   0   0s
mydeploy-74b7786d9b-mp84r   0/1     Terminating         0   103s
mydeploy-6fbdd45d4c-ng99s   1/1     Running             0   2s
mydeploy-74b7786d9b-qdzc5   1/1     Terminating         0   106s
mydeploy-74b7786d9b-qdzc5   0/1     Terminating         0   107s
mydeploy-74b7786d9b-qdzc5   0/1     Terminating         0   113s
mydeploy-74b7786d9b-qdzc5   0/1     Terminating         0   113s
mydeploy-74b7786d9b-8jjvv   0/1     Terminating         0   109s
mydeploy-74b7786d9b-8jjvv   0/1     Terminating         0   109s
mydeploy-74b7786d9b-mp84r   0/1     Terminating         0   113s
mydeploy-74b7786d9b-mp84r   0/1     Terminating         0   113s
# the whole update runs automatically; we only had to specify the version
[root@www TestYaml]# kubectl get rs -o wide
NAME                  DESIRED   CURRENT   READY   AGE   CONTAINERS         IMAGES                 SELECTOR
mydeploy-6fbdd45d4c   3         3         3       25m   myapp-containers   ikubernetes/myapp:v2   app=mydeploy,pod-template-hash=6fbdd45d4c,release=Internal-measurement
mydeploy-74b7786d9b   0         0         0       33m   myapp-containers   ikubernetes/myapp:v1   app=mydeploy,pod-template-hash=74b7786d9b,release=Internal-measurement
# there are now two image versions: three pods on v2, none on v1, and the two templates carry
# almost identical labels; the old version is retained, standing by for rollback
[root@www TestYaml]# kubectl rollout history deployment mydeploy
# rollout history shows how many rollouts happened and leaves a trail
deployment.extensions/mydeploy
REVISION  CHANGE-CAUSE
3         <none>
4         <none>
[root@www TestYaml]# kubectl rollout undo deployment mydeploy
# rollback is simply rollout undo; it rolls back using the retained old template,
# and the rollback logic mirrors the upgrade: add 1, stop 1
deployment.extensions/mydeploy rolled back
[root@www TestYaml]# kubectl get rs -o wide
NAME                  DESIRED   CURRENT   READY   AGE   CONTAINERS         IMAGES                 SELECTOR
mydeploy-6fbdd45d4c   0         0         0       34m   myapp-containers   ikubernetes/myapp:v2   app=mydeploy,pod-template-hash=6fbdd45d4c,release=Internal-measurement
mydeploy-74b7786d9b   3         3         3       41m   myapp-containers   ikubernetes/myapp:v1   app=mydeploy,pod-template-hash=74b7786d9b,release=Internal-measurement
[root@www TestYaml]#
# the v1 version is back
[root@www TestYaml]# kubectl patch --help
Update field(s) of a resource using strategic merge patch, a JSON merge patch, or a JSON patch.

JSON and YAML formats are accepted.

Examples:
  # Partially update a node using a strategic merge patch. Specify the patch as JSON.
  kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'

  # Partially update a node using a strategic merge patch. Specify the patch as YAML.
  kubectl patch node k8s-node-1 -p $'spec:\n unschedulable: true'

  # Partially update a node identified by the type and name specified in "node.json" using strategic merge patch.
  kubectl patch -f node.json -p '{"spec":{"unschedulable":true}}'

  # Update a container's image; spec.containers[*].name is required because it's a merge key.
  kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'

  # Update a container's image using a json patch with positional arrays.
  kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'

Options:
      --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing
in the template. Only applies to golang and jsonpath output formats.
      --dry-run=false: If true, only print the object that would be sent, without sending it.
  -f, --filename=[]: Filename, directory, or URL to files identifying the resource to update
  -k, --kustomize='': Process the kustomization directory. This flag can't be used together with -f or -R.
      --local=false: If true, patch will operate on the content of the file, not the server-side resource.
  -o, --output='': Output format. One of:
json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.
  -p, --patch='': The patch to be applied to the resource JSON file.
      --record=false: Record current kubectl command in the resource annotation. If set to false, do not record the
command. If set to true, record the command. If not set, default to updating the existing annotation value only if one
already exists.
  -R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage
related manifests organized within the same directory.
      --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The
template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
      --type='strategic': The type of patch being provided; one of [json merge strategic]

Usage:
  kubectl patch (-f FILENAME | TYPE NAME) -p PATCH [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).

patch can do more than scale resources; it can perform other changes as well.
[root@www TestYaml]# kubectl patch deployment mydeploy -p '{"spec":{"replicas":5}}'
# the -p option specifies changes to second- and third-level fields under a top-level field;
# note that the whole patch is single-quoted, while the field names inside need double quotes
deployment.extensions/mydeploy patched
[root@www ~]# kubectl get pods -w
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-74b7786d9b-qnqg2   1/1     Running   0          8m41s
mydeploy-74b7786d9b-tz6xk   1/1     Running   0          8m43s
mydeploy-74b7786d9b-vt659   1/1     Running   0          8m45s
mydeploy-74b7786d9b-hlwbp   0/1     Pending             0   0s
mydeploy-74b7786d9b-hlwbp   0/1     Pending             0   0s
mydeploy-74b7786d9b-zpcxb   0/1     Pending             0   0s
mydeploy-74b7786d9b-zpcxb   0/1     Pending             0   0s
mydeploy-74b7786d9b-hlwbp   0/1     ContainerCreating   0   0s
mydeploy-74b7786d9b-zpcxb   0/1     ContainerCreating   0   0s
mydeploy-74b7786d9b-hlwbp   1/1     Running             0   2s
mydeploy-74b7786d9b-zpcxb   1/1     Running             0   2s
# watching the scale-up: since we rolled back earlier while the deploy manifest still defines v2,
# one might expect 3 pods on v1 and 2 on v2 now
[root@www TestYaml]# kubectl get rs -o wide
NAME                  DESIRED   CURRENT   READY   AGE   CONTAINERS         IMAGES                 SELECTOR
mydeploy-6fbdd45d4c   0         0         0       45m   myapp-containers   ikubernetes/myapp:v2   app=mydeploy,pod-template-hash=6fbdd45d4c,release=Internal-measurement
mydeploy-74b7786d9b   5         5         5       52m   myapp-containers   ikubernetes/myapp:v1   app=mydeploy,pod-template-hash=74b7786d9b,release=Internal-measurement
# but that is not what happens: patching only one field does not change the values of any other field,
# so all five pods stay on v1 unless the image is also set to the v2 version
The advantage of patch is that if we only want to change a few field values without touching the yaml template, patch fits well. But patch is definitely not suited to adjusting many fields, because the command-line structure gets complicated. For example, changing to "at most 1 extra, none unavailable" already looks unwieldy (note that rollingUpdate sits under strategy):
[root@www TestYaml]# kubectl patch deployment mydeploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
deployment.extensions/mydeploy patched
# with many values the structure gets complex; for changing multiple values, apply is far more convenient
[root@www TestYaml]# kubectl set image deployment mydeploy myapp-containers=ikubernetes/myapp:v2 && kubectl rollout pause deployment mydeploy
# we use set image to update the image version directly, and pause the rollout right after one pod is updated
deployment.extensions/mydeploy image updated
deployment.extensions/mydeploy paused
# we can see the rollout paused after updating one pod
[root@www ~]# kubectl get pods -w
NAME                        READY   STATUS    RESTARTS   AGE
mydeploy-74b7786d9b-hlwbp   1/1     Running   0          30m
mydeploy-74b7786d9b-qnqg2   1/1     Running   0          40m
mydeploy-74b7786d9b-tz6xk   1/1     Running   0          40m
mydeploy-74b7786d9b-vt659   1/1     Running   0          40m
mydeploy-74b7786d9b-zpcxb   1/1     Running   0          30m
mydeploy-6fbdd45d4c-phcp4   0/1     Pending             0   0s
mydeploy-6fbdd45d4c-phcp4   0/1     Pending             0   0s
mydeploy-74b7786d9b-hlwbp   1/1     Terminating         0   33m
mydeploy-6fbdd45d4c-wllm7   0/1     Pending             0   0s
mydeploy-6fbdd45d4c-wllm7   0/1     Pending             0   0s
mydeploy-6fbdd45d4c-wllm7   0/1     ContainerCreating   0   0s
mydeploy-6fbdd45d4c-dc84z   0/1     Pending             0   0s
mydeploy-6fbdd45d4c-dc84z   0/1     Pending             0   0s
mydeploy-6fbdd45d4c-phcp4   0/1     ContainerCreating   0   0s
mydeploy-6fbdd45d4c-dc84z   0/1     ContainerCreating   0   0s
mydeploy-74b7786d9b-hlwbp   0/1     Terminating         0   33m
mydeploy-6fbdd45d4c-wllm7   1/1     Running             0   2s
mydeploy-6fbdd45d4c-phcp4   1/1     Running             0   3s
mydeploy-6fbdd45d4c-dc84z   1/1     Running             0   3s
mydeploy-74b7786d9b-hlwbp   0/1     Terminating         0   33m
mydeploy-74b7786d9b-hlwbp   0/1     Terminating         0   33m
[root@www TestYaml]# kubectl rollout status deployment mydeploy   # another command to monitor the update's progress
Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated...
Because we paused the rollout earlier, it stopped after updating the first few pods. If those pods have now been serving for hours with no user complaints, we can let the remaining ones update with resume:

[root@www ~]# kubectl rollout resume deployment mydeploy
deployment.extensions/mydeploy resumed

[root@www TestYaml]# kubectl rollout status deployment mydeploy
Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated...
Waiting for deployment "mydeploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "mydeploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "mydeploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "mydeploy" rollout to finish: 4 of 5 updated replicas are available...
deployment "mydeploy" successfully rolled out

Everything is now updated; this pause-verify-resume pattern is a canary release.
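The maxSurge / maxUnavailable values patched earlier bound how far the pod count may drift while such a rollout runs. This is not a kubectl feature, just arithmetic; a small illustrative helper (the function name and percentage rounding rules are assumptions based on how the Deployment documentation describes these fields):

```python
import math

def rolling_update_bounds(desired, max_surge, max_unavailable):
    """Pod-count bounds a Deployment enforces during a rolling update.

    max_surge / max_unavailable may be absolute ints or percentage
    strings like "25%"; percentages round up for surge and down for
    unavailable, per the documented Deployment behaviour.
    """
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            frac = desired * int(value[:-1]) / 100
            return math.ceil(frac) if round_up else math.floor(frac)
        return int(value)

    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    # (maximum total pods, minimum available pods) at any instant
    return desired + surge, desired - unavailable

# maxSurge=1, maxUnavailable=0: one extra pod at a time, none taken down early
print(rolling_update_bounds(5, 1, 0))          # (6, 5)
print(rolling_update_bounds(5, "25%", "25%"))  # (7, 4)
```

With maxSurge=1 and maxUnavailable=0 the controller must create each new pod and see it become ready before terminating an old one, which matches the pod-by-pod churn visible in the watch output above.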
[root@www TestYaml]# kubectl rollout undo --help
Rollback to a previous rollout.

Examples:
  # Rollback to the previous deployment
  kubectl rollout undo deployment/abc

  # Rollback to daemonset revision 3
  kubectl rollout undo daemonset/abc --to-revision=3

  # Rollback to the previous deployment with dry-run
  kubectl rollout undo --dry-run=true deployment/abc

--to-revision lets you pick which revision to roll back to; if unspecified, the default is the previous revision.

Options:
  --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.
  --dry-run=false: If true, only print the object that would be sent, without sending it.
  -f, --filename=[]: Filename, directory, or URL to files identifying the resource to get from a server.
  -k, --kustomize='': Process the kustomization directory. This flag can't be used together with -f or -R.
  -o, --output='': Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.
  -R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.
  --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
  --to-revision=0: The revision to rollback to. Default to 0 (last revision).

Usage:
  kubectl rollout undo (TYPE NAME | TYPE/NAME) [flags] [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).

[root@www TestYaml]# kubectl rollout undo deployment mydeploy --to-revision=1

With one command we roll the Deployment straight back to a chosen revision.
A DaemonSet runs exactly one copy of a specified pod on every node of the cluster, or only on the nodes matched by a selector (for example, when some machines are physical and some are virtual and run different workloads, a node selector decides which nodes get the pod).
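Restricting a DaemonSet to a subset of nodes is done through the pod template's nodeSelector. A sketch, assuming the physical machines carry a hypothetical label hardware=physical (the name and label here are illustrative, not from the transcript):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent            # hypothetical name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      nodeSelector:           # only nodes carrying this label run the pod
        hardware: physical    # hypothetical label, applied via `kubectl label node`
      containers:
      - name: agent
        image: ikubernetes/filebeat:5.6.5-alpine
```

Nodes without the label are simply skipped; labelling a new node later is enough for the DaemonSet to schedule a pod onto it.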
It can also map host directories into the pod to implement node-level functions such as log collection.
[root@www TestYaml]# kubectl explain ds.spec
(ds is the short name for DaemonSet; its manifest has the same five first-level fields as the other workloads)
KIND:     DaemonSet
VERSION:  extensions/v1beta1

RESOURCE: spec <Object>

DESCRIPTION:
     The desired behavior of this daemon set. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

     DaemonSetSpec is the specification of a daemon set.

FIELDS:
   minReadySeconds <integer>
     The minimum number of seconds for which a newly created DaemonSet pod
     should be ready without any of its container crashing, for it to be
     considered available. Defaults to 0 (pod will be considered available as
     soon as it is ready).

   revisionHistoryLimit <integer>   (how many old revisions are kept for rollback)
     The number of old history to retain to allow rollback. This is a pointer
     to distinguish between explicit zero and not specified. Defaults to 10.

   selector <Object>
     A label query over pods that are managed by the daemon set. Must match in
     order to be controlled. If empty, defaulted to labels on Pod template.
     More info:
     https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors

   template <Object> -required-
     An object that describes the pod that will be created. The DaemonSet will
     create exactly one copy of this pod on every node that matches the
     template's node selector (or on every node if no node selector is
     specified). More info:
     https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template

   templateGeneration <integer>
     DEPRECATED. A sequence number representing a specific generation of the
     template. Populated by the system. It can be set only during the creation.

   updateStrategy <Object>   (the update strategy)
     An update strategy to replace existing DaemonSet pods with new pods.
[root@www TestYaml]# cat ds.test.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myds
  namespace: default
spec:
  selector:
    matchLabels:
      app: myds
      release: Only
  template:
    metadata:
      labels:
        app: myds
        release: Only
    spec:
      containers:
      - name: mydaemonset
        image: ikubernetes/filebeat:5.6.5-alpine
        env:    # filebeat needs the target host and log level before startup, so they are defined here rather than passed in afterwards
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local    # service name + the default namespace + the cluster domain
        - name: REDIS_LOG
          value: info    # log level set to info

[root@www TestYaml]# kubectl get ds
NAME   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
myds   2         2         1       2            1           <none>          4m28s

[root@www TestYaml]# kubectl get pods
NAME         READY   STATUS             RESTARTS   AGE
myds-9kt2j   0/1     ImagePullBackOff   0          2m18s
myds-jt8kd   1/1     Running            0          2m14s

[root@www TestYaml]# kubectl get pods -o wide
NAME         READY   STATUS             RESTARTS   AGE     IP            NODE                       NOMINATED NODE   READINESS GATES
myds-9kt2j   0/1     ImagePullBackOff   0          2m24s   10.244.1.43   www.kubernetes.node1.com   <none>           <none>
myds-jt8kd   1/1     Running            0          2m20s   10.244.2.30   www.kubernetes.node2.com   <none>           <none>

Across the two worker nodes exactly two pods run, one per node, never more and never fewer: whatever we ask for, each node runs a single pod controlled by this DaemonSet.
[root@www TestYaml]# cat ds.test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: loginfo
  template:
    metadata:
      labels:
        app: redis
        role: loginfo
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: myds
  namespace: default
spec:
  selector:
    matchLabels:
      app: myds
      release: Only
  template:
    metadata:
      labels:
        app: myds
        release: Only
    spec:
      containers:
      - name: mydaemonset
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG
          value: info

Two resource definitions can share one YAML file, separated by ---. It is best to do this only when the two objects are related; unrelated objects should live in separate files. With this manifest, filebeat collects the redis logs.
[root@www TestYaml]# kubectl explain ds.spec.updateStrategy
KIND:     DaemonSet
VERSION:  extensions/v1beta1

RESOURCE: updateStrategy <Object>

DESCRIPTION:
     An update strategy to replace existing DaemonSet pods with new pods.

FIELDS:
   rollingUpdate <Object>
     Rolling update config params. Present only if type = "RollingUpdate".

   type <string>
     Type of daemon set update. Can be "RollingUpdate" or "OnDelete". Default
     is OnDelete.

There are two update types: RollingUpdate, and OnDelete, which only replaces a pod when it is deleted. Looking closer at the rolling-update parameters:

[root@www TestYaml]# kubectl explain ds.spec.updateStrategy.rollingUpdate
KIND:     DaemonSet
VERSION:  extensions/v1beta1

RESOURCE: rollingUpdate <Object>

DESCRIPTION:
     Rolling update config params. Present only if type = "RollingUpdate".
     Spec to control the desired behavior of daemon set rolling update.

FIELDS:
   maxUnavailable <string>
     The maximum number of DaemonSet pods that can be unavailable during the
     update. Value can be an absolute number (ex: 5) or a percentage of total
     number of DaemonSet pods at the start of the update (ex: 10%). Absolute
     number is calculated from percentage by rounding up. This cannot be 0.
     Default value is 1. Example: when this is set to 30%, at most 30% of the
     total number of nodes that should be running the daemon pod (i.e.
     status.desiredNumberScheduled) can have their pods stopped for an update
     at any given time. The update starts by stopping at most 30% of those
     DaemonSet pods and then brings up new DaemonSet pods in their place. Once
     the new pods are available, it then proceeds onto other DaemonSet pods,
     thus ensuring that at least 70% of original number of DaemonSet pods are
     available at all times during the update.

Because each node holds exactly one of its pods, a DaemonSet can only update by deleting first and creating afterwards; maxUnavailable therefore counts how many nodes' pods are replaced at a time, and it cannot be 0.

[root@www TestYaml]# kubectl set image --help
Update existing container image(s) of resources.
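Put into a manifest, the strategy described above is just a small fragment under the DaemonSet spec (values here match the documented defaults):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate     # or OnDelete: replace a pod only when it is deleted
    rollingUpdate:
      maxUnavailable: 1     # delete-then-create, one node's pod at a time; cannot be 0
```

Raising maxUnavailable (or using a percentage such as 30%) trades availability for a faster rollout across large clusters.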
Possible resources include (case insensitive):
  pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), replicaset (rs)

These are the resource types set image currently supports. Updating the DaemonSet image:

[root@www TestYaml]# kubectl set image daemonsets myds mydaemonset=ikubernetes/filebeat:5.6.6-alpine
daemonset.extensions/myds image updated

[root@www TestYaml]# kubectl get ds
NAME   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
myds   2         2         1       0            1           <none>          19m

[root@www TestYaml]# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
myds-lmw5d              0/1     ContainerCreating   0          7s
myds-mhw89              1/1     Running             0          19m
redis-fdc8c666b-spqlc   1/1     Running             0          19m

[root@www TestYaml]# kubectl get pods -w
NAME                    READY   STATUS              RESTARTS   AGE
myds-lmw5d              0/1     ContainerCreating   0          15s
myds-mhw89              1/1     Running             0          19m
redis-fdc8c666b-spqlc   1/1     Running             0          19m
.......
myds-546lq              1/1     Running             0          46s

The update stops one pod first, pulls the new image to replace it, then moves on; eventually every pod is updated.
A pod's containers can share the host's network namespace, in which case any port a container listens on is opened directly on the node itself.
[root@www TestYaml]# kubectl explain pod.spec.hostNetwork
KIND: Pod
VERSION: v1
FIELD: hostNetwork <boolean>
DESCRIPTION:
Host networking requested for this pod. Use the host's network namespace.
If this option is set, the ports that will be used must be specified.
Default to false.
As the description says, a pod can use the host's network namespace directly. If a DaemonSet's pods are created with this option, they can be reached via each node's IP, with no Service needed to expose a port.
The related hostPID and hostIPC fields can likewise share the host's PID and IPC namespaces.
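Enabling these host namespaces is just a matter of flipping booleans at the pod-spec level; a sketch of the relevant fragment inside the DaemonSet's pod template (reusing the filebeat image from the earlier example):

```yaml
spec:
  template:
    spec:
      hostNetwork: true   # container ports listen directly on the node's IP
      # hostPID: true     # likewise share the host's PID namespace
      # hostIPC: true     # ...and the host's IPC namespace
      containers:
      - name: mydaemonset
        image: ikubernetes/filebeat:5.6.5-alpine
```

With hostNetwork set, the ports the containers will use must be chosen so they do not collide with anything already listening on the node.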