cp deployment-user-v1.yaml deployment-user-v2.yaml
apiVersion: apps/v1 # API version
kind: Deployment # resource type
metadata:
+ name: user-v2 # resource name
spec:
  selector:
    matchLabels:
+     app: user-v2 # tells the Deployment which Pods to control and manage; matchLabels matches against the Pods' label values
  replicas: 3 # number of Pod replicas
  template:
    metadata:
      labels:
+       app: user-v2 # the Pods' label
    spec: # spec of the Pods created by this group
      containers:
        - name: nginx # container name
+         image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v2
          ports:
            - containerPort: 80 # port exposed inside the container
service-user-v2.yaml
apiVersion: v1
kind: Service
metadata:
+ name: service-user-v2
spec:
  selector:
+   app: user-v2
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
kubectl apply -f deployment-user-v2.yaml -f service-user-v2.yaml
Splitting traffic by Cookie. The idea is to check whether the user's request carries a canary-flag cookie: if it does, the user is treated as a canary user and is routed to the canary version of the service.

nginx.ingress.kubernetes.io/canary: takes true / false, and toggles whether the canary feature is enabled.

nginx.ingress.kubernetes.io/canary-by-cookie: the key of the canary cookie. The canary takes effect only when the cookie's value equals always; any other value bypasses the canary environment.

ingress-gray.yaml
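The cookie rule can be sketched as follows (a simplified model of the ingress-nginx behaviour, not its actual code):

```python
def is_canary(cookies: dict, canary_cookie: str = "vip_user") -> bool:
    # the canary is only triggered when the cookie value is exactly "always"
    return cookies.get(canary_cookie) == "always"

print(is_canary({"vip_user": "always"}))  # True: routed to service-user-v2
print(is_canary({"vip_user": "1"}))       # False: any other value stays on the stable version
print(is_canary({}))                      # False: no cookie at all
```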
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: user-canary
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-cookie: "vip_user"
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: service-user-v2
              servicePort: 80
  backend:
    serviceName: service-user-v2
    servicePort: 80
Apply the configuration file:
kubectl apply -f ./ingress-gray.yaml
Get the ingress's external port:

-n: specifies the namespace to query (here ingress-nginx)
kubectl -n ingress-nginx get svc
curl http://172.31.178.169:31234/user
curl http://118.190.156.138:31234/user
curl --cookie "vip_user=always" http://172.31.178.169:31234/user
Splitting traffic by Header. This works the same way: check whether the user's request carries a canary-flag header, and if it does, treat the user as a canary user and return the canary version of the service.
vi ingress-gray.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: user-canary
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/canary: "true"
+   nginx.ingress.kubernetes.io/canary-by-header: "name"
+   nginx.ingress.kubernetes.io/canary-by-header-value: "vip"
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: service-user-v2
              servicePort: 80
  backend:
    serviceName: service-user-v2
    servicePort: 80
kubectl apply -f ingress-gray.yaml
curl --header "name:vip" http://172.31.178.169:31234/user
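The header rule has the same shape as the cookie rule (again a simplified model, not ingress-nginx's actual code):

```python
def is_canary_by_header(headers: dict, header: str = "name", value: str = "vip") -> bool:
    # with canary-by-header-value set, only an exact match is routed to the canary
    return headers.get(header) == value

print(is_canary_by_header({"name": "vip"}))    # True: routed to the canary
print(is_canary_by_header({"name": "guest"}))  # False: stays on the stable version
print(is_canary_by_header({}))                 # False: no header at all
```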
nginx.ingress.kubernetes.io/canary-weight: a string containing a number from 0 to 100 that sets the probability of hitting the canary environment. A value of 0 means traffic never goes to the canary; the larger the value, the higher the chance; at 100, all traffic goes to the canary.

vi ingress-gray.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: user-canary
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/canary: "true"
+   nginx.ingress.kubernetes.io/canary-weight: "50"
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: service-user-v2
              servicePort: 80
  backend:
    serviceName: service-user-v2
    servicePort: 80
kubectl apply -f ingress-gray.yaml
for ((i=1; i<=10; i++)); do curl http://172.31.178.169:31234/user; done
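What canary-weight: "50" means statistically can be simulated in a few lines (a sketch only; ingress-nginx's real algorithm differs in detail):

```python
import random

def route(weight: int) -> str:
    # each request independently has a weight% chance of hitting the canary
    return "v2" if random.random() * 100 < weight else "v1"

random.seed(42)  # fixed seed so the run is reproducible
hits = sum(route(50) == "v2" for _ in range(10_000))
print(hits)  # roughly half of the 10,000 requests land on the canary
```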
First, scale up to 10 replicas:
kubectl get deploy
kubectl scale deployment user-v1 --replicas=10
deployment-user-v1.yaml
apiVersion: apps/v1 # API version
kind: Deployment # resource type
metadata:
  name: user-v1 # resource name
spec:
  minReadySeconds: 1
+ strategy:
+   type: RollingUpdate
+   rollingUpdate:
+     maxSurge: 1
+     maxUnavailable: 0
+ selector:
+   matchLabels:
+     app: user-v1 # tells the Deployment which Pods to control and manage; matchLabels matches against the Pods' label values
  replicas: 10 # number of Pod replicas
  template:
    metadata:
      labels:
        app: user-v1 # the Pods' label
    spec: # spec of the Pods created by this group
      containers:
        - name: nginx # container name
+         image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3 # which image to use
          ports:
            - containerPort: 80 # port exposed inside the container
| Parameter | Meaning |
|---|---|
| minReadySeconds | Delay, in seconds, before a container receives traffic (default 0). If unset, Kubernetes considers a container usable as soon as it starts; setting it delays the traffic switchover. |
| strategy.type = RollingUpdate | The ReplicaSet release type; declares a rolling update (which is also the default). |
| strategy.rollingUpdate.maxSurge | Maximum extra Pods: a number or a percentage. With maxSurge set to 1 and replicas set to 10, at most 10 + 1 Pods exist during the rollout (the extra ones are old-version Pods in a not-yet-available transition state). Cannot be 0 when maxUnavailable is 0. |
| strategy.rollingUpdate.maxUnavailable | Maximum number of unavailable Pods during the upgrade: a number or a percentage. Cannot be 0 when maxSurge is 0. |
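With replicas: 10, maxSurge: 1 and maxUnavailable: 0, the pod-count envelope during the rollout works out as:

```python
replicas, max_surge, max_unavailable = 10, 1, 0

peak_pods = replicas + max_surge            # at most 11 Pods exist at any moment
min_available = replicas - max_unavailable  # at least 10 Pods stay available throughout

print(peak_pods, min_available)  # 11 10
```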
kubectl apply -f ./deployment-user-v1.yaml
deployment.apps/user-v1 configured
kubectl rollout status deployment/user-v1
Waiting for deployment "user-v1" rollout to finish: 3 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 3 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
deployment "user-v1" successfully rolled out
The first kind is the liveness probe. A liveness probe checks a running container: use it to detect whether your service has crashed, exited midway, or stopped responding while running.

If the probe detects a failure, Kubernetes kills the Pod; otherwise nothing happens. If no liveness probe is configured, the Pod is never killed on this account.
| Probe | When it runs | Purpose | Reaction to a failed check |
|---|---|---|---|
| Startup probe | While the Pod is starting | Checks whether the service started successfully | Kill the Pod and restart it |
| Liveness probe | While the Pod is running | Checks whether the service has crashed and needs a restart | Kill the Pod and restart it |
| Readiness probe | While the Pod is running | Checks whether the service is allowed to receive traffic | Stop routing traffic to the Pod; it is not killed or restarted |
vi shell-probe.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: shell-probe
  name: shell-probe
spec:
  containers:
    - name: shell-probe
      image: registry.aliyuncs.com/google_containers/busybox
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5
        periodSeconds: 5
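Given the timings in the manifest above, you can predict roughly when the kubelet restarts the container (a back-of-the-envelope sketch; it assumes the default failureThreshold of 3 and ignores probe timeouts):

```python
# The busybox container creates /tmp/healthy, sleeps 30s, then removes it.
# With initialDelaySeconds=5 and periodSeconds=5, probes run at t = 5, 10, 15, ...
probe_times = [5 + 5 * i for i in range(12)]   # t = 5, 10, ..., 60

failures = [t for t in probe_times if t > 30]  # /tmp/healthy is gone after t = 30
restart_at = failures[2]                       # restarted on the 3rd consecutive failure

print(restart_at)  # 45
```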
kubectl apply -f shell-probe.yaml
kubectl get pods | grep shell-probe
kubectl describe pods shell-probe
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  2m44s                default-scheduler  Successfully assigned default/shell-probe to node1
  Normal   Pulled     2m41s                kubelet            Successfully pulled image "registry.aliyuncs.com/google_containers/busybox" in 1.669600584s
  Normal   Pulled     86s                  kubelet            Successfully pulled image "registry.aliyuncs.com/google_containers/busybox" in 605.008964ms
  Warning  Unhealthy  41s (x6 over 2m6s)   kubelet            Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
  Normal   Killing    41s (x2 over 116s)   kubelet            Container shell-probe failed liveness probe, will be restarted
  Normal   Created    11s (x3 over 2m41s)  kubelet            Created container shell-probe
  Normal   Started    11s (x3 over 2m41s)  kubelet            Started container shell-probe
  Normal   Pulling    11s (x3 over 2m43s)  kubelet            Pulling image "registry.aliyuncs.com/google_containers/busybox"
  Normal   Pulled     11s                  kubelet            Successfully pulled image "registry.aliyuncs.com/google_containers/busybox" in 521.70892ms
tcp-probe.yaml
apiVersion: v1
kind: Pod
metadata:
  name: tcp-probe
  labels:
    app: tcp-probe
spec:
  containers:
    - name: tcp-probe
      image: nginx
      ports:
        - containerPort: 80
      readinessProbe:
        tcpSocket:
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
kubectl apply -f tcp-probe.yaml
kubectl get pods | grep tcp-probe
kubectl describe pods tcp-probe
kubectl exec -it tcp-probe -- /bin/sh
apt-get update
apt-get install vim -y
vi /etc/nginx/conf.d/default.conf
(in the server block, change listen 80 to listen 8080 so the probed port stops answering)
nginx -s reload
kubectl describe pod tcp-probe
Warning Unhealthy 6s kubelet Readiness probe failed: dial tcp 10.244.1.47:80: connect: connection
vi http-probe.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: http-probe
  name: http-probe
spec:
  containers:
    - name: http-probe
      image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/http-probe:1.0.0
      livenessProbe:
        httpGet:
          path: /liveness
          port: 80
          httpHeaders:
            - name: source
              value: probe
        initialDelaySeconds: 3
        periodSeconds: 3
vim ./http-probe.yaml
kubectl apply -f ./http-probe.yaml
kubectl describe pods http-probe
Normal Killing 5s kubelet Container http-probe failed liveness probe, will be restarted
docker pull registry.cn-beijing.aliyuncs.com/zhangyaohuang/http-probe:1.0.0
kubectl replace --force -f http-probe.yaml
Dockerfile
FROM node
COPY ./app /app
WORKDIR /app
EXPOSE 3000
CMD node index.js
// index.js — liveness endpoint served by the http-probe image
let http = require('http');
let start = Date.now();

http.createServer(function (req, res) {
  if (req.url === '/liveness') {
    let value = req.headers['source'];
    if (value === 'probe') {
      // simulate a failure: start returning 500 once the process has been up for 10s
      let duration = Date.now() - start;
      if (duration > 10 * 1000) {
        res.statusCode = 500;
        res.end('error');
      } else {
        res.statusCode = 200;
        res.end('success');
      }
    } else {
      res.statusCode = 200;
      res.end('liveness');
    }
  } else {
    res.statusCode = 200;
    res.end('liveness');
  }
}).listen(3000, function () { console.log('http server started on 3000'); });
kubectl create secret generic mysql-account --from-literal=username=james --from-literal=password=123456
kubectl get secret
| Field | Meaning |
|---|---|
| NAME | Name of the Secret |
| TYPE | Type of the Secret |
| DATA | Number of stored entries |
| AGE | Time elapsed since creation |
// edit the values
kubectl edit secret mysql-account
// output as YAML
kubectl get secret mysql-account -o yaml
// output as JSON
kubectl get secret mysql-account -o json
// decode the Base64 value
echo MTIzNDU2 | base64 -d
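Secret values under data are plain Base64, not encryption; the round trip can be reproduced in a few lines:

```python
import base64

encoded = base64.b64encode(b"123456").decode()  # what kubectl stores in .data
print(encoded)  # MTIzNDU2

decoded = base64.b64decode(encoded).decode()    # what `base64 -d` prints
print(decoded)  # 123456
```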
mysql-account.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-account
stringData:
  username: root
  password: root
type: Opaque
kubectl apply -f mysql-account.yaml
secret/mysql-account created
kubectl get secret mysql-account -o yaml
kubectl create secret docker-registry private-registry \
  --docker-username=[username] \
  --docker-password=[password] \
  --docker-email=[email] \
  --docker-server=[private registry address]
// inspect the private-registry secret
kubectl get secret private-registry -o yaml
echo [value] | base64 -d
vi private-registry-file.yaml
apiVersion: v1
kind: Secret
metadata:
  name: private-registry-file
data:
  .dockerconfigjson: eyJhdXRocyI6eyJodHRwczo
type: kubernetes.io/dockerconfigjson
kubectl apply -f ./private-registry-file.yaml
kubectl get secret private-registry-file -o yaml
apiVersion: apps/v1 # API version
kind: Deployment # resource type
metadata:
  name: user-v1 # resource name
spec:
  minReadySeconds: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: user-v1 # tells the Deployment which Pods to control and manage; matchLabels matches against the Pods' label values
+ replicas: 1 # number of Pod replicas
  template:
    metadata:
      labels:
        app: user-v1 # the Pods' label
    spec: # spec of the Pods created by this group
+     volumes:
+       - name: mysql-account
+         secret:
+           secretName: mysql-account
      containers:
        - name: nginx # container name
          image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3 # which image to use
+         volumeMounts:
+           - name: mysql-account
+             mountPath: /mysql-account
+             readOnly: true
          ports:
            - containerPort: 80 # port exposed inside the container
kubectl describe pods user-v1-b88799944-tjgrs
kubectl exec -it user-v1-b88799944-tjgrs -- ls /root
deployment-user-v1.yaml
apiVersion: apps/v1 # API version
kind: Deployment # resource type
metadata:
  name: user-v1 # resource name
spec:
  minReadySeconds: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: user-v1 # tells the Deployment which Pods to control and manage; matchLabels matches against the Pods' label values
  replicas: 1 # number of Pod replicas
  template:
    metadata:
      labels:
        app: user-v1 # the Pods' label
    spec: # spec of the Pods created by this group
      volumes:
        - name: mysql-account
          secret:
            secretName: mysql-account
      containers:
        - name: nginx # container name
+         env:
+           - name: USERNAME
+             valueFrom:
+               secretKeyRef:
+                 name: mysql-account
+                 key: username
+           - name: PASSWORD
+             valueFrom:
+               secretKeyRef:
+                 name: mysql-account
+                 key: password
          image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3 # which image to use
          volumeMounts:
            - name: mysql-account
              mountPath: /mysql-account
              readOnly: true
          ports:
            - containerPort: 80 # port exposed inside the container
kubectl apply -f deployment-user-v1.yaml
kubectl get pods
kubectl describe pod user-v1-5f48f78d86-hjkcl
kubectl exec -it user-v1-688486759f-9snpx -- env | grep USERNAME
vi v4.yaml
image: [registry address]/[image name]:[image tag]
kubectl apply -f v4.yaml
kubectl get pods
kubectl describe pods [POD_NAME]
vi v4.yaml
+     imagePullSecrets:
+       - name: private-registry-file
      containers:
        - name: nginx
kubectl apply -f v4.yaml
Service discovery
kubectl -n kube-system get all -l k8s-app=kube-dns -o wide
kubectl exec -it [PodName] -- [Command]
kubectl get pods
kubectl get svc
kubectl exec -it user-v1-688486759f-9snpx -- /bin/sh
curl http://service-user-v2
[ServiceName].[NameSpace].svc.cluster.local
curl http://service-user-v2.default.svc.cluster.local
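Since the in-cluster DNS name follows a fixed pattern, it can be assembled mechanically (a trivial sketch):

```python
def service_fqdn(service: str, namespace: str = "default") -> str:
    # [ServiceName].[NameSpace].svc.cluster.local
    return f"{service}.{namespace}.svc.cluster.local"

print(service_fqdn("service-user-v2"))
# service-user-v2.default.svc.cluster.local
```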
kubectl create configmap [config_name] --from-literal=[key]=[value]
kubectl create configmap mysql-config --from-literal=MYSQL_HOST=192.168.1.172 --from-literal=MYSQL_PORT=3306
ConfigMap names must match the regular expression [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)* (a DNS-1123 subdomain).
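A quick way to check a candidate name against this pattern:

```python
import re

# DNS-1123 subdomain pattern that ConfigMap names must match
NAME_RE = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$")

print(bool(NAME_RE.match("mysql-config")))  # True
print(bool(NAME_RE.match("Mysql_Config"))) # False: uppercase letters and underscores are rejected
```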
kubectl get cm
kubectl describe cm mysql-config
mysql-config-file.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config-file
data:
  MYSQL_HOST: "192.168.1.172"
  MYSQL_PORT: "3306"
kubectl apply -f ./mysql-config-file.yaml
kubectl describe cm mysql-config-file
--from-file points at a single file; key is the key the file gets inside the ConfigMap; file_path is the path to the file:

kubectl create configmap [configname] --from-file=[key]=[file_path]
env.config
HOST: 192.168.0.1
PORT: 8080
kubectl create configmap env-from-file --from-file=env=./env.config
configmap/env-from-file created
kubectl get cm env-from-file -o yaml
kubectl create configmap [configname] --from-file=[dir_path]
mkdir env && cd ./env
echo 'local' > env.local
echo 'test' > env.test
echo 'prod' > env.prod
kubectl create configmap env-from-dir --from-file=./
kubectl get cm env-from-dir -o yaml
      containers:
        - name: nginx # container name
+         env:
+           - name: MYSQL_HOST
+             valueFrom:
+               configMapKeyRef:
+                 name: mysql-config
+                 key: MYSQL_HOST
kubectl apply -f ./v1.yaml
// kubectl exec -it [POD_NAME] -- env | grep MYSQL_HOST
kubectl exec -it user-v1-744f48d6bd-9klqr -- env | grep MYSQL_HOST
kubectl exec -it user-v1-744f48d6bd-9klqr -- env | grep MYSQL_PORT
      containers:
        - name: nginx # container name
          env:
+         envFrom:
+           - configMapRef:
+               name: mysql-config
+               optional: true
          image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3 # which image to use
          volumeMounts:
            - name: mysql-account
              mountPath: /mysql-account
              readOnly: true
          ports:
            - containerPort: 80 # port exposed inside the container
  template:
    metadata:
      labels:
        app: user-v1 # the Pods' label
    spec: # spec of the Pods created by this group
      volumes:
        - name: mysql-account
          secret:
            secretName: mysql-account
+       - name: envfiles
+         configMap:
+           name: env-from-dir
      containers:
        - name: nginx # container name
          env:
            - name: USERNAME
              valueFrom:
                secretKeyRef:
                  name: mysql-account
                  key: username
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-account
                  key: password
          envFrom:
            - configMapRef:
                name: mysql-config
                optional: true
          image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3 # which image to use
          volumeMounts:
            - name: mysql-account
              mountPath: /mysql-account
              readOnly: true
+           - name: envfiles
+             mountPath: /envfiles
+             readOnly: true
          ports:
            - containerPort: 80 # port exposed inside the container
kubectl apply -f deployment-user-v1.yaml
kubectl get pods
kubectl describe pod user-v1-79b8768f54-r56kd
kubectl exec -it user-v1-744f48d6bd-9klqr -- ls /envfiles
    spec: # spec of the Pods created by this group
      volumes:
        - name: mysql-account
          secret:
            secretName: mysql-account
        - name: envfiles
          configMap:
            name: env-from-dir
+           items:
+             - key: env.local
+               path: env.local
In Kubernetes, the rules for placing Pods onto Nodes are decided automatically by the scheduler, based on each Node's remaining resources, its role, and other rules.

But server resources are rarely allocated evenly between front-end and back-end services, and some services can only run on particular machines. In those cases automatic scheduling gives an unbalanced result, and we need to intervene manually in the matching rules.

That is done by adding a so-called taint to a Node, which keeps Pods from being scheduled onto it. Once a Node carries a taint, a Pod will not be scheduled onto it unless the Pod declares a matching toleration. That is where the names "taint" and "toleration" come from.

A taint has the form key=value; the content is free-form, like a set of tags.

Node_Name is the name of the node to taint
key and value form a key/value pair that acts as an identifying label
NoSchedule means "do not schedule here"; the other possible effects in this position are PreferNoSchedule and NoExecute
kubectl taint nodes [Node_Name] [key]=[value]:NoSchedule
// add a taint
kubectl taint nodes node1 user-v4=true:NoSchedule
// inspect the taints
kubectl describe node node1
kubectl describe node master
Taints: node-role.kubernetes.io/master:NoSchedule
vi deployment-user-v4.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-v4
spec:
  minReadySeconds: 1
  selector:
    matchLabels:
      app: user-v4
  replicas: 1
  template:
    metadata:
      labels:
        app: user-v4
    spec:
      containers:
        - name: nginx
          image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3
          ports:
            - containerPort: 80
kubectl apply -f deployment-user-v4.yaml
vi deployment-user-v4.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-v4
spec:
  minReadySeconds: 1
  selector:
    matchLabels:
      app: user-v4
  replicas: 1
  template:
    metadata:
      labels:
        app: user-v4
    spec:
+     tolerations:
+       - key: "user-v4"
+         operator: "Equal"
+         value: "true"
+         effect: "NoSchedule"
      containers:
        - name: nginx
          image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3
          ports:
            - containerPort: 80
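How a toleration matches a taint can be sketched as follows (a simplified model; the real scheduler also handles empty keys, NoExecute eviction, and more):

```python
def tolerates(taint: dict, toleration: dict) -> bool:
    # an empty effect in the toleration matches any taint effect
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration["operator"] == "Exists":
        # Exists only requires the key to be present
        return toleration["key"] == taint["key"]
    # Equal requires both key and value to match
    return toleration["key"] == taint["key"] and toleration.get("value") == taint["value"]

taint = {"key": "user-v4", "value": "true", "effect": "NoSchedule"}
print(tolerates(taint, {"key": "user-v4", "operator": "Equal", "value": "true", "effect": "NoSchedule"}))  # True
print(tolerates(taint, {"key": "user-v4", "operator": "Equal", "value": "1", "effect": "NoSchedule"}))     # False
print(tolerates(taint, {"key": "user-v4", "operator": "Exists", "effect": "NoSchedule"}))                  # True
```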
Modify a Node's taint:
kubectl taint nodes node1 user-v4=1:NoSchedule --overwrite
Remove a Node's taint:
kubectl taint nodes node1 user-v4-
Deploying a Pod on the master
kubectl taint nodes node1 user-v4=true:NoSchedule
kubectl describe node node1
kubectl describe node master
vi deployment-user-v4.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-v4
spec:
  minReadySeconds: 1
  selector:
    matchLabels:
      app: user-v4
  replicas: 1
  template:
    metadata:
      labels:
        app: user-v4
    spec:
+     tolerations:
+       - key: "node-role.kubernetes.io/master"
+         operator: "Exists"
+         effect: "NoSchedule"
      containers:
        - name: nginx
          image: registry.cn-beijing.aliyuncs.com/zhangyaohuang/nginx:user-v3
          ports:
            - containerPort: 80
kubectl apply -f deployment-user-v4.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image:
  imagePullSecrets:
    - name: har