On macOS:

curl -LO https://kubectl.oss-cn-hangzhou.aliyuncs.com/macos/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl --help

On Linux:

curl -LO https://kubectl.oss-cn-hangzhou.aliyuncs.com/linux/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl --help

On Windows, download https://kubectl.oss-cn-hangzhou.aliyuncs.com/windows/kubectl.exe, place it in a directory on the system PATH, and then verify the installation:
kubectl --help
To configure kubectl to connect to your Kubernetes cluster, refer to the documentation on connecting to a Kubernetes cluster via kubectl.
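Once kubectl is configured, a quick way to confirm that it can reach the cluster is to query the cluster endpoints and nodes (standard kubectl commands, shown here as an optional check):

# Print the address of the cluster's API server and core services
kubectl cluster-info

# List the cluster nodes to confirm the connection works
kubectl get nodes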
Forward the port of the kiali service in the istio-system namespace to your local machine:

kubectl port-forward -n istio-system "$(kubectl get -n istio-system pod --selector=app=kiali -o jsonpath='{.items..metadata.name}')" 20001
Then open http://localhost:20001 in a local browser and log in with the default account admin/admin.
Create a namespace named demo and add the label istio-injection: enabled to it (a command-line equivalent of this step is sketched below). Then create an application named istio-app, specify the application version as v1, select the newly created demo namespace, and use the following image in the container configuration:
registry.cn-beijing.aliyuncs.com/test-node/node-server:v1
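The namespace step above can also be done with kubectl. A minimal sketch of that step (the application itself is created through the console in this walkthrough):

# Create the demo namespace
kubectl create namespace demo

# Label it so that Istio automatically injects sidecar proxies into its Pods
kubectl label namespace demo istio-injection=enabled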
Create a service named istio-app-svc with the cluster IP (virtual cluster IP) type, and set both the service port and the container port to 8080.

Next, deploy a sleep service in the same namespace to use as a test client:

kubectl apply -n demo -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: sleep
  labels:
    app: sleep
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: sleep
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: pstauffer/curl
        command: ["/bin/sleep", "3650d"]
        imagePullPolicy: IfNotPresent
EOF
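For reference, the istio-app-svc service created through the console corresponds roughly to a manifest like the following. This is only a sketch based on the settings described above; the selector label is an assumption and may differ from what the console actually generates:

apiVersion: v1
kind: Service
metadata:
  name: istio-app-svc
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 8080        # service port, as configured above
    targetPort: 8080  # container port, as configured above
  selector:
    app: istio-app    # assumed label; check the labels generated for the application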
Open a shell inside the sleep Pod:

kubectl exec -it -n demo "$(kubectl get -n demo pod --selector=app=sleep -o jsonpath='{.items..metadata.name}')" sh
Then run the following command to call the Istio application deployed earlier:
for i in $(seq 1000); do curl http://istio-app-svc.demo:8080; echo ''; sleep 1; done
You should see output like the following:
Hello from v1
Hello from v1
Hello from v1
Find the istio-app-svc service and click Manage. In version management, click Add Gray Release Version and specify the new version as v2. In the container configuration, use the following image:

registry.cn-beijing.aliyuncs.com/test-node/node-server:v2

In the gray release policy, choose release by traffic ratio and set the traffic ratio to 50%.
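Under the hood, this kind of 50/50 split is expressed with Istio routing rules. The console generates the actual resources; the following is only a minimal sketch of what they roughly correspond to (resource names and label values are assumptions):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-app-svc
spec:
  hosts:
  - istio-app-svc
  http:
  - route:
    - destination:
        host: istio-app-svc
        subset: v1
      weight: 50   # half of the traffic goes to v1
    - destination:
        host: istio-app-svc
        subset: v2
      weight: 50   # the other half goes to v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: istio-app-svc
spec:
  host: istio-app-svc
  subsets:
  - name: v1
    labels:
      version: v1  # Pods of the v1 version
  - name: v2
    labels:
      version: v2  # Pods of the v2 version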
Run the request loop again, and you can now see responses from both the v1 and v2 versions:

Hello from v1
Hello from v2
Hello from v1
Hello from v1
Hello from v2
Hello from v1
After a fault-injection policy is applied to the service, requests hit by the injected fault are aborted by the sidecar proxy and return fault filter abort:

Hello from v1
Hello from v1
Hello from v1
fault filter abort
fault filter abort
fault filter abort
Hello from v1
Hello from v2
fault filter abort
Hello from v2
Hello from v2
Meanwhile, the kiali dashboard shows that roughly 50% of the calls from the sleep service to istio-app-svc fail.
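For reference, aborting about half of the requests can be expressed directly as an Istio fault-injection rule. A minimal sketch (the console generates the actual rule; the HTTP status code and the exact resource layout are assumptions):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istio-app-svc
spec:
  hosts:
  - istio-app-svc
  http:
  - fault:
      abort:
        percentage:
          value: 50      # abort roughly half of the requests
        httpStatus: 503  # status returned for aborted requests (assumed)
    route:
    - destination:
        host: istio-app-svc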
Next, try out Istio's circuit breaking. First remove the fault-injection policy configured above, so that the service again returns normal responses from v1 and v2. Then delete the existing DestinationRule for istio-app-svc and apply a new one that adds connection-pool limits:

kubectl delete destinationrule -n demo istio-app-svc
kubectl apply -n demo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: istio-app-svc
spec:
  host: istio-app-svc
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
  subsets:
  - labels:
      version: v1
    name: v1
  - labels:
      version: v2
    name: v2
EOF
Deploy the Fortio load-testing client used to generate traffic:

kubectl apply -n demo -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/sample-client/fortio-deploy.yaml
In the DestinationRule above, we set maxConnections: 1 and http1MaxPendingRequests: 1. This means that if more than one connection issues requests concurrently, Istio trips the circuit breaker and blocks the excess requests and connections. So send 100 requests with a concurrency of 3:

FORTIO_POD=$(kubectl -n demo get pod | grep fortio | awk '{ print $1 }')
kubectl -n demo exec -it $FORTIO_POD -c fortio /usr/bin/fortio -- load -c 3 -qps 0 -n 100 -loglevel Warning http://istio-app-svc:8080
From the results, you can see that more than 40% of the requests were blocked by Istio:
Code 200 : 57 (57.0 %)
Code 503 : 43 (43.0 %)
Response Header Sizes : count 100 avg 130.53 +/- 113.4 min 0 max 229 sum 13053
Response Body/Total Sizes : count 100 avg 242.14 +/- 0.9902 min 241 max 243 sum 24214
All done 100 calls (plus 0 warmup) 0.860 ms avg, 2757.6 qps
Observing in kiali also shows that these blocked requests never actually reached the istio-app-svc Pods.
You can get more detail from the Envoy statistics exposed by the istio-proxy sidecar; the value of upstream_rq_pending_overflow is the number of requests blocked by the circuit-breaking policy:

kubectl -n demo exec -it $FORTIO_POD -c istio-proxy -- sh -c 'curl localhost:15000/stats' | grep istio-app-svc | grep pending

cluster.outbound|8080|v1|istio-app-svc.demo-ab.svc.cluster.local.upstream_rq_pending_active: 0
cluster.outbound|8080|v1|istio-app-svc.demo-ab.svc.cluster.local.upstream_rq_pending_failure_eject: 0
cluster.outbound|8080|v1|istio-app-svc.demo-ab.svc.cluster.local.upstream_rq_pending_overflow: 99
cluster.outbound|8080|v1|istio-app-svc.demo-ab.svc.cluster.local.upstream_rq_pending_total: 199
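As an optional sanity check, you can rerun the same load with a concurrency of 1 (the same Fortio parameters as above, only -c changed); since only one connection is in use at a time, the connection-pool limits are not exceeded and the requests should all return 200:

kubectl -n demo exec -it $FORTIO_POD -c fortio /usr/bin/fortio -- load -c 1 -qps 0 -n 100 -loglevel Warning http://istio-app-svc:8080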
Finally, run the following command to clean up:
kubectl delete ns demo