Installing Kong and kong-ingress-controller on Kubernetes

1. I will not go into the details of Kong here; see the official website.

Since the 1.0 release, Kong's feature set has become increasingly complete. The newer versions can be used as a service mesh, and Kong can also act as a Kubernetes ingress controller. Although it still lags behind Istio on the service-mesh side, Kong's prospects are good, and kong-ingress-controller can automatically discover Ingress resources in a Kubernetes cluster and manage them centrally. We are therefore trying Kong out on our test cluster; this post records the deployment process.

 

2. Deploy Kong

Prepare in advance: a Kubernetes cluster (I run 1.13.2 in production), persistent volumes (backed by NFS), and Helm.

Get the chart:

With Helm installed, you can fetch it directly:

helm fetch stable/kong

Fetching from this default repo requires getting around the GFW (i.e. a proxy).
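A common workaround at the time (not part of the original setup, so verify the mirror before relying on it) is to add a chart mirror reachable from mainland China and fetch from that instead:

# assumption: the Azure China mirror of the stable charts repo is available
helm repo add stable-mirror http://mirror.azure.cn/kubernetes/charts/
helm repo update
helm fetch stable-mirror/kong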

We use a chart customized from the official one:

https://github.com/cuishuaigit/k8s-kong

 

You can customize it to your own needs before deploying:

Edit the values.yaml file. I disabled HTTPS on the admin API, since this is a purely internal network, set the NodePort ports for admin and proxy (HTTP, HTTPS) to 32344, 32380 and 32343 respectively, and enabled the ingressController by default.
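The exact keys depend on the chart version, but the changes boil down to a values.yaml fragment along these lines (a sketch only; the key names below are assumptions, so check the chart's own values.yaml for the authoritative ones):

# values.yaml (fragment) -- key names may differ between chart versions
admin:
  useTLS: false          # plain-HTTP admin API; acceptable only on a trusted internal network
  type: NodePort
  nodePort: 32344
proxy:
  type: NodePort
  http:
    nodePort: 32380      # HTTP proxy port exposed on every node
  tls:
    nodePort: 32343      # HTTPS proxy port exposed on every node
ingressController:
  enabled: true          # run kong-ingress-controller alongside Kong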

Deploy Kong:

git clone https://github.com/cuishuaigit/k8s-kong
cd k8s-kong
helm install -n kong-ingress --tiller-namespace default .

In the test environment, Tiller is deployed in the default namespace.

The result after deployment:

root@ku13-1:~# kubectl get pods | grep kong
kong-ingress-kong-5c968fdb74-gsrr8                1/1     Running     0          4h14m
kong-ingress-kong-controller-5896fd6d67-4xcg5     2/2     Running     1          4h14m
kong-ingress-kong-init-migrations-k9ztt           0/1     Completed   0          4h14m
kong-ingress-postgresql-0                         1/1     Running     0          4h14m
root@ku13-1:/data/k8s-kong# kubectl get svc | grep kong
kong-ingress-kong-admin            NodePort    192.103.113.85    <none>   8444:32344/TCP               4h18m
kong-ingress-kong-proxy            NodePort    192.96.47.146     <none>   80:32380/TCP,443:32343/TCP   4h18m
kong-ingress-postgresql            ClusterIP   192.97.113.204    <none>   5432/TCP                     4h18m
kong-ingress-postgresql-headless   ClusterIP   None              <none>   5432/TCP                     4h18m

 

Next, following https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/deployment/minikube.md, deploy the demo service:

wget  https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/manifests/dummy-application.yaml

 # cat dummy-application.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: http-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-svc
  template:
    metadata:
      labels:
        app: http-svc
    spec:
      containers:
      - name: http-svc
        image: gcr.io/google_containers/echoserver:1.8
        ports:
        - containerPort: 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP

 

# cat demo-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: http-svc
  labels:
    app: http-svc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: http-svc

 

kubectl create -f dummy-application.yaml -f demo-service.yaml
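A quick sanity check (my own habit, not part of the original walkthrough) is to confirm the demo objects are up before wiring an Ingress to them:

# the pod should be Running and the Service should show a NodePort
kubectl get pods,svc -l app=http-svc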

 

Create the Ingress rule:

# cat demo-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-bar
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80

 

kubectl create -f demo-ingress.yaml

 

Test with curl:

root@ku13-1:/data/k8s-kong# curl http://192.96.47.146 -H Host:foo.bar

Hostname: http-svc-6f459dc547-qpqmv

Pod Information:
    node name:      ku13-2
    pod name:       http-svc-6f459dc547-qpqmv
    pod namespace:  default
    pod IP: 192.244.32.25

Server values:
    server_version=nginx: 1.13.3 - lua: 10008

Request Information:
    client_address=192.244.6.216
    method=GET
    real path=/
    query=
    request_version=1.1
    request_uri=http://192.244.32.25:8080/

Request Headers:
    accept=*/*
    connection=keep-alive
    host=192.244.32.25:8080
    user-agent=curl/7.47.0
    x-forwarded-for=10.2.6.7
    x-forwarded-host=foo.bar
    x-forwarded-port=8000
    x-forwarded-proto=http
    x-real-ip=10.2.6.7

Request Body:
    -no body in request-

 

3. Deploy Konga

Konga is a dashboard for Kong; for deployment details see http://www.javashuo.com/article/p-kgljjxcv-gd.html
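For reference, a minimal sketch of what such a deployment can look like, assuming the pantsel/konga image and its default port 1337 (the linked article covers the full setup, including pointing Konga at a database):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: konga
spec:
  replicas: 1
  selector:
    matchLabels:
      app: konga
  template:
    metadata:
      labels:
        app: konga
    spec:
      containers:
      - name: konga
        image: pantsel/konga     # assumption: the Konga image published on Docker Hub
        ports:
        - containerPort: 1337    # Konga's default listen port
---
apiVersion: v1
kind: Service
metadata:
  name: konga
spec:
  type: NodePort
  selector:
    app: konga
  ports:
  - port: 1337
    targetPort: 1337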

 

4. Kong plugins

Kong has many plugins that help users get more powerful proxying out of Kong. Two are introduced here; the others are used in the same way, only with different configuration parameters. See https://docs.konghq.com/1.1.x/admin-api/#plugin-object for the parameter details.

kong-ingress-controller provides four CRDs: KongPlugin, KongIngress, KongConsumer and KongCredential.
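Only KongPlugin is demonstrated below. For completeness, here is a rough sketch of a KongIngress resource, which overrides route/proxy behaviour for the Ingress or Service it is attached to via the configuration.konghq.com annotation (the field names follow the custom-resources.md document linked in the references, so treat them as illustrative):

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: sample-kongingress
  namespace: default
route:
  methods:
  - GET
  - POST
  strip_path: true         # strip the matched path before proxying upstream
proxy:
  protocol: http
  connect_timeout: 10000   # milliseconds
  retries: 5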

(1) request-transformer

Create the YAML:

# cat demo-request-transformer.yaml

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: transform-request-to-dummy
  namespace: default
  labels:
    global: "false"
disable: false
config:
  replace:
    headers:
    - 'host:llll'
  add:
    headers:
    - "x-myheader:my-header-value"
plugin: request-transformer

 

Create the plugin:

kubectl create -f demo-request-transformer.yaml

 

(2) file-log

Create the YAML:

# cat demo-file-log.yaml

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: echo-file-log
  namespace: default
  labels:
    global: "false"
disable: false
plugin: file-log
config:
  path: /tmp/req.log
  reopen: true

 

Create the plugin:

kubectl create -f demo-file-log.yaml

 

(3) Applying plugins

Plugins can be bound to routes and services. The binding is done with annotations; ingress controller versions after 0.20 use the plugins.konghq.com annotation.

1) route

To add plugins at the route level, add the annotation to the Ingress:

# cat demo-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-bar
  annotations:
    plugins.konghq.com: transform-request-to-dummy,echo-file-log
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80

 

Apply it:

kubectl apply -f demo-ingress.yaml

Check the result on the dashboard:

 

Or check via the admin API:

curl http://10.1.2.8:32344/plugins | jq

32344 is the NodePort that the Kong admin API is mapped to on the nodes; jq formats the output.
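If the full objects are too noisy, jq can also pull out just the fields you care about; for example (Kong 1.x returns the plugin list under .data):

curl -s http://10.1.2.8:32344/plugins | jq '.data[] | {name, enabled}'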

 

2) service

To add plugins at the service level, add the annotation directly to the dummy app's Service YAML:

# cat  demo-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: http-svc
  labels:
    app: http-svc
  annotations:
    plugins.konghq.com: transform-request-to-dummy,echo-file-log
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: http-svc

 

Apply it:

kubectl apply -f demo-service.yaml

Check the result on the dashboard:

 

Or use the admin API:

curl http://10.1.2.8:32344/plugins | jq

 

(4) Plugin effects

1) request-transformer

# curl  http://10.1.2.8:32380 -H Host:foo.bar

Hostname: http-svc-6f459d7-7qb2n

Pod Information:
    node name:      ku13-2
    pod name:       http-svc-6f459d7-7qb2n
    pod namespace:  default
    pod IP: 192.244.32.37

Server values:
    server_version=nginx: 1.13.3 - lua: 10008

Request Information:
    client_address=192.244.6.216
    method=GET
    real path=/
    query=
    request_version=1.1
    request_uri=http://llll:8080/

Request Headers:
    accept=*/*
    connection=keep-alive
    host=llll
    user-agent=curl/7.47.0
    x-forwarded-for=10.1.2.8
    x-forwarded-host=foo.bar
    x-forwarded-port=8000
    x-forwarded-proto=http
    x-myheader=my-header-value
    x-real-ip=10.1.2.8

Request Body:
    -no body in request-

You can see that the plugin settings above have taken effect: host has been replaced with llll, and the x-myheader header has been added.

 

2) file-log

You need to exec into the Kong pod to check:

kubectl exec -it kong-ingress-kong-5c9lo74-gsrr8 -- grep -c request /tmp/req.log
51

You can see the logs are being collected correctly.
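To see what an individual entry looks like, you can pretty-print the last line of the log (file-log writes one JSON object per request; jq here runs on the workstation):

kubectl exec kong-ingress-kong-5c9lo74-gsrr8 -- tail -n 1 /tmp/req.log | jq .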

 

(5) Usage notes

At the moment, if one of the plugins referenced in the annotation is deleted but the annotation itself is not updated, all of the plugins on that route/service stop working; upstream is working on a fix for this bug. So for now, be extra careful to avoid running into it.
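A simple precaution (my own habit, not an upstream recommendation) is to compare the annotation against the KongPlugin resources that actually exist before deleting any of them:

# plugins referenced by the route-level annotation
kubectl get ingress foo-bar -o jsonpath='{.metadata.annotations.plugins\.konghq\.com}'
# KongPlugin resources that currently exist in the namespace
kubectl get kongplugins -n default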

 

References:

https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/custom-resources.md

https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/external-service/externalnamejwt.md

https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/deployment/minikube.md

https://github.com/cuishuaigit/k8s-kong
