Normally we define a Service to manage and expose a group of Pods. If the service needs to be reachable from outside, it is enough to declare the corresponding port (NodePort mode). But once many Service objects are exposed this way, a lot of ports have to be configured and maintenance quickly becomes complicated. Kubernetes therefore also provides the Ingress mechanism: Nginx, for example, binds a single fixed port such as 80 and forwards incoming requests to the appropriate Service. That alone would still require editing the Nginx configuration every time a new service is added, which is what the Ingress Controller component solves. Put simply, the step that used to be "edit the Nginx configuration and add forwarding rules to Services" is abstracted into an Ingress object; the Ingress Controller then talks to the Kubernetes API, detects changes to the Ingress rules in the cluster in real time, and writes the resulting configuration into the Nginx Pod.
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  selector:
    app: test-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
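To check that the Service actually picks up some Pods, the endpoints can be inspected and the Service can be called from inside the cluster. A minimal sketch, assuming a Deployment whose Pods carry the label app: test-app already exists (the curl image is just an example):

$ kubectl get svc test-service
$ kubectl get endpoints test-service
# call the Service from a throwaway Pod inside the cluster
$ kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -s http://test-service/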
A Service defined this way is only reachable from inside the cluster; to make it accessible from outside, the Service has to be exposed, as illustrated below:
   internet
       |
 ------------
 [ Services ]
Kubernetes defines the following service types:
Proxy
Access the service through the Kubernetes API server proxy. This is typically used for viewing the dashboard from an internal network or for remote debugging.
{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "kubernetes", "namespace": "default", "selfLink": "/api/v1/namespaces/default/services/kubernetes", "uid": "f594405b-dc19-11e8-90ea-0050569f4a19", "resourceVersion": "6", "creationTimestamp": "2018-10-30T08:01:08Z", "labels": { "component": "apiserver", "provider": "kubernetes" } }, "spec": { "ports": [ { "name": "https", "protocol": "TCP", "port": 443, "targetPort": 6443 } ], "clusterIP": "10.96.0.1", "type": "ClusterIP", "sessionAffinity": "None" }, "status": { "loadBalancer": {} } }
NodePort Service
Setting the service type to NodePort is the most basic way to send external traffic directly to a service: a specific port is opened on every Node, and requests arriving on that host port are forwarded and load-balanced to the Pods. The drawback is that there may be many Services, and if each one binds its own host port, the nodes end up exposing a pile of ports that is hard to manage. In addition, only ports in the range 30000-32767 can be used, so this approach is mainly suitable for temporarily exposing a single service, e.g. for a demo.
{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "kubernetes-dashboard", "namespace": "kube-system", "selfLink": "/api/v1/namespaces/kube-system/services/kubernetes-dashboard", "uid": "edd78318-dc1a-11e8-90ea-0050569f4a19", "resourceVersion": "1076", "creationTimestamp": "2018-10-30T08:08:05Z", "labels": { "k8s-app": "kubernetes-dashboard" } }, "spec": { "ports": [ { "protocol": "TCP", "port": 443, "targetPort": 8443, "nodePort": 32151 } ], "selector": { "k8s-app": "kubernetes-dashboard" }, "clusterIP": "10.103.60.159", "type": "NodePort", "sessionAffinity": "None", "externalTrafficPolicy": "Cluster" }, "status": { "loadBalancer": {} } }
LoadBalancer Service
This is generally used together with a public cloud. All traffic arriving on the specified port is forwarded to the service. Exposing a service with a LoadBalancer Service actually asks the cloud platform to create a load balancer in front of it, which may incur additional cost.
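Unlike the two types above there is no example here, so the following is a minimal sketch of a LoadBalancer Service; the name is illustrative, the selector assumes the same Pods as the earlier test-service, and an external IP is only provisioned when the cluster runs on a supported cloud platform:

kind: Service
apiVersion: v1
metadata:
  name: test-loadbalancer    # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: test-app            # assumes the Pods from the earlier example
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080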
The Nginx Ingress Controller discussed below consists of two parts: Nginx and the Ingress Controller.
Ingress
Unlike the types above, an Ingress is not really a service type. It sits in front of multiple services and routes requests for different domains to different services in the cluster. Put simply, an Ingress is the entry point for traffic coming into the Kubernetes cluster from outside, forwarding users' URL requests to different Services. The Ingress plays the role of a reverse-proxy load balancer such as Nginx or Apache, and it also holds the rule definitions, i.e. the URL routing information; keeping that routing information up to date is the job of the Ingress controller.
Ingress controller
The Ingress Controller keeps talking to the Kubernetes API and senses changes to backend Services and Pods in real time, e.g. Pods being added or removed, Services being created or deleted. When it receives these changes it combines them with the Ingress rules described below to generate configuration, then updates the reverse-proxy load balancer and reloads its configuration, thereby implementing service discovery. For the Ingress controller, creating an Ingress is roughly equivalent to adding a server block to nginx.conf and running nginx -s reload to apply it.
For example, suppose we want the load balancer to route different sub-domains to different services:
foo.bar.com --|                 |-> foo.bar.com s1:80
              | 178.91.123.132  |
bar.foo.com --|                 |-> bar.foo.com s2:80
Define the Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
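Once an Ingress controller is serving these rules, the host-based routing can be checked with curl by setting the Host header explicitly; the address 178.91.123.132 is the load balancer IP from the diagram above, and the services s1/s2 are assumed to exist:

# requests are routed by Host header to s1 and s2 respectively
$ curl -H "Host: foo.bar.com" http://178.91.123.132/
$ curl -H "Host: bar.foo.com" http://178.91.123.132/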
An Ingress does not create a load balancer by itself; an Ingress controller has to be running to configure the load balancer according to the Ingress definitions. For Ingress to work, the cluster must therefore run an Ingress controller. The following controllers support Ingress:
Kubernetes currently supports and maintains GCE and nginx controllers.
F5 Networks provides support and maintenance for the F5 BIG-IP Controller for Kubernetes.
Kong offers community or commercial support and maintenance for the Kong Ingress Controller for Kubernetes
Traefik is a fully featured ingress controller (Let’s Encrypt, secrets, http2, websocket…), and it also comes with commercial support by Containous
NGINX, Inc. offers support and maintenance for the NGINX Ingress Controller for Kubernetes
HAProxy based ingress controller jcmoraisjr/haproxy-ingress which is mentioned on this blog post HAProxy Ingress Controller for Kubernetes
Istio based ingress controller Control Ingress Traffic
The NGINX Ingress Controller for Kubernetes listed above is the variant supported by the third-party NGINX, Inc., i.e. it is not the official controller (although the official one also builds on the open-source version of Nginx).
For the differences between nginxinc/kubernetes-ingress and kubernetes/ingress-nginx, see https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/nginx-ingress-controllers.md . Unless you are using the commercial NGINX Plus, the official controller still has the richer feature set. The software-based Traefik, HAProxy and Kong are also good choices, and if budget allows, commercial hardware load balancers such as F5 can be considered as well. Personally I am most optimistic about kubernetes/ingress-nginx and Traefik. Here the official kubernetes/ingress-nginx is used as the ingress controller; other third-party components can be tried later.
Install nginx-ingress-controller (in host network mode, set the replica count according to the number of Nodes; this depends on whether it is deployed as a Deployment or a DaemonSet):
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

$ kubectl -n ingress-nginx get pod -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP          NODE                NOMINATED NODE
nginx-ingress-controller-5bdbdc5657-bcn2f   1/1     Running   0          74m   10.38.0.1   kubernetes-node-2   <none>
nginx-ingress-controller-5bdbdc5657-mmtph   1/1     Running   0          79m   10.40.0.1   kubernetes-node-1   <none>

$ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-6457d975c8-6twqf   1/1     Running   0          78m
ingress-nginx   nginx-ingress-controller-5bdbdc5657-mmtph   1/1     Running   0          83m

$ kubectl describe pod nginx-ingress-controller-6457d975c8-6twqf -n ingress-nginx
Name:               nginx-ingress-controller-6457d975c8-6twqf
Namespace:          ingress-nginx
Priority:           0
PriorityClassName:  <none>
Node:               kubernetes-node-2/172.23.216.50
Start Time:         Wed, 31 Oct 2018 20:04:44 +0800
Labels:             app.kubernetes.io/name=ingress-nginx
                    app.kubernetes.io/part-of=ingress-nginx
                    pod-template-hash=6457d975c8
Annotations:        prometheus.io/port: 10254
                    prometheus.io/scrape: true
Status:             Running
IP:                 172.23.216.50
Controlled By:      ReplicaSet/nginx-ingress-controller-6457d975c8
Containers:
  nginx-ingress-controller:
    Container ID:  docker://f4d5b69cf579752799d6d7e92c547ed9a5a0ba9154b3683c4956079ea9e77304
    Image:         quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
    Image ID:      docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:f6180c5397d2361c317aff1314dc192ab0f9f515346a5319422cdc264f05d2d9
    Ports:         80/TCP, 443/TCP
    Host Ports:    80/TCP, 443/TCP
    Args:
      /nginx-ingress-controller
      --configmap=$(POD_NAMESPACE)/nginx-configuration
      --publish-service=$(POD_NAMESPACE)/ingress-nginx
      --annotations-prefix=nginx.ingress.kubernetes.io
    State:          Running
      Started:      Wed, 31 Oct 2018 20:04:45 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-controller-6457d975c8-6twqf (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-serviceaccount-token-rhpsb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nginx-ingress-serviceaccount-token-rhpsb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nginx-ingress-serviceaccount-token-rhpsb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From                        Message
  ----     ------            ----               ----                        -------
  Warning  FailedScheduling  49m (x3 over 49m)  default-scheduler           0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) didn't have free ports for the requested pod ports.
  Normal   Scheduled         49m                default-scheduler           Successfully assigned ingress-nginx/nginx-ingress-controller-6457d975c8-6twqf to kubernetes-node-2
  Normal   Pulled            48m                kubelet, kubernetes-node-2  Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0" already present on machine
  Normal   Created           48m                kubelet, kubernetes-node-2  Created container
  Normal   Started           48m                kubelet, kubernetes-node-2  Started container
Check the installed version:
POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.20.0
  Build:      git-e8d8103
  Repository: https://github.com/kubernetes/ingress-nginx.git
-------------------------------------------------------------------------------
Depending on the network environment and the scenario, a different exposure strategy is needed; see the official documentation for details. The most common options are listed below:
Cloud environments mode
When running on a public cloud, the cloud provider's load balancer can simply be placed in front of the Nodes; this incurs additional cost.
NodePort Service mode (for temporary testing; generally not recommended)
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      # nodePort: 30000
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
Apply it:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml

$ kubectl -n ingress-nginx get svc
NAME            TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.98.42.64   <none>        80:31460/TCP,443:31200/TCP   6m50s
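In NodePort mode the controller is reached through the allocated node ports (80:31460 and 443:31200 in the output above); a quick check, where <node-ip> stands for any node address:

# without a matching Ingress rule the default backend answers with 404
$ curl -D- http://<node-ip>:31460/
$ curl -k -D- https://<node-ip>:31200/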
host network mode
Set hostNetwork: true in the Deployment:
vi nginx-ingress-controller.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
---

$ kubectl apply -f nginx-ingress-controller.yaml
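After applying the hostNetwork Deployment, ports 80 and 443 should be bound directly on every node that runs a controller Pod; a quick sanity check on such a node (netstat comes from net-tools, see the commands at the end of this article):

# run on a node that hosts an ingress-nginx Pod; nginx should be listening on 80 and 443
$ netstat -ntlp | grep nginx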
Check ingress-nginx:
$ kubectl -n ingress-nginx get pod -o wide
NAME                                        READY   STATUS    RESTARTS   AGE    IP              NODE                NOMINATED NODE
nginx-ingress-controller-6457d975c8-6twqf   1/1     Running   0          3m7s   172.23.216.50   kubernetes-node-2   <none>
nginx-ingress-controller-6457d975c8-smjsv   1/1     Running   0          3m7s   172.23.216.49   kubernetes-node-1   <none>
Test access:
$ curl -D- 172.23.216.50
HTTP/1.1 404 Not Found
Server: nginx/1.15.5
Date: Wed, 31 Oct 2018 12:08:40 GMT
Content-Type: text/html
Content-Length: 153
Connection: keep-alive

<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.15.5</center>
</body>
</html>
Create the kubernetes-dashboard-ingress (the dashboard backend speaks HTTPS by default, so the annotations must be set):
vi kubernetes-dashboard-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard-ingress
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
#  tls:
#    - secretName: k8s-dashboard-secret
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
      - path: /test
        backend:
          serviceName: test-nginx
          servicePort: 80
Apply it:

$ kubectl apply -f kubernetes-dashboard-ingress.yaml

$ kubectl get ingress -o wide --all-namespaces
NAMESPACE     NAME                           HOSTS   ADDRESS   PORTS   AGE
kube-system   kubernetes-dashboard-ingress   *                 80      3h53m
Finally, access https://172.23.216.49/ or https://172.23.216.50/ .
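The same curl test as before should now be proxied to the dashboard instead of hitting the 404 default backend; -k is needed because of the self-signed certificate:

$ curl -k -D- https://172.23.216.49/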
Note: for other features, refer to the official documentation.
Other commands:

# delete the Ingress Controller namespace
$ kubectl delete namespace ingress-nginx

# install net-tools (for netstat)
$ yum install net-tools

# check which ports are open
$ netstat -ntlp
REFER:
https://kubernetes.io/docs/concepts/services-networking/ingress/
https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
https://www.nginx.com/products/nginx/kubernetes-ingress-controller
https://github.com/kubernetes/ingress-nginx
https://github.com/containous/traefik
https://github.com/nginxinc/kubernetes-ingress/