Kubernetes includes Kube-DNS, the DNS server used for service discovery. That server uses libraries from SkyDNS to serve DNS requests for Kubernetes pods and services. The author of SkyDNS2, Miek Gieben, has created a new DNS server, CoreDNS, built with a more modular and extensible framework. Infoblox has been working with Miek to make this DNS server a replacement for Kube-DNS.
CoreDNS leverages the server framework developed as part of the Caddy web server. That framework has a very flexible, extensible model for passing requests through various middleware components. These middleware components perform different operations on a request, such as logging, redirecting, modifying, or maintaining it. Although it began life as a web server, Caddy is not specific to the HTTP protocol, which makes it an ideal framework on which to build CoreDNS.
Adding Kubernetes support to this flexible model amounts to creating a Kubernetes middleware. That middleware uses the Kubernetes API to satisfy DNS requests for specific Kubernetes pods or services. And because Kube-DNS runs as just another service in Kubernetes, there is no tight binding between the kubelet and Kube-DNS: you only pass the DNS service's IP address and the domain name to the kubelet, and Kubernetes does not care who actually serves requests at that IP.
The 1.0.0 release mainly follows the current behavior of Kube-DNS. CoreDNS release 005 and later implement the full specification and more features.
Pod A-record support is not needed in all clusters, and it is disabled by default. In addition, CoreDNS's support for this use case goes beyond the standard behavior found in Kube-DNS.
In Kube-DNS, these records do not reflect the state of the cluster. For example, any query for w-x-y-z.namespace.pod.cluster.local will return an A record with the IP w.x.y.z, even if that IP does not belong to a pod in the specified namespace, or even to the cluster address space at all. The original idea was to enable the use of wildcard SSL certificates for domains like *.namespace.pod.cluster.local.
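To make this concrete, here is a sketch of that behavior as a dig query (using the 10.3.0.10 DNS service IP from later in this walkthrough; the answer is synthesized purely from the name itself):

dig @10.3.0.10 1-2-3-4.default.pod.cluster.local +short
# expected under Kube-DNS semantics, even if no pod in "default" has this IP:
1.2.3.4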
CoreDNS integrates an option to provide pod verification, validating that the returned IP address w.x.y.z actually belongs to a pod in the specified namespace. This prevents spoofing of DNS names within a namespace. However, it does significantly increase the memory footprint of the CoreDNS instance, because it now needs to watch all pods, not just service endpoints.
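In Corefile terms, this verification is enabled through the pods sub-directive of the kubernetes middleware. A minimal sketch, assuming a CoreDNS build that supports this option (pods insecure keeps the Kube-DNS-compatible behavior instead):

.:53 {
    errors
    log stdout
    health
    kubernetes cluster.local 10.3.0.0/24 {
        pods verified    # answer pod queries only if the IP really belongs to a pod in that namespace
    }
    proxy . /etc/resolv.conf
    cache 30
}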
https://github.com/coredns/deployment/tree/master/kubernetes
This repository contains a few key files:
deploy.sh is a convenience script that generates the manifests needed to run CoreDNS on a cluster that is currently running the standard kube-dns. Using the coredns.yaml.sed file as a template, it creates a ConfigMap and a CoreDNS deployment, and then updates the kube-dns service selector to point at the CoreDNS deployment. Because the existing service object is reused, service requests are never interrupted.
The script does not remove the kube-dns deployment or replication controller - you must do that manually:
kubectl delete --namespace=kube-system deployment kube-dns
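If your cluster runs kube-dns as a replication controller instead of a deployment, delete that object; its exact name is cluster-specific (often versioned, e.g. something like kube-dns-v20, used here purely as an illustration):

kubectl get rc --namespace=kube-system
kubectl delete --namespace=kube-system rc kube-dns-v20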
To use it, just put these files in the same directory and run the deploy.sh script, passing it your service CIDR (10.3.0.0/24 here). This generates a ConfigMap with the necessary Corefile. It also looks up the cluster IP of the existing kube-dns service.
(Note: the original script only works when the cluster is already running kube-dns; if it is not, the script needs a small modification, as shown below.)
#!/bin/bash

# Deploys CoreDNS to a cluster currently running Kube-DNS.

SERVICE_CIDR=$1
CLUSTER_DOMAIN=${2:-cluster.local}
YAML_TEMPLATE=${3:-`pwd`/coredns.yaml.sed}
YAML=${4:-`pwd`/coredns.yaml}

if [[ -z $SERVICE_CIDR ]]; then
    echo "Usage: $0 SERVICE-CIDR [ CLUSTER-DOMAIN ] [ YAML-TEMPLATE ] [ YAML ]"
    exit 1
fi

#CLUSTER_DNS_IP=$(kubectl get service --namespace kube-system kube-dns -o jsonpath="{.spec.clusterIP}")
CLUSTER_DNS_IP=10.3.0.10

sed -e s/CLUSTER_DNS_IP/$CLUSTER_DNS_IP/g -e s/CLUSTER_DOMAIN/$CLUSTER_DOMAIN/g -e s?SERVICE_CIDR?$SERVICE_CIDR?g $YAML_TEMPLATE
By default the script obtains CLUSTER_DNS_IP automatically from the kube-dns service's cluster IP, but since no kube-dns is deployed here, we have to hard-code a cluster IP instead.
The contents of coredns.yaml.sed are as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log stdout
        health
        kubernetes CLUSTER_DOMAIN SERVICE_CIDR
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: coredns
        image: coredns/coredns:latest
        imagePullPolicy: Always
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: CLUSTER_DNS_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
Run:
./deploy.sh 10.3.0.0/24 cluster.local
Running the script prints the rendered manifests so you can preview them:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log stdout
        health
        kubernetes cluster.local 10.3.0.0/24
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: coredns
        image: coredns/coredns:latest
        imagePullPolicy: Always
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 10.3.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
Look closely at the Corefile section above. It runs CoreDNS on port 53 and serves the cluster.local domain for Kubernetes:
.:53 {
    errors
    log stdout
    health
    kubernetes cluster.local 10.3.0.0/24
    proxy . /etc/resolv.conf
    cache 30
}
1) errors: enables error logging; errors encountered while processing queries are printed to the log.
2) log stdout: configures the logging middleware to write query logs to STDOUT.
3) health: exposes an HTTP health-check endpoint on the given port (8080 by default) that returns "OK" while the instance is healthy (see the example after this list).
4) kubernetes cluster.local 10.3.0.0/24: cluster.local is the domain CoreDNS serves for Kubernetes, and 10.3.0.0/24 tells the Kubernetes middleware that it is also responsible for PTR requests in the corresponding reverse zone, 0.3.10.in-addr.arpa. In other words, this enables reverse DNS resolution for services. (A DNS server typically hosts two kinds of zones: forward lookup zones and reverse lookup zones. A forward lookup zone is ordinary name resolution; a reverse lookup zone maps an IP address back to a name by querying the PTR record for that address, so a name can only be recovered if the IP has a PTR record. An A record resolves a name to an address, while a PTR record resolves an address to a fully qualified domain name.)
5) proxy: configures one or more upstream nameservers; here it forwards anything not handled above to the nameservers defined in /etc/resolv.conf.
6) cache 30: enables caching of both positive responses (the query returned a result) and negative responses (the query returned "no such domain"), with separate cache sizes and TTLs, here capped at 30 seconds.
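As a quick illustration of the health middleware from 3) above, once a CoreDNS pod is running you can probe the endpoint directly. A sketch, with the pod IP left as a placeholder:

curl http://<coredns-pod-ip>:8080/health
# expected response while the instance is healthy:
OK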
[root@dev-master CoreDNS]# ./deploy.sh 10.3.0.0/24 cluster.local | kubectl apply -f -
configmap "coredns" created
deployment "coredns" created
service "kube-dns" created
[root@dev-master CoreDNS]# kubectl get pods --namespace=kube-system
NAME                                        READY     STATUS    RESTARTS   AGE
coredns-512496995-c1x9g                     1/1       Running   0          5m
default-http-backend-905355492-nrt1z        1/1       Running   0          23h
heapster-2450140206-dw408                   1/1       Running   2          23h
kube-apiserver-172.16.71.200                1/1       Running   3          7d
kube-controller-manager-172.16.71.200       1/1       Running   10         8d
kube-proxy-172.16.71.200                    1/1       Running   49         37d
kube-scheduler-172.16.71.200                1/1       Running   300        14d
kubernetes-dashboard-654048359-p73r9        1/1       Running   0          23h
monitoring-grafana-438219031-32btw          1/1       Running   1          5d
monitoring-influxdb-3584808869-s6sh1        1/1       Running   2          23h
nginx-ingress-controller-1644785683-9fxsp   1/1       Running   4          26d
nginx-ingress-controller-1644785683-mw7nx   1/1       Running   2          23h
tiller-deploy-411327518-q9zn3               1/1       Running   2          23h
[root@dev-master CoreDNS]# kubectl logs -f coredns-512496995-c1x9g --namespace=kube-system
.:53
2017/09/13 02:36:31 [INFO] CoreDNS-011
2017/09/13 02:36:31 [INFO] linux/amd64, go1.9, 1b60688d
CoreDNS-011
linux/amd64, go1.9, 1b60688d
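Before repointing the kubelets at the new DNS service, we can query CoreDNS directly to confirm it answers. A sketch using dig against the 10.3.0.10 service IP configured above (the answers shown are what we would expect, not captured output):

dig @10.3.0.10 kube-dns.kube-system.svc.cluster.local +short
# expected: 10.3.0.10

dig @10.3.0.10 -x 10.3.0.10 +short
# expected (via the reverse zone enabled by 10.3.0.0/24):
# kube-dns.kube-system.svc.cluster.local.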
3. On the master node and on every worker node, edit /etc/systemd/system/kube-kubelet.service so that the --cluster-dns and --cluster-domain flags below match the values used in the Corefile above.
[Unit]
Description=Kubernetes Kubelet Master
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
Environment=KUBELET_IMAGE_TAG=v1.6.2
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStart=/opt/bin/kubelet \
  --api-servers=http://127.0.0.1:8080 \
  --register-schedulable=false \
  --cni-conf-dir=/etc/kubernetes/cni/net.d \
  --container-runtime=docker \
  --allow-privileged=true \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --hostname-override=172.16.71.200 \
  --pod-infra-container-image=172.16.80.94/mir/pause-amd64:3.0 \
  --v=3 \
  --cluster-dns=10.3.0.10 \
  --cluster-domain=cluster.local. \
  --resolv-conf=/etc/resolv.conf
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
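After editing the unit on each node, reload systemd and restart the kubelet so the new --cluster-dns and --cluster-domain flags take effect:

systemctl daemon-reload
systemctl restart kube-kubelet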
Now let's create an nginx pod and service to test whether CoreDNS works.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: 172.16.71.199/common/nginx:1.8.1
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx
[root@dev-master CoreDNS]# kubectl create -f nginx.yaml
pod "nginx" created
service "nginx" created
[root@dev-master CoreDNS]# kubectl get pod
NAME                              READY     STATUS    RESTARTS   AGE
load-generator-1962471460-6v7lb   1/1       Running   3          22d
nginx                             1/1       Running   0          1m
php-apache-1106203038-w51jw       1/1       Running   3          22d
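It is also worth checking the cluster IP allocated to the nginx service, since that is the address its DNS name should resolve to (illustrative output; the actual IP will differ):

[root@dev-master CoreDNS]# kubectl get svc nginx
NAME      CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
nginx     10.3.0.126   <none>        80/TCP    1m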
Test with curl: first exec into another pod in the cluster (this pod was created after /etc/systemd/system/kube-kubelet.service had been updated on the master and worker nodes), and access the nginx service we just created from inside it.
[root@mir2-handler-deployment-3595565332-bqk3t /]# curl nginx.default.svc.cluster.local
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
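Besides curl, the DNS lookup itself can be checked from inside the pod (illustrative output, assuming the hypothetical 10.3.0.126 service IP from above; the exact nslookup output format depends on the resolver tools in the image):

[root@mir2-handler-deployment-3595565332-bqk3t /]# nslookup nginx.default.svc.cluster.local
Server:    10.3.0.10
Address:   10.3.0.10#53

Name:      nginx.default.svc.cluster.local
Address:   10.3.0.126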
This confirms that services can be reached by name in the my-svc.my-namespace.svc.cluster.local form.