This article mainly covers: what IPVS is, how kube-proxy's IPVS mode relates to the iptables mode, and how to enable, verify, and debug IPVS mode.
IPVS (IP Virtual Server) implements transport-layer load balancing, commonly called Layer 4 LAN switching, as part of the Linux kernel.
IPVS runs on a host and acts as a load balancer in front of a cluster of real servers. It can direct requests for TCP- and UDP-based services to the real servers, and it makes the real servers' services appear as a virtual service on a single IP address.
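As a standalone illustration (outside Kubernetes), a virtual service can be built by hand with the ipvsadm tool. This is a minimal sketch; all addresses below are made-up examples:

# expose one virtual IP and balance it across two real servers in round-robin
ipvsadm -A -t 10.0.0.100:80 -s rr                  # add a TCP virtual service with the rr scheduler
ipvsadm -a -t 10.0.0.100:80 -r 192.168.1.2:80 -m   # add a real server in NAT (masquerade) mode
ipvsadm -a -t 10.0.0.100:80 -r 192.168.1.3:80 -m
ipvsadm -ln                                        # list the resulting virtual server table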
IPVS mode was introduced in Kubernetes v1.8 and went beta in v1.9. IPTABLES mode was added in v1.1 and has been the default operating mode since v1.2. Both IPVS and IPTABLES are based on netfilter. The main differences between IPVS mode and IPTABLES mode are:
- IPVS scales better in large clusters: it stores forwarding rules in kernel hash tables, whereas iptables rules are matched sequentially, so lookup cost grows with the number of Services.
- IPVS supports richer load balancing algorithms (round-robin, least connections, destination hashing, source hashing, and others), while the iptables proxier can only pick backends randomly.
The IPVS proxier still uses iptables for packet filtering, SNAT, and support for NodePort-type Services. Specifically, the ipvs proxier falls back to iptables in the following four scenarios.
If kube-proxy is started with --masquerade-all=true, the ipvs proxier masquerades all traffic destined for Service cluster IPs, behaving the same as the iptables proxier. Suppose there is a Service with cluster IP 10.244.5.1 and port 8080; the iptables rules installed by the ipvs proxier would then look like the following.
# iptables -t nat -nL

Chain PREROUTING (policy ACCEPT)
target           prot opt source       destination
KUBE-SERVICES    all  --  0.0.0.0/0    0.0.0.0/0    /* kubernetes service portals */

Chain OUTPUT (policy ACCEPT)
target           prot opt source       destination
KUBE-SERVICES    all  --  0.0.0.0/0    0.0.0.0/0    /* kubernetes service portals */

Chain POSTROUTING (policy ACCEPT)
target            prot opt source      destination
KUBE-POSTROUTING  all  --  0.0.0.0/0   0.0.0.0/0    /* kubernetes postrouting rules */

Chain KUBE-POSTROUTING (1 references)
target      prot opt source      destination
MASQUERADE  all  --  0.0.0.0/0   0.0.0.0/0    /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000

Chain KUBE-MARK-DROP (0 references)
target      prot opt source      destination
MARK        all  --  0.0.0.0/0   0.0.0.0/0    MARK or 0x8000

Chain KUBE-MARK-MASQ (6 references)
target      prot opt source      destination
MARK        all  --  0.0.0.0/0   0.0.0.0/0    MARK or 0x4000

Chain KUBE-SERVICES (2 references)
target          prot opt source      destination
KUBE-MARK-MASQ  tcp  --  0.0.0.0/0   10.244.5.1   /* default/foo:http cluster IP */ tcp dpt:8080
If kube-proxy is started with --cluster-cidr=<cidr>, the ipvs proxier masquerades off-cluster traffic destined for Service cluster IPs, again behaving the same as the iptables proxier. Suppose kube-proxy is given the cluster CIDR 10.244.16.0/24, and there is a Service with cluster IP 10.244.5.1 and port 8080; the iptables rules installed by the ipvs proxier would then look like the following.
# iptables -t nat -nL

Chain PREROUTING (policy ACCEPT)
target           prot opt source       destination
KUBE-SERVICES    all  --  0.0.0.0/0    0.0.0.0/0    /* kubernetes service portals */

Chain OUTPUT (policy ACCEPT)
target           prot opt source       destination
KUBE-SERVICES    all  --  0.0.0.0/0    0.0.0.0/0    /* kubernetes service portals */

Chain POSTROUTING (policy ACCEPT)
target            prot opt source      destination
KUBE-POSTROUTING  all  --  0.0.0.0/0   0.0.0.0/0    /* kubernetes postrouting rules */

Chain KUBE-POSTROUTING (1 references)
target      prot opt source      destination
MASQUERADE  all  --  0.0.0.0/0   0.0.0.0/0    /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000

Chain KUBE-MARK-DROP (0 references)
target      prot opt source      destination
MARK        all  --  0.0.0.0/0   0.0.0.0/0    MARK or 0x8000

Chain KUBE-MARK-MASQ (6 references)
target      prot opt source      destination
MARK        all  --  0.0.0.0/0   0.0.0.0/0    MARK or 0x4000

Chain KUBE-SERVICES (2 references)
target          prot opt source           destination
KUBE-MARK-MASQ  tcp  -- !10.244.16.0/24   10.244.5.1   /* default/foo:http cluster IP */ tcp dpt:8080
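For reference, the two scenarios above correspond to kube-proxy flags like the following; this is only a sketch, and the other flags a real deployment needs are elided:

# scenario 1: masquerade all cluster-IP traffic
kube-proxy --proxy-mode=ipvs --masquerade-all=true ...
# scenario 2: masquerade only traffic arriving from outside the cluster CIDR
kube-proxy --proxy-mode=ipvs --cluster-cidr=10.244.16.0/24 ...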
When a Service's LoadBalancerStatus.ingress.IP is not empty and the Service's LoadBalancerSourceRanges is specified, the ipvs proxier installs iptables rules like the ones shown below.
Suppose the Service's LoadBalancerStatus.ingress.IP is 10.96.1.2 and its LoadBalancerSourceRanges is 10.120.2.0/24.
# iptables -t nat -nL

Chain PREROUTING (policy ACCEPT)
target           prot opt source       destination
KUBE-SERVICES    all  --  0.0.0.0/0    0.0.0.0/0    /* kubernetes service portals */

Chain OUTPUT (policy ACCEPT)
target           prot opt source       destination
KUBE-SERVICES    all  --  0.0.0.0/0    0.0.0.0/0    /* kubernetes service portals */

Chain POSTROUTING (policy ACCEPT)
target            prot opt source      destination
KUBE-POSTROUTING  all  --  0.0.0.0/0   0.0.0.0/0    /* kubernetes postrouting rules */

Chain KUBE-POSTROUTING (1 references)
target      prot opt source      destination
MASQUERADE  all  --  0.0.0.0/0   0.0.0.0/0    /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000

Chain KUBE-MARK-DROP (0 references)
target      prot opt source      destination
MARK        all  --  0.0.0.0/0   0.0.0.0/0    MARK or 0x8000

Chain KUBE-MARK-MASQ (6 references)
target      prot opt source      destination
MARK        all  --  0.0.0.0/0   0.0.0.0/0    MARK or 0x4000

Chain KUBE-SERVICES (2 references)
target      prot opt source          destination
ACCEPT      tcp  --  10.120.2.0/24   10.96.1.2    /* default/foo:http loadbalancer IP */ tcp dpt:8080
DROP        tcp  --  0.0.0.0/0       10.96.1.2    /* default/foo:http loadbalancer IP */ tcp dpt:8080
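For context, a Service that produces the ACCEPT/DROP pair above would be declared roughly like this. This is a sketch: the name foo and port 8080 are taken from the rule comments above, while the ingress IP is assigned by the cloud provider rather than declared here:

kind: Service
apiVersion: v1
metadata:
  name: foo
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
  loadBalancerSourceRanges:
  - 10.120.2.0/24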
To support NodePort-type Services, the ipvs proxier reuses the existing implementation from the iptables proxier. For example:
# kubectl describe svc nginx-service
Name:             nginx-service
...
Type:             NodePort
IP:               10.101.28.148
Port:             http 3080/TCP
NodePort:         http 31604/TCP
Endpoints:        172.17.0.2:80
Session Affinity: None

# iptables -t nat -nL

Chain PREROUTING (policy ACCEPT)
target           prot opt source       destination
KUBE-SERVICES    all  --  0.0.0.0/0    0.0.0.0/0    /* kubernetes service portals */

Chain OUTPUT (policy ACCEPT)
target           prot opt source       destination
KUBE-SERVICES    all  --  0.0.0.0/0    0.0.0.0/0    /* kubernetes service portals */

Chain KUBE-SERVICES (2 references)
target                     prot opt source           destination
KUBE-MARK-MASQ             tcp  -- !172.16.0.0/16    10.101.28.148   /* default/nginx-service:http cluster IP */ tcp dpt:3080
KUBE-SVC-6IM33IEVEEV7U3GP  tcp  --  0.0.0.0/0        10.101.28.148   /* default/nginx-service:http cluster IP */ tcp dpt:3080
KUBE-NODEPORTS             all  --  0.0.0.0/0        0.0.0.0/0       /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-NODEPORTS (1 references)
target                     prot opt source       destination
KUBE-MARK-MASQ             tcp  --  0.0.0.0/0    0.0.0.0/0    /* default/nginx-service:http */ tcp dpt:31604
KUBE-SVC-6IM33IEVEEV7U3GP  tcp  --  0.0.0.0/0    0.0.0.0/0    /* default/nginx-service:http */ tcp dpt:31604

Chain KUBE-SVC-6IM33IEVEEV7U3GP (2 references)
target                     prot opt source       destination
KUBE-SEP-Q3UCPZ54E6Q2R4UT  all  --  0.0.0.0/0    0.0.0.0/0    /* default/nginx-service:http */

Chain KUBE-SEP-Q3UCPZ54E6Q2R4UT (1 references)
target           prot opt source       destination
KUBE-MARK-MASQ   all  --  172.17.0.2   0.0.0.0/0    /* default/nginx-service:http */
DNAT             tcp  --  0.0.0.0/0    0.0.0.0/0    /* default/nginx-service:http */ tcp to:172.17.0.2:80
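A NodePort Service like the nginx-service above can be created, for instance, with kubectl expose. This sketch assumes a deployment named nginx already exists; the node port itself (31604 above) is picked by Kubernetes from the node-port range unless specified:

kubectl expose deployment nginx --name=nginx-service --type=NodePort --port=3080 --target-port=80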
Prerequisite: make sure the kernel modules required by IPVS are available:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
grep -e ipvs -e nf_conntrack_ipv4 /lib/modules/$(uname -r)/modules.builtin
If they are built into the kernel, you will get results like the following.
kernel/net/ipv4/netfilter/nf_conntrack_ipv4.ko
kernel/net/netfilter/ipvs/ip_vs.ko
kernel/net/netfilter/ipvs/ip_vs_rr.ko
kernel/net/netfilter/ipvs/ip_vs_wrr.ko
kernel/net/netfilter/ipvs/ip_vs_lc.ko
kernel/net/netfilter/ipvs/ip_vs_wlc.ko
kernel/net/netfilter/ipvs/ip_vs_fo.ko
kernel/net/netfilter/ipvs/ip_vs_ovf.ko
kernel/net/netfilter/ipvs/ip_vs_lblc.ko
kernel/net/netfilter/ipvs/ip_vs_lblcr.ko
kernel/net/netfilter/ipvs/ip_vs_dh.ko
kernel/net/netfilter/ipvs/ip_vs_sh.ko
kernel/net/netfilter/ipvs/ip_vs_sed.ko
kernel/net/netfilter/ipvs/ip_vs_nq.ko
kernel/net/netfilter/ipvs/ip_vs_ftp.ko
# load module <module_name>
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

# to check loaded modules, use
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# or
cut -f1 -d " " /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4
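To make these modules load again at boot, one common approach on systemd-based hosts is a modules-load.d drop-in; the file name ipvs.conf below is an arbitrary choice:

# /etc/modules-load.d/*.conf is read by systemd-modules-load at boot;
# one module name per line
cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF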
Before using IPVS mode, packages such as ipset should also be installed on the nodes, for example as shown below.
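A sketch of the install step; the package manager depends on the distribution, and ipvsadm is included here only because it is handy for the debugging steps later in this article:

# Debian/Ubuntu
apt-get install -y ipset ipvsadm
# RHEL/CentOS
yum install -y ipset ipvsadm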
If these requirements are not met, kube-proxy falls back to IPTABLES mode.
By default, kube-proxy runs in iptables mode in clusters deployed with kubeadm.
If you use kubeadm with a configuration file, you can request ipvs mode under the kubeProxy field by enabling the SupportIPVSProxyMode feature gate and setting mode: ipvs.
kind: MasterConfiguration
apiVersion: kubeadm.k8s.io/v1alpha1
...
kubeProxy:
  config:
    featureGates: SupportIPVSProxyMode=true
    mode: ipvs
...
If you are using Kubernetes v1.8, you can instead add the flag --feature-gates=SupportIPVSProxyMode=true to the kubeadm init command (deprecated since v1.9):
kubeadm init --feature-gates=SupportIPVSProxyMode=true
If ipvs mode is enabled successfully, you should see IPVS proxy rules (via ipvsadm), for example:
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.1:443 rr persistent 10800
  -> 192.168.0.1:6443             Masq    1      1          0
or, while a local cluster is running, a log line like the following in the kube-proxy log (for a local cluster, e.g. /tmp/kube-proxy.log):
Using ipvs Proxier.
If, however, there are no IPVS proxy rules, or logs like the following appear instead, kube-proxy was unable to use ipvs mode:
Can't use ipvs proxier, trying iptables proxier
Using iptables Proxier.
You can use the ipvsadm tool to check whether kube-proxy is maintaining the IPVS rules correctly. For example, suppose we have the following Services in the cluster:
# kubectl get svc --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP         1d
kube-system   kube-dns     ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP   1d
We should then see rules like the following:
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.1:443 rr persistent 10800
  -> 192.168.0.1:6443             Masq    1      1          0
TCP  10.0.0.10:53 rr
  -> 172.17.0.2:53                Masq    1      0          0
UDP  10.0.0.10:53 rr
  -> 172.17.0.2:53                Masq    1      0          0
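To go one step further, ipvsadm can also show traffic counters and the current connection table, which helps confirm that traffic is actually flowing through the IPVS rules:

# per-service and per-destination packet/byte counters
ipvsadm -ln --stats
# entries in the IPVS connection table
ipvsadm -lnc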
For Kubernetes v1.10 and later, the feature gate SupportIPVSProxyMode is set to true by default. For Kubernetes versions before v1.10, however, you need to enable it explicitly with --feature-gates=SupportIPVSProxyMode=true.
If ipvs mode does not take effect, check the following:
- Check that the kube-proxy mode has been set to ipvs (a quick check is shown below).
- Check that the kernel modules and packages required by IPVS are present (see the prerequisites above).
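For the first check, kube-proxy exposes its active mode over its metrics port (10249 by default); this serves as a quick sanity check:

# returns "ipvs" when the IPVS proxier is in use, "iptables" otherwise
curl localhost:10249/proxyMode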