Please credit the source when reposting. This article was published on luozhiyun's blog: https://www.luozhiyun.com
The source code version is 1.19.
This article is about Services. I won't restate the basic usage and core concepts, since the official documentation already covers them thoroughly; if you are not familiar with them yet, see https://kubernetes.io/docs/concepts/services-networking/service/.
In a Kubernetes cluster, every Node runs a kube-proxy process. kube-proxy is responsible for implementing a form of VIP (virtual IP) for Services.
From the official documentation:
Starting with Kubernetes v1.0 you could use the userspace proxy mode. Kubernetes v1.1 added the iptables proxy mode, and in Kubernetes v1.2 the iptables mode became kube-proxy's default. Kubernetes v1.8 added the ipvs proxy mode.
The source code we are reading here is 1.19, and the cluster used in the examples below runs kube-proxy in ipvs mode.
LVS was developed by Dr. Zhang Wensong and contributed to the community; it consists mainly of ipvs and ipvsadm.
ipvs is a layer-4 load balancer that runs in kernel space. It is built on top of netfilter, which processes and forwards packets through hooks placed in the various chains. ipvsadm provides a simple CLI for configuring ipvs. Because ipvs works in kernel space and only handles layer-4 protocols, it can only forward traffic via routing or NAT; you can think of ipvs as a special router/gateway that picks the next hop according to a configurable algorithm.
IPVS is configured with the ipvsadm command, e.g. -L to list, -A to add, -D to delete.
The following command creates a service instance 172.17.0.1:32016; -t specifies a TCP listener and -s selects the scheduling algorithm, here round robin (rr). ipvs supports ten scheduling algorithms, including round robin (rr), weighted round robin (wrr), least connections (lc), and source/destination address hashing (sh, dh).
ipvsadm -A -t 172.17.0.1:32016 -s rr
When adding real servers, -r specifies the server address, -w sets the weight, and -m selects the forwarding mode: -m means masquerading (NAT mode), while -g means gatewaying (direct-routing mode), as shown below:
ipvsadm -a -t 172.17.0.1:32016 -r 10.244.1.2:8080 -m -w 1
ipvsadm -a -t 172.17.0.1:32016 -r 10.244.1.3:8080 -m -w 1
ipvsadm -a -t 172.17.0.1:32016 -r 10.244.3.2:8080 -m -w 1
If you are not familiar with the iptables chain traversal order, have a look at https://www.zsythink.net/archives/1199 first.
Here we reuse an example from the previous HPA article:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpatest
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hpatest
  template:
    metadata:
      labels:
        app: hpatest
    spec:
      containers:
      - name: hpatest
        image: nginx
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh"]
        args: ["-c","/usr/sbin/nginx; while true;do echo `hostname -I` > /usr/share/nginx/html/index.html; sleep 120;done"]
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 1m
            memory: 100Mi
          limits:
            cpu: 3m
            memory: 400Mi
---
apiVersion: v1
kind: Service
metadata:
  name: hpatest-svc
spec:
  selector:
    app: hpatest
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
After creating them, check the Service:
# kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
hpatest-svc   ClusterIP   10.68.196.212   <none>        80/TCP    2m44s
Then check the ipvs configuration:
# ipvsadm -S -n | grep 10.68.196.212
-A -t 10.68.196.212:80 -s rr
-a -t 10.68.196.212:80 -r 172.20.0.251:80 -m -w 1
-S prints the saved rules and -n prints IPs and ports numerically. You can see that the ipvs LB IP is the ClusterIP, the scheduler is rr, the real server (RS) is the Pod IP, and the forwarding mode is NAT.
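As a side note, here is a toy Go illustration of what the rr scheduler means: new connections are handed out to the real servers in turn. The actual scheduling happens inside the kernel's ipvs module; this sketch only shows the idea, reusing the addresses from the earlier ipvsadm example.

package main

import "fmt"

// Toy illustration of rr (round robin) scheduling: each new connection is
// assigned to the next real server in the list, wrapping around at the end.
func main() {
	realServers := []string{"10.244.1.2:8080", "10.244.1.3:8080", "10.244.3.2:8080"}
	next := 0
	for conn := 1; conn <= 6; conn++ {
		rs := realServers[next]
		next = (next + 1) % len(realServers)
		fmt.Printf("connection %d -> %s\n", conn, rs)
	}
}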
When we create a Service, kube-proxy first creates a dummy network interface on the host (named kube-ipvs0) and assigns the Service VIP to it as an IP address, as shown below:
# ip addr show kube-ipvs0
7: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 12:bb:85:91:96:4d brd ff:ff:ff:ff:ff:ff
    ...
    inet 10.68.196.212/32 brd 10.68.196.212 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
Now let's see how traffic to the ClusterIP is handled. Running iptables -t nat -nvL shows many chains; a ClusterIP access follows this path:
PREROUTING --> KUBE-SERVICES --> KUBE-CLUSTER-IP --> INPUT --> KUBE-FIREWALL --> POSTROUTING
When we connect to the service with:
curl 10.68.196.212:80
Since 10.68.196.212 is bound locally (on kube-ipvs0), it is used as the egress address, i.e. both the source IP and the destination IP are 10.68.196.212, which is equivalent to:
10.68.196.212:xxxx -> 10.68.196.212:80
The packet then goes through ipvs, which picks one of the Pod IPs from the RS list as the new destination IP:
10.68.196.212:xxxx -> 10.68.196.212:80
        |
        | IPVS
        v
10.68.196.212:xxxx -> 172.20.0.251:80
Look at the OUTPUT rules:
# iptables-save
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A KUBE-SERVICES ! -s 172.20.0.0/16 -m comment --comment "Kubernetes service cluster ip + port for masquerade purpose" -m set --match-set KUBE-CLUSTER-IP dst,dst -j KUBE-MARK-MASQ
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
These rules mean that any packet destined for a ClusterIP that does not come from a Pod (i.e. not sourced from 172.20.0.0/16) is marked with 0x4000/0x4000.
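Where does 0x4000 come from? It is kube-proxy's masquerade mark, computed as 1 << masqueradeBit from the --iptables-masquerade-bit flag, which defaults to 14; we will see the same computation again in NewProxier below. A quick Go check of the arithmetic:

package main

import "fmt"

func main() {
	masqueradeBit := 14 // kube-proxy default for --iptables-masquerade-bit
	masqueradeValue := 1 << uint(masqueradeBit)
	// Prints "0x4000/0x4000", the mark/mask used by KUBE-MARK-MASQ.
	fmt.Printf("%#04x/%#04x\n", masqueradeValue, masqueradeValue)
}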
Then in the POSTROUTING chain:
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
These rules mean that every packet carrying the 0x4000/0x4000 mark is SNATed. Since 172.20.0.251 is reached through flannel.1, the source IP is rewritten to flannel.1's IP, 172.20.0.0:
10.68.196.212:xxxx -> 10.68.196.212:80
        |
        | IPVS
        v
10.68.196.212:xxxx -> 172.20.0.251:80
        |
        | MASQUERADE
        v
172.20.0.0:xxxx -> 172.20.0.251:80
Finally the packet is sent over the VXLAN tunnel to the Pod's Node and forwarded to the Pod's veth; the reply is routed back to the source Node, where conntrack reverses the earlier MASQUERADE and IPVS translations and restores the original addresses.
File: cmd/kube-proxy/app/server_others.go
When kube-proxy starts, it calls NewProxyServer to initialize the ipvs proxy:
func newProxyServer(
	config *proxyconfigapi.KubeProxyConfiguration,
	cleanupAndExit bool,
	master string) (*ProxyServer, error) {
	...
	// determine the proxy mode: userspace, iptables or ipvs
	proxyMode := getProxyMode(string(config.Mode), canUseIPVS, iptables.LinuxKernelCompatTester{})
	...
	// iptables mode
	if proxyMode == proxyModeIPTables {
		...
	// ipvs mode
	} else if proxyMode == proxyModeIPVS {
		klog.V(0).Info("Using ipvs Proxier.")
		// check whether the IPv6 dual-stack feature gate is enabled
		if utilfeature.DefaultFeatureGate.Enabled(features.IPv6DualStack) {
			...
		} else {
			...
			// initialize the ipvs-mode proxier
			proxier, err = ipvs.NewProxier(
				...
			)
		}
		...
	} else {
		...
	}

	return &ProxyServer{
		...
	}, nil
}
NewProxyServer chooses between IPVS and iptables based on proxyMode; in the ipvs case it calls ipvs.NewProxier to build a proxier.
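Conceptually, the mode choice boils down to something like the sketch below. This is an illustration only, not the real kube-proxy function: the actual getProxyMode also verifies kernel modules, the ipset version and other prerequisites before allowing ipvs, and falls back to iptables otherwise.

package main

import "fmt"

// pickProxyMode is a simplified sketch of the decision: use ipvs only if it
// was requested and the node can actually support it, otherwise fall back to
// iptables.
func pickProxyMode(configured string, canUseIPVS bool) string {
	if configured == "ipvs" && canUseIPVS {
		return "ipvs"
	}
	return "iptables"
}

func main() {
	fmt.Println(pickProxyMode("ipvs", true))  // ipvs
	fmt.Println(pickProxyMode("ipvs", false)) // iptables (fallback)
}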
NewProxier
func NewProxier(...) (*Proxier, error) {
	...
	// compute the masquerade mark used for the SNAT iptables rules
	masqueradeValue := 1 << uint(masqueradeBit)
	...
	// the default scheduler is rr
	if len(scheduler) == 0 {
		klog.Warningf("IPVS scheduler not specified, use %s by default", DefaultScheduler)
		scheduler = DefaultScheduler
	}

	// create the healthcheck server object
	serviceHealthServer := healthcheck.NewServiceHealthServer(hostname, recorder)
	...
	// initialize the proxier
	proxier := &Proxier{
		...
	}
	// initialize the ipset lists
	proxier.ipsetList = make(map[string]*IPSet)
	for _, is := range ipsetInfo {
		proxier.ipsetList[is.name] = NewIPSet(ipset, is.name, is.setType, isIPv6, is.comment)
	}
	burstSyncs := 2
	klog.V(2).Infof("ipvs(%s) sync params: minSyncPeriod=%v, syncPeriod=%v, burstSyncs=%d", ipt.Protocol(), minSyncPeriod, syncPeriod, burstSyncs)
	// initialize the syncRunner
	proxier.syncRunner = async.NewBoundedFrequencyRunner("sync-runner", proxier.syncProxyRules, minSyncPeriod, syncPeriod, burstSyncs)
	// start the gracefuldeleteManager
	proxier.gracefuldeleteManager.Run()
	return proxier, nil
}
This function mainly: computes the masquerade mark used for the SNAT iptables rules, sets the default scheduler (rr) if none is specified, creates the health-check server object, initializes the proxier and its ipset lists, builds the syncRunner, and starts the gracefuldeleteManager.

When the syncRunner is initialized, proxier.syncProxyRules is passed in as the function that the runner will execute.
File: cmd/kube-proxy/app/server.go
func (s *ProxyServer) Run() error {
	...
	// call the proxier's SyncLoop method
	go s.Proxier.SyncLoop()

	return <-errCh
}
After kube-proxy finishes initializing the ProxyServer object at startup, it calls runLoop, which ends up in ProxyServer's Run method, which in turn calls the ipvs proxier's SyncLoop.
func (proxier *Proxier) SyncLoop() {
	...
	// run the Loop of the BoundedFrequencyRunner object
	proxier.syncRunner.Loop(wait.NeverStop)
}

func (bfr *BoundedFrequencyRunner) Loop(stop <-chan struct{}) {
	bfr.timer.Reset(bfr.maxInterval)
	for {
		select {
		case <-stop:
			bfr.stop()
			return
		// timer-driven run
		case <-bfr.timer.C():
			bfr.tryRun()
		// on-demand run (a run signal was sent)
		case <-bfr.run:
			bfr.tryRun()
		}
	}
}

func (bfr *BoundedFrequencyRunner) tryRun() {
	bfr.mu.Lock()
	defer bfr.mu.Unlock()
	// the rate limiter allows the func to run
	if bfr.limiter.TryAccept() {
		// the important part: call the func, which in this context is
		// syncProxyRules()
		bfr.fn()
		bfr.lastRun = bfr.timer.Now()    // record the run time
		bfr.timer.Stop()
		bfr.timer.Reset(bfr.maxInterval) // schedule the next run
		klog.V(3).Infof("%s: ran, next possible in %v, periodic in %v", bfr.name, bfr.minInterval, bfr.maxInterval)
		return
	}

	// the limiter does not allow a run now; compute when the next run may happen
	elapsed := bfr.timer.Since(bfr.lastRun)    // how long since the last run
	nextPossible := bfr.minInterval - elapsed  // earliest possible next run (minimum interval)
	nextScheduled := bfr.maxInterval - elapsed // latest scheduled next run (maximum interval)
	klog.V(4).Infof("%s: %v since last run, possible in %v, scheduled in %v", bfr.name, elapsed, nextPossible, nextScheduled)
	if nextPossible < nextScheduled {
		bfr.timer.Stop()
		bfr.timer.Reset(nextPossible)
		klog.V(3).Infof("%s: throttled, scheduling run in %v", bfr.name, nextPossible)
	}
}
SyncLoop ends up calling the syncProxyRules function that was registered with the proxier's syncRunner.
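To summarize the triggering model: the sync function runs either on demand (kube-proxy's event handlers end up calling syncRunner.Run() when a Service or Endpoints change arrives) or periodically via the timer, and the limiter keeps runs at least minSyncPeriod apart. Below is a stripped-down, self-contained sketch of that pattern; it is not the real async.BoundedFrequencyRunner and omits the rate limiting, showing only the event/timer loop.

package main

import (
	"fmt"
	"time"
)

// miniRunner is a toy version of the bounded-frequency runner pattern:
// fn runs when poked via Run(), and at least once every maxInterval.
type miniRunner struct {
	fn          func()
	maxInterval time.Duration
	run         chan struct{}
}

func newMiniRunner(fn func(), maxInterval time.Duration) *miniRunner {
	return &miniRunner{fn: fn, maxInterval: maxInterval, run: make(chan struct{}, 1)}
}

// Run requests an immediate sync (non-blocking; duplicate requests coalesce).
func (r *miniRunner) Run() {
	select {
	case r.run <- struct{}{}:
	default:
	}
}

// Loop drives the sync function until stop is closed.
func (r *miniRunner) Loop(stop <-chan struct{}) {
	t := time.NewTimer(r.maxInterval)
	defer t.Stop()
	for {
		select {
		case <-stop:
			return
		case <-t.C: // periodic full resync
		case <-r.run: // on-demand resync after a change event
			// drain the timer so Reset below starts a fresh interval
			if !t.Stop() {
				<-t.C
			}
		}
		r.fn()
		t.Reset(r.maxInterval)
	}
}

func main() {
	r := newMiniRunner(func() { fmt.Println("syncProxyRules would run here") }, time.Second)
	stop := make(chan struct{})
	go r.Loop(stop)
	r.Run() // e.g. a Service changed
	time.Sleep(100 * time.Millisecond)
	close(stop)
}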
syncProxyRules is quite long, so I'll walk through it step by step; just follow along with the code.
Code: pkg/proxy/ipvs/proxier.go
// apply the pending service and endpoint changes
serviceUpdateResult := proxy.UpdateServiceMap(proxier.serviceMap, proxier.serviceChanges)
endpointUpdateResult := proxier.endpointsMap.Update(proxier.endpointsChanges)

staleServices := serviceUpdateResult.UDPStaleClusterIP
// merge the stale service names reported by the endpoint updates
for _, svcPortName := range endpointUpdateResult.StaleServiceNames {
	if svcInfo, ok := proxier.serviceMap[svcPortName]; ok && svcInfo != nil && conntrack.IsClearConntrackNeeded(svcInfo.Protocol()) {
		klog.V(2).Infof("Stale %s service %v -> %s", strings.ToLower(string(svcInfo.Protocol())), svcPortName, svcInfo.ClusterIP().String())
		staleServices.Insert(svcInfo.ClusterIP().String())
		for _, extIP := range svcInfo.ExternalIPStrings() {
			staleServices.Insert(extIP)
		}
	}
}
This step applies the latest Service and Endpoints changes to the proxier's maps and collects the stale service IPs. Stale UDP services are tracked so that their conntrack entries can be flushed later; because UDP is connectionless, stale entries would otherwise keep steering traffic to endpoints that no longer exist.
// nat chains
proxier.natChains.Reset()
// nat rules
proxier.natRules.Reset()
// filter chains
proxier.filterChains.Reset()
// filter rules
proxier.filterRules.Reset()

// Write table headers.
writeLine(proxier.filterChains, "*filter")
writeLine(proxier.natChains, "*nat")

// create kube-proxy's chains and hook them into the built-in chains
proxier.createAndLinkeKubeChain()
This resets the chain and rule buffers and then calls createAndLinkeKubeChain to create kube-proxy's chains and link them to the built-in chains. Let's look at createAndLinkeKubeChain:
func (proxier *Proxier) createAndLinkeKubeChain() {
	// get the existing filter and NAT chains via iptables-save
	existingFilterChains := proxier.getExistingChains(proxier.filterChainsData, utiliptables.TableFilter)
	existingNATChains := proxier.getExistingChains(proxier.iptablesData, utiliptables.TableNAT)

	// Make sure we keep stats for the top-level chains
	// iptablesChains holds the NAT-table and filter-table chains:
	// NAT table:    KUBE-SERVICES / KUBE-POSTROUTING / KUBE-FIREWALL
	//               KUBE-NODE-PORT / KUBE-LOAD-BALANCER / KUBE-MARK-MASQ
	// Filter table: KUBE-FORWARD
	for _, ch := range iptablesChains {
		// create the top-level chain if it does not exist
		if _, err := proxier.iptables.EnsureChain(ch.table, ch.chain); err != nil {
			klog.Errorf("Failed to ensure that %s chain %s exists: %v", ch.table, ch.chain, err)
			return
		}
		// write the chain into the nat buffer
		if ch.table == utiliptables.TableNAT {
			if chain, ok := existingNATChains[ch.chain]; ok {
				writeBytesLine(proxier.natChains, chain)
			} else {
				writeLine(proxier.natChains, utiliptables.MakeChainLine(kubePostroutingChain))
			}
		// write the chain into the filter buffer
		} else {
			if chain, ok := existingFilterChains[KubeForwardChain]; ok {
				writeBytesLine(proxier.filterChains, chain)
			} else {
				writeLine(proxier.filterChains, utiliptables.MakeChainLine(KubeForwardChain))
			}
		}
	}
	// create the kubernetes-specific jump rules under the built-in chains:
	// iptables -I OUTPUT -t nat --comment "kubernetes service portals" -j KUBE-SERVICES
	// iptables -I PREROUTING -t nat --comment "kubernetes service portals" -j KUBE-SERVICES
	// iptables -I POSTROUTING -t nat --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
	// iptables -I FORWARD -t filter --comment "kubernetes forwarding rules" -j KUBE-FORWARD
	for _, jc := range iptablesJumpChain {
		args := []string{"-m", "comment", "--comment", jc.comment, "-j", string(jc.to)}
		if _, err := proxier.iptables.EnsureRule(utiliptables.Prepend, jc.table, jc.from, args...); err != nil {
			klog.Errorf("Failed to ensure that %s chain %s jumps to %s: %v", jc.table, jc.from, jc.to, err)
		}
	}
}
createAndLinkeKubeChain first fetches the existing filter and NAT chains and then iterates over iptablesChains. iptablesChains holds the NAT-table and filter-table chains: the NAT chains KUBE-SERVICES / KUBE-POSTROUTING / KUBE-FIREWALL / KUBE-NODE-PORT / KUBE-LOAD-BALANCER / KUBE-MARK-MASQ, and the filter chain KUBE-FORWARD. It then creates the jump rules described by iptablesJumpChain, as sketched below.
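For reference, the jump rules come from a small static table. The sketch below shows its shape and prints the equivalent iptables commands; the field names are made up for the illustration, the real iptablesJumpChain definition (using the utiliptables types) lives in pkg/proxy/ipvs/proxier.go.

package main

import "fmt"

// Illustrative only: the shape of the static table that drives the four jump rules.
type jumpChain struct {
	table   string // "nat" or "filter"
	from    string // built-in chain to hook into
	to      string // kube-proxy chain to jump to
	comment string
}

var jumpChains = []jumpChain{
	{"nat", "OUTPUT", "KUBE-SERVICES", "kubernetes service portals"},
	{"nat", "PREROUTING", "KUBE-SERVICES", "kubernetes service portals"},
	{"nat", "POSTROUTING", "KUBE-POSTROUTING", "kubernetes postrouting rules"},
	{"filter", "FORWARD", "KUBE-FORWARD", "kubernetes forwarding rules"},
}

func main() {
	// EnsureRule(Prepend, ...) is roughly equivalent to running these commands:
	for _, jc := range jumpChains {
		fmt.Printf("iptables -t %s -I %s -m comment --comment %q -j %s\n",
			jc.table, jc.from, jc.comment, jc.to)
	}
}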
Now back to syncProxyRules.
// create the dummy interface kube-ipvs0
_, err = proxier.netlinkHandle.EnsureDummyDevice(DefaultDummyDevice)
if err != nil {
	klog.Errorf("Failed to create dummy interface: %s, error: %v", DefaultDummyDevice, err)
	return
}

// create the default ipset sets, see http://ipset.netfilter.org/
for _, set := range proxier.ipsetList {
	if err := ensureIPSet(set); err != nil {
		return
	}
	set.resetEntries()
}
This ensures the dummy interface exists and that the default ipset sets are created; for background on ipset see http://ipset.netfilter.org/.
Next, syncProxyRules iterates over proxier.serviceMap and creates ipvs rules for every service. This loop is long, so again I'll break it up.
for svcName, svc := range proxier.serviceMap {
	...
	// based on the service's local endpoints, update the KUBE-LOOP-BACK ipset set,
	// which is later used to generate the corresponding iptables rules (SNAT masquerade)
	for _, e := range proxier.endpointsMap[svcName] {
		ep, ok := e.(*proxy.BaseEndpointInfo)
		if !ok {
			klog.Errorf("Failed to cast BaseEndpointInfo %q", e.String())
			continue
		}
		if !ep.IsLocal {
			continue
		}
		epIP := ep.IP()
		epPort, err := ep.Port()
		// Error parsing this endpoint has been logged. Skip to next endpoint.
		if epIP == "" || err != nil {
			continue
		}
		entry := &utilipset.Entry{
			IP:       epIP,
			Port:     epPort,
			Protocol: protocol,
			IP2:      epIP,
			SetType:  utilipset.HashIPPortIP,
		}
		// validate the entry for the KUBE-LOOP-BACK set
		if valid := proxier.ipsetList[kubeLoopBackIPSet].validateEntry(entry); !valid {
			klog.Errorf("%s", fmt.Sprintf(EntryInvalidErr, entry, proxier.ipsetList[kubeLoopBackIPSet].Name))
			continue
		}
		// insert the entry into the set's active entries
		proxier.ipsetList[kubeLoopBackIPSet].activeEntries.Insert(entry.String())
	}
	...
}
This part updates the KUBE-LOOP-BACK ipset set from the list of valid local endpoints; it is used later to generate the corresponding iptables rules (SNAT masquerade for hairpin traffic).
for svcName, svc := range proxier.serviceMap {
	...
	// build the ipset entry
	entry := &utilipset.Entry{
		IP:       svcInfo.ClusterIP().String(),
		Port:     svcInfo.Port(),
		Protocol: protocol,
		SetType:  utilipset.HashIPPort,
	}
	// add service Cluster IP:Port to kubeServiceAccess ip set for the purpose of solving hairpin.
	// proxier.kubeServiceAccessSet.activeEntries.Insert(entry.String())
	// validate the ipset entry type
	if valid := proxier.ipsetList[kubeClusterIPSet].validateEntry(entry); !valid {
		klog.Errorf("%s", fmt.Sprintf(EntryInvalidErr, entry, proxier.ipsetList[kubeClusterIPSet].Name))
		continue
	}
	// insert the entry into the KUBE-CLUSTER-IP ipset set, from which the
	// iptables rules are generated later
	proxier.ipsetList[kubeClusterIPSet].activeEntries.Insert(entry.String())
	// ipvs call
	// build the ipvs virtual server (VS) object
	serv := &utilipvs.VirtualServer{
		Address:   svcInfo.ClusterIP(),
		Port:      uint16(svcInfo.Port()),
		Protocol:  string(svcInfo.Protocol()),
		Scheduler: proxier.ipvsScheduler,
	}
	// Set session affinity flag and timeout for IPVS service
	if svcInfo.SessionAffinityType() == v1.ServiceAffinityClientIP {
		serv.Flags |= utilipvs.FlagPersistent
		serv.Timeout = uint32(svcInfo.StickyMaxAgeSeconds())
	}
	// We need to bind ClusterIP to dummy interface, so set `bindAddr` parameter to `true` in syncService()
	if err := proxier.syncService(svcNameString, serv, true, bindedAddresses); err == nil {
		activeIPVSServices[serv.String()] = true
		activeBindAddrs[serv.Address.String()] = true
		// ExternalTrafficPolicy only works for NodePort and external LB traffic, does not affect ClusterIP
		// So we still need clusterIP rules in onlyNodeLocalEndpoints mode.
		// sync the endpoints: IPVS updates the real servers of this VS
		if err := proxier.syncEndpoint(svcName, false, serv); err != nil {
			klog.Errorf("Failed to sync endpoint for service: %v, err: %v", serv, err)
		}
	} else {
		klog.Errorf("Failed to sync service: %v, err: %v", serv, err)
	}
	...
}
Here the KUBE-CLUSTER-IP ipset set is updated so the corresponding iptables rules can be generated later; the ipvs virtual server for the ClusterIP is also created and its real servers are synced from the endpoints.
for svcName, svc := range proxier.serviceMap {
	...
	// create ipvs rules for load-balancer type services
	for _, ingress := range svcInfo.LoadBalancerIPStrings() {
		if ingress != "" {
			// build the ipset entry
			entry = &utilipset.Entry{
				IP:       ingress,
				Port:     svcInfo.Port(),
				Protocol: protocol,
				SetType:  utilipset.HashIPPort,
			}
			if valid := proxier.ipsetList[kubeLoadBalancerSet].validateEntry(entry); !valid {
				klog.Errorf("%s", fmt.Sprintf(EntryInvalidErr, entry, proxier.ipsetList[kubeLoadBalancerSet].Name))
				continue
			}
			// update the KUBE-LOAD-BALANCER ipset set
			proxier.ipsetList[kubeLoadBalancerSet].activeEntries.Insert(entry.String())
			// when the service specifies externalTrafficPolicy=Local, update the KUBE-LOAD-BALANCER-LOCAL ipset set
			if svcInfo.OnlyNodeLocalEndpoints() {
				if valid := proxier.ipsetList[kubeLoadBalancerLocalSet].validateEntry(entry); !valid {
					klog.Errorf("%s", fmt.Sprintf(EntryInvalidErr, entry, proxier.ipsetList[kubeLoadBalancerLocalSet].Name))
					continue
				}
				proxier.ipsetList[kubeLoadBalancerLocalSet].activeEntries.Insert(entry.String())
			}
			// when the service specifies LoadBalancerSourceRanges, a source-IP based firewall
			// policy is enabled and the KUBE-LOAD-BALANCER-FW ipset set is updated
			if len(svcInfo.LoadBalancerSourceRanges()) != 0 {
				if valid := proxier.ipsetList[kubeLoadbalancerFWSet].validateEntry(entry); !valid {
					klog.Errorf("%s", fmt.Sprintf(EntryInvalidErr, entry, proxier.ipsetList[kubeLoadbalancerFWSet].Name))
					continue
				}
				proxier.ipsetList[kubeLoadbalancerFWSet].activeEntries.Insert(entry.String())
				allowFromNode := false
				for _, src := range svcInfo.LoadBalancerSourceRanges() {
					// ipset call
					entry = &utilipset.Entry{
						IP:       ingress,
						Port:     svcInfo.Port(),
						Protocol: protocol,
						Net:      src,
						SetType:  utilipset.HashIPPortNet,
					}
					// for every CIDR in the source whitelist, update the KUBE-LOAD-BALANCER-SOURCE-CIDR ipset set
					// CIDR: https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr
					if valid := proxier.ipsetList[kubeLoadBalancerSourceCIDRSet].validateEntry(entry); !valid {
						klog.Errorf("%s", fmt.Sprintf(EntryInvalidErr, entry, proxier.ipsetList[kubeLoadBalancerSourceCIDRSet].Name))
						continue
					}
					proxier.ipsetList[kubeLoadBalancerSourceCIDRSet].activeEntries.Insert(entry.String())
					// ignore error because it has been validated
					_, cidr, _ := net.ParseCIDR(src)
					if cidr.Contains(proxier.nodeIP) {
						allowFromNode = true
					}
				}
				...
			}
			// ipvs call
			// build the ipvs virtual server object
			serv := &utilipvs.VirtualServer{
				Address:   net.ParseIP(ingress),
				Port:      uint16(svcInfo.Port()),
				Protocol:  string(svcInfo.Protocol()),
				Scheduler: proxier.ipvsScheduler,
			}
			...
		}
	}
	...
}
This creates the ipvs rules for LoadBalancer-type services. When LoadBalancerSourceRanges or externalTrafficPolicy=Local are specified, the KUBE-LOAD-BALANCER-LOCAL, KUBE-LOAD-BALANCER-FW, KUBE-LOAD-BALANCER-SOURCE-CIDR and KUBE-LOAD-BALANCER-SOURCE-IP ipset sets are updated as well, so that the corresponding iptables rules can be generated later.
...
// sync the ipset entries
for _, set := range proxier.ipsetList {
	set.syncIPSetEntries()
}

// generate the iptables rule data
proxier.writeIptablesRules()

// merge the iptables rules
proxier.iptablesData.Reset()
proxier.iptablesData.Write(proxier.natChains.Bytes())
proxier.iptablesData.Write(proxier.natRules.Bytes())
proxier.iptablesData.Write(proxier.filterChains.Bytes())
proxier.iptablesData.Write(proxier.filterRules.Bytes())

klog.V(5).Infof("Restoring iptables rules: %s", proxier.iptablesData.Bytes())
// flush the merged rule data into the kernel with iptables-restore
err = proxier.iptables.RestoreAll(proxier.iptablesData.Bytes(), utiliptables.NoFlushTables, utiliptables.RestoreCounters)
...
Finally, the ipset entries are synced, writeIptablesRules generates the iptables rule data, the nat and filter chain/rule buffers are merged into a single blob, and iptables-restore is used to apply that blob in one shot.
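For a sense of what that blob looks like: it is ordinary iptables-restore input, one section per table ending in COMMIT. Below is a minimal, hand-written Go sketch that assembles such a payload the same way the buffers above are merged; the rules shown are illustrative, not the proxier's full output.

package main

import (
	"bytes"
	"fmt"
)

func main() {
	var natChains, natRules, filterChains, filterRules bytes.Buffer

	// Chain declarations first, then rules, mirroring how the proxier buffers are merged.
	natChains.WriteString("*nat\n:KUBE-SERVICES - [0:0]\n:KUBE-POSTROUTING - [0:0]\n")
	natRules.WriteString("-A KUBE-POSTROUTING -m mark --mark 0x4000/0x4000 -j MASQUERADE\nCOMMIT\n")
	filterChains.WriteString("*filter\n:KUBE-FORWARD - [0:0]\n")
	filterRules.WriteString("-A KUBE-FORWARD -m mark --mark 0x4000/0x4000 -j ACCEPT\nCOMMIT\n")

	var data bytes.Buffer
	data.Write(natChains.Bytes())
	data.Write(natRules.Bytes())
	data.Write(filterChains.Bytes())
	data.Write(filterRules.Bytes())

	// This text is what would be piped to `iptables-restore --noflush`.
	fmt.Print(data.String())
}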
This article did not really cover how Services are used or how they behave; instead it looked at how kube-proxy's ipvs proxy works, and, at the beginning, at the relationship and differences between ipvs and iptables. If parts were hard to follow, you may need to brush up on iptables first. I also learned the ipvs material here while writing, so if anything is explained poorly, feel free to point it out.
https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/
https://kubernetes.io/docs/concepts/services-networking/service/
https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr
https://github.com/kubernetes/kubernetes/tree/master/pkg/proxy/ipvs
https://zh.wikipedia.org/zh-cn/网络地址转换