Pod IP: the IP address of a Pod, allocated from the docker0 bridge's network segment.
Cluster IP: the IP of a Service. It is a virtual IP that applies only to Service objects and is managed and allocated by K8S. It must be combined with the Service port to be usable; the IP alone has no communication capability, and access from outside the cluster requires additional configuration.
Within a K8S cluster, communication between Node IPs, Pod IPs, and Cluster IPs relies on routing rules programmed by K8S, not ordinary IP routing.
[root@linux-node1 ~]# kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP   3h
[root@linux-node1 ssl]# vim flanneld-csr.json
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
>   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
>   -config=/opt/kubernetes/ssl/ca-config.json \
>   -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
[root@linux-node1 ssl]# ll flannel*
-rw-r--r-- 1 root root  997 May 31 11:13 flanneld.csr
-rw-r--r-- 1 root root  221 May 31 11:13 flanneld-csr.json
-rw------- 1 root root 1675 May 31 11:13 flanneld-key.pem
-rw-r--r-- 1 root root 1391 May 31 11:13 flanneld.pem
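As an optional sanity check before distributing the certificate, you can inspect it and confirm the CN and names match the CSR above (cfssl-certinfo ships with the cfssl toolchain already in use here; openssl works as well):
# Expect subject CN=flanneld with O=k8s, OU=System
[root@linux-node1 ssl]# cfssl-certinfo -cert flanneld.pem
[root@linux-node1 ssl]# openssl x509 -in flanneld.pem -noout -subject -dates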
[root@linux-node1 ssl]# cp flanneld*.pem /opt/kubernetes/ssl/
[root@linux-node1 ssl]# scp flanneld*.pem 192.168.56.120:/opt/kubernetes/ssl/
flanneld-key.pem        100% 1675   127.2KB/s   00:00
flanneld.pem            100% 1391   308.3KB/s   00:00
[root@linux-node1 ssl]# scp flanneld*.pem 192.168.56.130:/opt/kubernetes/ssl/
flanneld-key.pem        100% 1675   291.1KB/s   00:00
flanneld.pem            100% 1391    90.4KB/s   00:00
[root@linux-node1 ~]# cd /usr/local/src
# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
[root@linux-node1 src]# tar zxf flannel-v0.10.0-linux-amd64.tar.gz
[root@linux-node1 src]# cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/
Copy the binaries to the linux-node2 and linux-node3 nodes:
[root@linux-node1 src]# scp flanneld mk-docker-opts.sh 192.168.56.120:/opt/kubernetes/bin/
[root@linux-node1 src]# scp flanneld mk-docker-opts.sh 192.168.56.130:/opt/kubernetes/bin/
Copy the corresponding script to the /opt/kubernetes/bin directory:
[root@linux-node1 ~]# cd /usr/local/src/kubernetes/cluster/centos/node/bin/
[root@linux-node1 bin]# cp remove-docker0.sh /opt/kubernetes/bin/
[root@linux-node1 bin]# scp remove-docker0.sh 192.168.56.120:/opt/kubernetes/bin/
[root@linux-node1 bin]# scp remove-docker0.sh 192.168.56.130:/opt/kubernetes/bin/
[root@linux-node1 ~]# vim /opt/kubernetes/cfg/flannel
FLANNEL_ETCD="-etcd-endpoints=https://192.168.56.110:2379,https://192.168.56.120:2379,https://192.168.56.130:2379"
FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"
Copy the configuration to the other nodes:
[root@linux-node1 ~]# scp /opt/kubernetes/cfg/flannel 192.168.56.120:/opt/kubernetes/cfg/
[root@linux-node1 ~]# scp /opt/kubernetes/cfg/flannel 192.168.56.130:/opt/kubernetes/cfg/
[root@linux-node1 ~]# vim /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
Before=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/flannel
ExecStartPre=/opt/kubernetes/bin/remove-docker0.sh
ExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -d /run/flannel/docker
Type=notify

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

Copy the systemd unit file to the other nodes:
# scp /usr/lib/systemd/system/flannel.service 192.168.56.120:/usr/lib/systemd/system/
# scp /usr/lib/systemd/system/flannel.service 192.168.56.130:/usr/lib/systemd/system/
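For reference, the ExecStartPost step has mk-docker-opts.sh translate the subnet that flanneld leased into a Docker environment file. After flannel starts, /run/flannel/docker should look roughly like this (the subnet and MTU are illustrative, matching the linux-node1 lease shown later):
# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=10.2.46.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_OPTS=" --bip=10.2.46.1/24 --ip-masq=true --mtu=1450"
The DOCKER_OPTS value is what the docker.service modification later in this section consumes.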
https://github.com/containernetworking/plugins/releases
# wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
[root@linux-node1 ~]# mkdir /opt/kubernetes/bin/cni
[root@linux-node2 ~]# mkdir /opt/kubernetes/bin/cni
[root@linux-node3 ~]# mkdir /opt/kubernetes/bin/cni
[root@linux-node1 src]# tar zxf cni-plugins-amd64-v0.7.1.tgz -C /opt/kubernetes/bin/cni
[root@linux-node1 src]# scp -r /opt/kubernetes/bin/cni/* 192.168.56.120:/opt/kubernetes/bin/cni/
[root@linux-node1 src]# scp -r /opt/kubernetes/bin/cni/* 192.168.56.130:/opt/kubernetes/bin/cni/
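To confirm the plugins unpacked correctly, list the directory on each node; the bundle should include binaries such as bridge, flannel, host-local, loopback, and portmap (exact contents vary by release):
[root@linux-node1 src]# ls /opt/kubernetes/bin/cni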
This step creates the Pod network segment and stores it in etcd; flannel then reads it from etcd and allocates a subnet to each node.
[root@linux-node1 src]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem \
  --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \
  --no-sync -C https://192.168.56.110:2379,https://192.168.56.120:2379,https://192.168.56.130:2379 \
  mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}' >/dev/null 2>&1
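Because the command above discards its output, it is worth reading the key back to confirm it was written (same etcdctl flags, just a get instead of mk):
[root@linux-node1 src]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem \
  --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \
  --no-sync -C https://192.168.56.110:2379,https://192.168.56.120:2379,https://192.168.56.130:2379 \
  get /kubernetes/network/config
{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}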
[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl enable flannel
[root@linux-node1 ~]# chmod +x /opt/kubernetes/bin/*
[root@linux-node1 ~]# systemctl start flannel

[root@linux-node2 ~]# systemctl daemon-reload
[root@linux-node2 ~]# systemctl enable flannel
[root@linux-node2 ~]# chmod +x /opt/kubernetes/bin/*
[root@linux-node2 ~]# systemctl start flannel

[root@linux-node3 ~]# systemctl daemon-reload
[root@linux-node3 ~]# systemctl enable flannel
[root@linux-node3 ~]# chmod +x /opt/kubernetes/bin/*
[root@linux-node3 ~]# systemctl start flannel
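Once flannel is running on all three nodes, each node registers a subnet lease under the etcd prefix. Listing the subnets directory is a quick way to verify that every node has joined (a sketch, reusing the etcdctl flags from above; expect one entry per node, matching the flannel.1 addresses shown below):
[root@linux-node1 ~]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem \
  --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \
  --no-sync -C https://192.168.56.110:2379,https://192.168.56.120:2379,https://192.168.56.130:2379 \
  ls /kubernetes/network/subnets
/kubernetes/network/subnets/10.2.46.0-24
/kubernetes/network/subnets/10.2.87.0-24
/kubernetes/network/subnets/10.2.33.0-24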
You can see that each node now has an additional flannel.1 interface, and each node is on a different subnet.
[root@linux-node1 ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.2.46.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::f4e6:1aff:fe7e:575b  prefixlen 64  scopeid 0x20<link>
        ether f6:e6:1a:7e:57:5b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

[root@linux-node2 ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.2.87.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::d4e5:72ff:fe3e:7309  prefixlen 64  scopeid 0x20<link>
        ether d6:e5:72:3e:73:09  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

[root@linux-node3 ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.2.33.0  netmask 255.255.255.255  broadcast 0.0.0.0
        ether be:cd:5a:4f:6b:d1  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 1 overruns 0  carrier 0  collisions 0
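A simple cross-node connectivity check is to ping another node's flannel.1 address over the VXLAN overlay (the addresses come from the ifconfig output above):
# From linux-node1, reach the other nodes' flannel.1 interfaces
[root@linux-node1 ~]# ping -c 3 10.2.87.0
[root@linux-node1 ~]# ping -c 3 10.2.33.0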
If flannel fails to start, check whether ETCD_LISTEN_CLIENT_URLS in the /opt/kubernetes/cfg/etcd.conf configuration file is set to listen on 127.0.0.1:2379. In my case flannel still would not start after that; retyping the configuration made it work, and no other cause turned up for now. Why etcdctl could not fetch the key remains to be investigated!
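A couple of quick checks for this situation (paths as used throughout this guide):
# Confirm etcd is listening where flannel expects it
[root@linux-node1 ~]# grep ETCD_LISTEN_CLIENT_URLS /opt/kubernetes/cfg/etcd.conf
# Watch flannel's own logs for the real error
[root@linux-node1 ~]# systemctl status flannel
[root@linux-node1 ~]# journalctl -u flannel --no-pager | tail -n 20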
[root@linux-node1 ~]# vim /usr/lib/systemd/system/docker.service
[Unit]
# In the Unit section, modify After and add Requires
# so that docker starts after the flannel network is up
After=network-online.target firewalld.service flannel.service
Wants=network-online.target
Requires=flannel.service

[Service]
# Add EnvironmentFile=-/run/flannel/docker to load the environment file,
# which sets docker0's IP to the address allocated by flannel
Type=notify
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_OPTS

[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl restart docker
[root@linux-node1 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.2.46.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:1f:ef:9f:b5  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@linux-node2 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.2.87.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:8a:a5:42:d7  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@linux-node3 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.2.33.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:57:90:05:47  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
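With docker0 now on the flannel-assigned subnet on every node, containers on different nodes should be able to reach each other. A minimal end-to-end test (a sketch, assuming the busybox image can be pulled; the container IP shown is illustrative):
# On linux-node1: start a container and note its IP (on the 10.2.46.0/24 subnet)
[root@linux-node1 ~]# docker run -d --name net-test busybox sleep 3600
[root@linux-node1 ~]# docker inspect -f '{{.NetworkSettings.IPAddress}}' net-test
10.2.46.2
# On linux-node2: ping that container across the overlay
[root@linux-node2 ~]# docker run --rm busybox ping -c 3 10.2.46.2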
When you run kubectl get node, you will see each node's STATUS as Ready. If a node shows NotReady, check whether kubelet on that node has started, and start it if it has not. If kubelet itself fails to start, use systemctl status kubelet or journalctl -xe to find out what is preventing it. One situation I ran into was a dependency on docker: docker would not start, so the next step was to dig into why docker itself failed to start.
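The checks above, collected into one troubleshooting sequence:
# 1. Find NotReady nodes
[root@linux-node1 ~]# kubectl get node
# 2. On the affected node, check and (re)start kubelet
[root@linux-node2 ~]# systemctl status kubelet
[root@linux-node2 ~]# journalctl -xe
[root@linux-node2 ~]# systemctl start kubelet
# 3. If kubelet is blocked on docker, check docker next
[root@linux-node2 ~]# systemctl status docker
[root@linux-node2 ~]# journalctl -u docker --no-pager | tail -n 30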