1. Introduction to bonding
In enterprise Linux server administration, a server's reliability, availability, and I/O throughput all matter; keeping servers highly available and secure is a key production requirement, and the availability of the server's network link is one of the most important parts of that. A common approach is to run important servers as an active/standby pair: when the primary goes down, the standby immediately takes over its work, so the service continues without interruption. Another situation is load balancing: when a single server receives more requests than it can handle and is at risk of falling over, an active/standby pair does not help. Instead, the traffic that used to go to one server is spread across many servers, so the load that one machine carried is shared among them.

We also know that a single physical NIC has a finite throughput. Once the NIC on a server is saturated, access feels slow no matter how powerful the server is, so we need to raise the NIC's throughput. If one NIC is not enough, we can add two, three, or many more. Having many NICs serve traffic at the same time does solve the throughput problem, but it creates a new one: how do users learn the IP addresses of the additional NICs? Normally a service is exposed on a single IP (or a single domain name, possibly backed by multiple IPs behind it); users know only that one IP or name, so simply installing more NICs does not by itself achieve what we want. Is there a technique that aggregates many NICs into one big virtual NIC, much as LVM aggregates disks into one volume, to raise NIC performance? There is: bonding. It binds multiple NICs to the same IP address for serving traffic, providing either high availability or load balancing. Ordinarily, assigning the same IP address to two or more NICs is not allowed, but bonding makes it work: under the hood it presents a single virtual NIC to the outside, and the physical NICs' MAC addresses are all rewritten to the same MAC. The result is higher NIC performance together with NIC redundancy.
2. Bonding working modes
bonding has seven working modes, three of which are the most commonly used:
Mode 0 (balance-rr), round-robin policy: packets are transmitted sequentially on each slave interface from first to last. This mode provides both load balancing and fault tolerance.
Mode 1 (active-backup), active-backup policy: only one slave is active at a time; another slave is activated if, and only if, the active slave fails. To avoid confusing the switch, the bond's MAC address is visible on only one external port at a time.
Mode 3 (broadcast), broadcast policy: every frame is transmitted on all slave interfaces. This mode provides fault tolerance.
Note: the active-backup, balance-tlb, and balance-alb modes need no special switch configuration. The other bonding modes require the switch to be configured to aggregate the links: for example, Cisco switches need EtherChannel for modes 0, 2, and 3, and LACP together with EtherChannel for mode 4.
3. Implementing bonding mode 0 and testing load balancing
1) Check whether the system has loaded the bonding module
[root@test-centos7-node1 ~]# lsmod|grep bonding
[root@test-centos7-node1 ~]#
Note: if running lsmod shows no bonding entry, the bonding module is not currently loaded on your system.
2) Load the bonding module
[root@test-centos7-node1 ~]# lsmod|grep bonding
[root@test-centos7-node1 ~]# modprobe bonding
[root@test-centos7-node1 ~]# lsmod |grep bonding
bonding               145728  0
[root@test-centos7-node1 ~]#
Note: kernels from version 2.4 onward normally ship with the bonding module, so no manual compilation is needed.
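A module loaded with modprobe does not survive a reboot. A minimal sketch of making it persistent, assuming a systemd-based system such as CentOS 7; a temp file stands in for the real path /etc/modules-load.d/bonding.conf so the sketch needs no root:

```shell
# Sketch: persist the bonding module across reboots via modules-load.d.
# Assumption: systemd-based CentOS 7. The temp file is a stand-in for
# /etc/modules-load.d/bonding.conf, which systemd reads at boot.
conf=$(mktemp)
echo bonding > "$conf"
cat "$conf"
```

On the server itself, as root, the real command would be: echo bonding > /etc/modules-load.d/bonding.conf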
3) Back up the existing NIC configuration files
[root@test-centos7-node1 test]# ls /etc/sysconfig/network-scripts/
ifcfg-ens33  ifdown-ippp    ifdown-sit       ifup-bnep   ifup-plusb   ifup-TeamPort
ifcfg-ens36  ifdown-ipv6    ifdown-Team      ifup-eth    ifup-post    ifup-tunnel
ifcfg-lo     ifdown-isdn    ifdown-TeamPort  ifup-ippp   ifup-ppp     ifup-wireless
ifdown       ifdown-post    ifdown-tunnel    ifup-ipv6   ifup-routes  init.ipv6-global
ifdown-bnep  ifdown-ppp     ifup             ifup-isdn   ifup-sit     network-functions
ifdown-eth   ifdown-routes  ifup-aliases     ifup-plip   ifup-Team    network-functions-ipv6
[root@test-centos7-node1 test]# cp /etc/sysconfig/network-scripts/{ifcfg-ens33,ifcfg-ens33.bak}
[root@test-centos7-node1 test]# cp /etc/sysconfig/network-scripts/{ifcfg-ens36,ifcfg-ens36.bak}
[root@test-centos7-node1 test]# ls /etc/sysconfig/network-scripts/
ifcfg-ens33      ifdown       ifdown-isdn    ifdown-Team      ifup-bnep  ifup-plip   ifup-sit       init.ipv6-global
ifcfg-ens33.bak  ifdown-bnep  ifdown-post    ifdown-TeamPort  ifup-eth   ifup-plusb  ifup-Team      network-functions
ifcfg-ens36      ifdown-eth   ifdown-ppp     ifdown-tunnel    ifup-ippp  ifup-post   ifup-TeamPort  network-functions-ipv6
ifcfg-ens36.bak  ifdown-ippp  ifdown-routes  ifup             ifup-ipv6  ifup-ppp    ifup-tunnel
ifcfg-lo         ifdown-ipv6  ifdown-sit     ifup-aliases     ifup-isdn  ifup-routes ifup-wireless
[root@test-centos7-node1 test]#
4) Create the bonding configuration file
[root@test-centos7-node1 test]# vim /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=YES
BOOTPROTO=static
IPADDR=192.168.0.33
PREFIX=24
GATEWAY=192.168.0.1
DNS1=192.168.0.1
BONDING_OPTS="miimon=100 mode=0"
Note: bond0 is, in effect, just another NIC, and we configure it much like a physical one. It differs in being a virtual device, so besides the IP settings we must also configure its working mode and its link-monitoring interval. miimon controls link monitoring: with miimon=100, the system checks the link state every 100 ms, and if one link goes down, traffic is switched to another.
5) Modify the physical NIC configuration files
[root@test-centos7-node1 test]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
NAME=ens33
DEVICE=ens33
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
[root@test-centos7-node1 test]# cat /etc/sysconfig/network-scripts/ifcfg-ens36
NAME=ens36
DEVICE=ens36
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
[root@test-centos7-node1 test]#
Note: remove the original IP settings, change BOOTPROTO to none, and add MASTER=bond0 and SLAVE=yes.
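The two slave files differ only in the interface name, so they can be generated rather than typed by hand. A minimal sketch; the hypothetical /tmp/demo-network-scripts directory stands in for /etc/sysconfig/network-scripts/ so the sketch is harmless to run:

```shell
# Sketch: generate the slave ifcfg files from a template.
# /tmp/demo-network-scripts is a stand-in for the real directory
# /etc/sysconfig/network-scripts/ so that no root access is needed.
dir=/tmp/demo-network-scripts
mkdir -p "$dir"
for nic in ens33 ens36; do
  cat > "$dir/ifcfg-$nic" <<EOF
NAME=$nic
DEVICE=$nic
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
EOF
done
cat "$dir/ifcfg-ens36"
```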
6) Restart the network service and test
Note: when I restart the network service, my SecureCRT session hangs, because we have reconfigured the IP address. Reconnect with CRT to the newly configured address instead.
[root@test-centos7-node1 test]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:0c:29:f2:82:0c brd ff:ff:ff:ff:ff:ff
3: ens36: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:0c:29:f2:82:0c brd ff:ff:ff:ff:ff:ff
5: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 00:0c:29:f2:82:0c brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.33/24 brd 192.168.0.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef2:820c/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
[root@test-centos7-node1 test]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: ens33
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:f2:82:0c
Slave queue ID: 0

Slave Interface: ens36
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:f2:82:16
Slave queue ID: 0
[root@test-centos7-node1 test]#
Note: the two physical NICs and bond0 now share the same MAC address, the physical NICs carry no IP addresses, and bond0 holds the IP we just configured, so bond0 is ready to use. We can also inspect /proc/net/bonding/bond0 for the details: both physical NICs are enslaved to bond0, and bond0's working mode is load balancing (round-robin). This mode provides both load balancing and fault tolerance: unplug either physical NIC and the network service stays up. To test it, download a large file.
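The fields worth scripting against when verifying a bond are each slave's name and MII status. A small awk sketch; it is demonstrated on a here-doc copy of a slice of the output above so it runs anywhere, and on the live host you would point the same awk program at /proc/net/bonding/bond0:

```shell
# Sketch: list each slave and its MII status from bonding status text.
# The here-doc reproduces part of the output above; on a real server run:
#   awk '<same program>' /proc/net/bonding/bond0
awk '/^Slave Interface:/ {iface=$3}
     /^MII Status:/ && iface {print iface, $3; iface=""}' <<'EOF'
Bonding Mode: load balancing (round-robin)
MII Status: up
Slave Interface: ens33
MII Status: up
Slave Interface: ens36
MII Status: up
EOF
```

The first MII Status line belongs to the bond itself and is skipped because no slave has been seen yet; the output is one "name status" pair per slave.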
Test by downloading the file http://192.168.0.99/bigfile
1) Download using bond0, with the two physical NICs load-balanced
[root@test-centos7-node1 test]# time wget http://192.168.0.99/bigfile
--2020-01-10 10:33:48--  http://192.168.0.99/bigfile
Connecting to 192.168.0.99:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5211422720 (4.9G)
Saving to: ‘bigfile’

100%[============================================================>] 5,211,422,720 57.2MB/s   in 93s

2020-01-10 10:35:21 (53.6 MB/s) - ‘bigfile’ saved [5211422720/5211422720]

real    1m32.961s
user    0m0.502s
sys     0m21.582s
[root@test-centos7-node1 test]#
Note: downloading the 4.9 GB file through bond0 gave an average download speed of 53.6 MB/s.
2) Download without bond0: restore the two physical NICs, each with its own IP
[root@test-centos7-node1 ~]# ls /etc/sysconfig/network-scripts/
ifcfg-ens33      ifdown-ippp    ifdown-TeamPort  ifup-isdn    ifup-TeamPort
ifcfg-ens33.bak  ifdown-ipv6    ifdown-tunnel    ifup-plip    ifup-tunnel
ifcfg-ens36      ifdown-isdn    ifup             ifup-plusb   ifup-wireless
ifcfg-ens36.bak  ifdown-post    ifup-aliases     ifup-post    init.ipv6-global
ifcfg-lo         ifdown-ppp     ifup-bnep        ifup-ppp     network-functions
ifdown           ifdown-routes  ifup-eth         ifup-routes  network-functions-ipv6
ifdown-bnep      ifdown-sit     ifup-ippp        ifup-sit
ifdown-eth       ifdown-Team    ifup-ipv6        ifup-Team
[root@test-centos7-node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.0.10
PREFIX=24
GATEWAY=192.168.0.1
DNS1=192.168.0.1
[root@test-centos7-node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens36
NAME=ens36
DEVICE=ens36
ONBOOT=yes
IPADDR=192.168.0.20
PREFIX=24
GATEWAY=192.168.0.1
DNS1=192.168.0.1
[root@test-centos7-node1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f2:82:0c brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.10/24 brd 192.168.0.255 scope global ens33
       valid_lft forever preferred_lft forever
3: ens36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f2:82:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.20/24 brd 192.168.0.255 scope global ens36
       valid_lft forever preferred_lft forever
5: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether e6:e0:29:24:b5:e1 brd ff:ff:ff:ff:ff:ff
[root@test-centos7-node1 ~]# time wget http://192.168.0.99/bigfile
--2020-01-10 10:42:55--  http://192.168.0.99/bigfile
Connecting to 192.168.0.99:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5211422720 (4.9G)
Saving to: ‘bigfile’

100%[=================================================>] 5,211,422,720 63.6MB/s   in 2m 48s

2020-01-10 10:45:43 (29.6 MB/s) - ‘bigfile’ saved [5211422720/5211422720]

real    2m48.065s
user    0m0.823s
sys     1m6.360s
[root@test-centos7-node1 ~]#
Note: without bond0, the average download speed was 29.6 MB/s.
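Putting the two runs side by side, 53.6 MB/s over bond0 versus 29.6 MB/s on a single NIC, the gain works out to roughly 1.8x:

```shell
# Quick arithmetic on the two average rates reported by wget above:
# 53.6 MB/s with bond0 (mode 0) vs 29.6 MB/s over a single NIC.
awk 'BEGIN { printf "%.1fx\n", 53.6 / 29.6 }'
```

That the gain falls short of a full 2x is expected: round-robin can reorder TCP segments across the two links, which costs some throughput.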
4. Implementing bonding mode 1 and testing
Backing up the NIC configuration files is the same as above and is not repeated here. The only change is in the bond0 configuration file: switch mode 0 to mode 1. The physical NIC configuration files are unchanged.
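The change itself is a single line, so it can be made with sed. A sketch, demonstrated on a temp copy so it is harmless to run; on the host, point sed at /etc/sysconfig/network-scripts/ifcfg-bond0 and then restart the network service:

```shell
# Sketch: switch the bond from mode 0 to mode 1 in the ifcfg file.
# A temp copy stands in for /etc/sysconfig/network-scripts/ifcfg-bond0.
f=$(mktemp)
echo 'BONDING_OPTS="miimon=100 mode=0"' > "$f"
sed -i 's/mode=0/mode=1/' "$f"
cat "$f"
```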
[root@test-centos7-node1 ~]# ip a l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether e6:e0:29:24:b5:e1 brd ff:ff:ff:ff:ff:ff
3: ens36: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether e6:e0:29:24:b5:e1 brd ff:ff:ff:ff:ff:ff
5: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether e6:e0:29:24:b5:e1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.33/24 brd 192.168.0.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::e4e0:29ff:fe24:b5e1/64 scope link
       valid_lft forever preferred_lft forever
[root@test-centos7-node1 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: ens33
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: ens33
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:f2:82:0c
Slave queue ID: 0

Slave Interface: ens36
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:f2:82:16
Slave queue ID: 0
[root@test-centos7-node1 ~]#
Note: both NICs are up, and the currently active slave is ens33.
Test: simulate unplugging ens33's cable and check whether ens36 takes over.
Note: when ens33 fails, ens36 takes over immediately. One thing to be aware of: if ens33 recovers, it does not take the active role back. It keeps watching ens36, and only when ens36 fails does ens33 become active again. (With no primary slave configured, active-backup mode does not preempt.)
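One way to watch the takeover is to poll the "Currently Active Slave" line. A sketch of the extraction, shown on a here-doc sample; on the server, run the same awk against /proc/net/bonding/bond0 before and after taking ens33 down (ip link set ens33 down, or pulling the cable):

```shell
# Sketch: extract the active slave from bonding status text.
# Here-doc sample; on a live host read /proc/net/bonding/bond0 instead.
awk -F': ' '/^Currently Active Slave/ {print $2}' <<'EOF'
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: ens36
MII Status: up
EOF
```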
5. Implementing bonding mode 3
The preparation is the same as above; just edit the bond0 configuration file /etc/sysconfig/network-scripts/ifcfg-bond0, change mode=1 to mode=3, and restart the network service.
[root@test-centos7-node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=YES
BOOTPROTO=static
IPADDR=192.168.0.33
PREFIX=24
GATEWAY=192.168.0.1
DNS1=192.168.0.1
BONDING_OPTS="miimon=100 mode=3"
[root@test-centos7-node1 ~]# systemctl restart network
[root@test-centos7-node1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether e6:e0:29:24:b5:e1 brd ff:ff:ff:ff:ff:ff
3: ens36: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether e6:e0:29:24:b5:e1 brd ff:ff:ff:ff:ff:ff
5: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether e6:e0:29:24:b5:e1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.33/24 brd 192.168.0.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::e4e0:29ff:fe24:b5e1/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
[root@test-centos7-node1 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (broadcast)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: ens33
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:f2:82:0c
Slave queue ID: 0

Slave Interface: ens36
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:f2:82:16
Slave queue ID: 0
[root@test-centos7-node1 ~]#
Note: this is broadcast mode. What does that mean? Traffic sent through bond0 is transmitted on every slave NIC, and each physical NIC that receives a request responds to it.
Test: ping 192.168.0.33 from 192.168.0.99
[root@test html]# ping 192.168.0.33
PING 192.168.0.33 (192.168.0.33) 56(84) bytes of data.
64 bytes from 192.168.0.33: icmp_seq=1 ttl=64 time=1.43 ms
64 bytes from 192.168.0.33: icmp_seq=1 ttl=64 time=1.51 ms (DUP!)
64 bytes from 192.168.0.33: icmp_seq=2 ttl=64 time=1.38 ms
64 bytes from 192.168.0.33: icmp_seq=2 ttl=64 time=1.45 ms (DUP!)
64 bytes from 192.168.0.33: icmp_seq=3 ttl=64 time=2.22 ms
64 bytes from 192.168.0.33: icmp_seq=3 ttl=64 time=2.28 ms (DUP!)
64 bytes from 192.168.0.33: icmp_seq=4 ttl=64 time=0.997 ms
64 bytes from 192.168.0.33: icmp_seq=4 ttl=64 time=1.06 ms (DUP!)
64 bytes from 192.168.0.33: icmp_seq=5 ttl=64 time=0.618 ms
64 bytes from 192.168.0.33: icmp_seq=5 ttl=64 time=0.764 ms (DUP!)
64 bytes from 192.168.0.33: icmp_seq=6 ttl=64 time=0.600 ms
64 bytes from 192.168.0.33: icmp_seq=6 ttl=64 time=0.670 ms (DUP!)
64 bytes from 192.168.0.33: icmp_seq=7 ttl=64 time=0.584 ms
64 bytes from 192.168.0.33: icmp_seq=7 ttl=64 time=0.707 ms (DUP!)
64 bytes from 192.168.0.33: icmp_seq=8 ttl=64 time=0.581 ms
64 bytes from 192.168.0.33: icmp_seq=8 ttl=64 time=0.651 ms (DUP!)
64 bytes from 192.168.0.33: icmp_seq=9 ttl=64 time=0.579 ms
64 bytes from 192.168.0.33: icmp_seq=9 ttl=64 time=0.650 ms (DUP!)
64 bytes from 192.168.0.33: icmp_seq=10 ttl=64 time=0.589 ms
64 bytes from 192.168.0.33: icmp_seq=10 ttl=64 time=0.661 ms (DUP!)
^C
--- 192.168.0.33 ping statistics ---
10 packets transmitted, 10 received, +10 duplicates, 0% packet loss, time 9006ms
rtt min/avg/max/mdev = 0.579/0.999/2.284/0.528 ms
[root@test html]#
Note: 192.168.0.99 sends one ping probe to 192.168.0.33 but receives two replies: one packet goes out and two duplicates come back. That seems odd at first, but it is because both physical NICs on 192.168.0.33 received the broadcast request and each one replied, so 192.168.0.99 sees two copies of every response.
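The duplicates are easy to quantify from saved ping output: every probe should come back once per slave NIC, so with two slaves the DUP! count equals the probe count. A sketch on a captured fragment of the output above:

```shell
# Sketch: count duplicated replies; with two slaves in broadcast mode,
# expect one DUP! line per probe sent.
grep -c 'DUP!' <<'EOF'
64 bytes from 192.168.0.33: icmp_seq=1 ttl=64 time=1.43 ms
64 bytes from 192.168.0.33: icmp_seq=1 ttl=64 time=1.51 ms (DUP!)
64 bytes from 192.168.0.33: icmp_seq=2 ttl=64 time=1.38 ms
64 bytes from 192.168.0.33: icmp_seq=2 ttl=64 time=1.45 ms (DUP!)
EOF
```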
6. Removing bonding and restoring the physical NICs
1) Restore the configuration files
[root@test-centos7-node1 ~]# cp /etc/sysconfig/network-scripts/{ifcfg-ens33.bak,ifcfg-ens33}
cp: overwrite ‘/etc/sysconfig/network-scripts/ifcfg-ens33’? y
[root@test-centos7-node1 ~]# cp /etc/sysconfig/network-scripts/{ifcfg-ens36.bak,ifcfg-ens36}
cp: overwrite ‘/etc/sysconfig/network-scripts/ifcfg-ens36’? y
[root@test-centos7-node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.0.10
PREFIX=24
GATEWAY=192.168.0.1
DNS1=192.168.0.1
[root@test-centos7-node1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens36
NAME=ens36
DEVICE=ens36
ONBOOT=yes
IPADDR=192.168.0.20
PREFIX=24
GATEWAY=192.168.0.1
DNS1=192.168.0.1
[root@test-centos7-node1 ~]# mv /etc/sysconfig/network-scripts/{ifcfg-bond0,ifcfg-bond0.bak}
[root@test-centos7-node1 ~]# ls /etc/sysconfig/network-scripts/
ifcfg-bond0.bak  ifcfg-lo     ifdown-ipv6    ifdown-sit       ifup-aliases  ifup-isdn    ifup-routes    ifup-wireless
ifcfg-ens33      ifdown       ifdown-isdn    ifdown-Team      ifup-bnep     ifup-plip    ifup-sit       init.ipv6-global
ifcfg-ens33.bak  ifdown-bnep  ifdown-post    ifdown-TeamPort  ifup-eth      ifup-plusb   ifup-Team      network-functions
ifcfg-ens36      ifdown-eth   ifdown-ppp     ifdown-tunnel    ifup-ippp     ifup-post    ifup-TeamPort  network-functions-ipv6
ifcfg-ens36.bak  ifdown-ippp  ifdown-routes  ifup             ifup-ipv6     ifup-ppp     ifup-tunnel
[root@test-centos7-node1 ~]#
2) Unload the bonding module (modprobe -r bonding)
3) Reboot
Note: normally, restarting the network service after changing the configuration files is enough. If you also unload the bonding module, however, the physical NICs will not come back up from a network-service restart alone; reboot the server and everything returns to normal.
Summary: the experiments above show that bonding is similar in spirit to RAID: it binds multiple NICs together, and each mode has its own characteristics. Mode 0 does round-robin load balancing, raising NIC performance while also providing NIC redundancy; mode 1 is active-backup, giving NIC high availability; mode 3 is broadcast, providing fault tolerance. For the other modes, see https://www.kernel.org/doc/Documentation/networking/bonding.txt