Docker has been around for quite a few years and is no longer anything new. Many companies and developers have come across it to some degree, but people with real hands-on experience are still relatively few. This tutorial series focuses on practice: I will try to keep it down-to-earth and explain things based on my own understanding. For the precise official concepts, please refer to the official documentation. The goals of this chapter are as follows.
Navigation for this series:
[Docker深入浅出系列 | 容器初体验](https://www.cnblogs.com/evan-liang/p/12237400.html)
This tutorial's demonstrations are based on the virtual machine, operating system, and Docker environment created in Chapter 1.
How do two containers on the same host communicate with each other?
How is network isolation between two containers achieved?
From outside the server, how can I use a browser to reach the resources of a container listening on port 8080 inside the server?
What are Docker's three network modes, and what are their characteristics?
In the networking world, two hosts communicate through their network cards (network interfaces): every packet that is sent or received passes through a NIC, so the two NICs effectively form a communication pipe between the hosts.
1. View link-layer interface information
ip link show
[root@10 /]# ip link show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:8a:fe:e6 brd ff:ff:ff:ff:ff:ff 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000 link/ether 08:00:27:ba:0a:28 brd ff:ff:ff:ff:ff:ff 4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default link/ether 02:42:12:9a:b1:a7 brd ff:ff:ff:ff:ff:ff
2. Show interface information at the IP level, which includes more detail such as IP addresses
ip a
[root@10 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:8a:fe:e6 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 57001sec preferred_lft 57001sec
    inet6 fe80::5054:ff:fe8a:fee6/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ba:0a:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.12/24 brd 192.168.100.255 scope global noprefixroute dynamic eth1
       valid_lft 143401sec preferred_lft 143401sec
    inet6 fe80::a00:27ff:feba:a28/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:12:9a:b1:a7 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
3. List all network interfaces on the system
ls /sys/class/net
[root@10 /]# ls /sys/class/net docker0 eth0 eth1 lo
Detailed breakdown of the key fields in the `ip a` output, using eth1 as an example:
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ba:0a:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.12/24 brd 192.168.100.255 scope global noprefixroute dynamic eth1
       valid_lft 143401sec preferred_lft 143401sec
    inet6 fe80::a00:27ff:feba:a28/64 scope link
       valid_lft forever preferred_lft forever
<BROADCAST,MULTICAST,UP,LOWER_UP>
This flag string tells us:
BROADCAST - the interface supports broadcast
MULTICAST - the interface supports multicast
UP - the network interface is enabled
LOWER_UP - the cable is plugged in and the device is connected to the network
Other fields:
mtu 1500 - the maximum transmission unit (packet size) is 1,500 bytes
qdisc pfifo_fast - the queueing discipline used for outgoing packets
state UP - the network interface is up
group default - the interface group
qlen 1000 - the transmit queue length
link/ether 08:00:27:ba:0a:28 - the MAC (hardware) address of the interface
brd ff:ff:ff:ff:ff:ff - the link-layer broadcast address
inet 192.168.100.12/24 - the bound IPv4 address
brd 192.168.100.255 - the IPv4 broadcast address
scope global - valid globally
dynamic eth1 - the address was assigned dynamically
valid_lft 143401sec - the valid lifetime of the IPv4 address
preferred_lft 143401sec - the preferred lifetime of the IPv4 address
inet6 fe80::a00:27ff:feba:a28/64 - the IPv6 address
scope link - valid only on this device (link-local)
valid_lft forever - the valid lifetime of the IPv6 address
preferred_lft forever - the preferred lifetime of the IPv6 address
The following command shows the configuration file (the ifcfg-* file) for a given interface:
cat /etc/sysconfig/network-scripts/ifcfg-eth0
You could add another IP by copying the ifcfg-* file and editing the copy (a sketch of that file-based approach follows the output below), but since my default network uses a dynamic IP, doing it with a command is more convenient:
[root@10 /]# ip addr add 192.168.0.100/24 dev eth0
As the output below shows, checking with `ip a` confirms that the new IP has been bound successfully:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 52:54:00:8a:fe:e6 brd ff:ff:ff:ff:ff:ff inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0 valid_lft 54131sec preferred_lft 54131sec inet 192.168.0.100/24 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::5054:ff:fe8a:fee6/64 scope link valid_lft forever preferred_lft forever
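For reference, the file-based alternative mentioned above would look roughly like the following. This is only a sketch of a hypothetical alias file /etc/sysconfig/network-scripts/ifcfg-eth0:0 (the addresses are illustrative, not taken from this machine), and it assumes the classic network service rather than NetworkManager:
DEVICE=eth0:0
BOOTPROTO=static
IPADDR=192.168.0.100
NETMASK=255.255.255.0
ONBOOT=yes
After saving the file, restart the network service (service network restart) for it to take effect.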
The added IP can be removed again with the following command:
ip addr delete 192.168.0.100/24 dev eth0
Restart the network service:
service network restart / systemctl restart network
Bring an interface up or down:
ifup/ifdown eth0 or ip link set eth0 up/down
From the previous section we know that two hosts communicate through a pair of connected NICs. So within a single Linux system, how do we simulate multiple network environments, and how do two containers achieve network isolation?
In Linux, network isolation is achieved with network namespaces, and Docker uses this very technique. A network namespace is a completely isolated, brand-new network environment with its own set of interfaces, routing table, ARP table, IP addresses, iptables, ebtables, and so on. In short, every network-related component is independent.
The `ip` command provides the `ip netns exec` subcommand for running commands inside a given network namespace. The command to run can be anything, not just network-related ones. After creating a network namespace, you operate on it with `ip netns exec <namespace name> <shell command>`; for example, to view the interfaces of namespace ns1: `ip netns exec ns1 ip a`.
Some commonly used network namespace commands:
ip netns list        # list network namespaces
ip netns add ns1     # add a network namespace
ip netns delete ns1  # delete a network namespace
1. Create a network namespace named ns1
[root@10 /]# ip netns add ns1
2. View the interfaces inside ns1
[root@10 /]# ip netns exec ns1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
At this point there is only a single lo interface, and its state is DOWN.
3. Bring up the lo interface in ns1
[root@10 /]# ip netns exec ns1 ifup lo
4. Check the interface state again
[root@10 /]# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
The state has now changed to UNKNOWN, and the loopback address 127.0.0.1/8 is bound to lo.
Repeat the same steps with ns1 replaced by ns2; at the end the interface state is likewise UNKNOWN:
[root@10 /]# ip netns exec ns2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
After this series of steps, the network layout of the two network namespaces looks like this:
At this point each network namespace has only its lo device; the two namespaces are unrelated and cannot communicate with each other, and the loopback address is reachable only by applications inside the same namespace.
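To get a feel for how complete this isolation is, here is a small sanity check, assuming the ns1 namespace created above: each namespace carries its own routing table and firewall rules, separate from the host's.
ip netns exec ns1 ip route      # prints nothing: ns1 has no routes of its own yet
ip netns exec ns1 iptables -S   # only the default ACCEPT policies; the host's rules are not visible here
ip route                        # the host's own routing table is unaffected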
Linux provides the Virtual Ethernet Pair technique: a pair of network interfaces is created, one in each namespace, which acts like a pipe between the two namespaces, as if you had run a network cable between them, so they can talk to each other. This is known as a veth pair for short.
A veth pair always comes as a pair: if you delete one end, the other end disappears automatically.
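A quick illustration of that pairing behaviour, using a throwaway pair (the names veth-a and veth-b are purely illustrative):
ip link add veth-a type veth peer name veth-b   # both ends are created by one command
ip link show veth-b                             # the peer end exists
ip link delete veth-a                           # delete one end...
ip link show veth-b                             # ...and the peer is gone too ("Device does not exist")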
1. Create a pair of veth virtual interfaces. They behave like a pipe: packets sent to veth-ns1 come out on veth-ns2, and packets sent to veth-ns2 come out on veth-ns1. It is as if the machine had two extra NICs connected to each other with a cable. The two virtual interfaces are named veth-ns1 and veth-ns2.
[root@10 /]# ip link add veth-ns1 type veth peer name veth-ns2
2. List the links; you can see the pair has been created
[root@10 /]# ip link 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:8a:fe:e6 brd ff:ff:ff:ff:ff:ff 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000 link/ether 08:00:27:ba:0a:28 brd ff:ff:ff:ff:ff:ff 4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default link/ether 02:42:12:9a:b1:a7 brd ff:ff:ff:ff:ff:ff 5: veth-ns2@veth-ns1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 2a:96:f4:e3:00:d2 brd ff:ff:ff:ff:ff:ff 6: veth-ns1@veth-ns2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether da:89:0e:56:03:3f brd ff:ff:ff:ff:ff:ff
3. Move the two virtual interfaces into the network namespaces ns1 and ns2
[root@10 /]# ip link set veth-ns1 netns ns1
[root@10 /]# ip link set veth-ns2 netns ns2
4. Check the host and the two network namespaces
ip link
ip netns exec ns1 ip link
ip netns exec ns2 ip link
Virtual interface in ns1:
6: veth-ns1@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether da:89:0e:56:03:3f brd ff:ff:ff:ff:ff:ff link-netnsid 1
Virtual interface in ns2:
5: veth-ns2@if6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 2a:96:f4:e3:00:d2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
The output shows that the pair of virtual interfaces has moved from the host into the two network namespaces. Their state is still DOWN, and their interface indexes pair up with each other (@if5 and @if6).
5. Bring up both virtual interfaces
[root@10 /]# ip netns exec ns1 ip link set veth-ns1 up
[root@10 /]# ip netns exec ns2 ip link set veth-ns2 up
The results are as follows:
6: veth-ns1@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000 link/ether da:89:0e:56:03:3f brd ff:ff:ff:ff:ff:ff link-netnsid 1
5: veth-ns2@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000 link/ether 2a:96:f4:e3:00:d2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
Both virtual interfaces are now UP, but they have no IP addresses yet, so they still cannot communicate.
6. Assign IP addresses to the virtual interfaces
[root@10 /]# ip netns exec ns1 ip addr add 192.168.0.11/24 dev veth-ns1
[root@10 /]# ip netns exec ns2 ip addr add 192.168.0.12/24 dev veth-ns2
7. Check with `ip a` that the IP addresses were configured successfully
[root@10 /]# ip netns exec ns1 ip a
[root@10 /]# ip netns exec ns2 ip a
8. Test whether ns1 and ns2 can reach each other
[root@10 /]# ip netns exec ns1 ping 192.168.0.12 PING 192.168.0.12 (192.168.0.12) 56(84) bytes of data. 64 bytes from 192.168.0.12: icmp_seq=1 ttl=64 time=0.048 ms
[root@10 /]# ip netns exec ns2 ping 192.168.0.11 PING 192.168.0.11 (192.168.0.11) 56(84) bytes of data. 64 bytes from 192.168.0.11: icmp_seq=1 ttl=64 time=0.041 ms 64 bytes from 192.168.0.11: icmp_seq=2 ttl=64 time=0.039 ms 64 bytes from 192.168.0.11: icmp_seq=3 ttl=64 time=0.041 ms
The two network namespaces can now communicate with each other.
A veth pair only solves communication between two namespaces; several namespaces cannot talk to each other directly because they sit on different networks. In everyday life we would use a switch to connect different networks; in Linux, we can achieve the same thing with a bridge.
As the diagram above shows, the two namespaces ns3 and ns4 are not connected directly by a veth pair this time, but indirectly through a bridge. Next I will walk you through setting this up step by step.
1. Create the network namespaces
To avoid confusion with the examples above, we create brand-new namespaces:
[root@10 /]# ip netns add ns3
[root@10 /]# ip netns add ns4
[root@10 /]# ip netns add bridge
2. Create a pair of veth interfaces
[root@10 /]# ip link add type veth
3. View the pair of virtual interfaces created on the host
[root@10 /]# ip link ... 7: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 36:8e:bc:43:f0:4a brd ff:ff:ff:ff:ff:ff 8: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether 92:c0:44:18:64:93 brd ff:ff:ff:ff:ff:ff
You can see that the host now has a pair of interfaces, veth0 and veth1.
4. Move veth0 into ns3 and veth1 into the bridge namespace, renaming them ns3-bridge and bridge-ns3 respectively
[root@10 /]# ip link set dev veth0 name ns3-bridge netns ns3
[root@10 /]# ip link set dev veth1 name bridge-ns3 netns bridge
5. Create another veth pair, move veth0 into ns4 and veth1 into the bridge namespace, and rename them ns4-bridge and bridge-ns4 respectively
[root@10 /]# ip link add type veth
[root@10 /]# ip link set dev veth0 name ns4-bridge netns ns4
[root@10 /]# ip link set dev veth1 name bridge-ns4 netns bridge
6. Check the interfaces inside each namespace
[root@10 /]# ip netns exec ns4 ip a ... 9: ns4-bridge@if10: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether ea:53:ea:e6:2e:2e brd ff:ff:ff:ff:ff:ff link-netnsid 1
[root@10 /]# ip netns exec ns3 ip a ... 7: ns3-bridge@if8: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 36:8e:bc:43:f0:4a brd ff:ff:ff:ff:ff:ff link-netnsid 1
[root@10 /]# ip netns exec bridge ip a ... 8: bridge-ns3@if7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 92:c0:44:18:64:93 brd ff:ff:ff:ff:ff:ff link-netnsid 0 10: bridge-ns4@if9: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 9e:f4:57:43:2e:2b brd ff:ff:ff:ff:ff:ff link-netnsid 1
The interface indexes in ns3 and bridge, and in ns4 and bridge, are consecutive, confirming that each pair of interfaces belongs to the same veth pair.
7. Create the br device (a network bridge) in the bridge namespace
Operating on a bridge requires bridge-utils, which can be installed with the following command:
yum install bridge-utils
Now create the br device:
[root@10 /]# ip netns exec bridge brctl addbr br
8. Bring up the br device
[root@10 /]# ip netns exec bridge ip link set dev br up
9. Bring up the two virtual interfaces in the bridge namespace
[root@10 /]# ip netns exec bridge ip link set dev bridge-ns3 up
[root@10 /]# ip netns exec bridge ip link set dev bridge-ns4 up
10. Attach the two virtual interfaces in the bridge namespace to the br device
[root@10 /]# ip netns exec bridge brctl addif br bridge-ns3
[root@10 /]# ip netns exec bridge brctl addif br bridge-ns4
11. Bring up the virtual interfaces in ns3 and ns4 and assign IP addresses
[root@10 /]# ip netns exec ns3 ip link set dev ns3-bridge up
[root@10 /]# ip netns exec ns3 ip address add 192.168.0.13/24 dev ns3-bridge
[root@10 /]# ip netns exec ns4 ip link set dev ns4-bridge up
[root@10 /]# ip netns exec ns4 ip address add 192.168.0.14/24 dev ns4-bridge
12. Test connectivity between the two namespaces
[root@10 /]# ip netns exec ns3 ping 192.168.0.14 PING 192.168.0.14 (192.168.0.14) 56(84) bytes of data. 64 bytes from 192.168.0.14: icmp_seq=1 ttl=64 time=0.061 ms 64 bytes from 192.168.0.14: icmp_seq=2 ttl=64 time=0.047 ms 64 bytes from 192.168.0.14: icmp_seq=3 ttl=64 time=0.042 ms
[root@10 /]# ip netns exec ns4 ping 192.168.0.13 PING 192.168.0.13 (192.168.0.13) 56(84) bytes of data. 64 bytes from 192.168.0.13: icmp_seq=1 ttl=64 time=0.046 ms 64 bytes from 192.168.0.13: icmp_seq=2 ttl=64 time=0.076 ms 64 bytes from 192.168.0.13: icmp_seq=3 ttl=64 time=0.081 ms
Docker has three common network modes: bridge, none, and host.
Docker network-related commands:
docker network ls - list available networks
docker network create - create a new network
docker network rm - remove a network
docker network inspect - inspect a network
docker network connect - connect a container to a network
docker network disconnect - disconnect a container from a network
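A minimal usage sketch of these commands; the network name demo-net and the container name some-container are illustrative only and are not used elsewhere in this chapter:
docker network create --driver bridge --subnet 172.19.0.0/16 demo-net   # create a bridge network with an explicit subnet
docker network inspect demo-net                                         # shows the subnet, gateway and attached containers
docker network connect demo-net some-container                          # attach a running container to the network
docker network disconnect demo-net some-container                       # detach it again
docker network rm demo-net                                              # a network can only be removed once no containers are attached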
bridge is the default network mode for Docker containers. It works on exactly the same principle as the multi-namespace virtualization example above: containers are connected indirectly through veth pairs and a bridge. When Docker starts, it automatically creates a virtual bridge named docker0 on the host; this is simply a Linux bridge and can be thought of as a software switch. It forwards traffic between the interfaces attached to it, creating a virtual shared network between the host and all containers.
Next, using the tomcat containers in the diagram above as an example, let's walk through bridge mode in practice.
1. Start two tomcat containers (using the tomcat image built in Chapter 1)
[root@10 vagrant]# docker run -d --name tomcat01 -p 8081:8080 tomcat
[root@10 /]# docker run -d --name tomcat03 -p 8082:8080 tomcat
2. View the network interfaces of the two tomcat containers
[root@10 /]# docker exec -it tomcat01 ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0 valid_lft forever preferred_lft forever
[root@10 /]# docker exec -it tomcat03 ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0 valid_lft forever preferred_lft forever
We can see that tomcat01 and tomcat03 have the IPs 172.17.0.2/16 and 172.17.0.3/16 respectively, and each has a veth-pair-based virtual interface, eth0@if12 and eth0@if16. The indexes are not consecutive, so, based on what we learned above, these two interfaces are not two ends of the same pair and cannot talk to each other directly; they must be connected indirectly through a bridge.
3. Check the interfaces on the CentOS host to verify that virtual interfaces corresponding to the tomcat containers exist there
[root@10 /]# ip a ... 4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 02:42:12:9a:b1:a7 brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:12ff:fe9a:b1a7/64 scope link valid_lft forever preferred_lft forever 12: veth068cc5c@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether 66:1c:13:cd:b4:78 brd ff:ff:ff:ff:ff:ff link-netnsid 5 inet6 fe80::641c:13ff:fecd:b478/64 scope link valid_lft forever preferred_lft forever 16: veth92816fa@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether 0a:cf:a0:8e:78:7f brd ff:ff:ff:ff:ff:ff link-netnsid 6 inet6 fe80::8cf:a0ff:fe8e:787f/64 scope link valid_lft forever preferred_lft forever
Sure enough, the host does have virtual interfaces matching the two tomcat containers, along with a network interface for the docker0 virtual bridge. This way of connecting containers is called bridge mode.
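To double-check that those two veth interfaces really are attached to docker0, here are two quick ways (the veth names come from the output above and will differ on your machine):
brctl show docker0            # needs bridge-utils; should list veth068cc5c and veth92816fa under docker0
ip link show master docker0   # the iproute2 equivalent: shows the interfaces enslaved to docker0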
4. Inspect the bridge network's configuration with `docker network inspect bridge`
[root@10 /]# docker network inspect ... "Containers": { "2f3c3081b8bd409334f21da3441f3b457e243293f3180d54cfc12d5902ad4dbc": { "Name": "tomcat03", "EndpointID": "2375535cefdbccd3434d563ef567a1032694bdfb4356876bd9d8c4e07b1f222b", "MacAddress": "02:42:ac:11:00:03", "IPv4Address": "172.17.0.3/16", "IPv6Address": "" }, "c13db4614a49c302121e467d8aa8ea4f008ab55f83461430d3dd46e59085937f": { "Name": "tomcat01", "EndpointID": "99a04efa9c7bdb0232f98d25f490682b065de1ce076b31487778fa257552a2ba", "MacAddress": "02:42:ac:11:00:02", "IPv4Address": "172.17.0.2/16", "IPv6Address": "" } },
You can see that both containers are attached to the bridge network.
5. Test that the two tomcat containers can reach each other
[root@10 /]# docker exec -it tomcat01 ping 172.17.0.3 PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data. 64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.054 ms 64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.040 ms 64 bytes from 172.17.0.3: icmp_seq=3 ttl=64 time=0.039 ms
[root@10 /]# docker exec -it tomcat03 ping 172.17.0.2 PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data. 64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.046 ms 64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.042 ms 64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.039 ms
The two containers can indeed reach each other.
6. Access an Internet site from inside a container
A Docker container can reach the host's network through the bridge, so Internet access is provided indirectly via iptables NAT forwarding. This lets many internal addresses share one external address for outbound traffic, which alleviates the shortage of public IPv4 addresses.
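A quick way to see that NAT rule on the host (the exact rule text varies between Docker versions):
iptables -t nat -nL POSTROUTING
# Expect a MASQUERADE rule covering the bridge subnet, roughly:
# MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0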
Next, pick one of the tomcat containers we just created and test this. If the container cannot reach the Internet, you may need to restart the Docker service with `systemctl restart docker`.
[root@10 /]# docker exec -it tomcat01 curl -I https://www.baidu.com HTTP/1.1 200 OK Accept-Ranges: bytes Cache-Control: private, no-cache, no-store, proxy-revalidate, no-transform Connection: keep-alive Content-Length: 277 Content-Type: text/html Date: Thu, 06 Feb 2020 06:03:48 GMT Etag: "575e1f72-115" Last-Modified: Mon, 13 Jun 2016 02:50:26 GMT Pragma: no-cache Server: bfe/1.0.8.18
The response above shows that the container successfully reached the Baidu website.
In host mode the container shares the host machine's network, i.e. it uses the same network namespace as the host.
1. Create a container named tomcat-host with the network mode set to host
[root@10 /]# docker run -d --name tomcat-host --network host tomcat ee3c6d2a5f61caa371088f40bc0c5d11101d12845cdee24466322a323b11ee11
2. Looking at the container's network interfaces, you will find they are identical to the CentOS host's
[root@10 /]# docker exec -it tomcat-host ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 52:54:00:8a:fe:e6 brd ff:ff:ff:ff:ff:ff inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0 valid_lft 50028sec preferred_lft 50028sec inet6 fe80::5054:ff:fe8a:fee6/64 scope link valid_lft forever preferred_lft forever 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 08:00:27:ba:0a:28 brd ff:ff:ff:ff:ff:ff inet 192.168.100.12/24 brd 192.168.100.255 scope global noprefixroute dynamic eth1 valid_lft 156886sec preferred_lft 156886sec inet6 fe80::a00:27ff:feba:a28/64 scope link valid_lft forever preferred_lft forever 4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:12:9a:b1:a7 brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:12ff:fe9a:b1a7/64 scope link valid_lft forever preferred_lft forever
3. Inspecting the network shows that the container has not been assigned an IP address of its own
"Containers": { "ee3c6d2a5f61caa371088f40bc0c5d11101d12845cdee24466322a323b11ee11": { "Name": "tomcat-host", "EndpointID": "53565ff879878bfd10fc5843582577d54eb68b14b29f4b1ff2e213d38e2af7ce", "MacAddress": "", "IPv4Address": "", "IPv6Address": "" } }
As mentioned above, the none network mode gives the container an independent namespace but performs no network configuration at all by default; the container is isolated from the outside world and you have to set networking up yourself.
1. Create a container tomcat-none with the network set to none
[root@10 /]# docker run -d --name tomcat-none --network none tomcat d90808e0b7455c2f375c3d88fa18a1872b4a03e2112bff3db0b3996d16523b1a
2. View its network interfaces
[root@10 /]# docker exec -it tomcat-none ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
There is only the lo device with the local loopback address; no other interfaces exist.
3. Inspect the Docker network information
"Containers": { "d90808e0b7455c2f375c3d88fa18a1872b4a03e2112bff3db0b3996d16523b1a": { "Name": "tomcat-none", "EndpointID": "4ea757bbd108ac783bd1257d33499b7b77cd7ea529d4e6c761923eb596dc446c", "MacAddress": "", "IPv4Address": "", "IPv6Address": "" } }
The container has not been assigned any address.
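If you did want to wire such a container up by hand, one possible sketch reuses the veth technique from earlier in this chapter. It assumes the tomcat-none container above, has to run as root on the host, and the 192.168.200.0/24 addresses are arbitrary illustrative values:
pid=$(docker inspect -f '{{.State.Pid}}' tomcat-none)
mkdir -p /var/run/netns
ln -sf /proc/$pid/ns/net /var/run/netns/tomcat-none   # expose the container's netns to `ip netns`
ip link add veth-host type veth peer name veth-cont   # one end stays on the host, the other goes into the container
ip link set veth-cont netns tomcat-none
ip netns exec tomcat-none ip addr add 192.168.200.2/24 dev veth-cont
ip netns exec tomcat-none ip link set veth-cont up
ip addr add 192.168.200.1/24 dev veth-host
ip link set veth-host up
ping -c 1 192.168.200.2                               # the host can now reach the container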
We can also create our own networks; by default they use the bridge driver. Next we demonstrate how to connect containers that sit on different networks.
1. Create a new network named custom; the default driver is bridge
[root@10 /]# docker network create custom af392e4739d810b2e12219c21f505135537e95ea0afcb5075b3b1a5622a66112
2. List the current Docker networks
[root@10 /]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
ce20377e3f10        bridge              bridge              local
af392e4739d8        custom              bridge              local
afc6ca3cf515        host                host                local
94cfa528d194        none                null                local
3. Inspect the custom network
[root@10 /]# docker network inspect custom [ { "Name": "custom", "Id": "af392e4739d810b2e12219c21f505135537e95ea0afcb5075b3b1a5622a66112", "Created": "2020-02-05T23:49:08.321895241Z", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": {}, "Config": [ { "Subnet": "172.18.0.0/16", "Gateway": "172.18.0.1" } ] }, "Internal": false, "Attachable": false, "Ingress": false, "ConfigFrom": { "Network": "" }, "ConfigOnly": false, "Containers": {}, "Options": {}, "Labels": {} } ]
4. Create a container tomcat-custom attached to the custom network
[root@10 /]# docker run -d --name tomcat-custom --network custom tomcat 2e77115f42e36827646fd6e3abacc0594ff71cd1847f6fbffda28e22fb55e9ea
5. View the network interfaces of tomcat-custom
[root@10 /]# docker exec -it tomcat-custom ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 22: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0 valid_lft forever preferred_lft forever
6. Test whether tomcat-custom can reach the tomcat01 container created earlier
[root@10 /]# docker exec -it tomcat-custom ping 192.17.0.2 PING 192.17.0.2 (192.17.0.2) 56(84) bytes of data. --- 192.17.0.2 ping statistics --- 4 packets transmitted, 0 received, 100% packet loss, time 3001ms
The result above shows that, by default, tomcat-custom and tomcat01 cannot reach each other, because they are on different networks (custom and bridge).
7. Add tomcat01 to the custom network
[root@10 /]# docker network connect custom tomcat01
8. Check the custom network information again
"Containers": { "2e77115f42e36827646fd6e3abacc0594ff71cd1847f6fbffda28e22fb55e9ea": { "Name": "tomcat-custom", "EndpointID": "bf2b94f3b580b9df0ca9f6ce2383198961711d1b3d19d33bbcf578d81157e47f", "MacAddress": "02:42:ac:12:00:02", "IPv4Address": "172.18.0.2/16", "IPv6Address": "" }, "c13db4614a49c302121e467d8aa8ea4f008ab55f83461430d3dd46e59085937f": { "Name": "tomcat01", "EndpointID": "f97305672ae617f207dfef1b3dc250d2b8d6a9ec9b36b1b0115e2456f18c44c6", "MacAddress": "02:42:ac:12:00:03", "IPv4Address": "172.18.0.3/16", "IPv6Address": "" } }
Both containers are now attached to the custom network: tomcat-custom has 172.18.0.2/16 and tomcat01 has 172.18.0.3/16.
9. With the network configured, try again to reach tomcat-custom from tomcat01
[root@10 /]# docker exec -it tomcat01 ping 172.18.0.3 PING 172.18.0.3 (172.18.0.3) 56(84) bytes of data. 64 bytes from 172.18.0.3: icmp_seq=1 ttl=64 time=0.032 ms 64 bytes from 172.18.0.3: icmp_seq=2 ttl=64 time=0.080 ms 64 bytes from 172.18.0.3: icmp_seq=3 ttl=64 time=0.055 ms
The result shows that tomcat01 can now communicate with tomcat-custom, because they are on the same network.
10. If you now check the interfaces on the CentOS host, you can see the corresponding virtual interfaces
[root@10 /]# ip a ... 23: vethc30bd52@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-af392e4739d8 state UP group default link/ether 2e:a1:c8:a2:e5:83 brd ff:ff:ff:ff:ff:ff link-netnsid 5 inet6 fe80::2ca1:c8ff:fea2:e583/64 scope link valid_lft forever preferred_lft forever 25: veth69ea87b@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether 92:eb:8f:65:fe:7a brd ff:ff:ff:ff:ff:ff link-netnsid 6 inet6 fe80::90eb:8fff:fe65:fe7a/64 scope link valid_lft forever preferred_lft forever 27: veth068cc5c@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-af392e4739d8 state UP group default link/ether ea:44:90:6c:0d:49 brd ff:ff:ff:ff:ff:ff link-netnsid 6 inet6 fe80::e844:90ff:fe6c:d49/64 scope link valid_lft forever preferred_lft forever
When a user creates a custom network, the Docker engine by default runs an embedded DNS server for the containers that join it, so containers on the same network can reach each other by container name. This avoids having to republish downstream systems just because an IP address changed.
For networks other than user-defined ones, see the official documentation on configuring container DNS.
1. Test this using the custom network and containers created above
[root@10 /]# docker exec -it tomcat01 ping tomcat-custom PING tomcat-custom (172.18.0.2) 56(84) bytes of data. 64 bytes from tomcat-custom.custom (172.18.0.2): icmp_seq=1 ttl=64 time=0.031 ms 64 bytes from tomcat-custom.custom (172.18.0.2): icmp_seq=2 ttl=64 time=0.038 ms 64 bytes from tomcat-custom.custom (172.18.0.2): icmp_seq=3 ttl=64 time=0.040 ms
[root@10 /]# docker exec -it tomcat-custom ping tomcat01 PING tomcat01 (172.18.0.3) 56(84) bytes of data. 64 bytes from tomcat01.custom (172.18.0.3): icmp_seq=1 ttl=64 time=0.031 ms 64 bytes from tomcat01.custom (172.18.0.3): icmp_seq=2 ttl=64 time=0.038 ms 64 bytes from tomcat01.custom (172.18.0.3): icmp_seq=3 ttl=64 time=0.040 ms
tomcat01 and tomcat-custom can reach each other by container name.
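The name resolution above comes from Docker's embedded DNS server, which listens on 127.0.0.11 inside containers attached to user-defined networks. A quick way to confirm this (assuming cat is available in the image):
docker exec -it tomcat01 cat /etc/resolv.conf
# nameserver 127.0.0.11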
Port mapping should be familiar: we normally access a service such as Tomcat via an IP plus a service port, e.g. localhost:8080. A container, however, lives inside the host machine, so if we want it to be reachable from outside, its port also has to be mapped to a host port and accessed via the host's IP and port, as shown in the figure below:
Next let's practice Docker's port mapping operations.
1. Create a tomcat-port container, mapping host port 8999 to the container's port 8080
[root@10 /]# docker run -d --name tomcat-port -p 8999:8080 tomcat 0b5b014ae2552b85aff55b385ba20518b38509b5670a95ad9eea09475ea26629
2. Access the service from inside the container
[root@10 /]# docker exec -it tomcat-port curl -i localhost:8080 HTTP/1.1 404 Content-Type: text/html;charset=utf-8 Content-Language: en Content-Length: 713 Date: Thu, 06 Feb 2020 07:43:59 GMT
The result above shows that the request actually succeeded; the 404 only appears because I have not configured Tomcat's default pages.
3. Access the container from the CentOS host
Here we access it via the container's IP and port:
[root@10 /]# curl -I 172.17.0.4:8080 HTTP/1.1 404 Content-Type: text/html;charset=utf-8 Content-Language: en Transfer-Encoding: chunked Date: Thu, 06 Feb 2020 07:49:41 GMT
As you can see, this also works.
4. Access it from the physical machine, i.e. my macOS host
This time we use the CentOS VM's IP plus the mapped port:
192:centos7 evan$ curl -I 192.168.100.12:8999 HTTP/1.1 404 Content-Type: text/html;charset=utf-8 Content-Language: en Transfer-Encoding: chunked Date: Thu, 06 Feb 2020 07:52:52 GMT
As you can see, this works as well.
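Under the hood the mapping is implemented with an iptables DNAT rule (together with the docker-proxy process). Two ways to inspect it for the tomcat-port container above (the exact rule text varies by Docker version):
docker port tomcat-port
# 8080/tcp -> 0.0.0.0:8999
iptables -t nat -nL DOCKER
# Expect a DNAT rule roughly like: tcp dpt:8999 to:172.17.0.4:8080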
This chapter has covered how Docker networking works on a single host and the underlying Linux virtualization techniques. To wrap up, let's revisit the questions from the beginning; by now you probably already have the answers in mind.
1. How do two containers on the same host communicate?
Docker implements container-to-container communication with the Virtual Ethernet Pair technique, but not as a direct end-to-end link. In the default bridge mode, the Docker engine creates a veth pair between each container and the host network and connects them indirectly through a bridge, while network namespaces provide the network isolation.
2. How do you access a container from outside the server?
Access from outside the server goes through port mapping: the IP address and port you hit are not the container's real IP and port but a mapped port on the host machine. So to reach a container from outside, you use the host's IP or domain name plus the mapped host port.
3. What are Docker's three network modes?
Docker's three common network modes are bridge, none, and host.
bridge - Docker's default network mode. Each container gets its own namespace, custom networks are supported, and network isolation is provided, which satisfies containers' isolation requirements.
host - The container shares the host's network namespace and network stack and can use all of the host's network interfaces directly; this is the simplest and lowest-latency mode.
none - The container gets its own network namespace, but Docker performs no network configuration at all: the container has no NIC, IP, or routes, leaving developers free to customize networking as needed.