Keepalived+LVS DR
- The full architecture needs two servers (both in the dir role), each running keepalived, so that the director itself is highly available; keepalived also provides load balancing on its own, so this experiment can get by with a single keepalived machine
- keepalived drives IPVS itself, so the lvs_dir script from the earlier experiment no longer has to be written or run (the ipvsadm command is still kept around to inspect and clear rules)
- The three machines (plus the VIP):
- dir (runs keepalived) 133.130
- rs1 133.132
- rs2 133.133
- vip 133.200
- Edit the keepalived config file: vim /etc/keepalived/keepalived.conf (config contents: see below)
- Change the IP addresses inside it to match your own
- Run ipvsadm -C to clear out the old ipvsadm rules
- systemctl restart network clears out any VIP left over from the earlier experiment
- On both rs machines the /usr/local/sbin/lvs_rs.sh script still has to be run
- A nice feature of keepalived: when an rs goes down, it stops forwarding requests to it
- Testing
Keepalived+LVS DR
- The full architecture needs two servers (both in the dir role), each running keepalived, so that the director itself is highly available; keepalived also provides load balancing on its own, so this experiment can get by with a single keepalived machine
- Why add keepalived to LVS at all?
- Reason 1: LVS has one critical role, the dir (director). If the director goes down, all access stops, because every request enters through it. So the director has to be made highly available, which keepalived provides, and keepalived can also take over the load-balancing job itself.
- Reason 2: with plain LVS and no extra measures, if an rs machine is shut down (or crashes), LVS keeps forwarding traffic to it and users get failed requests; LVS by itself is not that smart. With keepalived, if one rs in the cluster goes down the web service keeps working and users never hit a dead link. In a real deployment of this architecture there would certainly be two keepalived machines.
- Because keepalived drives IPVS itself, there is no need to write and run the director .sh script any more (the ipvsadm command is still used below to inspect the rules)
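These notes do not show the installation step; on the dir, keepalived would typically be installed from the base repo (a sketch, assuming CentOS 7 as in the earlier experiments):
# on the dir (A machine)
yum install -y keepalived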
准备工做
- 准备三台机器,分别为
- dir(安装keepalived)74.129
- rs1 74.131
- rs2 74.133
- vip 74.200
- 在dir分发器(A机器)上,清空ipvsadm规则,并查看ipvsadm规则,会发现已经清空
[root@hf-01 ~]# ipvsadm -C
[root@hf-01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@hf-01 ~]#
- On the director (the A machine), edit the config file /etc/keepalived/keepalived.conf; the contents are below
- The file was edited in a previous experiment, so simply delete the old contents and paste in the new config
- In the config, adjust the NIC name, the VIP, and the IPs of the rs machines
[root@hf-01 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    # on the backup server this is BACKUP
    state MASTER
    # NIC the VIP is bound to (Aming's course example uses ens33; this machine uses ens36), change it to match your own NIC
    interface ens36
    virtual_router_id 51
    # on the backup server this is 90
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass aminglinux
    }
    virtual_ipaddress {
        192.168.74.200    # the VIP
    }
}
virtual_server 192.168.74.200 80 {    # the VIP
    # query realserver state every 10 seconds
    delay_loop 10
    # LVS scheduling algorithm
    lb_algo wlc
    # DR mode
    lb_kind DR
    # connections from the same IP go to the same realserver for 60 seconds
    persistence_timeout 60
    # check realserver state over TCP
    protocol TCP
    real_server 192.168.74.131 80 {    # rs1
        # weight
        weight 100
        TCP_CHECK {
            # 10-second connect timeout
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.74.133 80 {    # rs2
        weight 100
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
Save and exit
- Start the nginx service, check the nginx processes, and check the keepalived service
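For the full HA pair, a second dir would run keepalived with an almost identical config; a minimal sketch of the lines that differ on the backup machine, following the comments above:
    state BACKUP       # instead of MASTER on the standby dir
    priority 90        # lower than the master's 100
    # virtual_router_id, the authentication block and the virtual_server section stay identical to the master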
[root@hf-01 ~]# systemctl start nginx
[root@hf-01 ~]# ps aux |grep nginx
root 2952 0.0 0.2 123372 2104 ? Ss 06:55 0:00 nginx: master process /usr/sbin/nginx
nginx 2953 0.0 0.3 123836 3588 ? S 06:55 0:00 nginx: worker process
root 2994 0.0 0.0 112672 980 pts/0 R+ 07:12 0:00 grep --color=auto nginx
[root@hf-01 ~]# ps aux |grep keep
root 3006 0.0 0.1 121324 1404 ? Ss 07:16 0:00 /usr/sbin/keepalived -D
root 3007 0.0 0.2 121448 2732 ? S 07:16 0:00 /usr/sbin/keepalived -D
root 3008 0.0 0.2 121324 2336 ? S 07:16 0:00 /usr/sbin/keepalived -D
root 3014 0.0 0.0 112672 984 pts/0 R+ 07:16 0:00 grep --color=auto keep
[root@hf-01 ~]#
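The command that starts keepalived itself is not shown above; on CentOS 7 it is simply:
systemctl start keepalived
systemctl enable keepalived    # optional: start it on boot as well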
- Check the IPs; the virtual IP (192.168.74.200) is now there
[root@hf-01 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:ff:fe:93 brd ff:ff:ff:ff:ff:ff
inet 192.168.74.129/24 brd 192.168.74.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 192.168.74.150/24 brd 192.168.74.255 scope global secondary eno16777736:0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:feff:fe93/64 scope link
valid_lft forever preferred_lft forever
3: ens36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:ff:fe:9d brd ff:ff:ff:ff:ff:ff
inet 192.168.74.129/24 brd 192.168.74.255 scope global ens36
valid_lft forever preferred_lft forever
inet 192.168.74.200/32 brd 192.168.74.200 scope global ens36:2
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:feff:fe9d/64 scope link
valid_lft forever preferred_lft forever
[root@hf-01 ~]#
- Check the ipvsadm rules
[root@hf-01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.74.200:80 wlc persistent 60
-> 192.168.74.131:80 Route 100 0 0
-> 192.168.74.133:80 Route 100 0 0
[root@hf-01 ~]#
- Now stop the keepalived service and check the IPs again; the virtual IP is gone
[root@hf-01 ~]# systemctl stop keepalived
[root@hf-01 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:ff:fe:93 brd ff:ff:ff:ff:ff:ff
inet 192.168.74.129/24 brd 192.168.74.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 192.168.74.150/24 brd 192.168.74.255 scope global secondary eno16777736:0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:feff:fe93/64 scope link
valid_lft forever preferred_lft forever
3: ens36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:ff:fe:9d brd ff:ff:ff:ff:ff:ff
inet 192.168.74.129/24 brd 192.168.74.255 scope global ens36
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:feff:fe9d/64 scope link
valid_lft forever preferred_lft forever
[root@hf-01 ~]#
- Check the rules again; they have been removed as well
[root@hf-01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@hf-01 ~]#
- Now start keepalived again and check the rules once more
[root@hf-01 ~]# systemctl start keepalived
[root@hf-01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.74.200:80 wlc persistent 60
-> 192.168.74.131:80 Route 100 0 0
-> 192.168.74.133:80 Route 100 0 0
[root@hf-01 ~]#
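To actually test the load balancing (the "Testing" item from the outline), request the VIP from a separate client machine rather than from the dir or an rs; a minimal sketch, assuming the two rs index pages return different content so they can be told apart:
# on a client machine in the same network
curl 192.168.74.200
# persistence_timeout is 60, so repeated requests from the same client
# stick to one rs for 60 seconds before they may be sent to the other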
- Two things to note
echo 1 > /proc/sys/net/ipv4/ip_forward    # turn on IP forwarding (a persistent version is sketched after these notes)
- The /usr/local/sbin/lvs_rs.sh script created on the rs machines still has to be run on both of them
#!/bin/bash
vip=192.168.74.200
# bind the vip to lo so that each rs can return responses directly to the client
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
# change the arp kernel parameters below so the rs do not answer ARP requests for the vip
# (the dir must own the vip on the network) and replies reach the client correctly
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
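The echo into /proc in the first note only lasts until the next reboot; a sketch of making it persistent with sysctl:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p    # reload; the setting now survives a reboot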
Summary
- keepalived has a very useful feature: when an rs goes down it is promptly kicked out of the ipvsadm cluster and no packets are sent to it any more, which neatly avoids users hitting dead connections
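A quick way to see this health check in action (a sketch, assuming the rs machines serve HTTP with nginx as in the earlier LVS experiments):
# on rs1 (192.168.74.131): stop its web service
systemctl stop nginx

# on the dir: after the next check interval (delay_loop 10) rs1 drops out
# of the table, and it comes back automatically once nginx is started again
ipvsadm -ln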