Deploying a Complete Front-End/Back-End Master-Slave Hot-Standby System with Docker

Problems the System Solves

  1. Works around the shortage of physical machines
  2. Makes fuller use of each physical machine's resources
  3. Provides high availability for the system
  4. Allows updates without downtime

Deployment Prerequisites

  1. One server running Ubuntu 16.04.6 with internet access
  2. At least 4 CPU cores and 8 GB of RAM; the more memory the better

Deployment Architecture Diagram

Note: the diagram shows the design of the whole system. The upper half is one application server and the lower half is the database server. This document only covers deployment of the application server; the database half is comparatively simple. The IP addresses marked in the diagram are the ones I use in my own virtual machines; set them according to your own environment.

Background Concepts

1 LVS

LVS (Linux Virtual Server) is an open-source project that provides layer-4 (transport-layer) load balancing. It currently offers three IP load-balancing techniques (VS/NAT, VS/TUN, and VS/DR) and eight scheduling algorithms (rr, wrr, lc, wlc, lblc, lblcr, dh, sh).
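To make these concepts concrete, here is a hedged sketch (not part of the deployment below, where keepalived drives LVS for us) of configuring a VS/NAT virtual server by hand with ipvsadm, using the rr scheduler and the example addresses that appear later in this document:

sudo apt-get install -y ipvsadm
# create a TCP virtual service on the VIP with the round-robin (rr) scheduler
sudo ipvsadm -A -t 192.168.227.88:80 -s rr
# attach two real servers in NAT mode (-m = masquerading, i.e. VS/NAT)
sudo ipvsadm -a -t 192.168.227.88:80 -r 172.18.0.11:80 -m
sudo ipvsadm -a -t 192.168.227.88:80 -r 172.18.0.12:80 -m
# list the resulting virtual server table
sudo ipvsadm -Ln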

2 What Keepalived Adds

LVS performs load balancing but has no health checking: if a real server (RS) fails, LVS keeps forwarding requests to it and those requests are wasted. Keepalived adds health checks and at the same time makes LVS itself highly available, removing the LVS single point of failure; in fact, keepalived was originally written for LVS.

3 Keepalived and How It Works

Keepalived is a piece of software comparable to a layer 2/4/7 switching mechanism. It is a service used in Linux cluster management to keep a cluster highly available, and its job is to prevent single points of failure.

Keepalived is built on the VRRP protocol: it isolates failed machines and fails over between load balancers, preventing a single point of failure. Before looking at how keepalived works, it helps to understand VRRP.

4 The VRRP Protocol

VRRP (Virtual Router Redundancy Protocol) is a fault-tolerance protocol: when a host's next-hop router fails, another router takes over the failed router's work so that network communication stays continuous and reliable. Before describing VRRP, here are the relevant terms:

Virtual router: a group made up of one Master router and several Backup routers. Hosts use the virtual router as their default gateway.

VRID: the identifier of a virtual router. A set of routers with the same VRID forms one virtual router.

Master router: the router in the virtual router that actually forwards packets.

Backup router: a router that can take over the Master's work when the Master fails.

Virtual IP address: the IP address of the virtual router. A virtual router can own one or more IP addresses.

IP address owner: the router whose interface IP address is the same as the virtual IP address.

Virtual MAC address: each virtual router owns one virtual MAC address of the form 00-00-5E-00-01-{VRID}. Normally the virtual router answers ARP requests with this virtual MAC address; it answers with the interface's real MAC address only under special configuration.

Priority: VRRP uses priority to determine the role of each router within the virtual router.

Non-preemptive mode: if the Backup routers run in non-preemptive mode, then as long as the Master has not failed, a Backup will not become Master even if it is later given a higher priority.

Preemptive mode: if a Backup runs in preemptive mode, it compares its own priority with the priority carried in the VRRP advertisements it receives. If its priority is higher than the current Master's, it preempts and becomes the new Master; otherwise it stays in the Backup state.

VRRP groups the routers of a LAN into a VRRP backup group that behaves, functionally, like a single router identified by a virtual router ID (VRID). The virtual router has its own virtual IP address and virtual MAC address, and to the outside it looks exactly like a physical router. Hosts on the LAN set the virtual router's IP address as their default gateway and communicate with external networks through it.

The virtual router runs on top of real physical routers: one Master and several Backups. While the Master works normally, hosts on the LAN communicate with the outside world through it. When the Master fails, one of the Backup routers becomes the new Master and takes over packet forwarding. (This is router high availability.)

5 How VRRP Works

  1. The routers in a virtual router elect a Master based on priority. The Master sends gratuitous ARP packets to announce its virtual MAC address to the devices and hosts connected to it, and then takes over packet forwarding;
  2. The Master periodically sends VRRP advertisements announcing its configuration (priority, etc.) and health (these advertisements can be observed with the capture sketch after this list);
  3. If the Master fails, the Backup routers in the virtual router elect a new Master based on priority;
  4. When the virtual router switches from one device to another, the new Master simply sends an ARP packet carrying the virtual router's MAC address and virtual IP address, which updates the ARP entries of the hosts and devices attached to it. Hosts on the network never notice that the Master has moved to a different device.
  5. When a Backup's priority is higher than the Master's, the Backup's operating mode (preemptive or non-preemptive) determines whether a new Master is elected.
  6. VRRP priorities range from 0 to 255 (a larger value means a higher priority).
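The advertisements described in steps 1 and 2 are multicast to 224.0.0.18 as IP protocol 112, so once keepalived is running later in this guide they can be watched with tcpdump (a sketch; replace ens33 with your own NIC and make sure tcpdump is installed):

# print each periodic VRRP advertisement (VRID, priority, interval)
sudo tcpdump -i ens33 -n 'ip proto 112'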

6 Docker

  1. Docker is the world's leading software container platform.
  2. Docker is written in Go (a language released by Google) and builds on Linux kernel features such as cgroups, namespaces, and union filesystems like AUFS to package and isolate processes; it is operating-system-level virtualization. Because the isolated processes are independent of the host and of each other, they are called containers. Docker was originally implemented on top of LXC.
  3. Docker automates repetitive tasks such as setting up and configuring development environments, freeing developers to focus on what really matters: building great software.
  4. Users can easily create and use containers and put their own applications into them. Containers can be versioned, copied, shared, and modified just like ordinary code.

You can also refer to an article I wrote: juejin.im/post/5dae55…

7 Nginx

Nginx is a high-performance HTTP server / reverse proxy server and an IMAP/POP3 mail proxy server.

Typical uses: clustering (higher throughput, less load on any single server), reverse proxying (hiding the real IP addresses), virtual servers, static file serving (separating static from dynamic content), solving cross-origin (CORS) problems, and building an enterprise-grade API gateway with nginx.

Deployment

Install Docker

1 Remove any old Docker version (skip this step if Docker was never installed)

sudo apt-get remove docker docker-engine docker.io containerd runc

2 Update the package index

sudo apt-get update

3 Allow apt to install packages from a repository over HTTPS

sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common

4 Add the GPG key

curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

5 Verify the key fingerprint

sudo apt-key fingerprint 0EBFCD88

6 Add the stable repository and update the index

sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update

7 List the available Docker versions

apt-cache madison docker-ce

8 Install a specific Docker version

sudo apt-get install -y docker-ce=17.12.1~ce-0~ubuntu

9 Verify that Docker installed successfully

docker --version

10 Add your user to the docker group so docker can run without sudo

sudo gpasswd -a <username> docker  # replace <username> with your own login name

11 Restart the Docker service and refresh the group membership; the installation is complete

sudo service docker restart
newgrp docker
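As a quick sanity check (assuming the host can reach Docker Hub), the following should now work without sudo:

docker run --rm hello-world   # pulls a tiny test image and prints a greeting
docker info                   # shows the daemon status and storage driver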

Create a Custom Docker Network

A container's IP address changes when the container restarts, which is not what we want: each container should keep a fixed IP. Containers use the default docker0 bridge, on which IPs cannot be pinned, so we create our own bridge network and assign each container an IP on it; the IP then survives restarts.

docker network create --subnet=172.18.0.0/24 mynet

Use ifconfig to check the network we just created.
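The same information is available from the Docker side:

docker network ls              # the new bridge mynet should be listed next to the default bridge
docker network inspect mynet   # shows the 172.18.0.0/24 subnet and, later, the attached containers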

Install Keepalived on the Host

1 Install the build dependencies

sudo apt-get install -y gcc
sudo apt-get install -y g++
sudo apt-get install -y libssl-dev
sudo apt-get install -y daemon
sudo apt-get install -y make
sudo apt-get install -y sysv-rc-conf

2 Download and build keepalived

cd /usr/local/
wget http://www.keepalived.org/software/keepalived-1.2.18.tar.gz
tar zxvf keepalived-1.2.18.tar.gz


cd keepalived-1.2.18


./configure --prefix=/usr/local/keepalived


make && make install

3 Install keepalived as a system service

mkdir /etc/keepalived 
mkdir /etc/sysconfig 
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/ 
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/ 
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/ 
ln -s /usr/local/keepalived/sbin/keepalived /usr/sbin/ 
ln -s /usr/local/keepalived/sbin/keepalived /sbin/


4 Adjust the keepalived init script

Linux distributions other than Red Hat do not have /etc/rc.d/init.d/functions, so the stock init script has to be modified:

  • Change . /etc/rc.d/init.d/functions to . /lib/lsb/init-functions
  • Change daemon keepalived ${KEEPALIVED_OPTIONS} to daemon keepalived start

The complete script after the changes looks like this:

#!/bin/sh
#
# Startup script for the Keepalived daemon
#
# processname: keepalived
# pidfile: /var/run/keepalived.pid
# config: /etc/keepalived/keepalived.conf
# chkconfig: - 21 79
# description: Start and stop Keepalived


# Source function library
#. /etc/rc.d/init.d/functions
. /lib/lsb/init-functions
# Source configuration file (we set KEEPALIVED_OPTIONS there)
. /etc/sysconfig/keepalived


RETVAL=0


prog="keepalived"


start() {
    echo -n $"Starting $prog: "
    daemon keepalived start
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
}


stop() {
    echo -n $"Stopping $prog: "
    killproc keepalived
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
}


reload() {
    echo -n $"Reloading $prog: "
    killproc keepalived -1
    RETVAL=$?
    echo
}


# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    reload)
        reload
        ;;
    restart)
        stop
        start
        ;;
    condrestart)
        if [ -f /var/lock/subsys/$prog ]; then
            stop
            start
        fi
        ;;
    status)
        status keepalived
        RETVAL=$?
        ;;
    *)
        echo "Usage: $0 {start|stop|reload|restart|condrestart|status}"
        RETVAL=1
esac


exit $RETVAL

5 Edit the keepalived configuration

cd /etc/keepalived
cp keepalived.conf keepalived.conf.back
rm keepalived.conf
vim keepalived.conf

Add the following content:

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.227.88
        192.168.227.99
    }
}


virtual_server 192.168.227.88 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP
    real_server 172.18.0.210 80 {
        weight 1
    }
}


virtual_server 192.168.227.99 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP
    real_server 172.18.0.220 80 {
        weight 1
    }
}

Note: interface must be set to the name of your own server's NIC, otherwise the VIPs cannot be mapped.
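The NIC name and, after keepalived starts, the VIP bindings can be checked like this (ens33 is the name in my VM; yours might be eth0, enp0s3, etc.):

ip -o link show     # list the interface names on this server
ip addr show ens33  # 192.168.227.88 and 192.168.227.99 should appear here as additional addresses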

6 Start keepalived

systemctl daemon-reload
systemctl enable keepalived.service
systemctl start keepalived.service


After every change to the configuration file you must run systemctl daemon-reload, otherwise the change does not take effect.

7 Check that the keepalived processes exist

ps -ef|grep keepalived

8 Check the keepalived service status

systemctl status keepalived.service

9 Check that the virtual IPs are bound

ip addr

10 Ping the two VIPs

Both IPs respond, so keepalived is installed and working.



Front-End Master-Slave Hot Standby with Docker Containers

The host only needs the Docker engine and the keepalived virtual IP mapping; everything that follows happens inside Docker containers. Why install keepalived inside the CentOS 7 containers as well? Because the containers cannot be reached directly through the external IP, the host has to expose a virtual IP and bridge it to the containers so the two sides can talk to each other.

The diagram below makes this clear at a glance:

Correction: the IP accessed in the diagram should be 172.18.0.210, the virtual IP inside the container network.

Next we set up the master-slave part of the front-end servers.

1 Pull the centos:7 image

docker pull centos:7

2 Create a container

docker run -it -d --name centos1 centos:7

3 Enter the centos1 container

docker exec -it centos1 bash

4 Install common tools

yum update -y
yum install -y vim
yum install -y wget
yum install -y  gcc-c++  
yum install -y pcre pcre-devel  
yum install -y zlib zlib-devel  
yum install -y  openssl-devel
yum install -y popt-devel
yum install -y initscripts
yum install -y net-tools


5 Commit the container as a new image; later containers are created directly from this image

docker commit -a 'cfh' -m 'centos with common tools' centos1 centos_base

6 Remove the centos1 container, create a new container from the base image, and install keepalived + nginx

docker rm -f centos1
#systemctl is needed inside the container, so the container must be started with /usr/sbin/init
docker run -it --name centos_temp -d --privileged centos_base /usr/sbin/init
docker exec -it centos_temp bash

7 Install nginx

#installing nginx with yum requires the nginx repository package; add it first
rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
# install nginx
yum install -y nginx
#start nginx
systemctl start nginx.service
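To confirm that nginx is actually serving inside the container (curl is assumed to be present in the centos:7 base image):

systemctl status nginx.service   # the unit should be active (running)
curl -sI http://127.0.0.1/       # should return HTTP/1.1 200 OK for the default welcome page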

8 Install keepalived

1.Download keepalived
wget http://www.keepalived.org/software/keepalived-1.2.18.tar.gz


2.Unpack it:
tar -zxvf keepalived-1.2.18.tar.gz -C /usr/local/


3.Install the openssl dependency


yum install -y openssl openssl-devel


4.Configure the keepalived build
cd  /usr/local/keepalived-1.2.18/ && ./configure --prefix=/usr/local/keepalived


5.Compile and install
make && make install

9 Install keepalived as a system service

mkdir /etc/keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf  /etc/keepalived/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/sbin/keepalived /usr/sbin/
#optionally enable start on boot: chkconfig keepalived on
#at this point the installation is done


#if starting keepalived fails, run the commands below
cd /usr/sbin/
rm -f keepalived
cp /usr/local/keepalived/sbin/keepalived  /usr/sbin/


#start keepalived
systemctl daemon-reload                  # reload unit files
systemctl enable keepalived.service      # enable start on boot
systemctl start keepalived.service       # start the service
systemctl status keepalived.service      # check the service status

10 Edit /etc/keepalived/keepalived.conf

#back up the existing configuration
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.backup
cd /etc/keepalived
rm -f keepalived.conf
vim keepalived.conf


#the new configuration is as follows


vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}


vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 121
    mcast_src_ip 172.18.0.201
    priority 100
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }


    track_script {
        chk_nginx
    }


    virtual_ipaddress {
        172.18.0.210
    }
}

11 Edit the nginx configuration

vim /etc/nginx/conf.d/default.conf

upstream tomcat{
  server 172.18.0.11:80;
  server 172.18.0.12:80;
  server 172.18.0.13:80;


}


server {
    listen       80;
    server_name  172.18.0.210;


    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;


    location / {
        proxy_pass http://tomcat;
        index  index.html index.htm;
    }


    #error_page 404 /404.html;


    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }


    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    # proxy_pass http://127.0.0.1;
    #}


    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    # root html;
    # fastcgi_pass 127.0.0.1:9000;
    # fastcgi_index index.php;
    # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    # include fastcgi_params;
    #}


    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

12 Add the heartbeat check script

vim /etc/keepalived/nginx_check.sh
#script content (nginx here comes from the yum package, so it is restarted through systemd):
#!/bin/bash
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ];then
    systemctl start nginx.service
    sleep 2
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
        killall keepalived
    fi
fi

13 Make the script executable

chmod +x /etc/keepalived/nginx_check.sh
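Before relying on it, the script can be exercised once by hand to confirm that the commands it uses behave as expected:

bash -x /etc/keepalived/nginx_check.sh   # trace one run of the health check
ps -C nginx --no-header | wc -l          # the nginx process count the script keys off (non-zero while nginx runs)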

14 Enable start on boot

systemctl enable keepalived.service


#start keepalived
systemctl daemon-reload
systemctl start keepalived.service


15 Check that the virtual IP works

ping 172.18.0.210

16 Commit the centos_temp container as a new image

docker commit -a 'cfh' -m 'centos with keepalived nginx' centos_temp centos_kn

17 Remove all containers

docker rm -f `docker ps -a -q`

18 Create new containers from the committed image

Name them centos_web_master and centos_web_slave.

docker run --privileged  -tid \
--name centos_web_master --restart=always \
--net mynet --ip 172.18.0.201 \
centos_kn /usr/sbin/init




docker run --privileged  -tid \
--name centos_web_slave --restart=always \
--net mynet --ip 172.18.0.202 \
centos_kn /usr/sbin/init
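A quick check that both containers really received their fixed addresses on mynet:

docker inspect -f '{{ .NetworkSettings.Networks.mynet.IPAddress }}' centos_web_master   # expect 172.18.0.201
docker inspect -f '{{ .NetworkSettings.Networks.mynet.IPAddress }}' centos_web_slave    # expect 172.18.0.202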

19 Adjust the nginx and keepalived configuration inside centos_web_slave

The keepalived changes are:

state BACKUP               #the standby node (keepalived accepts only MASTER or BACKUP here)
mcast_src_ip 172.18.0.202  #change to this machine's own IP
priority 80                #a lower priority than the master

The nginx configuration:

upstream tomcat{
  server 172.18.0.14:80;
  server 172.18.0.15:80;
  server 172.18.0.16:80;


}


server {
    listen       80;
    server_name  172.18.0.210;


    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;


    location / {
        proxy_pass http://tomcat;
        index  index.html index.htm;
    }


    #error_page 404 /404.html;


    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }


    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    # proxy_pass http://127.0.0.1;
    #}


    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    # root html;
    # fastcgi_pass 127.0.0.1:9000;
    # fastcgi_index index.php;
    # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    # include fastcgi_params;
    #}


    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

Restart keepalived and nginx

systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service

20 Start six front-end web servers with Nginx

docker pull nginx


nginx_web_1='/home/root123/cfh/nginx1'
nginx_web_2='/home/root123/cfh/nginx2'
nginx_web_3='/home/root123/cfh/nginx3'
nginx_web_4='/home/root123/cfh/nginx4'
nginx_web_5='/home/root123/cfh/nginx5'
nginx_web_6='/home/root123/cfh/nginx6'


mkdir -p ${nginx_web_1}/conf ${nginx_web_1}/conf.d ${nginx_web_1}/html ${nginx_web_1}/logs
mkdir -p ${nginx_web_2}/conf ${nginx_web_2}/conf.d ${nginx_web_2}/html ${nginx_web_2}/logs
mkdir -p ${nginx_web_3}/conf ${nginx_web_3}/conf.d ${nginx_web_3}/html ${nginx_web_3}/logs
mkdir -p ${nginx_web_4}/conf ${nginx_web_4}/conf.d ${nginx_web_4}/html ${nginx_web_4}/logs
mkdir -p ${nginx_web_5}/conf ${nginx_web_5}/conf.d ${nginx_web_5}/html ${nginx_web_5}/logs
mkdir -p ${nginx_web_6}/conf ${nginx_web_6}/conf.d ${nginx_web_6}/html ${nginx_web_6}/logs






docker run -it --name temp_nginx -d nginx
docker ps
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_1}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf  ${nginx_web_1}/conf.d/default.conf




docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_2}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf  ${nginx_web_2}/conf.d/default.conf


docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_3}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf  ${nginx_web_3}/conf.d/default.conf


docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_4}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf  ${nginx_web_4}/conf.d/default.conf


docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_5}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf  ${nginx_web_5}/conf.d/default.conf


docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_6}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf  ${nginx_web_6}/conf.d/default.conf


docker rm -f temp_nginx




docker run -d  --name nginx_web_1 \
--network=mynet --ip 172.18.0.11 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_1}/html/:/usr/share/nginx/html \
-v ${nginx_web_1}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_1}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_1}/logs/:/var/log/nginx --privileged --restart=always nginx


docker run -d  --name nginx_web_2 \
--network=mynet --ip 172.18.0.12 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_2}/html/:/usr/share/nginx/html \
-v ${nginx_web_2}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_2}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_2}/logs/:/var/log/nginx --privileged --restart=always nginx


docker run -d  --name nginx_web_3 \
--network=mynet --ip 172.18.0.13 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_3}/html/:/usr/share/nginx/html \
-v ${nginx_web_3}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_3}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_3}/logs/:/var/log/nginx --privileged --restart=always nginx


docker run -d  --name nginx_web_4 \
--network=mynet --ip 172.18.0.14 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_4}/html/:/usr/share/nginx/html \
-v ${nginx_web_4}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_4}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_4}/logs/:/var/log/nginx --privileged --restart=always nginx


docker run -d  --name nginx_web_5 \
--network=mynet --ip 172.18.0.15 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_5}/html/:/usr/share/nginx/html \
-v ${nginx_web_5}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_5}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_5}/logs/:/var/log/nginx --privileged --restart=always nginx


docker run -d  --name nginx_web_6 \
--network=mynet --ip 172.18.0.16 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_6}/html/:/usr/share/nginx/html \
-v ${nginx_web_6}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_6}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_6}/logs/:/var/log/nginx --privileged --restart=always nginx






cd ${nginx_web_1}/html
cp /home/server/envconf/index.html ${nginx_web_1}/html/index.html


cd ${nginx_web_2}/html
cp /home/server/envconf/index.html ${nginx_web_2}/html/index.html


cd ${nginx_web_3}/html
cp /home/server/envconf/index.html ${nginx_web_3}/html/index.html


cd ${nginx_web_4}/html
cp /home/server/envconf/index.html ${nginx_web_4}/html/index.html


cd ${nginx_web_5}/html
cp /home/server/envconf/index.html ${nginx_web_5}/html/index.html


cd ${nginx_web_6}/html
cp /home/server/envconf/index.html ${nginx_web_6}/html/index.html

/home/server/envconf/ is simply where I keep my files; create your own directory as needed. The content of index.html is shown below.

<!DOCTYPE html>
<html lang="en" xmlns:v-on="http://www.w3.org/1999/xhtml">
<head>
    <meta charset="UTF-8">
    <title>Master/Slave Test</title>


</head>


<script src="https://cdn.jsdelivr.net/npm/vue"></script>
<script src="https://cdn.staticfile.org/vue-resource/1.5.1/vue-resource.min.js"></script>
<body>
<div id="app" style="height: 300px;width: 600px">




    <h1 style="color: red">I am the front-end WEB page</h1>


    <br>
    showMsg:{{message}}
    <br>
    <br>
    <br>
    <button v-on:click="getMsg">Fetch back-end data</button>


</div>
</body>
</html>


<script>


    var app = new Vue({
        el: '#app',
        data: {
            message: 'Hello Vue!'
        },
        methods: {
            getMsg: function () {
                var ip="http://192.168.227.99"
                var that=this;
                //send a GET request
                that.$http.get(ip+'/api/test').then(function(res){
                   that.message=res.data;
                },function(){
                    console.log('request failed');
                });
            }
        }
    })




</script>

21 Open 192.168.227.88 in a browser; you should see the page rendered from index.html.

22 Tests

  1. Stop the centos_web_master container and check that the page can still be reached.
  2. Restart the centos_web_master container and check whether access switches back from the slave to the master (to make the switch obvious, add a marker such as "master" or "slave" to the title in each index.html).
  3. Stop any of the web containers behind the master and check that nginx load balancing still works.

If all of these tests pass, the front-end part of the system is complete.
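The same checks can also be run from the host's command line (the back-end VIP 192.168.227.99 only starts answering after the next section):

curl -s http://192.168.227.88/ | head -n 5   # the front-end VIP should return the index.html above
docker stop centos_web_master                # simulate a master failure
curl -s http://192.168.227.88/ | head -n 5   # the slave should now be answering
docker start centos_web_master               # restore the master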



Back-End Master-Slave Hot Standby with Docker Containers

For the back end we use openjdk containers to run the jar packages. The master and slave containers are created from the centos_kn image built above; only their configuration needs to change.

So that the openjdk container starts the jar automatically when it runs, we build a new image with a Dockerfile that adds this behavior.

1 Create the Dockerfile

FROM openjdk:10
MAINTAINER cfh
WORKDIR /home/soft
CMD ["nohup","java","-jar","docker_server.jar"]


2 Build the image

docker build -t myopenjdk .

3 Create six back-end servers from the new image

docker volume create S1
docker volume inspect S1


docker volume create S2
docker volume inspect S2




docker volume create S3
docker volume inspect S3


docker volume create S4
docker volume inspect S4


docker volume create S5
docker volume inspect S5


docker volume create S6
docker volume inspect S6


cd /var/lib/docker/volumes/S1/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S1/_data/docker_server.jar


cd /var/lib/docker/volumes/S2/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S2/_data/docker_server.jar


cd /var/lib/docker/volumes/S3/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S3/_data/docker_server.jar


cd /var/lib/docker/volumes/S4/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S4/_data/docker_server.jar


cd /var/lib/docker/volumes/S5/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S5/_data/docker_server.jar


cd /var/lib/docker/volumes/S6/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S6/_data/docker_server.jar






docker run -it -d --name server_1  -v S1:/home/soft  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.101 --restart=always myopenjdk


docker run -it -d --name server_2  -v S2:/home/soft  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.102 --restart=always myopenjdk


docker run -it -d --name server_3  -v S3:/home/soft  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.103 --restart=always myopenjdk


docker run -it -d --name server_4  -v S4:/home/soft  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.104 --restart=always myopenjdk


docker run -it -d --name server_5  -v S5:/home/soft  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.105 --restart=always myopenjdk


docker run -it -d --name server_6  -v S6:/home/soft  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.106 --restart=always myopenjdk

docker_server.jar is a test program; its main code is shown below.

import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;


import javax.servlet.http.HttpServletResponse;
import java.util.LinkedHashMap;
import java.util.Map;


@RestController
@RequestMapping("api")
@CrossOrigin("*")
public class TestController {


    @Value("${server.port}")
    public int port;


    @RequestMapping(value = "/test",method = RequestMethod.GET)
    public Map<String,Object> test(HttpServletResponse response){
        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Allow-Methods", "GET");
        response.setHeader("Access-Control-Allow-Headers","token");
        Map<String,Object> objectMap=new LinkedHashMap<>();
        objectMap.put("code",10000);
        objectMap.put("msg","ok");
        objectMap.put("server_port","服务器端口:"+port);
        return objectMap;
    }
}
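The nginx upstreams above expect the six jars to listen on ports 6001-6006. Assuming docker_server.jar is a standard Spring Boot application (as the controller suggests), one way to arrange this is to drop an application.properties next to the jar in each volume, since Spring Boot reads it from the working directory (/home/soft) at startup; a sketch for S1:

# repeat per volume, changing the port: S1->6001, S2->6002, ... S6->6006
cat > /var/lib/docker/volumes/S1/_data/application.properties <<'EOF'
server.port=6001
EOF
docker restart server_1   # restart the container so the jar picks up the port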

4 Create the back-end master and slave containers

Master server

docker run --privileged  -tid --name centos_server_master --restart=always --net mynet --ip 172.18.0.203 centos_kn /usr/sbin/init


Keepalived configuration on the master:

vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}


vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 110
    mcast_src_ip 172.18.0.203
    priority 100
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }


    track_script {
        chk_nginx
    }


    virtual_ipaddress {
        172.18.0.220
    }
}

Nginx configuration on the master:

upstream tomcat{
  server 172.18.0.101:6001;
  server 172.18.0.102:6002;
  server 172.18.0.103:6003;


}


server {
    listen       80;
    server_name  172.18.0.220;


    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;


    location / {
        proxy_pass http://tomcat;
        index  index.html index.htm;
    }


    #error_page 404 /404.html;


    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }


    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    # proxy_pass http://127.0.0.1;
    #}


    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    # root html;
    # fastcgi_pass 127.0.0.1:9000;
    # fastcgi_index index.php;
    # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    # include fastcgi_params;
    #}


    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

Restart keepalived and nginx

systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service


Slave server

docker run --privileged  -tid --name centos_server_slave --restart=always --net mynet --ip 172.18.0.204 centos_kn /usr/sbin/init


Keepalived configuration on the slave:

vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}


vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 110
    mcast_src_ip 172.18.0.204
    priority 80
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }


    track_script {
        chk_nginx
    }


    virtual_ipaddress {
        172.18.0.220
    }
}

Nginx configuration on the slave:

upstream tomcat{
  server 172.18.0.104:6004;
  server 172.18.0.105:6005;
  server 172.18.0.106:6006;


}


server {
    listen       80;
    server_name  172.18.0.220;


    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;


    location / {
        proxy_pass http://tomcat;
        index  index.html index.htm;
    }


    #error_page 404 /404.html;


    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }


    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    # proxy_pass http://127.0.0.1;
    #}


    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    # root html;
    # fastcgi_pass 127.0.0.1:9000;
    # fastcgi_index index.php;
    # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    # include fastcgi_params;
    #}


    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

Restart keepalived and nginx

systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service


Command-line verification
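For example, a curl against the back-end VIP should return the JSON produced by the test controller above (which port shows up depends on which back-end instance answered):

curl -s http://192.168.227.99/api/test
# expected: a JSON object with code 10000, msg "ok" and the serving instance's port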

Browser verification

Install Portainer

Portainer is a container management UI that shows the running state of the containers.

docker search portainer


docker pull portainer/portainer


docker run -d -p 9000:9000 \
    --restart=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --name prtainer-eureka \
    portainer/portainer




http://192.168.227.171:9000

On first access you are asked to create a password; the default account is admin. Once the password is created you are taken to the next screen; choose to manage the local containers (Local) and click Confirm.

Conclusion

That is the whole design and deployment. To go further, Docker Compose provides unified management of a group of containers, and Kubernetes manages Docker clusters; both take considerable effort to study.

Also make sure you understand the three Docker essentials: images/containers, data volumes, and network management.
