Note: the figure shows the overall system design. The upper half is an application server and the lower half a database server; this document covers only the deployment of the application server, as the lower half is straightforward. The IPs marked in the figure are the author's own VM IPs; set them according to your actual environment.
LVS (Linux Virtual Server) is open-source software that implements transport-layer (layer-4) load balancing. It currently provides three IP load-balancing techniques (VS/NAT, VS/TUN, and VS/DR) and eight scheduling algorithms (rr, wrr, lc, wlc, lblc, lblcr, dh, sh).
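The simplest of those schedulers, rr, hands each new connection to the next real server in rotation. A minimal shell sketch of the idea (the server names are hypothetical, and LVS itself does this in-kernel per connection):

```shell
#!/bin/bash
# Round-robin ("rr") selection over three real servers (sketch).
rr_sequence() {   # print the server chosen for each of the first $1 connections
  n=$1; k=0
  while [ "$k" -lt "$n" ]; do
    case $((k % 3)) in
      0) echo rs1 ;;
      1) echo rs2 ;;
      2) echo rs3 ;;
    esac
    k=$((k + 1))
  done
}

rr_sequence 4 | tr '\n' ' '   # rs1 rs2 rs3 rs1
```

wrr (weighted round robin) is the same idea with each server repeated in proportion to its weight.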
LVS balances load but does no health checking: if a real server (RS) fails, LVS keeps forwarding requests to it, and those requests are lost. Keepalived performs health checks and at the same time makes LVS itself highly available, removing its single point of failure; keepalived was in fact created for LVS.
Keepalived is software resembling a layer 2/4/7 switching mechanism. In Linux cluster management it is the service that keeps a cluster highly available, its purpose being to prevent single points of failure.
Keepalived is built on the VRRP protocol; its main functions are isolating failed real servers and failing over between load balancers, preventing single points of failure. Before looking at how keepalived works, a brief look at VRRP.
VRRP, the Virtual Router Redundancy Protocol, is a fault-tolerance protocol: when a host's next-hop router fails, another router takes over its work, keeping network communication continuous and reliable. Some VRRP terminology first:
Virtual router: consists of one Master router and several Backup routers. Hosts use the virtual router as their default gateway.
VRID: the identifier of a virtual router. Routers with the same VRID form one virtual router.
Master router: the router within the virtual router that forwards packets.
Backup router: a router that can take over the Master's work when the Master fails.
Virtual IP address: the IP address of the virtual router. A virtual router can own one or more IP addresses.
IP address owner: the router whose interface IP address is the same as the virtual IP address.
Virtual MAC address: each virtual router owns one virtual MAC address, of the form 00-00-5E-00-01-{VRID}. Normally the virtual router answers ARP requests with the virtual MAC address; only when specially configured does it answer with the interface's real MAC address.
Priority: VRRP uses priority to decide each router's role within the virtual router.
Non-preemptive mode: a Backup router in non-preemptive mode will not become Master as long as the Master has not failed, even if the Backup is later configured with a higher priority.
Preemptive mode: a Backup router in preemptive mode compares its own priority with the priority in each VRRP advertisement it receives. If its priority is higher than the current Master's, it preempts and becomes Master; otherwise it stays in the Backup state.
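The preemption rule above amounts to a two-line comparison. A minimal sketch of the decision a Backup makes on receiving an advertisement (the function and argument names are my own, not keepalived's):

```shell
#!/bin/bash
# Decision a BACKUP makes when it receives the Master's VRRP advertisement.
# mode is "preempt" or "nopreempt"; priorities are 1-254 as in VRRP.
should_preempt() {
  local my_prio=$1 master_prio=$2 mode=$3
  if [ "$mode" = preempt ] && [ "$my_prio" -gt "$master_prio" ]; then
    echo become-master
  else
    echo stay-backup
  fi
}

should_preempt 120 100 preempt     # -> become-master
should_preempt 120 100 nopreempt   # -> stay-backup
```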
VRRP groups the routers on a LAN into a VRRP backup group, which functions as a single router identified by a virtual router ID (VRID). The virtual router has its own virtual IP and virtual MAC address, and externally it behaves exactly like a physical router. Hosts on the LAN set the virtual router's IP address as their default gateway and communicate with external networks through it.
The virtual router runs on top of real physical routers: one Master and several Backups. While the Master works normally, hosts on the LAN communicate with the outside through it; when the Master fails, one of the Backup devices becomes the new Master and takes over packet forwarding. (High availability for routers.)
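As noted above, the virtual MAC has the fixed form 00-00-5E-00-01-{VRID}, so its last byte is just the VRID in hex and can be derived directly; a small sketch:

```shell
#!/bin/bash
# Build the VRRP virtual MAC 00-00-5E-00-01-{VRID} from a decimal VRID.
vrrp_mac() {
  printf '00-00-5E-00-01-%02X\n' "$1"
}

vrrp_mac 51   # -> 00-00-5E-00-01-33 (51 decimal = 0x33)
```

VRID 51 is the value used in the keepalived configuration later in this document.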
You can refer to an article I wrote: juejin.im/post/5dae55…
Nginx is a high-performance HTTP server / reverse proxy and mail (IMAP/POP3) proxy server.
Uses: clustering (raising throughput, reducing the load on any single server), reverse proxying (hiding real IP addresses), virtual servers, static serving (separating static from dynamic content), solving cross-origin problems, and building an enterprise-grade API gateway with nginx.
sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
apt-cache madison docker-ce
sudo apt-get install -y docker-ce=17.12.1~ce-0~ubuntu
docker --version
sudo gpasswd -a <username> docker   # replace <username> with your own login name
sudo service docker restart
newgrp docker
A container's IP changes after a restart, which is not what we want: containers should keep fixed IPs of their own. The default docker0 bridge does not allow custom IPs, so we create our own bridge network and then assign each container a static IP, which then survives restarts.
docker network create --subnet=172.18.0.0/24 mynet
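Every static --ip handed to a container later in this document (172.18.0.210, 172.18.0.201, and so on) must fall inside this /24. For a /24 the check reduces to the first three octets matching, which is easy to sanity-check in shell:

```shell
#!/bin/bash
# For a /24, an address is in the subnet iff its first three octets
# match the subnet base.
in_same_24() {
  [ "${1%.*}" = "${2%.*}" ]   # strip the final octet from both addresses
}

in_same_24 172.18.0.210 172.18.0.0 && echo "172.18.0.210 is inside mynet"
```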
sudo apt-get install -y gcc
sudo apt-get install -y g++
sudo apt-get install -y libssl-dev
sudo apt-get install -y daemon
sudo apt-get install -y make
sudo apt-get install -y sysv-rc-conf
cd /usr/local/
wget http://www.keepalived.org/software/keepalived-1.2.18.tar.gz
tar zxvf keepalived-1.2.18.tar.gz
cd keepalived-1.2.18
./configure --prefix=/usr/local/keepalived
make && make install
mkdir /etc/keepalived
mkdir /etc/sysconfig
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/sbin/keepalived /usr/sbin/
ln -s /usr/local/keepalived/sbin/keepalived /sbin/
Because Linux distributions other than Red Hat have no /etc/rc.d/init.d/functions, the original startup script must be modified.
The full content after modification:
#!/bin/sh
#
# Startup script for the Keepalived daemon
#
# processname: keepalived
# pidfile: /var/run/keepalived.pid
# config: /etc/keepalived/keepalived.conf
# chkconfig: - 21 79
# description: Start and stop Keepalived
# Source function library
#. /etc/rc.d/init.d/functions
. /lib/lsb/init-functions
# Source configuration file (we set KEEPALIVED_OPTIONS there)
. /etc/sysconfig/keepalived
RETVAL=0
prog="keepalived"
start() {
echo -n $"Starting $prog: "
daemon keepalived start
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
}
stop() {
echo -n $"Stopping $prog: "
killproc keepalived
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
}
reload() {
echo -n $"Reloading $prog: "
killproc keepalived -1
RETVAL=$?
echo
}
# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
reload)
reload
;;
restart)
stop
start
;;
condrestart)
if [ -f /var/lock/subsys/$prog ]; then
stop
start
fi
;;
status)
status keepalived
RETVAL=$?
;;
*)
echo "Usage: $0 {start|stop|reload|restart|condrestart|status}"
RETVAL=1
esac
exit $RETVAL
cd /etc/keepalived
cp keepalived.conf keepalived.conf.back
rm keepalived.conf
vim keepalived.conf
Add the following content:
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.227.88
192.168.227.99
}
}
virtual_server 192.168.227.88 80 {
delay_loop 6
lb_algo rr
lb_kind NAT
persistence_timeout 50
protocol TCP
real_server 172.18.0.210 80 {
weight 1
}
}
virtual_server 192.168.227.99 80 {
delay_loop 6
lb_algo rr
lb_kind NAT
persistence_timeout 50
protocol TCP
real_server 172.18.0.220 80 {
weight 1
}
}
Note: interface must be set to your server's actual NIC name, otherwise the VIP mapping will not work.
systemctl daemon-reload
systemctl enable keepalived.service
systemctl start keepalived.service
After every change to the configuration file you must run systemctl daemon-reload, otherwise the change does not take effect.
ps -ef|grep keepalived
systemctl status keepalived.service
ip addr
Both IPs respond, so keepalived is installed successfully.
The host itself only needs the Docker engine and the keepalived virtual-IP mapping; the remaining work happens inside Docker containers. Why install keepalived on the CentOS 7 containers as well? Because the containers cannot be reached directly through the external IP, the host has to provide a virtual IP that bridges to the containers, connecting the two sides internally.
The diagram below makes it clear at a glance:
Correction: the IP accessed in the diagram should be 172.18.0.210, the address virtualized inside the containers.
Next, we set up the master/slave part of the front-end servers.
docker pull centos:7
docker run -it -d --name centos1 centos:7
docker exec -it centos1 bash
yum update -y
yum install -y vim
yum install -y wget
yum install -y gcc-c++
yum install -y pcre pcre-devel
yum install -y zlib zlib-devel
yum install -y openssl-devel
yum install -y popt-devel
yum install -y initscripts
yum install -y net-tools
docker commit -a 'cfh' -m 'centos with common tools' centos1 centos_base
docker rm -f centos1
# systemctl is needed inside the container, so start it with /usr/sbin/init
docker run -it --name centos_temp -d --privileged centos_base /usr/sbin/init
docker exec -it centos_temp bash
# Installing nginx with yum requires the nginx repository; install it first
rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
# then install nginx with:
yum install -y nginx
# start nginx
systemctl start nginx.service
1. Download keepalived
wget http://www.keepalived.org/software/keepalived-1.2.18.tar.gz
2. Extract and unpack:
tar -zxvf keepalived-1.2.18.tar.gz -C /usr/local/
3. Install the openssl dependency
yum install -y openssl openssl-devel
4. Configure keepalived
cd /usr/local/keepalived-1.2.18/ && ./configure --prefix=/usr/local/keepalived
5. Build and install
make && make install
mkdir /etc/keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/sbin/keepalived /usr/sbin/
Optionally enable start on boot: chkconfig keepalived on
Installation complete.
# If startup fails, run the following commands
cd /usr/sbin/
rm -f keepalived
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
# start keepalived
systemctl daemon-reload              # reload unit files
systemctl enable keepalived.service  # enable start on boot
systemctl start keepalived.service   # start the service
systemctl status keepalived.service  # check the service status
# back up the configuration file
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.backup
rm -f keepalived.conf
vim keepalived.conf
# the configuration file is as follows
vrrp_script chk_nginx {
script "/etc/keepalived/nginx_check.sh"
interval 2
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 121
mcast_src_ip 172.18.0.201
priority 100
nopreempt
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
track_script {
chk_nginx
}
virtual_ipaddress {
172.18.0.210
}
}
vim /etc/nginx/conf.d/default.conf
upstream tomcat{
server 172.18.0.11:80;
server 172.18.0.12:80;
server 172.18.0.13:80;
}
server {
listen 80;
server_name 172.18.0.210;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_pass http://tomcat;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
vim nginx_check.sh
# the script content follows
#!/bin/bash
A=`ps -C nginx --no-header |wc -l`
if [ $A -eq 0 ];then
systemctl start nginx.service
sleep 2
if [ `ps -C nginx --no-header |wc -l` -eq 0 ];then
killall keepalived
fi
fi
chmod +x nginx_check.sh
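The script's decision logic can be exercised without a live nginx by factoring it into a function that takes the two process counts as arguments (a sketch; the function name is mine):

```shell
#!/bin/bash
# nginx_check.sh decision logic with the process counts injected for testing.
check_action() {
  local before=$1 after=$2   # nginx process count before / after a restart attempt
  if [ "$before" -eq 0 ]; then
    if [ "$after" -eq 0 ]; then
      echo kill-keepalived    # restart failed: give up the VIP to the peer
    else
      echo recovered          # the local restart brought nginx back
    fi
  else
    echo healthy
  fi
}

check_action 0 0   # -> kill-keepalived
```

Killing keepalived is what actually triggers failover: the peer stops receiving advertisements and takes over the VIP.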
systemctl enable keepalived.service
# start keepalived
systemctl daemon-reload
systemctl start keepalived.service
ping 172.18.0.210
docker commit -a 'cfh' -m 'centos with keepalived nginx' centos_temp centos_kn
docker rm -f `docker ps -a -q`
Name them centos_web_master and centos_web_slave.
docker run --privileged -tid \
--name centos_web_master --restart=always \
--net mynet --ip 172.18.0.201 \
centos_kn /usr/sbin/init
docker run --privileged -tid \
--name centos_web_slave --restart=always \
--net mynet --ip 172.18.0.202 \
centos_kn /usr/sbin/init
Change the following in the keepalived configuration:
state BACKUP                 # this node is the backup (valid states are MASTER and BACKUP)
mcast_src_ip 172.18.0.202    # change to this machine's IP
priority 80                  # set the priority lower than the master's
The nginx configuration is as follows
upstream tomcat{
server 172.18.0.14:80;
server 172.18.0.15:80;
server 172.18.0.16:80;
}
server {
listen 80;
server_name 172.18.0.210;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_pass http://tomcat;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
Restart keepalived and nginx
systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service
docker pull nginx
nginx_web_1='/home/root123/cfh/nginx1'
nginx_web_2='/home/root123/cfh/nginx2'
nginx_web_3='/home/root123/cfh/nginx3'
nginx_web_4='/home/root123/cfh/nginx4'
nginx_web_5='/home/root123/cfh/nginx5'
nginx_web_6='/home/root123/cfh/nginx6'
mkdir -p ${nginx_web_1}/conf ${nginx_web_1}/conf.d ${nginx_web_1}/html ${nginx_web_1}/logs
mkdir -p ${nginx_web_2}/conf ${nginx_web_2}/conf.d ${nginx_web_2}/html ${nginx_web_2}/logs
mkdir -p ${nginx_web_3}/conf ${nginx_web_3}/conf.d ${nginx_web_3}/html ${nginx_web_3}/logs
mkdir -p ${nginx_web_4}/conf ${nginx_web_4}/conf.d ${nginx_web_4}/html ${nginx_web_4}/logs
mkdir -p ${nginx_web_5}/conf ${nginx_web_5}/conf.d ${nginx_web_5}/html ${nginx_web_5}/logs
mkdir -p ${nginx_web_6}/conf ${nginx_web_6}/conf.d ${nginx_web_6}/html ${nginx_web_6}/logs
docker run -it --name temp_nginx -d nginx
docker ps
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_1}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_1}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_2}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_2}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_3}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_3}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_4}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_4}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_5}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_5}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_6}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_6}/conf.d/default.conf
docker rm -f temp_nginx
docker run -d --name nginx_web_1 \
--network=mynet --ip 172.18.0.11 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_1}/html/:/usr/share/nginx/html \
-v ${nginx_web_1}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_1}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_1}/logs/:/var/log/nginx --privileged --restart=always nginx
docker run -d --name nginx_web_2 \
--network=mynet --ip 172.18.0.12 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_2}/html/:/usr/share/nginx/html \
-v ${nginx_web_2}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_2}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_2}/logs/:/var/log/nginx --privileged --restart=always nginx
docker run -d --name nginx_web_3 \
--network=mynet --ip 172.18.0.13 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_3}/html/:/usr/share/nginx/html \
-v ${nginx_web_3}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_3}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_3}/logs/:/var/log/nginx --privileged --restart=always nginx
docker run -d --name nginx_web_4 \
--network=mynet --ip 172.18.0.14 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_4}/html/:/usr/share/nginx/html \
-v ${nginx_web_4}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_4}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_4}/logs/:/var/log/nginx --privileged --restart=always nginx
docker run -d --name nginx_web_5 \
--network=mynet --ip 172.18.0.15 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_5}/html/:/usr/share/nginx/html \
-v ${nginx_web_5}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_5}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_5}/logs/:/var/log/nginx --privileged --restart=always nginx
docker run -d --name nginx_web_6 \
--network=mynet --ip 172.18.0.16 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_6}/html/:/usr/share/nginx/html \
-v ${nginx_web_6}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_6}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_6}/logs/:/var/log/nginx --privileged --restart=always nginx
cd ${nginx_web_1}/html
cp /home/server/envconf/index.html ${nginx_web_1}/html/index.html
cd ${nginx_web_2}/html
cp /home/server/envconf/index.html ${nginx_web_2}/html/index.html
cd ${nginx_web_3}/html
cp /home/server/envconf/index.html ${nginx_web_3}/html/index.html
cd ${nginx_web_4}/html
cp /home/server/envconf/index.html ${nginx_web_4}/html/index.html
cd ${nginx_web_5}/html
cp /home/server/envconf/index.html ${nginx_web_5}/html/index.html
cd ${nginx_web_6}/html
cp /home/server/envconf/index.html ${nginx_web_6}/html/index.html
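The six copy steps above all follow one pattern and could be written as a loop. A sketch using a temporary directory in place of the real /home paths:

```shell
#!/bin/bash
# Create the six html directories and copy the same index.html into each.
base=$(mktemp -d)
echo '<h1>demo</h1>' > "$base/index.html"   # stand-in for the real index.html

for n in 1 2 3 4 5 6; do
  mkdir -p "$base/nginx$n/html"
  cp "$base/index.html" "$base/nginx$n/html/index.html"
done

ls "$base"/nginx*/html/index.html   # lists the six copies
```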
/home/server/envconf/ is where I keep my own files; readers can create their own directory. The index.html content follows:
<!DOCTYPE html>
<html lang="en" xmlns:v-on="http://www.w3.org/1999/xhtml">
<head>
<meta charset="UTF-8">
<title>Master/Slave Test</title>
</head>
<script src="https://cdn.jsdelivr.net/npm/vue"></script>
<script src="https://cdn.staticfile.org/vue-resource/1.5.1/vue-resource.min.js"></script>
<body>
<div id="app" style="height: 300px;width: 600px">
<h1 style="color: red">I am the front-end WEB page</h1>
<br>
showMsg:{{message}}
<br>
<br>
<br>
<button v-on:click="getMsg">Fetch back-end data</button>
</div>
</body>
</html>
<script>
var app = new Vue({
el: '#app',
data: {
message: 'Hello Vue!'
},
methods: {
getMsg: function () {
var ip="http://192.168.227.99"
var that=this;
// send a GET request
that.$http.get(ip+'/api/test').then(function(res){
that.message=res.data;
},function(){
console.log('request failed');
});
}
}
})
</script>
If the test above works, the front-end master part is complete.
For the back-end servers we use openjdk as the container that runs the jar; the master and slave containers are created from the centos_kn image above, and only the configuration needs changing.
To have openjdk run the jar automatically when the container starts, we rebuild the image with a Dockerfile that adds this behavior:
FROM openjdk:10
MAINTAINER cfh
WORKDIR /home/soft
CMD ["nohup","java","-jar","docker_server.jar"]
docker build -t myopenjdk .
docker volume create S1
docker volume inspect S1
docker volume create S2
docker volume inspect S2
docker volume create S3
docker volume inspect S3
docker volume create S4
docker volume inspect S4
docker volume create S5
docker volume inspect S5
docker volume create S6
docker volume inspect S6
cd /var/lib/docker/volumes/S1/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S1/_data/docker_server.jar
cd /var/lib/docker/volumes/S2/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S2/_data/docker_server.jar
cd /var/lib/docker/volumes/S3/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S3/_data/docker_server.jar
cd /var/lib/docker/volumes/S4/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S4/_data/docker_server.jar
cd /var/lib/docker/volumes/S5/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S5/_data/docker_server.jar
cd /var/lib/docker/volumes/S6/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S6/_data/docker_server.jar
docker run -it -d --name server_1 -v S1:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.101 --restart=always myopenjdk
docker run -it -d --name server_2 -v S2:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.102 --restart=always myopenjdk
docker run -it -d --name server_3 -v S3:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.103 --restart=always myopenjdk
docker run -it -d --name server_4 -v S4:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.104 --restart=always myopenjdk
docker run -it -d --name server_5 -v S5:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.105 --restart=always myopenjdk
docker run -it -d --name server_6 -v S6:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.106 --restart=always myopenjdk
docker_server.jar is a test program; its main code is:
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import javax.servlet.http.HttpServletResponse;
import java.util.LinkedHashMap;
import java.util.Map;
@RestController
@RequestMapping("api")
@CrossOrigin("*")
public class TestController {
@Value("${server.port}")
public int port;
@RequestMapping(value = "/test",method = RequestMethod.GET)
public Map<String,Object> test(HttpServletResponse response){
response.setHeader("Access-Control-Allow-Origin", "*");
response.setHeader("Access-Control-Allow-Methods", "GET");
response.setHeader("Access-Control-Allow-Headers","token");
Map<String,Object> objectMap=new LinkedHashMap<>();
objectMap.put("code",10000);
objectMap.put("msg","ok");
objectMap.put("server_port","Server port: "+port);
return objectMap;
}
}
docker run --privileged -tid --name centos_server_master --restart=always --net mynet --ip 172.18.0.203 centos_kn /usr/sbin/init
Master server keepalived configuration
vrrp_script chk_nginx {
script "/etc/keepalived/nginx_check.sh"
interval 2
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 110
mcast_src_ip 172.18.0.203
priority 100
nopreempt
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
track_script {
chk_nginx
}
virtual_ipaddress {
172.18.0.220
}
}
Master server nginx configuration
upstream tomcat{
server 172.18.0.101:6001;
server 172.18.0.102:6002;
server 172.18.0.103:6003;
}
server {
listen 80;
server_name 172.18.0.220;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_pass http://tomcat;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
Restart keepalived and nginx
systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service
docker run --privileged -tid --name centos_server_slave --restart=always --net mynet --ip 172.18.0.204 centos_kn /usr/sbin/init
Slave server keepalived configuration
vrrp_script chk_nginx {
script "/etc/keepalived/nginx_check.sh"
interval 2
weight -20
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 110
mcast_src_ip 172.18.0.204
priority 80
nopreempt
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
track_script {
chk_nginx
}
virtual_ipaddress {
172.18.0.220
}
}
Slave server nginx configuration
upstream tomcat{
server 172.18.0.104:6004;
server 172.18.0.105:6005;
server 172.18.0.106:6006;
}
server {
listen 80;
server_name 172.18.0.220;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_pass http://tomcat;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
Restart keepalived and nginx
systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service
Portainer is a container-management UI where you can see the running state of your containers.
docker search portainer
docker pull portainer/portainer
docker run -d -p 9000:9000 \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
--name prtainer-eureka \
portainer/portainer
http://192.168.227.171:9000
On the first visit you are asked to set a password (the default account is admin); after creating it, the page moves to the next screen, where you choose to manage the local containers (Local) and confirm.
That completes the design and deployment of the whole solution. Going a level deeper there is docker-compose for unified container management, and Kubernetes for managing Docker clusters; that part takes considerable effort to study.
Also make sure you understand Docker's three essentials: images/containers, data volumes, and network management.