Deploying a MongoDB sharded cluster with Docker Swarm

Overview

  • This article describes how to set up a MongoDB sharded cluster in a Docker Swarm environment.
  • The goal is a cluster running in authorized (auth) mode, but if you start it with the auth-enabled script right away, you will not be able to create users. Create the users first in no-auth mode, then restart in auth mode. (The two modes use different startup scripts but mount the same host directories.)

Architecture

(architecture diagram)

  • Three nodes in total: breakpad (the Swarm manager), bpcluster, bogon

Prerequisites

  • Install Docker
  • Initialize the Swarm cluster
    • docker swarm init

Deployment steps

Once the first three steps are done the cluster is usable; if you do not need authenticated access, the remaining four steps can be skipped.

  1. Create the directories
  2. Deploy the services (no-auth mode)
  3. Configure sharding
  4. Generate the keyfile and fix its permissions
  5. Copy the keyfile to the other nodes
  6. Add the users
  7. Restart the services (auth mode)

1. Create the directories

Run before-deploy.sh on every server:

#!/bin/bash

DIR=/data/fates
DATA_PATH="${DIR}/mongo"
# Password piped to sudo -S. Do NOT name this variable PWD: bash resets PWD on
# every cd, which would silently replace the password with the current directory.
SUDO_PASS='1qaz2wsx!@#'

DATA_DIR_LIST=('config' 'shard1' 'shard2' 'shard3' 'script')

function check_directory() {
  if [ ! -d "${DATA_PATH}" ]; then
    echo "create directory: ${DATA_PATH}"
    echo "${SUDO_PASS}" | sudo -S mkdir -p "${DATA_PATH}"
  else
    echo "directory ${DATA_PATH} already exists."
  fi

  cd "${DATA_PATH}"

  for SUB_DIR in "${DATA_DIR_LIST[@]}"
  do
    if [ ! -d "${DATA_PATH}/${SUB_DIR}" ]; then
      echo "create directory: ${DATA_PATH}/${SUB_DIR}"
      echo "${SUDO_PASS}" | sudo -S mkdir -p "${DATA_PATH}/${SUB_DIR}"
    else
      echo "directory: ${DATA_PATH}/${SUB_DIR} already exists."
    fi
  done

  # hand ownership of the whole tree to the current user
  echo "${SUDO_PASS}" | sudo -S chown -R "$USER:$USER" "${DATA_PATH}"
}

check_directory

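The core of the script is an idempotent mkdir loop. A minimal sketch of that logic, run against a temporary directory instead of /data/fates so it can be tried without sudo:

```shell
#!/bin/bash
# Sketch of the directory layout created by before-deploy.sh, using mktemp
# instead of /data/fates so no root access is needed.
BASE="$(mktemp -d)/mongo"

for SUB in config shard1 shard2 shard3 script; do
  mkdir -p "${BASE}/${SUB}"   # -p: no error if the directory already exists
done

ls "${BASE}"   # lists the five subdirectories
```

Because every mkdir uses -p, the script can be re-run safely on a node where some or all of the directories already exist.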

2. Start the MongoDB cluster in no-auth mode

  • At this stage no authentication is in place, so anyone can connect; this window is used to create the users.

On the manager node, create fates-mongo.yaml and deploy it with the command below (adjust the constraints values to match your own hostnames):

docker stack deploy -c fates-mongo.yaml fates-mongo
version: '3.4'
services:
  shard1-server1:
    image: mongo:4.0.5
    # --shardsvr only changes the default port from 27017 to 27018; it can be dropped if --port is set explicitly
    # --directoryperdb: store each database in its own directory
    command: mongod --shardsvr --directoryperdb --replSet shard1
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard2-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard3-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard1-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard2-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard3-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard1-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard2-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard3-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  config1:
    image: mongo:4.0.5
    # --configsvr only changes the default port from 27017 to 27019; it can be dropped if --port is set explicitly
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  config2:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  config3:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  mongos:
    image: mongo:4.0.5
    # since mongo 3.6 the default bind address is 127.0.0.1; 0.0.0.0 lets other containers and hosts connect
    command: mongos --configdb fates-mongo-config/config1:27019,config2:27019,config3:27019 --bind_ip 0.0.0.0 --port 27017
    networks:
      - mongo
    ports:
      - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime
    depends_on:
      - config1
      - config2
      - config3
    deploy:
      restart_policy:
        condition: on-failure
      mode: global

networks:
  mongo:
    driver: overlay
    # if the overlay network was already created outside this stack, use the line below instead
    # external: true


3. Configure sharding

# Initialize the config server replica set
docker exec -it $(docker ps | grep "config" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id: \"fates-mongo-config\",configsvr: true, members: [{ _id : 0, host : \"config1:27019\" },{ _id : 1, host : \"config2:27019\" }, { _id : 2, host : \"config3:27019\" }]})' | mongo --port 27019"
# Initialize the shard replica sets (the third member of each set is an arbiter)
docker exec -it $(docker ps | grep "shard1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard1\", members: [{ _id : 0, host : \"shard1-server1:27018\" },{ _id : 1, host : \"shard1-server2:27018\" },{ _id : 2, host : \"shard1-server3:27018\", arbiterOnly: true }]})' | mongo --port 27018"
docker exec -it $(docker ps | grep "shard2" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard2\", members: [{ _id : 0, host : \"shard2-server1:27018\" },{ _id : 1, host : \"shard2-server2:27018\" },{ _id : 2, host : \"shard2-server3:27018\", arbiterOnly: true }]})' | mongo --port 27018"
docker exec -it $(docker ps | grep "shard3" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard3\", members: [{ _id : 0, host : \"shard3-server1:27018\" },{ _id : 1, host : \"shard3-server2:27018\" },{ _id : 2, host : \"shard3-server3:27018\", arbiterOnly: true }]})' | mongo --port 27018"
# Register the shards with mongos
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard1/shard1-server1:27018,shard1-server2:27018,shard1-server3:27018\")' | mongo"
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard2/shard2-server1:27018,shard2-server2:27018,shard2-server3:27018\")' | mongo"
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard3/shard3-server1:27018,shard3-server2:27018,shard3-server3:27018\")' | mongo"
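Once the commands above have completed, the result can be checked from a mongo shell inside the mongos container (for example docker exec -it <mongos-container> mongo). This is a mongo-shell fragment, not a standalone script:

```javascript
// In a mongo shell connected to mongos:
sh.status()   // the "shards" section should list shard1, shard2 and shard3

// On a shard member (mongo --port 27018) you can additionally inspect
// replica set health with: rs.status()
```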

4. Generate the keyfile

After the first three steps the sharded cluster is already up and usable; if you do not need authentication, you can stop here.

Run generate-keyfile.sh on the manager node:

#!/bin/bash

DATA_PATH=/data/fates/mongo
# As in before-deploy.sh: do not keep the sudo password in a variable named PWD,
# since bash overwrites PWD on every cd.
SUDO_PASS='1qaz2wsx!@#'

function check_directory() {
  if [ ! -d "${DATA_PATH}" ]; then
    echo "directory: ${DATA_PATH} does not exist, please run before-deploy.sh first."
    exit 1
  fi
}

function generate_keyfile() {
  cd "${DATA_PATH}/script"
  if [ ! -f "${DATA_PATH}/script/mongo-keyfile" ]; then
    echo 'create mongo-keyfile.'
    openssl rand -base64 756 -out mongo-keyfile
    # mongod refuses keyfiles readable by group/others;
    # 999 is the mongodb user's UID inside the official image
    echo "${SUDO_PASS}" | sudo -S chmod 600 mongo-keyfile
    echo "${SUDO_PASS}" | sudo -S chown 999 mongo-keyfile
  else
    echo 'mongo-keyfile already exists.'
  fi
}

check_directory
generate_keyfile


5. Copy the keyfile into the script directory on the other servers

Run the copy from the server where the keyfile was generated (note the -p flag, which preserves the permissions set above):

sudo scp -p /data/fates/mongo/script/mongo-keyfile username@server2:/data/fates/mongo/script
sudo scp -p /data/fates/mongo/script/mongo-keyfile username@server3:/data/fates/mongo/script

6. Add the users

Run add-user.sh on the manager node.

The script creates a user named root with password root and the root role; change these values as needed.

docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo -e 'use admin\n db.createUser({user:\"root\",pwd:\"root\",roles:[{role:\"root\",db:\"admin\"}]})' | mongo"

7. Restart with authentication enabled

  • From this point on, operations require logging in with the user created in the previous step.

On the manager node, create fates-mongo-key.yaml and redeploy in auth mode (a different stack file, but the same host paths are mounted):

docker stack deploy -c fates-mongo-key.yaml fates-mongo
version: '3.4'
services:
  shard1-server1:
    image: mongo:4.0.5
    # --shardsvr only changes the default port from 27017 to 27018; it can be dropped if --port is set explicitly
    # --directoryperdb: store each database in its own directory
    command: mongod --shardsvr --directoryperdb --replSet shard1 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard2-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard3-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard1-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard2-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard3-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard1-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard2-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard3-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  config1:
    image: mongo:4.0.5
    # --configsvr only changes the default port from 27017 to 27019; it can be dropped if --port is set explicitly
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  config2:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  config3:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  mongos:
    image: mongo:4.0.5
    # since mongo 3.6 the default bind address is 127.0.0.1; 0.0.0.0 lets other containers and hosts connect
    command: mongos --configdb fates-mongo-config/config1:27019,config2:27019,config3:27019 --bind_ip 0.0.0.0 --port 27017 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    ports:
      - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    depends_on:
      - config1
      - config2
      - config3
    deploy:
      restart_policy:
        condition: on-failure
      mode: global

networks:
  mongo:
    driver: overlay
    # if the overlay network was already created outside this stack, use the line below instead
    # external: true

Problems encountered

Services fail to start

Checking the logs with docker service logs <service-name> showed the config file could not be found: it had never been mounted into the container.

config3 fails to start

The mount path in the stack file was misspelled.

Containers start, but connections are refused

Only the deployment script had been run; the sharding configuration (step 3) was never applied.

Keyfile permission error: error opening file: /data/mongo-keyfile: Permission denied

  • The mongo-keyfile must be owned by UID 999 (the mongodb user inside the container) and have mode 600.

addShard fails

  • Wait until mongos has fully started before running the addShard commands.
  • Make sure the constraints entries in the stack file match your actual hostnames.

After sharding, all data ends up on a single shard

The default chunk size is 64 MB; with a small dataset a single chunk is enough, so nothing gets balanced. Lower the chunk size to observe sharding in action.
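To see balancing happen with a small dataset, the chunk size can be lowered through the config database. This is a mongo-shell fragment to run against mongos; the 16 MB value here is only an illustration:

```javascript
// mongo shell, connected to mongos
use config
db.settings.save({ _id: "chunksize", value: 16 })  // value is in MB
```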
