MongoDB is a database based on distributed file storage, written in C++. It aims to provide a scalable, high-performance data storage solution for web applications.
MongoDB stores data as documents; the data structure consists of key/value (key=>value) pairs. The structures it supports are very loose: a MongoDB document is similar to a JSON object, and field values may contain other documents, arrays, and arrays of documents.
What is JSON?
- JSON stands for JavaScript Object Notation.
- JSON is a lightweight text format for data interchange.
- JSON is language-independent.
- JSON is self-describing and easy to understand.
JSON uses JavaScript syntax to describe data objects, but it remains independent of any language or platform; JSON parsers and libraries exist for many programming languages.
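For illustration, here is what such a document looks like from the mongo shell (a minimal sketch; the people collection and its fields are made up for this example):

> db.people.insertOne({
    name: "zhangsan",
    age: 25,
    address: { city: "Beijing", zip: "100000" },     // field value that is itself a document
    tags: ["dev", "ops"],                            // field value that is an array
    logins: [ { at: new Date(), ip: "127.0.0.1" } ]  // field value that is an array of documents
})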
Comparison between MongoDB and relational databases
Relational database data structure
MongoDB data structure
Following the official documentation at https://docs.mongodb.com/manual/tutorial/install-mongodb-on-red-hat/, create the repo file:
[root@ying01 ~]# cd /etc/yum.repos.d/
[root@ying01 yum.repos.d]# vim mongodb-org-4.0.repo
[mongodb-org-4.0]
name = MongoDB Repository
baseurl = https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck = 1
enabled = 1
gpgkey = https://www.mongodb.org/static/pgp/server-4.0.asc
Check that the new mongodb-org packages show up in yum list, then install them all with yum:
[root@ying01 yum.repos.d]# yum list|grep mongodb-org
mongodb-org.x86_64 4.0.1-1.el7 @mongodb-org-4.0
mongodb-org-mongos.x86_64 4.0.1-1.el7 @mongodb-org-4.0
mongodb-org-server.x86_64 4.0.1-1.el7 @mongodb-org-4.0
mongodb-org-shell.x86_64 4.0.1-1.el7 @mongodb-org-4.0
mongodb-org-tools.x86_64 4.0.1-1.el7 @mongodb-org-4.0
[root@ying01 yum.repos.d]# yum install -y mongodb-org
In mongod.conf, add the machine's own IP to bindIp, separated by a comma:
[root@ying01 ~]# vim /etc/mongod.conf
bindIp: 127.0.0.1,192.168.112.136    // add the local IP
Start the service, then check the process and listening ports:
[root@ying01 ~]# systemctl start mongod
[root@ying01 ~]# ps aux |grep mongod
mongod 8169 0.8 3.2 1074460 60764 ? Sl 09:51 0:01 /usr/bin/mongod -f /etc/mongod.conf
root 8197 0.0 0.0 112720 984 pts/0 S+ 09:54 0:00 grep --color=auto mongod
[root@ying01 ~]# netstat -lntp |grep mongod
tcp 0 0 192.168.112.136:27017 0.0.0.0:* LISTEN 8169/mongod
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 8169/mongod
Connecting to MongoDB
Direct connection:
[root@ying01 ~]# mongo
Specifying the port explicitly (27017 is the default when the config file does not set one):
[root@ying01 ~]# mongo --port 27017
Remote connection, which needs the IP and port:
[root@ying01 ~]# mongo --host 192.168.112.136 --port 27017
Enter the database and create a user:
[root@ying01 ~]# mongo
> use admin    // switch to the admin database
switched to db admin
> db.createUser( { user: "admin", customData: {description: "superuser"}, pwd: "www123", roles: [ { role: "root", db: "admin" } ] } )    // create the admin user
Successfully added user: { "user" : "admin", "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
Statement breakdown:
db.createUser( { user: "admin", customData: {description: "superuser"}, pwd: "admin122", roles: [ { role: "root", db: "admin" } ] } )
- db.createUser: the command that creates a user
- user: "admin": defines the username
- customData: {description: "superuser"}: optional free-form metadata describing the user
- pwd: "admin122": defines the user's password
- roles: [ { role: "root", db: "admin" } ]: grants the role root on the database admin
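A couple of related helpers, for reference (a sketch; run from the admin database, and note these calls come from the standard shell API rather than from the transcript above):

> use admin
> db.changeUserPassword("admin", "newPass123")    // rotate the password
> db.updateUser("admin", { customData: { description: "site root account" } })    // rewrite the metadata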
To list all users, switch to the admin database first:
> db.system.users.find()
{ "_id" : "admin.admin", "user" : "admin", "db" : "admin", "credentials" : { "SCRAM-SHA-1" : { "iterationCount" : 10000, "salt" : "NRoDD1kSxLktW8vyDg4mpw==", "storedKey" : "yX+2kSbCl1bPpsh+0ZeE7QlcW6A=", "serverKey" : "XM9NgrMNOwXAvuWusY6iVhpyuFw=" }, "SCRAM-SHA-256" : { "iterationCount" : 15000, "salt" : "MOokBWPCOobBeNwHnhm/2QagzAT8h2yIuCzROg==", "storedKey" : "tAqs7zMF8InT0FU09lCgq2ZVB9wRgeIyoa1UONgRDM0=", "serverKey" : "lN2TYZX5Snik4gMthUNZE7jw71Nkxo13LAChh9K8ZiI=" } }, "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
View all users under the current database:
> show users
{ "_id" : "admin.admin", "user" : "admin", "db" : "admin", "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ], "mechanisms" : [ "SCRAM-SHA-1", "SCRAM-SHA-256" ] }
Create a new user ying:
> db.createUser({user:"ying",pwd:"www123",roles:[{role:"read",db:"testdb"}]})
Successfully added user: { "user" : "ying", "roles" : [ { "role" : "read", "db" : "testdb" } ] }
> show users
{ "_id" : "admin.admin", "user" : "admin", "db" : "admin", "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ], "mechanisms" : [ "SCRAM-SHA-1", "SCRAM-SHA-256" ] }
{ "_id" : "admin.ying", "user" : "ying", "db" : "admin", "roles" : [ { "role" : "read", "db" : "testdb" } ], "mechanisms" : [ "SCRAM-SHA-1", "SCRAM-SHA-256" ] }
Delete a user:
> db.dropUser('ying')
true
> show users
{ "_id" : "admin.admin", "user" : "admin", "db" : "admin", "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ], "mechanisms" : [ "SCRAM-SHA-1", "SCRAM-SHA-256" ] }
Switch to the testdb database; if it does not exist, it is created automatically:
> use testdb
switched to db testdb
> show users    // no users in the current database
> db.system.user.find()    // (note: the collection is actually named system.users, and it lives in the admin database)
> ^C
bye
For the users to take effect, edit the service unit file and add --auth; subsequent logins will then require authentication.
[root@ying01 ~]# vim /usr/lib/systemd/system/mongod.service
Environment="OPTIONS=-f /etc/mongod.conf"
change to:
Environment="OPTIONS=--auth -f /etc/mongod.conf"
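Alternatively, authentication can be enabled in /etc/mongod.conf itself rather than by editing the unit file; a minimal sketch of the equivalent YAML (same effect as --auth):

security:
  authorization: enabled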
Restart the mongod service:
[root@ying01 ~]# systemctl restart mongod
Warning: mongod.service changed on disk. Run 'systemctl daemon-reload' to reload units.
[root@ying01 ~]# systemctl daemon-reload
[root@ying01 ~]# systemctl restart mongod
[root@ying01 ~]# ps aux |grep mongod
mongod 8611 12.6 2.8 1068324 52744 ? Sl 11:42 0:01 /usr/bin/mongod --auth -f /etc/mongod.conf
root 8642 0.0 0.0 112720 980 pts/0 S+ 11:42 0:00 grep --color=auto mongod
Connecting now without a password, you can no longer view the databases, because authentication is required:
[root@ying01 ~]# mongo --host 192.168.112.136 --port 27017
MongoDB shell version v4.0.1
connecting to: mongodb://192.168.112.136:27017/
MongoDB server version: 4.0.1
> use admin
switched to db admin
> show users
2018-08-27T11:43:36.654+0800 E QUERY [js] Error: command usersInfo requires authentication :    // authentication is required
Log in with the password and authenticate:
[root@ying01 ~]# mongo --host 192.168.112.136 --port 27017 -u admin -p 'admin122' --authenticationDatabase "admin"
> use admin
switched to db admin
> show users
{ "_id" : "admin.admin", "user" : "admin", "db" : "admin", "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ], "mechanisms" : [ "SCRAM-SHA-1", "SCRAM-SHA-256" ] }
Switch to db1 and create a new user:
> use db1
switched to db db1
> show users
> db.createUser( { user: "test1", pwd: "www123", roles: [ { role: "readWrite", db: "db1" }, {role: "read", db: "db2" } ] } )
Successfully added user: { "user" : "test1", "roles" : [ { "role" : "readWrite", "db" : "db1" }, { "role" : "read", "db" : "db2" } ] }
> show users    // view the user created under db1
{ "_id" : "db1.test1", "user" : "test1", "db" : "db1", "roles" : [ { "role" : "readWrite", "db" : "db1" }, { "role" : "read", "db" : "db2" } ], "mechanisms" : [ "SCRAM-SHA-1", "SCRAM-SHA-256" ] }
> use db1
switched to db db1
> db.auth('test1','www123')    // authenticate as test1; a return value of 1 means success
1
MongoDB user roles (roles can also be changed after creation; see the sketch after this list):
- read: allows the user to read the given database
- readWrite: allows the user to read and write the given database
- dbAdmin: allows the user to run administrative functions in the given database, such as creating and dropping indexes, viewing statistics, or accessing system.profile
- userAdmin: allows the user to write to the system.users collection, i.e. to create, delete and manage users in the given database
- clusterAdmin: available only in the admin database; grants administrative rights over all sharding and replica-set related functions
- readAnyDatabase: available only in the admin database; grants read access to all databases
- readWriteAnyDatabase: available only in the admin database; grants read/write access to all databases
- userAdminAnyDatabase: available only in the admin database; grants userAdmin rights on all databases
- dbAdminAnyDatabase: available only in the admin database; grants dbAdmin rights on all databases
- root: available only in the admin database; the superuser account with full privileges
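Roles can also be adjusted after a user exists (a sketch; test1 is the user created above, while db3 is a made-up database name):

> use db1
> db.grantRolesToUser("test1", [ { role: "read", db: "db3" } ])     // add a role
> db.revokeRolesFromUser("test1", [ { role: "read", db: "db2" } ])  // remove a role
> db.getUser("test1")    // confirm the resulting role list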
Create a collection mycol under db1:
[root@ying01 ~]# mongo --host 192.168.112.136 --port 27017 -u admin -p 'www123' --authenticationDatabase "admin"
> use db1
switched to db db1
> show users
{ "_id" : "db1.test1", "user" : "test1", "db" : "db1", "roles" : [ { "role" : "readWrite", "db" : "db1" }, { "role" : "read", "db" : "db2" } ], "mechanisms" : [ "SCRAM-SHA-1", "SCRAM-SHA-256" ] }
> db.createCollection("mycol", { capped : true, autoIndexID : true, size : 6142800, max : 10000 } )
{ "ok" : 0, "errmsg" : "too many users are authenticated", "code" : 13, "codeName" : "Unauthorized" }
Syntax: db.createCollection(name, options)
- name: the name of the collection.
- options: optional; configures the collection with the parameters below (a verification sketch follows this list).
- capped: true/false (optional). If true, creates a capped collection: a fixed-size collection that automatically overwrites its oldest entries once it reaches its maximum size. If true, the size parameter must also be specified.
- autoIndexId: true/false (optional). If true, automatically creates an index on the _id field; the default is false. (Note the exact spelling autoIndexId; the option is also deprecated in recent MongoDB versions.)
- size: (optional) the maximum size of the capped collection, in bytes; required when capped is true.
- max: (optional) the maximum number of documents allowed in the capped collection.
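Once a capped collection exists, its options can be verified from the shell (a sketch, assuming the mycol collection created below; exact stats() field names may vary slightly by version):

> db.mycol.isCapped()       // true for a capped collection
> db.mycol.stats().max      // configured document-count cap
> db.mycol.stats().maxSize  // configured size cap in bytes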
Troubleshooting
The error message at this point was:
"errmsg" : "too many users are authenticated"
Searching for it turned up nothing. (For the record, MongoDB reports this error when more than one user ends up authenticated on the same connection while the operation requires exactly one.) Suspecting a user problem, after repeated attempts I decided to log in without --host and --port:
[root@ying01 ~]# mongo -u admin -p 'www123' --authenticationDatabase "admin"
> use db1
switched to db db1
> show users
{ "_id" : "db1.test1", "user" : "test1", "db" : "db1", "roles" : [ { "role" : "readWrite", "db" : "db1" }, { "role" : "read", "db" : "db2" } ], "mechanisms" : [ "SCRAM-SHA-1", "SCRAM-SHA-256" ] }
> db.createCollection("mycol", { capped : true, autoIndexID : true, size : 6142800, max : 10000 } )
{ "ok" : 0, "errmsg" : "The field 'autoIndexID' is not a valid collection option. Options: { capped: true, autoIndexID: true, size: 6142800.0, max: 10000.0 }", "code" : 72, "codeName" : "InvalidOptions" }
A different error this time, which at least is progress.
"The field 'autoIndexID' is not a valid collection option. Options: { capped: true, autoIndexID: true, size:
大意:“字段'autoIndexID'不是有效的集合选项。选项:{capped:true,autoIndexID:true,size:
那么此时取消: autoIndexID : true
> db.createCollection("mycol", { capped : true, size : 6142800, max : 10000 } ) { "ok" : 1 } > show tables mycol此时,发现OK了。再用show tables 命令,有mycol输出; 问题解决,可是原理,先记录下来,往后有时间及知识不断积累,再详解其中原理。
View collections with show collections or show tables:
> show collections
mycol
Insert data directly into the collection Account; if the collection does not exist, MongoDB creates it automatically:
> db.Account.insert({AccountID:1,UserName:"123",password:"123456"})
WriteResult({ "nInserted" : 1 })
> show tables    // a new collection, Account
Account
mycol
> db.Account.insert({AccountID:2,UserName:"ying",password:"abcdef"})    // insert another document
WriteResult({ "nInserted" : 1 })
> show tables
Account
mycol
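Several documents can also be inserted in one call (a sketch using insertMany from the 3.2+ shell; the extra accounts are made up):

> db.Account.insertMany([
    { AccountID: 3, UserName: "tom", password: "111111" },
    { AccountID: 4, UserName: "jack", password: "222222" }
])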
Update data in the collection:
> db.Account.update({AccountID:1},{"$set":{"Age":20}})    // add a field to the first document in Account
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
View all documents with db.Account.find(); the first document now carries the updated field:
> db.Account.find()
{ "_id" : ObjectId("5b83bf9209eb45c97dce1c4c"), "AccountID" : 1, "UserName" : "123", "password" : "123456", "Age" : 20 }
{ "_id" : ObjectId("5b83bfe509eb45c97dce1c4d"), "AccountID" : 2, "UserName" : "ying", "password" : "abcdef" }
View a specific document: db.Account.find({AccountID:2})
> db.Account.find({AccountID:2})    // view the document with AccountID 2 in the Account collection
{ "_id" : ObjectId("5b83bfe509eb45c97dce1c4d"), "AccountID" : 2, "UserName" : "ying", "password" : "abcdef" }
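find() also takes query operators and a projection document (a sketch; $gte and the projection below are standard shell syntax rather than part of the transcript):

> db.Account.find( { AccountID: { $gte: 2 } }, { UserName: 1, _id: 0 } )    // match AccountID >= 2, return only UserName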
Remove a specific document: db.Account.remove({AccountID:1})
> db.Account.remove({AccountID:1})    // remove the document with AccountID 1
WriteResult({ "nRemoved" : 1 })
> db.Account.find()    // view all documents; the AccountID 1 document is gone
{ "_id" : ObjectId("5b83bfe509eb45c97dce1c4d"), "AccountID" : 2, "UserName" : "ying", "password" : "abcdef" }
Drop a collection: db.Account.drop()
> db.Account.drop()    // drop the Account collection
true
> show tables
mycol
> db.mycol.drop()    // drop the mycol collection
true
> show tables
Recreate a collection col2 (implicitly, by inserting into it):
> db.col2.insert({AccountID:1,UserName:"123",password:"123456"})
WriteResult({ "nInserted" : 1 })
View the status of every collection: db.printCollectionStats()
> db.printCollectionStats()
col2
{ "ns" : "db1.col2", "size" : 80, "count" : 1, "avgObjSize" : 80, "storageSize" : 16384, "capped" : false, "wiredTiger" : { "metadata" : { "formatVersion" : 1 }, ......
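For a single collection, db.<collection>.stats() returns the same information (a sketch):

> db.col2.stats().count    // number of documents
> db.col2.stats().size     // total data size in bytes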
Install the PHP mongodb extension: download the source tarball and unpack it:
[root@ying01 ~]# cd /usr/local/src/
[root@ying01 src]# wget https://pecl.php.net/get/mongodb-1.3.0.tgz
[root@ying01 src]# ls mongodb-1.3.0.tgz
mongodb-1.3.0.tgz
[root@ying01 src]# tar zxf mongodb-1.3.0.tgz
Generate the configure script with phpize:
[root@ying01 src]# cd mongodb-1.3.0
[root@ying01 mongodb-1.3.0]# ls
config.m4 CREDITS Makefile.frag phongo_compat.h php_phongo.c php_phongo.h README.md src Vagrantfile
config.w32 LICENSE phongo_compat.c php_bson.h php_phongo_classes.h php_phongo_structs.h scripts tests
[root@ying01 mongodb-1.3.0]# /usr/local/php-fpm/bin/phpize    // generate the configure script
Configuring for:
PHP Api Version: 20131106
Zend Module Api No: 20131226
Zend Extension Api No: 220131226
In the unpacked directory, configure the desired options, generate the Makefile, then compile and install:
[root@ying01 mongodb-1.3.0]# ./configure --with-php-config=/usr/local/php-fpm/bin/php-config
[root@ying01 mongodb-1.3.0]# make
[root@ying01 mongodb-1.3.0]# make install
[root@ying01 mongodb-1.3.0]# ls /usr/local/php-fpm/lib/php/extensions/no-debug-non-zts-20131226/
memcache.so mongodb.so opcache.a opcache.so redis.so
Add extension = mongodb.so to the PHP config file, then check that PHP loads the mongodb module:
[root@ying01 mongodb-1.3.0]# vim /usr/local/php-fpm/etc/php.ini
extension=memcache.so
extension=redis.so      // added previously
extension = mongodb.so  // add this line
[root@ying01 mongodb-1.3.0]# /usr/local/php-fpm/bin/php -m|grep mongodb
mongodb
[root@ying01 mongodb-1.3.0]# /etc/init.d/php-fpm restart    // restart the php-fpm service
Gracefully shutting down php-fpm . done
Starting php-fpm done
Download the mongo source package, unpack it, and generate configure with phpize:
[root@ying01 ~]# cd /usr/local/src/
[root@ying01 src]# wget https://pecl.php.net/get/mongo-1.6.16.tgz
[root@ying01 src]# ls mongo-1.6.16.tgz
mongo-1.6.16.tgz
[root@ying01 src]# tar zxf mongo-1.6.16.tgz
[root@ying01 src]# cd mongo-1.6.16/
[root@ying01 mongo-1.6.16]# /usr/local/php-fpm/bin/phpize
Configuring for:
PHP Api Version: 20131106
Zend Module Api No: 20131226
Zend Extension Api No: 220131226
Configure the desired options, generate the Makefile, compile against its presets, then install:
[root@ying01 mongo-1.6.16]# ./configure --with-php-config=/usr/local/php-fpm/bin/php-config
[root@ying01 mongo-1.6.16]# make
[root@ying01 mongo-1.6.16]# make install
Add extension = mongo.so to the PHP config file, then check that PHP loads the module:
[root@ying01 mongo-1.6.16]# vim /usr/local/php-fpm/etc/php.ini
extension=memcache.so
extension=redis.so
extension = mongodb.so
extension = mongo.so    // newly added
[root@ying01 mongo-1.6.16]# /usr/local/php-fpm/bin/php -m|grep mongo
mongo
mongodb
Write a PHP script to test whether the mongo extension works:
[root@ying01 mongo-1.6.16]# ls /data/wwwroot/default/
1.php index.html
[root@ying01 mongo-1.6.16]# vim /data/wwwroot/default/1.php
<?php
$m = new MongoClient();                         // connect
$db = $m->test;                                 // select the database named "test"
$collection = $db->createCollection("runoob");  // create the collection
echo "collection created successfully";
?>
输出 "集合建立成功",说明 runoob集合建立成功
[root@ying01 mongo-1.6.16]# systemctl stop httpd ;systemctl start nginx [root@ying01 mongo-1.6.16]# ps aux |grep nginx root 64145 0.0 0.0 45832 1268 ? Ss 19:26 0:00 nginx: master process /usr/ [root@ying01 mongo-1.6.16]# curl localhost/1.php 集合建立成功
Log in to mongodb and check the test database:
[root@ying01 mongo-1.6.16]# vim /usr/lib/systemd/system/mongod.service
Environment="OPTIONS=-f /etc/mongod.conf"    // remove --auth so login no longer requires authentication
[root@ying01 mongo-1.6.16]# systemctl daemon-reload
[root@ying01 mongo-1.6.16]# systemctl restart mongod
[root@ying01 mongo-1.6.16]# curl localhost/1.php
collection created successfully
[root@ying01 mongo-1.6.16]# mongo --host 192.168.112.136 --port 27017
> use test
switched to db test
> show tables
runoob
The figure below shows a replica-set architecture: one primary server handles client requests, while several secondaries keep copies of the primary's data.
When the primary goes down, the secondary with the highest priority is promoted to primary.
As shown, an arbiter only votes in elections and stores no data.
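From any member you can ask who is currently primary (a sketch using the standard isMaster helper):

> db.isMaster().ismaster    // true when run on the primary
> db.isMaster().primary     // "host:port" of the current primary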
Machine allocation:
- ying01 192.168.112.136 PRIMARY
- ying02 192.168.112.138 SECONDARY
- ying03 192.168.112.139 SECONDARY
Install MongoDB on ying02 and ying03:
[root@ying02 ~]# cd /etc/yum.repos.d/
[root@ying02 yum.repos.d]# vim mongodb-org-4.0.repo
[mongodb-org-4.0]
name = MongoDB Repository
baseurl = https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck = 1
enabled = 1
gpgkey = https://www.mongodb.org/static/pgp/server-4.0.asc
[root@ying02 yum.repos.d]# yum install -y mongodb-org
On ying03:
[root@ying03 ~]# cd /etc/yum.repos.d/
[root@ying03 yum.repos.d]# vim mongodb-org-4.0.repo
[mongodb-org-4.0]
name = MongoDB Repository
baseurl = https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck = 1
enabled = 1
gpgkey = https://www.mongodb.org/static/pgp/server-4.0.asc
[root@ying03 yum.repos.d]# yum install -y mongodb-org
Edit the config file on ying01:
[root@ying01 yum.repos.d]# vim /etc/mongod.conf
bindIp: 127.0.0.1,192.168.112.136
replication:              // uncomment this line so it takes effect
  oplogSizeMB: 20         // add
  replSetName: yinglinux  // add
Restart the mongod service:
[root@ying01 yum.repos.d]# systemctl restart mongod.service
[root@ying01 yum.repos.d]# ps aux|grep mongod
mongod 64534 9.7 3.1 1102168 58380 ? Sl 20:42 0:03 /usr/bin/mongod -f /etc/mongod.conf
root 64569 0.0 0.0 112720 984 pts/0 S+ 20:43 0:00 grep --color=auto mongod
With the same settings as on ying01, edit the mongod config file on ying02:
[root@ying02 ~]# vim /etc/mongod.conf
bindIp: 127.0.0.1,192.168.112.138
replication:
  oplogSizeMB: 20
  replSetName: yinglinux
Edit the mongod config file on ying03:
[root@ying03 ~]# vim /etc/mongod.conf
bindIp: 127.0.0.1,192.168.112.139    // add the internal IP
replication:                          // add the lines below
  oplogSizeMB: 20
  replSetName: yinglinux
Start the mongod service on both ying02 and ying03:
[root@ying02 ~]# systemctl start mongod
[root@ying02 ~]# ps aux|grep mongod
mongod 16421 6.5 2.7 1102140 52160 ? Sl 20:50 0:00 /usr/bin/mongod -f /etc/mongod.conf
root 16452 0.0 0.0 112720 984 pts/0 R+ 20:50 0:00 grep --color=auto mongod
[root@ying02 ~]# netstat -lntp |grep mongod
tcp 0 0 192.168.112.138:27017 0.0.0.0:* LISTEN 16421/mongod
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 16421/mongod
[root@ying03 ~]# systemctl start mongod
[root@ying03 ~]# ps aux|grep mongod
mongod 3773 5.6 2.7 1102148 52088 ? Sl 20:50 0:00 /usr/bin/mongod -f /etc/mongod.conf
root 3804 0.0 0.0 112720 984 pts/0 S+ 20:50 0:00 grep --color=auto mongod
[root@ying03 ~]# netstat -lntp |grep mongod
tcp 0 0 192.168.112.139:27017 0.0.0.0:* LISTEN 3773/mongod
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 3773/mongod
Now configure the replica set:
> config={_id:"yinglinux",members:[{_id:0,host:"192.168.112.136:27017"},{_id:1,host:"192.168.112.138:27017"},{_id:2,host:"192.168.112.139:27017"}]}    // define the replica set
{ "_id" : "yinglinux", "members" : [ { "_id" : 0, "host" : "192.168.112.136:27017" }, { "_id" : 1, "host" : "192.168.112.138:27017" }, { "_id" : 2, "host" : "192.168.112.139:27017" } ] }
> rs.initiate(config)    // initialize
{ "ok" : 1, "operationTime" : Timestamp(1535374960, 1), "$clusterTime" : { "clusterTime" : Timestamp(1535374960, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } }
yinglinux:OTHER> rs.status()    // check the replica set status; only the important fields are shown here, for brevity
{ "_id" : 0, "name" : "192.168.112.136:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY",    // primary
"uptime" : 1274, "optime" : { "ts" : Timestamp(1535375032, 1), "t" : NumberLong(1)
{ "_id" : 1, "name" : "192.168.112.138:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",    // secondary
"uptime" : 73, "optime" : { "ts" : Timestamp(1535375032, 1), "t" : NumberLong(1)
{ "_id" : 2, "name" : "192.168.112.139:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",    // secondary
"uptime" : 73, "optime" : { "ts" : Timestamp(1535375032, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1535375032, 1), "t" : NumberLong(1)
yinglinux:PRIMARY>
On ying01:
Create an acc collection in the mydb database:
yinglinux:PRIMARY> use admin
switched to db admin
yinglinux:PRIMARY> use mydb
switched to db mydb
yinglinux:PRIMARY> db.acc.insert({AccountID:1,UserName:"123",password:"123456"})
WriteResult({ "nInserted" : 1 })
yinglinux:PRIMARY> show dbs    // list all databases
admin 0.000GB
config 0.000GB
db1 0.000GB
local 0.000GB
mydb 0.000GB
test 0.000GB
yinglinux:PRIMARY> use mydb    // switch to mydb
switched to db mydb
yinglinux:PRIMARY> show tables    // the acc collection has been created
acc
On ying02:
To view all databases from a secondary, run rs.slaveOk() first:
yinglinux:SECONDARY> rs.slaveOk()
yinglinux:SECONDARY> show dbs
admin 0.000GB
config 0.000GB
db1 0.000GB
local 0.000GB
mydb 0.000GB
test 0.000GB
yinglinux:SECONDARY> use mydb    // switch to mydb
switched to db mydb
yinglinux:SECONDARY> show tables    // the acc collection is visible on the secondary too
acc
On ying03:
Likewise, run rs.slaveOk() first on this secondary:
yinglinux:SECONDARY> rs.slaveOk()
yinglinux:SECONDARY> show dbs
admin 0.000GB
config 0.000GB
db1 0.000GB
local 0.000GB
mydb 0.000GB
test 0.000GB
yinglinux:SECONDARY> use mydb    // switch to mydb
switched to db mydb
yinglinux:SECONDARY> show tables    // the acc collection is visible here as well
acc
Now simulate a failure of ying01 and watch one of the other two machines become PRIMARY.
Add an iptables rule on ying01:
[root@ying01 ~]# iptables -I INPUT -p tcp --dport 27017 -j DROP
Check the status from ying02: ying01 is seen as down, and ying02 has become PRIMARY:
yinglinux:PRIMARY> rs.status()
"_id" : 0, "name" : "192.168.112.136:27017", "health" : 0, "state" : 8,
"stateStr" : "(not reachable/healthy)",    // ying01 is in trouble
"_id" : 1, "name" : "192.168.112.138:27017", "health" : 1, "state" : 1,
"stateStr" : "PRIMARY",    // ying02 has become PRIMARY
Because all three machines had priority 1, neither ying02 nor ying03 had precedence; each had a 50% chance of becoming PRIMARY.
Now set the priorities.
On ying01, delete the earlier iptables rule:
[root@ying01 ~]# iptables -D INPUT -p tcp --dport 27017 -j DROP
ying01's connectivity is restored, but it stays SECONDARY; it does not become PRIMARY again on its own.
"_id" : 0, "name" : "192.168.112.136:27017", "health" : 1, "state" : 2,
"stateStr" : "SECONDARY",
ying02 is now PRIMARY, so operate on it:
yinglinux:PRIMARY> cfg = rs.conf()    // view the priorities
"_id" : 0, "host" : "192.168.112.136:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false,
"priority" : 1,    // priority 1
"_id" : 1, "host" : "192.168.112.138:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false,
"priority" : 1,    // also priority 1
"_id" : 2, "host" : "192.168.112.139:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false,
"priority" : 1,    // priority 1
Still on the PRIMARY, assign the priorities:
yinglinux:PRIMARY> cfg.members[0].priority = 3    // ying01 gets 3
3
yinglinux:PRIMARY> cfg.members[1].priority = 2    // ying02 gets 2
yinglinux:PRIMARY> cfg.members[2].priority = 1    // ying03 gets 1
1
yinglinux:PRIMARY> rs.reconfig(cfg)    // reload so the changes take effect
{ "ok" : 1, "operationTime" : Timestamp(1535381404, 2), "$clusterTime" : { "clusterTime" : Timestamp(1535381404, 2), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } }
ying01 now becomes PRIMARY because it was given the highest priority. (This step, too, can only be run on the PRIMARY.)
yinglinux:PRIMARY> cfg = rs.conf()    // view the priorities
"_id" : 0, "host" : "192.168.112.136:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false,
"priority" : 3,
"_id" : 1, "host" : "192.168.112.138:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false,
"priority" : 2,
"_id" : 2, "host" : "192.168.112.139:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false,
"priority" : 1,
As the experiment shows, ying01 became PRIMARY again because it holds the highest priority.
The three roles in MongoDB sharding (see the sh.status() sketch after these descriptions):
mongos: the entry point for requests to the cluster. All requests are coordinated through mongos, so no routing layer needs to be added to the application; mongos is itself a request dispatch center that forwards each data request to the appropriate shard server. In production there are usually multiple mongos instances, so that a single failure does not make the whole cluster unreachable.
config server: stores the configuration metadata for all databases (routing and shard information). mongos does not persist the shard and routing information itself, only caches it in memory; the config servers hold the authoritative copy. mongos loads the configuration from the config servers on first start or after a restart, and when the configuration changes the config servers notify all mongos instances to refresh their state, so routing stays accurate. Production deployments run multiple config servers, because they hold the sharding metadata and must not lose it.
shard: a MongoDB instance that stores part of a collection's data. Each shard is a standalone mongod or a replica set; in production, every shard should be a replica set.
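Once the cluster is assembled, all three roles can be inspected from mongos in one command (a sketch, run against the mongos on port 20000):

mongos> sh.status()    // lists the config servers, the registered shards, and chunk distribution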
Note: several puzzling errors appeared in this part; hours of searching and repeated experiments did not resolve them at the time. The steps below are theoretically correct; for lack of time the problems are recorded as-is, to be resolved later. (A likely root cause is noted at the end of this section.)
Building the sharded cluster: server planning
Three machines: ying01, ying02, ying03
ying01 runs: mongos, config server, replica set 1 primary, replica set 2 arbiter, replica set 3 secondary
ying02 runs: mongos, config server, replica set 1 secondary, replica set 2 primary, replica set 3 arbiter
ying03 runs: mongos, config server, replica set 1 arbiter, replica set 2 secondary, replica set 3 primary
Port allocation: mongos 20000, config 21000, replica set 1 27001, replica set 2 27002, replica set 3 27003
Disable firewalld and SELinux on all three machines, or add rules for the corresponding ports.
[root@ying01 ~]# mkdir -p /data/mongodb/mongos/log
[root@ying01 ~]# mkdir -p /data/mongodb/config/{data,log}
[root@ying01 ~]# mkdir -p /data/mongodb/shard1/{data,log}
[root@ying01 ~]# mkdir -p /data/mongodb/shard2/{data,log}
[root@ying01 ~]# mkdir -p /data/mongodb/shard3/{data,log}
[root@ying01 ~]# mkdir /etc/mongod/
config server configuration
[root@ying01 ~]# vim /etc/mongod/config.conf
pidfilepath = /var/run/mongodb/configsrv.pid
dbpath = /data/mongodb/config/data
logpath = /data/mongodb/config/log/configsrv.log
logappend = true
bind_ip = 192.168.112.136
port = 21000
fork = true
configsvr = true    # declare this is a config db of a cluster
replSet = configs   # replica set name
maxConns = 20000    # maximum number of connections
Start the config server:
[root@ying01 ~]# mongod -f /etc/mongod/config.conf
[root@ying01 ~]# ps aux|grep mongod
mongod 64534 0.9 5.4 1509136 101344 ? Sl 20:42 1:35 /usr/bin/mongod -f /etc/mongod.conf
root 65446 6.5 3.2 1147180 60648 ? Sl 23:22 0:01 mongod -f /etc/mongod/config.conf
root 65482 0.0 0.0 112720 980 pts/0 S+ 23:23 0:00 grep --color=auto mongod
[root@ying01 ~]# netstat -lntp |grep mongod
tcp 0 0 192.168.112.136:21000 0.0.0.0:* LISTEN 65446/mongod
tcp 0 0 192.168.112.136:27017 0.0.0.0:* LISTEN 64534/mongod
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 64534/mongod
Log in to mongodb on port 21000 and initialize the replica set:
[root@ying01 ~]# mongo --host 192.168.112.136 --port 21000
> config = { _id: "configs", members: [ {_id : 0, host : "192.168.112.136:21000"},{_id : 1, host : "192.168.112.138:21000"},{_id : 2, host : "192.168.112.139:21000"}] }
{ "_id" : "configs", "members" : [ { "_id" : 0, "host" : "192.168.112.136:21000" }, { "_id" : 1, "host" : "192.168.112.138:21000" }, { "_id" : 2, "host" : "192.168.112.139:21000" } ] }
> rs.initiate(config)
{ "ok" : 1, "operationTime" : Timestamp(1535383937, 1), "$gleStats" : { "lastOpTime" : Timestamp(1535383937, 1), "electionId" : ObjectId("000000000000000000000000") }, "lastCommittedOpTime" : Timestamp(0, 0), "$clusterTime" : { "clusterTime" : Timestamp(1535383937, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } }
configs:SECONDARY> rs.status()
"_id" : 0, "name" : "192.168.112.136:21000", "health" : 1, "state" : 1,
"stateStr" : "PRIMARY",    // became primary
Shard configuration
shard1 configuration
[root@ying01 ~]# vim /etc/mongod/shard1.conf
pidfilepath = /var/run/mongodb/shard1.pid
dbpath = /data/mongodb/shard1/data
logpath = /data/mongodb/shard1/log/shard1.log
logappend = true
bind_ip = 192.168.112.136
port = 27001
fork = true
#httpinterface=true  # enable the web monitor
#rest=true
replSet = shard1    # replica set name
shardsvr = true     # declare this is a shard db of a cluster
maxConns = 20000    # maximum number of connections
shard2 configuration
[root@ying01 ~]# vim /etc/mongod/shard2.conf
pidfilepath = /var/run/mongodb/shard2.pid
dbpath = /data/mongodb/shard2/data
logpath = /data/mongodb/shard2/log/shard2.log
logappend = true
bind_ip = 192.168.112.136
port = 27002
fork = true
#httpinterface=true  # enable the web monitor
#rest=true
replSet = shard2    # replica set name
shardsvr = true     # declare this is a shard db of a cluster
maxConns = 20000    # maximum number of connections
shard3 configuration
[root@ying01 ~]# vim /etc/mongod/shard3.conf
pidfilepath = /var/run/mongodb/shard3.pid
dbpath = /data/mongodb/shard3/data
logpath = /data/mongodb/shard3/log/shard3.log
logappend = true
bind_ip = 192.168.112.136
port = 27003
fork = true
#httpinterface=true  # enable the web monitor
#rest=true
replSet = shard3    # replica set name
shardsvr = true     # declare this is a shard db of a cluster
maxConns = 20000    # maximum number of connections
Start shard1, shard2 and shard3 on all three machines:
[root@ying01 ~]# mongod -f /etc/mongod/shard1.conf
2018-08-28T00:08:34.971+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
about to fork child process, waiting until server is ready for connections.
forked process: 676
child process started successfully, parent exiting
Log in to mongodb on port 27001 and create the shard1 replica set:
[root@ying01 ~]# mongo --host 192.168.112.136 --port 27001
> use admin
switched to db admin
> config = { _id: "shard1", members: [ {_id : 0, host : "192.168.112.136:27001"}, {_id: 1,host : "192.168.112.138:27001"},{_id : 2, host : "192.168.112.139:27001",arbiterOnly:true}] }    // note: do not do this logged in on ying03, since 192.168.112.139 is set as the arbiter here
{ "_id" : "shard1", "members" : [ { "_id" : 0, "host" : "192.168.112.136:27001" }, { "_id" : 1, "host" : "192.168.112.138:27001" }, { "_id" : 2, "host" : "192.168.112.139:27001", "arbiterOnly" : true } ] }
> rs.initiate(config)
{ "ok" : 1,    // "ok" : 1 means the configuration is correct
"operationTime" : Timestamp(1535386695, 1), "$clusterTime" : { "clusterTime" : Timestamp(1535386695, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } }
shard1:SECONDARY> rs.status()
"_id" : 0, "name" : "192.168.112.136:27001", "health" : 1, "state" : 1,
"stateStr" : "PRIMARY",    // ying01 is PRIMARY at this point
"uptime" : 438, "optime" : { "ts" : Timestamp(1535386758, 1), "t" : NumberLong(1)
Create the shard2 replica set
On ying02, log in to mongodb on port 27002:
[root@ying02 ~]# mongo --host 192.168.112.138 --port 27002
> use admin
switched to db admin
> config = { _id: "shard2", members: [ {_id : 0, host : "192.168.112.136:27002" ,arbiterOnly:true},{_id : 1, host : "192.168.112.138:27002"},{_id : 2, host : "192.168.112.139:27002"}] }    // do not do this logged in on ying01; 192.168.112.136 is set as the arbiter here
{ "_id" : "shard2", "members" : [ { "_id" : 0, "host" : "192.168.112.136:27002", "arbiterOnly" : true }, { "_id" : 1, "host" : "192.168.112.138:27002" }, { "_id" : 2, "host" : "192.168.112.139:27002" } ] }
> rs.initiate()    // note: called without the config object, so a default single-member configuration is used (see the note at the end of this section)
{ "info2" : "no configuration specified. Using a default configuration for the set", "me" : "192.168.112.138:27002", "ok" : 1, "operationTime" : Timestamp(1535387519, 1), "$clusterTime" : { "clusterTime" : Timestamp(1535387519, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } }
shard2:SECONDARY> rs.status()
{ "set" : "shard2", "date" : ISODate("2018-08-27T16:32:32.274Z"), "myState" : 1, "term" : NumberLong(1), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "heartbeatIntervalMillis" : NumberLong(2000), "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1535387551, 1), "t" : NumberLong(1) }, "readConcernMajorityOpTime" : { "ts" : Timestamp(1535387551, 1), "t" : NumberLong(1) }, "appliedOpTime" : { "ts" : Timestamp(1535387551, 1), "t" : NumberLong(1) }, "durableOpTime" : { "ts" : Timestamp(1535387551, 1), "t" : NumberLong(1) } }, "lastStableCheckpointTimestamp" : Timestamp(1535387521, 1),
"members" : [ { "_id" : 0, "name" : "192.168.112.138:27002", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 318, "optime" : { "ts" : Timestamp(1535387551, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-08-27T16:32:31Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "could not find member to sync from", "electionTime" : Timestamp(1535387519, 2), "electionDate" : ISODate("2018-08-27T16:31:59Z"), "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" } ],
"ok" : 1, "operationTime" : Timestamp(1535387551, 1), "$clusterTime" : { "clusterTime" : Timestamp(1535387551, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } }
shard2:PRIMARY>
[root@ying03 ~]# mongo --host 192.168.112.139 --port 27003
> use admin
switched to db admin
> config = { _id: "shard3", members: [ {_id : 0, host : "192.168.112.136:27003"}, {_id : 1, host : "192.168.112.138:27003", arbiterOnly:true}, {_id : 2, host : "192.168.112.139:27003"}] }
{ "_id" : "shard3", "members" : [ { "_id" : 0, "host" : "192.168.112.136:27003" }, { "_id" : 1, "host" : "192.168.112.138:27003", "arbiterOnly" : true }, { "_id" : 2, "host" : "192.168.112.139:27003" } ] }
> rs.initiate()    // note: again called without the config object, so a default single-member configuration is used
{ "info2" : "no configuration specified. Using a default configuration for the set", "me" : "192.168.112.139:27003", "ok" : 1, "operationTime" : Timestamp(1535387799, 1), "$clusterTime" : { "clusterTime" : Timestamp(1535387799, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } }
shard3:SECONDARY> rs.status()
{ "set" : "shard3", "date" : ISODate("2018-08-27T16:36:54.608Z"), "myState" : 1, "term" : NumberLong(1), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "heartbeatIntervalMillis" : NumberLong(2000), "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1535387812, 1), "t" : NumberLong(1) }, "readConcernMajorityOpTime" : { "ts" : Timestamp(1535387812, 1), "t" : NumberLong(1) }, "appliedOpTime" : { "ts" : Timestamp(1535387812, 1), "t" : NumberLong(1) }, "durableOpTime" : { "ts" : Timestamp(1535387812, 1), "t" : NumberLong(1) } }, "lastStableCheckpointTimestamp" : Timestamp(1535387802, 2),
"members" : [ { "_id" : 0, "name" : "192.168.112.139:27003", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 165, "optime" : { "ts" : Timestamp(1535387812, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-08-27T16:36:52Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "could not find member to sync from", "electionTime" : Timestamp(1535387800, 1), "electionDate" : ISODate("2018-08-27T16:36:40Z"), "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" } ],
"ok" : 1, "operationTime" : Timestamp(1535387812, 1), "$clusterTime" : { "clusterTime" : Timestamp(1535387812, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } }
shard3:PRIMARY>
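Before shards can be added, the mongos process itself has to be configured and started, which the transcript below assumes has already happened. A minimal sketch of /etc/mongod/mongos.conf in the same style as the earlier config files (the values follow the port plan above, but this file's contents are an assumption, not part of the original transcript):

[root@ying01 ~]# vim /etc/mongod/mongos.conf
pidfilepath = /var/run/mongodb/mongos.pid
logpath = /data/mongodb/mongos/log/mongos.log
logappend = true
bind_ip = 192.168.112.136
port = 20000
fork = true
configdb = configs/192.168.112.136:21000,192.168.112.138:21000,192.168.112.139:21000    # config-server replica set name and members
maxConns = 20000
[root@ying01 ~]# mongos -f /etc/mongod/mongos.conf    # note: started with mongos, not mongod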
Now log in to mongos on port 20000 and add the shards to the cluster:
[root@ying01 ~]# mongo --host 192.168.112.136 --port 20000
MongoDB shell version v4.0.1
connecting to: mongodb://192.168.112.136:20000/
MongoDB server version: 4.0.1
Server has startup warnings:
2018-08-28T00:58:01.517+0800 I CONTROL [main] ** WARNING: Access control is not enabled for the database.
2018-08-28T00:58:01.517+0800 I CONTROL [main] ** Read and write access to data and configuration is unrestricted.
2018-08-28T00:58:01.517+0800 I CONTROL [main] ** WARNING: You are running this process as the root user, which is not recommended.
mongos> sh.addShard("shard1/192.168.112.136:27001,192.168.112.138:27001,192.168.112.139:27001")
{ "shardAdded" : "shard1", "ok" : 1, "operationTime" : Timestamp(1535389509, 4), "$clusterTime" : { "clusterTime" : Timestamp(1535389509, 4), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } }
mongos> sh.addShard("shard2/192.168.112.136:27002,192.168.112.138:27002,192.168.112.139:27002")
{ "ok" : 0, "errmsg" : "in seed list shard2/192.168.112.136:27002,192.168.112.138:27002,192.168.112.139:27002, host 192.168.112.136:27002 does not belong to replica set shard2; found { hosts: [ \"192.168.112.138:27002\" ], setName: \"shard2\", setVersion: 1, ismaster: true, secondary: false, primary: \"192.168.112.138:27002\", me: \"192.168.112.138:27002\", ...... }", "code" : 96, "codeName" : "OperationFailed", ...... }
mongos> sh.addShard("shard2/192.168.112.136:27002,192.168.112.138:27002,192.168.112.139:27002")    // retrying gives the same error
{ "ok" : 0, "errmsg" : "in seed list shard2/......, host 192.168.112.136:27002 does not belong to replica set shard2; ......", "code" : 96, "codeName" : "OperationFailed", ...... }
mongos> sh.addShard("shard3/192.168.112.136:27003,192.168.112.138:27003,192.168.112.139:27003")
{ "ok" : 0, "errmsg" : "in seed list shard3/192.168.112.136:27003,192.168.112.138:27003,192.168.112.139:27003, host 192.168.112.136:27003 does not belong to replica set shard3; found { hosts: [ \"192.168.112.139:27003\" ], setName: \"shard3\", setVersion: 1, ismaster: true, secondary: false, primary: \"192.168.112.139:27003\", me: \"192.168.112.139:27003\", ...... }", "code" : 96, "codeName" : "OperationFailed", ...... }
mongos> sh.addShard("shard3/192.168.112.136:27003,192.168.112.138:27003,192.168.112.139:27003")    // same error again
{ "ok" : 0, "errmsg" : "in seed list shard3/......, host 192.168.112.136:27003 does not belong to replica set shard3; ......", "code" : 96, "codeName" : "OperationFailed", ...... }
mongos>
In hindsight, the transcripts point to a likely cause: when shard2 and shard3 were set up, the config document was built but rs.initiate() was then run without an argument ("no configuration specified. Using a default configuration for the set"), so each of those replica sets was initialized with only its local member. That matches the errors here: mongos sees hosts: [ "192.168.112.138:27002" ] for shard2 and hosts: [ "192.168.112.139:27003" ] for shard3, so the other hosts "do not belong" to the sets. Initializing with rs.initiate(config), or adding the missing members afterwards with rs.add() and rs.addArb(), should allow sh.addShard() to succeed.