1. Back up and import all indices
Installation:
git clone https://github.com/taskrabbit/elasticsearch-dump.git
cd elasticsearch-dump
npm install elasticdump -g
sudo yum install npm
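After the global install, a quick sanity check that the CLI landed on the PATH can save debugging later. A minimal sketch; elasticdump prints its usage when called with --help:

which elasticdump
elasticdump --help | head -n 20    # confirm the binary runs and list the main options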
(1) Create the backup directory
mkdir /data/es_data_backup
(2) Migrate all indices from the source machine to the target machine
# Export the mapping structure and data of the original indices
elasticdump --input=http://10.200.57.118:9200/ --output=/data/es_data_backup/cmdb_dump-mapping.json --all=true --type=mapping
elasticdump --input=http://10.200.57.118:9200/ --output=/data/es_data_backup/cmdb_dump.json --all=true --type=data
# Import the mapping structure and data into the new cluster node
elasticdump --input=/data/es_data_backup/cmdb_dump-mapping.json --output=http://10.200.57.118:9200/ --bulk=true
elasticdump --input=/data/es_data_backup/cmdb_dump.json --output=http://10.200.57.118:9200/ --bulk=true
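Before importing, it is worth confirming the dump files actually contain data. A quick check (a sketch; as far as I can tell the data dump is line-delimited JSON with one document per line, so wc -l gives a rough document count):

ls -lh /data/es_data_backup/
wc -l /data/es_data_backup/cmdb_dump.json    # rough document count, assuming one JSON document per line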
2. Back up and import a specific index
curl -XGET '192.168.11.10:9200/_cat/indices?v&pretty'    # list which indices exist
health status index pri rep docs.count docs.deleted store.size pri.store.size
green open jyall-test 5 1 18908740 2077368 25gb 12.5gb
# Backup index data to a file:
elasticdump --input=http://10.200.57.118:9200/ele_nginx_clusters --output=/data/es_data_backup/ele_nginx_clusters_mapping.json --type=mapping
elasticdump --input=http://10.200.57.118:9200/ele_nginx_clusters --output=/data/es_data_backup/ele_nginx_clusters.json --type=data
# Alternatively, back up with gzip; in my tests this saves more than 10x the disk space. To import, run gunzip on ele_nginx_clusters.json.gz first and then import as usual (see the restore sketch below).
# Backup an index to a gzip file using stdout:
elasticdump --input=http://10.200.57.118:9200/ele_nginx_clusters --output=$ | gzip > /data/es_data_backup/ele_nginx_clusters.json.gz
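To restore from the gzip backup, the route described above is to decompress first and then run the normal import. A minimal sketch of that, reusing the paths from the commands above:

gunzip /data/es_data_backup/ele_nginx_clusters.json.gz    # yields ele_nginx_clusters.json
elasticdump --input=/data/es_data_backup/ele_nginx_clusters.json --output=http://10.200.57.118:9200/ --bulk=true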
Import:
elasticdump --input=/data/es_data_backup/ele_nginx_clusters_mapping.json --output=http://10.200.57.118:9200/ --bulk=true
elasticdump --input=/data/es_data_backup/ele_nginx_clusters.json --output=http://10.200.57.118:9200/ --bulk=true
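When several indices need to be moved, the same pair of commands can be looped over a list of index names. A sketch, assuming the indices are known in advance (the names in the list are placeholders, replace them with your own):

SRC=http://10.200.57.118:9200
BACKUP=/data/es_data_backup
for IDX in ele_nginx_clusters some_other_index; do
    elasticdump --input=$SRC/$IDX --output=$BACKUP/${IDX}_mapping.json --type=mapping
    elasticdump --input=$SRC/$IDX --output=$BACKUP/$IDX.json --type=data
done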
3. Errors and issues encountered during export
(1) The following error occurred:
Thu, 26 Apr 2018 09:14:49 GMT | Error Emitted => read ECONNRESET
Thu, 26 Apr 2018 09:14:49 GMT | Total Writes: 19800
Thu, 26 Apr 2018 09:14:49 GMT | dump ended with error (get phase) => Error: read ECONNRESET
(2) Solution:
<1> It sounds like your issue is being caused by elasticdump opening too many sockets to your Elasticsearch cluster. You can use the --maxSockets option to limit the number of sockets opened.
elasticdump --input http://192.168.2.222:9200/index1 --output http://192.168.2.222:9200/index2 --type=data --maxSockets=5
Reference:
https://stackoverflow.com/questions/33248267/dump-ended-with-error-set-phase-error-read-econnreset
https://github.com/nodejs/node/issues/10563
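Besides --maxSockets, lowering the batch size can also reduce pressure on the cluster when ECONNRESET keeps appearing. A sketch, assuming elasticdump exits non-zero on failure; --limit controls how many documents are moved per request, and the hosts/paths are the ones used above:

OUT=/data/es_data_backup/ele_nginx_clusters.json
for attempt in 1 2 3; do
    rm -f "$OUT"    # start from a clean file on every retry to avoid duplicated documents
    elasticdump --input=http://10.200.57.118:9200/ele_nginx_clusters --output="$OUT" --type=data --limit=500 --maxSockets=5 && break
    echo "attempt $attempt failed, retrying..."
    sleep 10
done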
Importing Elasticsearch 6.0 data into Elasticsearch 6.7:
bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.7.0/elasticsearch-analysis-ik-6.7.0.zip
curl http://192.168.150.116:9210/_cat/plugins
elasticdump --input=http://192.168.150.166:9200/ --output=http://192.168.150.114:9210 --all=true --type=data
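After the import finishes, it is worth confirming the indices, document counts, and the IK plugin on the 6.7 target; a quick check against the target host used above:

curl 'http://192.168.150.114:9210/_cat/indices?v'    # indices and document counts on the target
curl 'http://192.168.150.114:9210/_cat/plugins'      # confirm the IK analysis plugin is installed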