1. Hit http://XXX.XXX.XXX.XXX:9200/_flush in a browser to make sure index data is persisted to disk.
2. Back up the original data, chiefly the nodes directory under the elasticsearch data directory; nodes is the index data directory.
3. Copy the data directory of each elasticsearch node in the original cluster into the new elasticsearch data directory.
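The copy-based approach in steps 1-3 can be sketched as follows; the host, data path, and backup path are assumptions and should be adjusted to your cluster:

```shell
# Sketch of the copy-based backup (steps 1-3). ES_HOST, ES_DATA, and
# BACKUP_DIR are assumed values, not taken from the original post.
ES_HOST="http://localhost:9200"
ES_DATA="/home/elasticsearch/data"
BACKUP_DIR="/home/elasticsearch/back"
STAMP=$(date +%Y%m%d%H)

# Step 1: flush so in-memory index data reaches disk
curl -XPOST "$ES_HOST/_flush"

# Step 2: archive the nodes directory (the index data)
tar czf "$BACKUP_DIR/nodes-$STAMP.tar.gz" -C "$ES_DATA" nodes

# Step 3: on the new node, extract the archive into its data directory
# before starting elasticsearch, e.g.:
# tar xzf "nodes-$STAMP.tar.gz" -C /path/to/new/elasticsearch/data
```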
4. Use snapshots to back up and restore.
Below are the backup and restore scripts. Save them as esback.sh and esrestore.sh, and grant execute permission with chmod 777 esback.sh (do the same for esrestore.sh). The scripts are:
----- Automatically back up and compress elasticsearch data -----
#!/bin/bash
filename=`date +%Y%m%d%H`
backesFile=es$filename.tar.gz
cd /home/elasticsearch/back
mkdir es_dump
cd es_dump
# Delete any existing snapshot with the same name so the PUT below cannot fail
curl -XDELETE "192.168.1.7:9200/_snapshot/backup/$filename?pretty"
echo 'sleep 30'
sleep 30
# Quote the URL: an unquoted & would background the command
curl -XPUT "192.168.1.7:9200/_snapshot/backup/$filename?wait_for_completion=true&pretty"
echo 'sleep 30'
sleep 30
cp -rf /home/elasticsearch/snapshot/* /home/elasticsearch/back/es_dump
cd ..
tar czf $backesFile es_dump/
rm -rf es_dump
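To run esback.sh automatically, a crontab entry such as the following could be used; the 02:00 schedule and the log path are assumptions:

```shell
# Assumed schedule: run the backup daily at 02:00 and keep a log.
# Add this line with: crontab -e
0 2 * * * /home/elasticsearch/back/esback.sh >> /home/elasticsearch/back/esback.log 2>&1
```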
----- Automatically extract and restore elasticsearch data -----
#!/bin/bash
filename='XXXXXXX'
backesFile=es$filename.tar.gz
cd /home/elasticsearch/back
tar zxvf $backesFile
rm -rf /home/elasticsearch/snapshot/*
cp -rf /home/elasticsearch/back/es_dump/* /home/elasticsearch/snapshot
# An index must be closed before a snapshot can be restored over it
curl -XPOST 192.168.1.7:9200/users/_close
curl -XPOST 192.168.1.7:9200/products/_close
echo 'sleep 5'
sleep 5
# Restore only the users index from the snapshot
curl -XPOST "192.168.1.7:9200/_snapshot/backup/$filename/_restore?pretty" -d '{
"indices":"users"
}'
echo 'sleep 5'
sleep 5
# Restore only the products index from the snapshot
curl -XPOST "192.168.1.7:9200/_snapshot/backup/$filename/_restore?pretty" -d '{
"indices":"products"
}'
echo 'sleep 5'
sleep 5
curl -XPOST 192.168.1.7:9200/users/_open
curl -XPOST 192.168.1.7:9200/products/_open
rm -rf es_dump
----- end -----
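Before restoring, it is worth checking what the repository actually contains; the host and repository name below follow the scripts above:

```shell
# Inspect the backup repository; host and repo name match the scripts above.
ES_HOST="192.168.1.7:9200"

# List every snapshot in the repository, with its state and covered indices
curl -XGET "$ES_HOST/_snapshot/backup/_all?pretty"

# A snapshot is safe to restore from when its "state" is "SUCCESS"
```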
The backup script has a few prerequisites:
1. First create the snapshot repository.
-- Create the snapshot repository "backup" --
curl -XPUT 192.168.1.7:9200/_snapshot/backup -d '
{
"type":"fs",
"settings":{"location":"/home/elasticsearch/snapshot"}
}'
The /home/elasticsearch/snapshot directory must be writable by the user running elasticsearch; on Elasticsearch 1.6 and later, it must also be listed under path.repo in elasticsearch.yml.
The backup directory /home/elasticsearch/back must be created in advance.
The restore script restores index by index; this can be changed to whatever granularity you need.
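For instance, instead of one _restore call per index, the whole snapshot can be restored in a single call; this is a sketch, with filename left as a placeholder exactly as in esrestore.sh:

```shell
# Restore every index in a snapshot with one call instead of per-index calls.
# filename is the snapshot-name placeholder, as in esrestore.sh.
ES_HOST="192.168.1.7:9200"
filename='XXXXXXX'

# Open indices cannot be overwritten by a restore, so close them first
curl -XPOST "$ES_HOST/users/_close"
curl -XPOST "$ES_HOST/products/_close"

# Omitting the "indices" body restores all indices contained in the snapshot;
# wait_for_completion makes curl return only when the restore has finished
curl -XPOST "$ES_HOST/_snapshot/backup/$filename/_restore?wait_for_completion=true&pretty"

curl -XPOST "$ES_HOST/users/_open"
curl -XPOST "$ES_HOST/products/_open"
```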