1. Installing and running Elasticsearch
1. Download and extract the archive.
2. Change into Elasticsearch's bin directory.
3. Run ./elasticsearch to start it.
4. Open http://ip:9200/ in a browser; if the following page appears, Elasticsearch is running.
2. Setting up a cluster
1. In this example, Elasticsearch is installed on three machines to form the cluster.
2. Go to Elasticsearch's config directory and edit elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
# cluster.name must be identical on all three machines
cluster.name: my-askingdata
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
# node.name must be different on each of the three machines
node.name: node-1
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.mlockall: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
# this machine's own IP
network.host: 192.168.1.106
#
# Set a custom port for HTTP:
# HTTP port
http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
# list the IPs of all three machines
discovery.zen.ping.unicast.hosts: ["192.168.1.106", "192.168.1.108","192.168.1.109"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
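Pulling these settings together, each node's cluster-relevant configuration is sketched below (node-1 shown; the discovery.zen.minimum_master_nodes line is an addition following the majority formula in the comment above, 3 / 2 + 1 = 2, and is not in the original file):

```yaml
cluster.name: my-askingdata        # identical on all three machines
node.name: node-1                  # unique per machine: node-1 / node-2 / node-3
network.host: 192.168.1.106        # this machine's own IP
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.106", "192.168.1.108", "192.168.1.109"]
discovery.zen.minimum_master_nodes: 2   # majority of 3 nodes, guards against split brain
```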
3. Installing and using the head plugin
1. Create a head directory under Elasticsearch's plugins directory.
2. Download the head plugin and extract it into plugins/head.
3. Start Elasticsearch on each of the three machines.
4. Open http://192.168.1.106:9200/_plugin/head/
4. Index operations
1. Creating an index document
Submit the following song record to ES to create an index document:
url:http://127.0.0.1:9200/song001/list001/1
data:{"number":32768,"singer":"杨坤","size":"5109132","song":"今夜二十岁","tag":"中国好声音","timelen":319}
The index name is song001;
the type is list001;
the document id is 1.
The response shows the creation succeeded with version 1. ES tracks modifications per document: the version starts at 1 on creation and increments by 1 on every subsequent change to the same document.
At this point the record is indexed in ES. Note that the HTTP method is PUT; be careful not to pick the wrong one.
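The creation call above can be sketched with Python's standard library (a sketch only: host, index, type, ID, and document are taken from the example above; actually sending the request assumes a live ES node):

```python
import json
from urllib.request import Request, urlopen  # urlopen only needed to actually send

# The song record from the example above
song = {"number": 32768, "singer": "杨坤", "size": "5109132",
        "song": "今夜二十岁", "tag": "中国好声音", "timelen": 319}

# PUT /<index>/<type>/<id> indexes the document under that ID
req = Request("http://127.0.0.1:9200/song001/list001/1",
              data=json.dumps(song, ensure_ascii=False).encode("utf-8"),
              headers={"Content-Type": "application/json"},
              method="PUT")

# Against a live node: resp = json.load(urlopen(req))
# resp["_version"] is 1 on first creation and increments on each later PUT.
```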
2. Querying an index document
The RESTful endpoint for fetching a document by its indexing ID is:
url:http://127.0.0.1:9200/song001/list001/1
The HTTP method is GET.
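The lookup uses the same URL with GET, which is urllib's default method (same assumptions as the indexing sketch above):

```python
import json
from urllib.request import Request, urlopen

# GET /<index>/<type>/<id> fetches the document indexed under that ID
req = Request("http://127.0.0.1:9200/song001/list001/1")  # GET by default

# Against a live node the original fields come back under "_source":
#     doc = json.load(urlopen(req))["_source"]
```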
3. Updating an index document
The RESTful endpoint for updating a document by its indexing ID is:
url:http://127.0.0.1:9200/song001/list001/1
The HTTP method is PUT.
Change the singer from "杨坤" to "杨坤独唱".
The version field in the result is now 2, because this was a modification and the version increments; the created field is false, indicating an update rather than a creation.
The update endpoint is exactly the same as the creation endpoint: ES checks whether the document exists, creates it if it does not, and updates it otherwise.
4. Deleting an index document
The RESTful endpoint for deleting a document by its indexing ID is:
url:http://127.0.0.1:9200/song001/list001/1
The HTTP method is DELETE.
After deletion, querying the same ID returns no result.
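Deletion again reuses the same URL, now with the DELETE method (a sketch under the same assumptions as above):

```python
from urllib.request import Request

# DELETE /<index>/<type>/<id> removes the document; a subsequent GET on the
# same ID reports that the document is not found
req = Request("http://127.0.0.1:9200/song001/list001/1", method="DELETE")
```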
Summary:
The RESTful URL for create, read, update, and delete has the form: http://localhost:9200/<index>/<type>/[<id>]
Create/update, read, and delete map to the HTTP methods PUT, GET, and DELETE respectively. A PUT call creates the document if it does not exist and updates it if it does.
Fast bulk loading of data
Download https://github.com/codelibs/elasticsearch-reindexing,
or install it online without downloading: $ $ES_HOME/bin/plugin install org.codelibs/elasticsearch-reindexing/2.1.1
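Besides the reindexing plugin, Elasticsearch's built-in _bulk endpoint also loads many documents in one request. Its body is newline-delimited JSON, one action line plus one source line per document, ending with a trailing newline; the sketch below builds such a payload (the sample documents are illustrative, reusing the song001/list001 names from the earlier example):

```python
import json

# Illustrative sample documents
docs = [
    {"singer": "杨坤", "song": "今夜二十岁"},
    {"singer": "杨坤", "song": "无所谓"},
]

# One action line + one source line per document; payload ends with "\n"
lines = []
for i, doc in enumerate(docs, start=1):
    lines.append(json.dumps({"index": {"_index": "song001",
                                       "_type": "list001", "_id": str(i)}}))
    lines.append(json.dumps(doc, ensure_ascii=False))
payload = "\n".join(lines) + "\n"

# Send with: curl -XPOST "http://127.0.0.1:9200/_bulk" --data-binary "$payload"
```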
If a query fails with a max_result_window error,
run the following command; the value (30000 below) must be >= the total number of documents:
curl -XPUT "http://192.168.1.106:9200/shark/_settings" -d '{ "index" : { "max_result_window" :30000 } }'
Shutting down a node
ps -ef | grep elasticsearch
kill <pid>   # <pid> is the process ID shown by ps