This post uses the latest elasticsearch-6.3.0.tar.gz as an example. To save resources, the number of replicas is set to 0 and no dedicated client (coordinating-only) node is deployed.
https://www.elastic.co/blog/hot-warm-architecture-in-elasticsearch-5-x
In ES 2.x this was configured with node.tag: hot in elasticsearch.yml; that setting no longer takes effect and has been replaced by node.attr.box_type: hot.
- master node:
  [root@n1 ~]# cat /usr/local/elasticsearch/config/elasticsearch.yml
  cluster.name: elk
  node.master: true
  node.data: false
  node.name: 192.168.2.11
  #node.attr.box_type: hot
  #node.tag: hot
  path.data: /data/es
  path.logs: /data/log
  network.host: 192.168.2.11
  http.port: 9200
  transport.tcp.port: 9300
  transport.tcp.compress: true
  discovery.zen.ping.unicast.hosts: ["192.168.2.11"]
  cluster.routing.allocation.disk.watermark.low: 85%
  cluster.routing.allocation.disk.watermark.high: 90%
  indices.fielddata.cache.size: 10%
  indices.breaker.fielddata.limit: 30%
  http.cors.enabled: true
  http.cors.allow-origin: "*"
- client node (not set up here):
  node.master: false
  node.data: false
- hot node:
  [root@n2 ~]# cat /usr/local/elasticsearch/config/elasticsearch.yml
  cluster.name: elk
  node.master: false
  node.data: true
  node.name: 192.168.2.12
  node.attr.box_type: hot
  path.data: /data/es
  network.host: 192.168.2.12
  http.port: 9200
  transport.tcp.port: 9300
  transport.tcp.compress: true
  discovery.zen.ping.unicast.hosts: ["192.168.2.11"]
  cluster.routing.allocation.disk.watermark.low: 85%
  cluster.routing.allocation.disk.watermark.high: 90%
  indices.fielddata.cache.size: 10%
  indices.breaker.fielddata.limit: 30%
  http.cors.enabled: true
  http.cors.allow-origin: "*"
- cold node:
  [root@n3 ~]# cat /usr/local/elasticsearch/config/elasticsearch.yml
  cluster.name: elk
  node.master: false
  node.data: true
  node.name: 192.168.2.13
  node.attr.box_type: cold
  path.data: /data/es
  network.host: 192.168.2.13
  http.port: 9200
  transport.tcp.port: 9300
  transport.tcp.compress: true
  discovery.zen.ping.unicast.hosts: ["192.168.2.11"]
  cluster.routing.allocation.disk.watermark.low: 85%
  cluster.routing.allocation.disk.watermark.high: 90%
  indices.fielddata.cache.size: 10%
  indices.breaker.fielddata.limit: 30%
  http.cors.enabled: true
  http.cors.allow-origin: "*"
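Once all three nodes are started, it is worth checking that they joined the cluster with the expected roles. A quick sketch using the _cat API against the master's HTTP address from the configs above:

```bash
# List nodes with their roles (node.role: m = master-eligible, d = data)
# and which one is the elected master.
curl -s 'http://192.168.2.11:9200/_cat/nodes?v&h=ip,node.role,master,name'
```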
The hot node is tagged with node.attr.box_type: hot and the cold node with node.attr.box_type: cold.
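To confirm the tags were picked up, the _cat/nodeattrs API lists every custom node attribute (again assuming the master address used above):

```bash
# The hot node should report box_type=hot and the cold node box_type=cold.
curl -s 'http://192.168.2.11:9200/_cat/nodeattrs?v&h=node,attr,value'
```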
Create a template (here I use Kibana to call the ES API):

PUT _template/test
{
  "index_patterns": "test-*",
  "settings": {
    "index.number_of_replicas": "0",
    "index.routing.allocation.require.box_type": "hot"
  }
}
This means that any index whose name matches test-* will have its data placed on the hot node.
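With the template in place, a newly created test-* index should end up with all of its shards on the hot node. A quick way to verify (the index name test-2018.07.06 here is just a hypothetical example that matches the pattern):

```bash
# Create an empty index that matches the template, then see which node holds its shards.
curl -s -XPUT 'http://192.168.2.11:9200/test-2018.07.06'
curl -s 'http://192.168.2.11:9200/_cat/shards/test-*?v&h=index,shard,prirep,state,node'
```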
Take the index test-2018.07.05 as an example and migrate it from the hot node to the cold node.
In Kibana:

PUT /test-2018.07.05/_settings
{
  "settings": {
    "index.routing.allocation.require.box_type": "cold"
  }
}
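The same settings update can be done with curl, and the relocation can be watched with _cat/shards; a sketch, assuming the addresses from the configs above:

```bash
# curl equivalent of the Kibana request: retag the index as cold...
curl -s -XPUT 'http://192.168.2.11:9200/test-2018.07.05/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.routing.allocation.require.box_type": "cold"}'

# ...then watch the shards relocate to the cold node (192.168.2.13).
curl -s 'http://192.168.2.11:9200/_cat/shards/test-2018.07.05?v&h=index,shard,prirep,state,node'
```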
In production an index may be generated per day, or per hour, for example:
test-2018.07.01
test-2018.07.02
test-2018.07.03
test-2018.07.04
test-2018.07.05
...
I can write a shell cron job that migrates the data every night.
For example, if the hot node only keeps 7 days of data, I match the indices older than 7 days and run the migration command above for them every night; a sketch of such a script follows.
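A minimal sketch of the nightly job, assuming daily test-YYYY.MM.dd index names, GNU date, and the master address used throughout this post (the script name and retention are placeholders, adjust as needed):

```bash
#!/bin/bash
# move-to-cold.sh -- retag indices older than 7 days so they relocate to the cold node.
ES=http://192.168.2.11:9200
DAYS=7

# Index date at the cutoff, e.g. 2018.06.28 when today is 2018.07.05 (GNU date).
CUTOFF=$(date -d "-${DAYS} days" +%Y.%m.%d)

# List all test-* indices and pick the ones dated before the cutoff
# (plain string comparison works because the date format is YYYY.MM.dd).
for idx in $(curl -s "${ES}/_cat/indices/test-*?h=index"); do
  idx_date=${idx#test-}
  if [[ "${idx_date}" < "${CUTOFF}" ]]; then
    echo "moving ${idx} to cold"
    curl -s -XPUT "${ES}/${idx}/_settings" \
      -H 'Content-Type: application/json' \
      -d '{"index.routing.allocation.require.box_type": "cold"}'
    echo
  fi
done
```

Run it from cron every night, e.g. `0 1 * * * /path/to/move-to-cold.sh` (path is hypothetical).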
https://www.cnblogs.com/iiiiher/p/8029062.html
1.为了提升吞吐量tcp
path.data:/data1,/data2,/data3,/data4,/data5 能够每一个目录挂一块盘
2. If there are 10 hot nodes, you can configure 10 shards, as in the template sketch below.
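The shard count is an index setting, so it can go into the same template as the hot routing rule. A sketch for the 10-hot-node case, reusing the test template from earlier (10 primaries assumes one shard per hot node):

```bash
# Extend the test template: 10 primary shards, still pinned to hot nodes at creation time.
curl -s -XPUT 'http://192.168.2.11:9200/_template/test' \
  -H 'Content-Type: application/json' \
  -d '{
    "index_patterns": "test-*",
    "settings": {
      "index.number_of_shards": "10",
      "index.number_of_replicas": "0",
      "index.routing.allocation.require.box_type": "hot"
    }
  }'
```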
Logstash config used to write the test-* indices:

input { stdin { } }
output {
  elasticsearch {
    index => "test-%{+YYYY.MM.dd}"
    hosts => ["192.168.2.11:9200"]
  }
  stdout { codec => rubydebug }
}

/usr/local/logstash/bin/logstash -f logstash.yaml --config.reload.automatic
About ES index templates
When data is written into ES, it is matched against an index template; by default it matches the logstash template.
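If Logstash has already registered its default template (the stock Logstash Elasticsearch output plugin installs one named logstash), you can inspect it directly:

```bash
# Dump the template(s) whose name starts with "logstash".
curl -s 'http://192.168.2.11:9200/_template/logstash*?pretty'
```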
template大体分红setting和mappings两部分
Which index template is used is matched by index name. An index template applies at the index level, not globally to the whole cluster; if certain indices need their own settings (for example some specific allocation tags), you create a separate index template for them.
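To make the settings/mappings split concrete, here is a sketch of a template carrying both parts (the template name, pattern, and field names are made up for illustration); any index whose name matches the pattern picks it up at creation time:

```bash
# A template with both a settings section and a mappings section.
# Indices named app-* (hypothetical pattern) get these when they are created.
curl -s -XPUT 'http://192.168.2.11:9200/_template/app' \
  -H 'Content-Type: application/json' \
  -d '{
    "index_patterns": "app-*",
    "settings": {
      "index.number_of_replicas": "0",
      "index.routing.allocation.require.box_type": "hot"
    },
    "mappings": {
      "doc": {
        "properties": {
          "message":    { "type": "text" },
          "@timestamp": { "type": "date" }
        }
      }
    }
  }'
```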