Our company stores Zabbix history data in Elasticsearch, and there is a requirement to keep the monitoring history for as long as possible, ideally one year. The current setup is three ES nodes, with roughly 5 GB of monitoring history written per day; at most one month of data can be kept, and anything older than 30 days is deleted on a schedule. Each node is allocated 8 GB of memory, all nodes use mechanical hard disks, and indices are created with 5 primary shards and 1 replica. Queries usually fetch only the last week of history, with occasional requests for one to two months.
To let ES keep history for longer, I expanded the cluster to 4 nodes, increased the memory on some of them, and moved some of them onto SSD storage:
192.168.179.133  200G SSD  4G RAM   tag: hot   node.name=es1
192.168.179.134  200G SSD  4G RAM   tag: hot   node.name=es2
192.168.179.135  500G HDD  64G RAM  tag: cold  node.name=es3
192.168.179.136  500G HDD  64G RAM  tag: cold  node.name=es4
The data mappings were remodeled so that string-type values are stored without being analyzed, and the data is split across hot and cold nodes: indices for the most recent seven days use 2 primary shards and 1 replica and live on the hot nodes, while data older than seven days is moved to the cold nodes, where the replica count is reduced to 0. ES also provides a shrink API that can shrink an index into fewer primary shards (a brief sketch follows below). Because ES is a search engine built on Lucene, and a Lucene index consists of multiple segments, each segment consumes file handles, memory, and CPU cycles; too many segments drive up resource usage and slow down searches. So the previous day's indices are force-merged down to 1 segment per shard, and the refresh interval is raised to 60s to reduce how often new segments are produced. Indices older than six months are closed. All of these operations are scheduled with curator, the Elasticsearch index management tool.
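The shrink API is only mentioned in passing above and is not part of the curator schedule later in this post, but for reference, here is a minimal sketch of how it could be applied to one of the daily indices. The index name uint-2019-03-01 and the target node es3 are made up for illustration; before an index can be shrunk, it must be made read-only and all of its shards must be collected onto a single node.

# assumption: uint-2019-03-01 is an old index with 2 primary shards
PUT uint-2019-03-01/_settings
{
  "settings": {
    "index.routing.allocation.require._name": "es3",
    "index.blocks.write": true
  }
}

# shrink it into a new single-shard index
POST uint-2019-03-01/_shrink/uint-2019-03-01-shrunk
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 0
  }
}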
For the ES address, any node in the cluster can be used.
In zabbix_server.conf:

HistoryStorageURL=192.168.179.133:9200
HistoryStorageTypes=str,text,log,uint,dbl
HistoryStorageDateIndex=1
In the frontend configuration file (zabbix.conf.php):

global $DB, $HISTORY;

$HISTORY['url'] = 'http://192.168.179.133:9200';
// Value types stored in Elasticsearch.
$HISTORY['types'] = ['str', 'text', 'log', 'uint', 'dbl'];
vim elasticsearch.yml
Hot node configuration:
node.attr.box_type: hot
Cold node configuration:
node.attr.box_type: cold
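After restarting Elasticsearch on each node, one quick way to confirm the attribute is visible to the cluster (my addition, not part of the original steps) is the cat nodeattrs endpoint:

GET _cat/nodeattrs?v&h=node,attr,value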
A template has to be created for every data type; the required fields can be taken from the elasticsearch.map file shipped with Zabbix. A template defines which indices it matches, the number of primary and replica shards, the refresh interval, the node allocation for newly created indices, and the mappings. Here I only show the uint and str indices as examples.
PUT _template/uint_template
{
  "template": "uint*",
  "index_patterns": ["uint*"],
  "settings": {
    "index": {
      "routing.allocation.require.box_type": "hot",
      "refresh_interval": "60s",
      "number_of_replicas": 1,
      "number_of_shards": 2
    }
  },
  "mappings": {
    "values": {
      "properties": {
        "itemid": {
          "type": "long"
        },
        "clock": {
          "format": "epoch_second",
          "type": "date"
        },
        "value": {
          "type": "long"
        }
      }
    }
  }
}

PUT _template/str_template
{
  "template": "str*",
  "index_patterns": ["str*"],
  "settings": {
    "index": {
      "routing.allocation.require.box_type": "hot",
      "refresh_interval": "60s",
      "number_of_replicas": 1,
      "number_of_shards": 2
    }
  },
  "mappings": {
    "values": {
      "properties": {
        "itemid": {
          "type": "long"
        },
        "clock": {
          "format": "epoch_second",
          "type": "date"
        },
        "value": {
          "index": false,
          "type": "keyword"
        }
      }
    }
  }
}
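To confirm the templates were registered, they can be fetched back; and once a daily index has been created, its settings show whether the shard count, refresh interval, and hot allocation were picked up. The index name below is just an example of what the pipeline in the next step would produce.

GET _template/uint_template,str_template

# hypothetical daily index created after the template was installed
GET uint-2019-03-15/_settings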
The purpose of defining an ingest pipeline is to preprocess the data before it is written, so that a new index is created per day.
PUT _ingest/pipeline/uint-pipeline
{
  "description": "daily uint index naming",
  "processors": [
    {
      "date_index_name": {
        "field": "clock",
        "date_formats": ["UNIX"],
        "index_name_prefix": "uint-",
        "date_rounding": "d"
      }
    }
  ]
}

PUT _ingest/pipeline/str-pipeline
{
  "description": "daily str index naming",
  "processors": [
    {
      "date_index_name": {
        "field": "clock",
        "date_formats": ["UNIX"],
        "index_name_prefix": "str-",
        "date_rounding": "d"
      }
    }
  ]
}
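Before pointing Zabbix at the cluster, the pipeline can be exercised with the _simulate endpoint. The itemid, clock, and value below are made-up sample values; the _index field in the response shows the daily index the document would be routed to.

POST _ingest/pipeline/uint-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "itemid": 12345,
        "clock": 1552608000,
        "value": 42
      }
    }
  ]
}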
4. After the changes are complete, restart Zabbix and verify that Zabbix data is being written to Elasticsearch.
systemctl restart zabbix-server
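Once the server is back up, daily indices named after the value types should start appearing. One way to check (my addition, not from the original steps) is to list them:

GET _cat/indices/uint-*,str-*,dbl-*,text-*,log-*?v&s=index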
The official curator documentation is here:
https://www.elastic.co/guide/en/elasticsearch/client/curator/5.8/installation.html
pip install -U elasticsearch-curator
mkdir /root/.curator
vim /root/.curator/curator.yml

---
client:
  hosts:
    - 192.168.179.133
    - 192.168.179.134
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']
Allocate indices older than 7 days to the cold nodes.
1:
  action: allocation
  description: "Apply shard allocation filtering rules to the specified indices"
  options:
    key: box_type
    value: cold
    allocation_type: require
    wait_for_completion: true
    timeout_override:
    continue_if_exception: false
    disable_action: false
  filters:
    - filtertype: pattern
      kind: regex
      value: '^(uint-|dbl-|str-).*$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y-%m-%d'
      unit: days
      unit_count: 7
Force-merge the previous day's indices down to 1 segment per shard.
2:
  action: forcemerge
  description: "Perform a forceMerge on selected indices to 'max_num_segments' per shard"
  options:
    max_num_segments: 1
    delay:
    timeout_override: 21600
    continue_if_exception: false
    disable_action: false
  filters:
    - filtertype: pattern
      kind: regex
      value: '^(uint-|dbl-|str-).*$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y-%m-%d'
      unit: days
      unit_count: 1
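To verify the force merge actually brought each shard down to a single segment, the segments of yesterday's index can be listed (the index name is again an example):

GET _cat/segments/uint-2019-03-14?v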
Set the number of replicas to 0 for indices on the cold nodes.
3:
  action: replicas
  description: "Set the number of replicas per shard for selected indices"
  options:
    count: 0
    wait_for_completion: True
    max_wait: 600
    wait_interval: 10
  filters:
    - filtertype: pattern
      kind: regex
      value: '^(uint-|dbl-|str-).*$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y-%m-%d'
      unit: days
      unit_count: 7
Close indices older than six months.
4:
  action: close
  description: "Close selected indices"
  options:
    delete_aliases: false
    skip_flush: false
    ignore_sync_failures: false
  filters:
    - filtertype: pattern
      kind: regex
      value: '^(uint-|dbl-|str-).*$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y-%m-%d'
      unit: days
      unit_count: 180
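Closed indices no longer consume heap but are not searchable. If an occasional query does need data older than six months, the relevant daily index can be reopened and closed again afterwards (the index name is an example):

POST uint-2018-09-01/_open

# close it again once the query is done
POST uint-2018-09-01/_close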
Delete indices older than one year.
5:
  action: delete_indices
  description: "Delete selected indices"
  options:
    continue_if_exception: False
  filters:
    - filtertype: pattern
      kind: regex
      value: '^(uint-|dbl-|str-).*$'
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y-%m-%d'
      unit: days
      unit_count: 365
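Curator reads all of the numbered blocks above from a single action file with a top-level actions: key. Assuming they are combined into /root/action.yml (the path used in the crontab below), the file skeleton looks roughly like this, and a --dry-run first logs what curator would do without touching any index:

actions:
  1:
    action: allocation
    # options and filters from block 1 above
  2:
    action: forcemerge
    # options and filters from block 2 above
  3:
    action: replicas
    # options and filters from block 3 above
  4:
    action: close
    # options and filters from block 4 above
  5:
    action: delete_indices
    # options and filters from block 5 above

curator --dry-run /root/action.yml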
curator /root/action.yml
crontab -e

10 0 * * * curator /root/action.yml
That covers the whole practice of optimizing Elasticsearch as the Zabbix back-end storage. Reference link:
https://www.elastic.co/cn/blog/hot-warm-architecture-in-elasticsearch-5-x
You are welcome to follow my personal WeChat public account, "没有故事的陈师傅".