```
docker pull logstash:7.5.1
docker run --name logstash -d -p 5044:5044 --net esnet 8b94897b4254
```
The network passed to --net must be the same one that the ES and Kibana containers use.
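To confirm the container started and joined the right network, it may help to check its logs and the network membership; a quick sketch, assuming the container name logstash and the network esnet from the commands above:

```
# Follow Logstash startup output (Ctrl+C to stop)
docker logs -f logstash

# Confirm the container sits on the same network as ES and Kibana
docker network inspect esnet
```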
Next, edit logstash.yml:

```
# 0.0.0.0: allow access from any IP
http.host: "0.0.0.0"
# Elasticsearch cluster addresses
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.172.131:9200", "http://192.168.172.129:9200", "http://192.168.172.128:9200" ]
# Enable monitoring
xpack.monitoring.enabled: true
# Pipeline configuration file read at startup; that file can tell Logstash to read external files and import them into ES
path.config: /usr/share/logstash/config/logstash.conf
```
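Before going further, it can be worth checking that the cluster addresses configured above are reachable from the Docker host; a quick check, using the first node IP from the example:

```
curl http://192.168.172.131:9200/_cluster/health?pretty
```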
The pipeline file logstash.conf referenced by path.config:

```
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  # This port can be left out, because 5044 is the default
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    # Point at an ES node; for a cluster, all nodes can be listed
    hosts => ["http://localhost:9200"]
    # The index name can be customized
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
```
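One way to apply the two files is to copy them into the running container and restart it; a sketch, assuming the container name logstash and the config path used above (exact paths may differ per image). Once a Beat starts shipping events, the index from the output block should show up in ES:

```
docker cp logstash.yml logstash:/usr/share/logstash/config/logstash.yml
docker cp logstash.conf logstash:/usr/share/logstash/config/logstash.conf
docker restart logstash

# After a Beat sends data, list indices to see the new one
curl "http://localhost:9200/_cat/indices?v"
```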
If you want Logstash to import a document file at startup, configure it as follows:
```
input {
  file {
    path => "/usr/share/logstash/bin/file.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "file"
    document_id => "%{id}"
  }
  stdout {}
}
```
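After Logstash restarts with this pipeline, the rows of file.csv should land in the file index; a quick spot check, assuming ES on localhost:9200 as in the output block:

```
curl "http://localhost:9200/file/_count?pretty"
curl "http://localhost:9200/file/_search?pretty&size=2"
```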
Note that in 7.x an index is created with one primary shard and one replica by default. If we want more shards, we need to create the index before starting Logstash and set the shard allocation there:
```
PUT /file
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}
```
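The request above is written for Kibana Dev Tools; an equivalent call from the shell, assuming ES on localhost:9200:

```
curl -X PUT "http://localhost:9200/file" -H 'Content-Type: application/json' -d '
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}'
```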
For details on setting up an Elasticsearch cluster, see: http://www.javashuo.com/article/p-trpfayux-gt.html