Recently at work I needed to ship Hadoop logs into Elasticsearch with Filebeat and display them in Kibana; these are my notes.
The versions are Elasticsearch 6.5.1, Kibana 6.5.1, and Filebeat 6.4. The versions need to be compatible: I initially used Filebeat 7.0, which caused display problems in Kibana; switching to 6.4 resolved them.
First start Elasticsearch and Kibana. I run both in Docker; see my Elasticsearch and Kibana posts for the details. Open localhost:5601 in a browser to see the Kibana UI.
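For reference, a minimal sketch of running both in Docker. This is an assumption about the setup, not the exact commands from my earlier posts; the official Elastic image names and a single-node dev configuration are used here, so adjust to your environment:

```shell
# Assumption: official Elastic images, single-node dev setup (not for production)
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:6.5.1

# Link Kibana to the Elasticsearch container and expose the UI on port 5601
docker run -d --name kibana -p 5601:5601 \
  --link elasticsearch:elasticsearch \
  docker.elastic.co/kibana/kibana:6.5.1
```

After both containers are up, localhost:9200 should answer with the Elasticsearch cluster banner and localhost:5601 should serve the Kibana UI.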
Download the 6.4 package from the Filebeat website, extract it, enter the directory, and edit filebeat.yml:
List-1
#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - /opt/software/tool/hadoop/hadoop/logs/*.log
    #- c:\programdata\elasticsearch\logs\*
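The `paths` entries are glob patterns, so `*.log` picks up every `.log` file under the Hadoop logs directory while ignoring everything else. A quick sketch of that matching behavior, using a throwaway directory instead of the real Hadoop path:

```shell
# Demonstrate filebeat-style glob matching in a scratch directory
logdir=$(mktemp -d)
touch "$logdir/hadoop-namenode.log" "$logdir/hadoop-datanode.log" "$logdir/notes.txt"

# The pattern *.log matches the two .log files but not notes.txt
ls "$logdir"/*.log

rm -rf "$logdir"
```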
List-1 above enables the log input and points its paths at the Hadoop log directory. Next, configure the Kibana endpoint:
List-2
#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"
As shown in List-2, in the Kibana section uncomment the line 'host: "localhost:5601"'.
List-3
#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
As shown in List-3, in the Outputs section set hosts: ["localhost:9200"] under the Elasticsearch output to the IP and port of your own Elasticsearch.
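With the edits done, Filebeat 6.x can sanity-check the configuration before you load the dashboards. The `test config` and `test output` subcommands validate the YAML and the connection to the configured Elasticsearch output, respectively:

```shell
# Validate filebeat.yml syntax and settings
./filebeat test config -c filebeat.yml

# Verify that Filebeat can reach the configured Elasticsearch output
./filebeat test output -c filebeat.yml
```

Both commands print OK-style results on success, which makes misconfigured hosts or indentation mistakes easy to catch early.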
Then run the commands in List-4; if no errors are reported, setup succeeded.
List-4
./filebeat setup --dashboards
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards

# start filebeat
./filebeat -e -c filebeat.yml
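Once Filebeat is running and has shipped some events, you can confirm on the Elasticsearch side that an index was created (with default settings, Filebeat 6.x writes to indices named filebeat-<version>-<date>):

```shell
# List all indices; expect a filebeat-6.4.* index once events have been shipped
curl "localhost:9200/_cat/indices?v"
```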
Now look at the Kibana UI. In Figure 1, the Discover page shows the logs; in Figure 2, the Logs page tails the Hadoop log files, scrolling as new content is written to them. Both pages have a search box for filtering by value.
Figure 1
Figure 2