ELK consists of three components: Elasticsearch, Logstash and Kibana. Elasticsearch is an open-source distributed search engine; its features include distribution, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources and automatic search load balancing. Logstash is a fully open-source tool that collects and parses your logs and stores them for later use. Kibana is an open-source, free tool that provides a friendly web interface for the log data produced by Logstash and Elasticsearch, helping you aggregate, analyze and search important log data.
Logstash: the logstash server side collects the logs;
Elasticsearch: stores the various logs;
Kibana: the web interface used to query and visualize logs;
Logstash Forwarder: the logstash client that ships logs to the logstash server over the lumberjack network protocol.
Deploy logstash on every server whose logs need to be collected, acting as a logstash agent (logstash shipper) that monitors, filters and collects the logs and sends the filtered content to Redis. A logstash indexer then gathers the logs and hands them to the full-text search service Elasticsearch, where you can run custom searches, and Kibana combines those custom searches into pages for display.
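As a sketch of the shipper side described above: a minimal logstash shipper configuration that tails a log file and pushes events onto the Redis list the indexer later reads. The log path here is a placeholder, and the Redis address and list key simply reuse the values that appear later in this guide; adjust them to your environment.

---------------------------- shipper-example.conf ----------------------------
input {
  file {
    path => "/var/log/nginx/access.log"   # placeholder log file to watch; point this at your service's logs
    start_position => "beginning"
  }
}
output {
  redis {
    host => "192.168.201.73"              # Redis host reused from the examples below
    data_type => "list"
    key => "logstash:redis"               # must match the key the indexer reads from
  }
}
-------------------------------------------------------------------------------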
ELK official site: https://www.elastic.co/
ELK official docs: https://www.elastic.co/guide/index.html
ELK Chinese manual: http://kibana.logstash.es/content/elasticsearch/monitor/logging.html
Note: there are two ways to install ELK:
(1) Integrated environment: Logstash has an integrated package that bundles all three components, so you install a single bundle.
(2) Standalone environment: the three components are installed and run separately, each doing its own job (the more common approach).
Note: logstash depends on a JDK. First run java -version to check the server's Java environment; if Java is not installed, install it first (see the example after this block).
wget https://download.elastic.co/logstash/logstash/logstash-1.5.4.tar.gz
tar zxf logstash-1.5.4.tar.gz -C /usr/local/
Add logstash to the PATH:
echo "export PATH=\$PATH:/usr/local/logstash-1.5.4/bin" > /etc/profile.d/logstash.sh
. /etc/profile
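If java -version shows that no JDK is present, one way to add one (assuming a CentOS/RHEL-style host, which this guide does not actually specify) is via the distribution packages:

java -version                        # check the current Java runtime
yum install -y java-1.7.0-openjdk    # assumed package name for OpenJDK 7 on CentOS/RHEL
java -version                        # confirm the JDK is now available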
Common logstash parameters:
-e : pass the logstash configuration on the command line; useful for quick tests;
-f : read the logstash configuration from a file; suitable for production;
Below we use the -e parameter to pass the logstash configuration for a quick test, printing straight to the screen.
# logstash -e "input {stdin{}} output {stdout{}}" my name is MikePeng. //手动输入后回车,等待10秒后会有返回结果 Logstash startup completed 2016-12-26T13:55:50.660Z 0.0.0.0 my name is MikePeng. 这种输出是直接原封不动的返回...
Next we use the -e parameter again for a quick test, this time printing the full event structure to the screen with the rubydebug codec.
# logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
my name is MikePeng.    //typed by hand; after pressing Enter, the result comes back about 10 seconds later
Logstash startup completed
{
       "message" => "my name is MikePeng.",
      "@version" => "1",
    "@timestamp" => "2016-12-26T13:57:31.851Z",
          "host" => "0.0.0.0"
}
Starting logstash from a configuration file
vim logstash-simple.conf
----------------------------logstash-simple.conf----------------
input { stdin {} }
output {
  stdout { codec => rubydebug }
}
----------------------------------------------------------------
logstash -f logstash-simple.conf                     //normal startup
Logstash startup completed
logstash agent -f logstash-simple.conf --verbose     //start with verbose (debug) output
Pipeline started {:level=>:info}
Logstash startup completed
hello world.    //typed by hand
{
       "message" => "hello world.",
      "@version" => "1",
    "@timestamp" => "2016-12-26T14:01:43.724Z",
          "host" => "0.0.0.0"
}
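Before starting with -f it can also be worth validating the file first; logstash 1.5 accepts a --configtest (-t) flag for this. A quick check against the file above:

logstash agent -f logstash-simple.conf --configtest    # parse and validate the configuration without starting the pipeline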
Storing logstash output in Redis
vim logstash_to_redis.conf
-------------------------- logstash_to_redis.conf ------------
input { stdin { } }
output {
  stdout { codec => rubydebug }
  redis {
    host => '192.168.201.73:7351'
    data_type => 'list'
    key => 'logstash:redis'
  }
}
---------------------------------------------------------------
Note: if you see "Failed to send event to Redis", the connection to Redis failed or Redis is not installed; check it...
Check the port logstash is listening on:
logstash agent -f logstash_to_redis.conf --verbose
netstat -tnlp |grep java
tcp        0      0 :::9301                     :::*                        LISTEN      1326/java
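To confirm that events really land in Redis, you can inspect the list with redis-cli (a quick check, assuming the Redis address and key configured above):

redis-cli -h 192.168.201.73 -p 7351 LLEN logstash:redis        # number of queued events
redis-cli -h 192.168.201.73 -p 7351 LRANGE logstash:redis 0 0  # peek at the first queued event (a JSON string)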
Having logstash consume Kafka messages and write them to Elasticsearch
vim kafka_logstash_elasticsearch.conf
-------------------------- kafka_logstash_elasticsearch.conf ----------------
input {
  kafka {
    zk_connect => "192.168.201.73:2181"    #ZooKeeper used by the Kafka brokers
    group_id => "elk_consumer"             #consumer group this consumer belongs to
    topic_id => "boyaa"                    #topic to consume
    reset_beginning => false
    consumer_threads => 5
    decorate_events => true
  }
}
output {
  elasticsearch {
    host => "192.168.201.73"
    codec => "json"
    protocol => "http"
  }
}
-------------------------------------------------------------------------------
logstash agent -f kafka_logstash_elasticsearch.conf --verbose
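To exercise this pipeline end to end, you can publish a test message to the topic with Kafka's console producer and then look for a logstash index in Elasticsearch. The Kafka installation path and the broker port 9092 below are assumptions (only the ZooKeeper address is given above):

echo '{"msg":"hello from kafka"}' | \
  /usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.201.73:9092 --topic boyaa   # assumed path and broker port
curl 'http://192.168.201.73:9200/_cat/indices?v'    # a logstash-YYYY.MM.DD index should appear shortly afterwards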
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.2.tar.gz
tar zxf elasticsearch-1.7.2.tar.gz -C /usr/local/
Edit the Elasticsearch configuration file elasticsearch.yml
vim /usr/local/elasticsearch-1.7.2/config/elasticsearch.yml
-------------------------------elasticsearch.yml-----------------------------
discovery.zen.ping.multicast.enabled: false    #disable multicast; if another machine on the LAN has port 9300 open, the service will fail to start
network.host: 192.168.201.73                   #bind address; technically optional, but set it, otherwise the Kibana integration later
                                               #reports HTTP connection errors (the symptom is listening on :::9200 instead of 0.0.0.0:9200)
http.cors.allow-origin: "/.*/"
http.cors.enabled: true                        #these two settings fix the Kibana integration issue whose error message claims your
                                               #Elasticsearch version is too low, which is not actually the case
-----------------------------------------------------------------------------
/usr/local/elasticsearch-1.7.2/bin/elasticsearch        #logs go to stdout
/usr/local/elasticsearch-1.7.2/bin/elasticsearch -d     #start as a daemon
nohup /usr/local/elasticsearch-1.7.2/bin/elasticsearch > /var/log/logstash.log 2>&1 &
netstat -tnlp |grep java                                #check the ports Elasticsearch is listening on
tcp        0      0 :::9200                     :::*                        LISTEN      7407/java
tcp        0      0 :::9300                     :::*                        LISTEN      7407/java
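A quick sanity check that the HTTP API is up (the address matches the network.host set above):

curl http://192.168.201.73:9200/    # returns cluster name, node name and version information as JSON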
Send logstash output to Elasticsearch.
vim logstash-elasticsearch.conf
----------------------------logstash-elasticsearch.conf-----------------------
input { stdin {} }
output {
  elasticsearch { host => "192.168.201.73" }
  stdout { codec => rubydebug }
}
------------------------------------------------------------------------------
/usr/local/logstash-1.5.4/bin/logstash agent -f logstash-elasticsearch.conf    #start logstash
Pipeline started {:level=>:info}
Logstash startup completed
python linux java c++    //typed by hand
{
       "message" => "python linux java c++",
      "@version" => "1",
    "@timestamp" => "2016-12-26T14:51:56.899Z",
          "host" => "0.0.0.0"
}
Send a request with curl to check whether Elasticsearch received the data.
curl http://192.168.201.73:9200/_search?pretty
{
  "took" : 28,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "logstash-2016.12.26",
      "_type" : "logs",
      "_id" : "AVBH7-6MOwimSJSPcXjb",
      "_score" : 1.0,
      "_source":{"message":"python linux java c++","@version":"1","@timestamp":"2016-12-26T14:51:56.899Z","host":"0.0.0.0"}
    } ]
  }
}
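The same endpoint also accepts Lucene query-string searches, so you can narrow the check to a term or to a single day's index rather than dumping everything:

curl 'http://192.168.201.73:9200/_search?q=message:linux&pretty'      # query-string search across all indices
curl 'http://192.168.201.73:9200/logstash-2016.12.26/_search?pretty'  # search only that day's index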
Read events back out of Redis and index them into Elasticsearch:
vim redis-logstash-Elasticsearch.conf
---------------------------------- redis-logstash-Elasticsearch.conf ---------------------
input {
  redis {
    host => '192.168.201.73'    #no password here to keep the test simple; set one in practice
    data_type => 'list'
    port => "6379"
    key => 'logstash:redis'     #user-defined
    type => 'redis-input'       #user-defined
  }
}
output {
  elasticsearch {
    host => "192.168.201.73"
    codec => "json"
    protocol => "http"          #on 1.0+ the protocol must be set explicitly (http)
  }
}
------------------------------------------------------------------------------
/usr/local/logstash-1.5.4/bin/logstash agent -f redis-logstash-Elasticsearch.conf    #start logstash
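With this indexer running, you can verify the Redis-to-Elasticsearch leg by pushing a JSON event onto the list by hand (the redis input defaults to the json codec, so the string below is decoded into an event):

redis-cli -h 192.168.201.73 -p 6379 RPUSH logstash:redis '{"message":"test via redis"}'
curl 'http://192.168.201.73:9200/_search?q=message:redis&pretty'    # the event should show up shortly afterwards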
Note: the Elasticsearch-kopf plugin lets you browse the data stored in Elasticsearch. To install elasticsearch-kopf, just run the following in your Elasticsearch installation directory:
cd /usr/local/elasticsearch-1.7.2/bin/
./plugin install lmenezes/elasticsearch-kopf
> Installing lmenezes/elasticsearch-kopf...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip...
Downloading .............................................................................................
Installed lmenezes/elasticsearch-kopf into /usr/local/elasticsearch-1.7.2/plugins/kopf

The plugin install may also fail, most likely because of network issues...
-> Installing lmenezes/elasticsearch-kopf...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip...
Failed to install lmenezes/elasticsearch-kopf, reason: failed to download out of all possible locations..., use --verbose to get detailed information
The workaround is to download the plugin manually instead of using the plugin install command...
cd /usr/local/elasticsearch-1.7.2/plugins
wget https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip
unzip master.zip
mv elasticsearch-kopf-master kopf
These steps are fully equivalent to the plugin install command.
netstat -tnlp |grep java
tcp        0      0 :::9200                     :::*                        LISTEN      7969/java
tcp        0      0 :::9300                     :::*                        LISTEN      7969/java
tcp        0      0 :::9301                     :::*                        LISTEN      8015/java
Open the kopf page in a browser to look at the data stored in Elasticsearch.
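Since kopf is an Elasticsearch 1.x site plugin, the page is served by Elasticsearch itself and should be reachable at:

http://192.168.201.73:9200/_plugin/kopf/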
wget https://download.elastic.co/kibana/kibana/kibana-4.1.2-linux-x64.tar.gz
tar zxf kibana-4.1.2-linux-x64.tar.gz -C /usr/local
# vim /usr/local/kibana-4.1.2-linux-x64/config/kibana.yml
elasticsearch_url: "http://192.168.201.73:9200"
/usr/local/kibana-4.1.2-linux-x64/bin/kibana    #start kibana
Output like the following means kibana started successfully:
{"name":"Kibana","hostname":"localhost.localdomain","pid":1943,"level":30,"msg":"No existing kibana index found","time":"2016-12-26T00:39:21.617Z","v":0}
{"name":"Kibana","hostname":"localhost.localdomain","pid":1943,"level":30,"msg":"Listening on 0.0.0.0:5601","time":"2016-12-26T00:39:21.637Z","v":0}
By default kibana listens on local port 5601.
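Before opening the browser you can confirm the port is up from the shell (just a sanity check):

netstat -tnlp | grep 5601                            # kibana's node process should be listening here
curl -s -I http://192.168.201.73:5601/ | head -1     # should return an HTTP status line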
Open kibana in a browser: http://192.168.201.73:5601/#/settings/indices/?_g=()