Introduction to ELK
The ELK Stack is Elasticsearch + Logstash + Kibana. Log monitoring and analysis plays an important role in keeping a service running stably. Take nginx as an example: it logs every request, including its status, so its log files can be read and analyzed. Redis's list structure happens to work well as a queue, buffering the log data that Logstash ships, and Elasticsearch then takes care of analysis and querying.
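As a quick illustration of why the Redis list type works as a FIFO queue, here is a minimal redis-cli sketch (not part of the original setup; the key name logstash:redis matches the configuration used later in this post):

```
# push two "log events" onto the list, as a shipper would
redis-cli RPUSH logstash:redis '{"message":"event 1"}'
redis-cli RPUSH logstash:redis '{"message":"event 2"}'

# check how many events are queued
redis-cli LLEN logstash:redis         # => 2

# pop the oldest event, as an indexer would
redis-cli LPOP logstash:redis         # => {"message":"event 1"}
```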
This post builds a distributed log collection and analysis system. Logstash plays two roles: agent and indexer. An agent sits on each web machine and continuously reads the nginx log file; whenever it reads new log lines, it sends them over the network to a Redis queue. Several Logstash indexers then receive and parse the unprocessed logs from that queue and store the results in Elasticsearch for search and analysis. Finally, a single Kibana instance presents the logs in a web UI [3].
For this test I use two machines: hadoop-master runs nginx and a Logstash agent (installed from the source tarball); hadoop-slave runs a Logstash agent and indexer, Elasticsearch, Redis, and nginx.
The nginx logs of both machines are analyzed at the same time; see the referenced documentation for details. The rest of this post records the configuration process for collecting and analyzing logs with ELK + Redis, drawing on the official documentation and earlier write-ups.
Environment
Hosts
```
hadoop-master 192.168.186.128    # logstash agent (source install), nginx
hadoop-slave  192.168.186.129    # logstash agent + indexer, elasticsearch, redis, nginx
```
System details
```
[root@hadoop-slave ~]# java -version    # Elasticsearch is written in Java and needs a JDK; 1.8 is installed here
java version "1.8.0_20"
Java(TM) SE Runtime Environment (build 1.8.0_20-b26)
Java HotSpot(TM) 64-Bit Server VM (build 25.20-b23, mixed mode)
[root@hadoop-slave ~]# cat /etc/issue
CentOS release 6.4 (Final)
Kernel \r on an \m
```
Installing Redis
```
[root@hadoop-slave ~]# wget https://...    # (URL lost in extraction) fetch the redis-2.8.20 source tarball
[root@hadoop-slave ~]# tar zxf redis-2.8.20.tar.gz -C /usr/local/src/
[root@hadoop-slave ~]# cd /usr/local/src/redis-2.8.20/
[root@hadoop-slave redis-2.8.20]# make
```
Once the build finishes, the executables such as redis-server and redis-cli are generated in the src directory.
We create a home for Redis under /usr/local/, along with directories for its config file, runtime files, and data storage.
```
[root@hadoop-slave local]# mkdir /usr/local/redis/{conf,run,db} -pv
[root@hadoop-slave local]# cd /usr/local/src/redis-2.8.20/
[root@hadoop-slave redis-2.8.20]# cp redis.conf /usr/local/redis/conf/
[root@hadoop-slave redis-2.8.20]# cd src/
[root@hadoop-slave src]# cp redis-benchmark redis-check-aof redis-check-dump redis-cli redis-server mkreleasehdr.sh /usr/local/redis/
```
That completes the Redis installation.
Let's try starting it and check whether it is listening on its port:
```
[root@hadoop-slave src]# /usr/local/redis/redis-server /usr/local/redis/conf/redis.conf &    # can be run in the background
[root@hadoop-slave redis]# netstat -antulp | grep 6379
tcp        0      0 0.0.0.0:6379      0.0.0.0:*      LISTEN      72669/redis-server
tcp        0      0 :::6379           :::*           LISTEN      72669/redis-server
```
It starts up fine, OK!
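Besides netstat, a quick sanity check (not in the original steps) is to ask the server directly with the redis-cli binary we copied earlier:

```
[root@hadoop-slave redis]# ./redis-cli ping
PONG
[root@hadoop-slave redis]# ./redis-cli set test 1 && ./redis-cli get test
OK
1
```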
Installing Elasticsearch
Elasticsearch serves HTTP on port 9200 by default and uses TCP port 9300 for inter-node traffic; remember to open both TCP ports, for example as shown below.
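On CentOS 6 with iptables enabled, opening the two ports might look like this (a sketch; rule placement and persistence depend on your existing chain):

```
iptables -I INPUT -p tcp --dport 9200 -j ACCEPT
iptables -I INPUT -p tcp --dport 9300 -j ACCEPT
service iptables save    # persist the rules across reboots
```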
Download the latest tar package from the official site.
Search & Analyze in Real Time: Elasticsearch is a distributed, open source search and analytics engine, designed for horizontal scalability, reliability, and easy management.
```
[root@hadoop-slave ~]# wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.tar.gz
[root@hadoop-slave ~]# mkdir /usr/local/elk
[root@hadoop-slave ~]# tar zxf elasticsearch-1.7.1.tar.gz -C /usr/local/elk/
[root@hadoop-slave bin]# ln -s /usr/local/elk/elasticsearch-1.7.1/bin/elasticsearch /usr/bin
[root@hadoop-slave bin]# elasticsearch start
[2015-08-17 20:49:21,566][INFO ][node     ] [Eliminator] version[1.7.1], pid[5828], build[b88f43f/2015-07-29T09:54:16Z]
[2015-08-17 20:49:21,585][INFO ][node     ] [Eliminator] initializing ...
[2015-08-17 20:49:21,870][INFO ][plugins  ] [Eliminator] loaded [], sites []
[2015-08-17 20:49:22,101][INFO ][env      ] [Eliminator] using [1] data paths, mounts [[/ (/dev/sda2)]], net usable_space [27.9gb], net total_space [37.1gb], types [ext4]
[2015-08-17 20:50:08,097][INFO ][node     ] [Eliminator] initialized
[2015-08-17 20:50:08,099][INFO ][node     ] [Eliminator] starting ...
[2015-08-17 20:50:08,593][INFO ][transport] [Eliminator] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.186.129:9300]}
[2015-08-17 20:50:08,764][INFO ][discovery] [Eliminator] elasticsearch/XbpOYtsYQbO-6kwawxd7nQ
[2015-08-17 20:50:12,648][INFO ][cluster.service] [Eliminator] new_master [Eliminator][XbpOYtsYQbO-6kwawxd7nQ][hadoop-slave][inet[/192.168.186.129:9300]], reason: zen-disco-join (elected_as_master)
[2015-08-17 20:50:12,683][INFO ][http     ] [Eliminator] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.186.129:9200]}
[2015-08-17 20:50:12,683][INFO ][node     ] [Eliminator] started
[2015-08-17 20:50:12,771][INFO ][gateway  ] [Eliminator] recovered [0] indices into cluster_state
```
Testing
A 200 status code in the response means everything is OK:
```
[root@hadoop-slave ~]# elasticsearch start -d
[root@hadoop-slave ~]# curl -X GET http://localhost:9200
{
  "status" : 200,
  "name" : "Wasp",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.1",
    "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
    "build_timestamp" : "2015-07-29T09:54:16Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
```
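As an extra check (not in the original steps), the cluster health API, which exists in 1.7, gives a quick status overview:

```
curl -X GET 'http://localhost:9200/_cluster/health?pretty'
# {
#   "cluster_name" : "elasticsearch",
#   "status" : "yellow",    <- yellow is normal on a one-node cluster (replicas unassigned)
#   ...
# }
```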
Installing Logstash
Logstash is a flexible, open source, data collection, enrichment, and transport pipeline designed to efficiently process a growing list of log, event, and unstructured data sources for distribution into a variety of outputs, including Elasticsearch.
Logstash's default external port is 9292; if the firewall is enabled, open that TCP port as well.
Source install
On 192.168.186.128, install from source and extract to /usr/local/:
```
[root@hadoop-master ~]# wget https://...    # (URL lost in extraction) download the logstash-1.5.3 tarball
[root@hadoop-master ~]# tar zxf logstash-1.5.3.tar.gz -C /usr/local/
```
Yum install
On 192.168.186.129, install via yum:
```
# (the original commands were lost in extraction; the official yum setup for Logstash 1.5 looked roughly like this)
[root@hadoop-slave ~]# rpm --import https://packages.elasticsearch.org/GPG-KEY-elasticsearch
[root@hadoop-slave ~]# cat > /etc/yum.repos.d/logstash.repo <<'EOF'
[logstash-1.5]
name=Logstash repository for 1.5.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.5/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
EOF
[root@hadoop-slave ~]# yum install logstash -y
```
Testing
```
[root@hadoop-slave ~]# cd /opt/logstash/
[root@hadoop-slave logstash]# ls
bin  CHANGELOG.md  CONTRIBUTORS  Gemfile  Gemfile.jruby-1.9.lock  lib  LICENSE  NOTICE.TXT  vendor
[root@hadoop-slave logstash]# bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
```
The terminal then waits for your input. Type Hello World, press Enter, and see what comes back!
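The timestamp and field order will differ, but the rubydebug output should echo the event back roughly like this (a sketch, not captured from the original session):

```
{
       "message" => "Hello World",
      "@version" => "1",
    "@timestamp" => "2015-08-18T06:20:00.000Z",
          "host" => "hadoop-slave"
}
```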
```
[root@hadoop-slave logstash]# vi logstash-simple.conf    # the elasticsearch host is this machine
input { stdin { } }
output {
    elasticsearch { host => localhost }
    stdout { codec => rubydebug }
}
[root@hadoop-slave logstash]# ./bin/logstash -f logstash-simple.conf    # can be run in the background
……
{
       "message" => "",
      "@version" => "1",
    "@timestamp" => "2015-08-18T06:26:19.348Z",
          "host" => "hadoop-slave"
}
……
```
This shows that Elasticsearch has received the data Logstash sent; communication is OK!
You can also verify it this way:
```
[root@hadoop-slave etc]# curl 'http://192.168.186.129:9200/_search?pretty'    # a pile of JSON hits means it works!
```
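You can also narrow the query to the indices Logstash writes (by default one per day, named logstash-YYYY.MM.DD); this is an extra check, not from the original post:

```
# list the logstash indices
curl 'http://192.168.186.129:9200/_cat/indices?v'
# search only those indices, returning a single hit
curl 'http://192.168.186.129:9200/logstash-*/_search?pretty&size=1'
```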
Logstash configuration
Logstash concepts
Quoted from the documentation:
The Logstash community conventionally uses shipper, broker, and indexer to describe the roles of the different processes in the data flow, as in the figure below:

The broker is usually Redis. That said, I have seen many deployments that do not use Logstash as the shipper (which is the same idea as the agent), or that do not use Elasticsearch as the data store and therefore have no indexer at all. So you do not strictly need these labels; just learn how to configure and run the logstash process, then place it wherever it fits best in your log-management architecture.
Setting the nginx log format
nginx is installed on both machines, so nginx.conf must be edited on both to set the log format.
```
[root@hadoop-master ~]# vi /usr/local/nginx/conf/nginx.conf
# (the original edit was lost in extraction; a log_format matching the access-log samples later in this post would be)
log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';
access_log  logs/host.access.log  main;
```
Do the same on the hadoop-slave machine.
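After editing nginx.conf on each machine, validate and reload the config (paths assume the source install under /usr/local/nginx):

```
/usr/local/nginx/sbin/nginx -t          # check the config syntax
/usr/local/nginx/sbin/nginx -s reload   # apply the new log format
```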
Starting the logstash agent
The logstash agent collects log lines and ships them to the Redis queue.
```
[root@hadoop-master ~]# cd /usr/local/logstash-1.5.3/
[root@hadoop-master logstash-1.5.3]# mkdir etc
[root@hadoop-master etc]# vi logstash_agent.conf
input {
    file {
        type => "nginx access log"
        path => ["/usr/local/nginx/logs/host.access.log"]
    }
}
output {
    redis {
        host => "192.168.186.129"    # redis server
        data_type => "list"
        key => "logstash:redis"
    }
}
[root@hadoop-master etc]# nohup /usr/local/logstash-1.5.3/bin/logstash -f /usr/local/logstash-1.5.3/etc/logstash_agent.conf &
# configure logstash_agent the same way on the other machine
```
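To confirm the agent is actually shipping, refresh an nginx page and peek at the Redis list before the indexer drains it (a sanity check, not part of the original steps):

```
redis-cli -h 192.168.186.129 LLEN logstash:redis        # queued events (0 if the indexer keeps up)
redis-cli -h 192.168.186.129 LRANGE logstash:redis 0 0  # inspect the oldest queued JSON event
```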
Starting the logstash indexer
```
# (the original config was lost in extraction; a minimal indexer config consistent with the rest of this post:)
[root@hadoop-slave etc]# vi logstash_indexer.conf
input {
    redis {
        host => "192.168.186.129"
        data_type => "list"
        key => "logstash:redis"
    }
}
output {
    elasticsearch {
        embedded => false
        protocol => "http"
        host => "localhost"
        port => "9200"
    }
}
[root@hadoop-slave etc]# nohup /opt/logstash/bin/logstash -f /opt/logstash/etc/logstash_indexer.conf &
```
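Once the indexer is running, the document count in Elasticsearch should grow as nginx gets hit (another optional check):

```
curl 'http://192.168.186.129:9200/logstash-*/_count?pretty'
# { "count" : 42, ... }    the count rises with each logged request
```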
Configuration complete!
Installing Kibana
Explore and Visualize Your Data: Kibana is an open source data visualization platform that allows you to interact with your data through stunning, powerful graphics that can be combined into custom dashboards that help you share insights from your data far and wide.
```
[root@hadoop-slave ~]# wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
[root@hadoop-slave elk]# tar -zxf kibana-4.1.1-linux-x64.tar.gz
[root@hadoop-slave elk]# mv kibana-4.1.1-linux-x64 /usr/local/elk
[root@hadoop-slave bin]# pwd
/usr/local/elk/kibana/bin
[root@hadoop-slave bin]# ./kibana &
```
Open http://192.168.186.129:5601/.
For remote access, open TCP port 5601 in iptables (same pattern as the Elasticsearch ports earlier).
Testing ELK + Redis
If the ELK + Redis components are not yet running, start them with:
```
[root@hadoop-slave src]# /usr/local/redis/redis-server /usr/local/redis/conf/redis.conf &    # start redis
[root@hadoop-slave ~]# elasticsearch start -d    # start elasticsearch
[root@hadoop-master etc]# nohup /usr/local/logstash-1.5.3/bin/logstash -f /usr/local/logstash-1.5.3/etc/logstash_agent.conf &
[root@hadoop-slave etc]# nohup /opt/logstash/bin/logstash -f /opt/logstash/etc/logstash_indexer.conf &
[root@hadoop-slave etc]# nohup /opt/logstash/bin/logstash -f /opt/logstash/etc/logstash_agent.conf &
[root@hadoop-slave bin]# ./kibana &    # start kibana
```
Open http://192.168.186.129/ and http://192.168.186.128/.
Each page refresh produces one access record in the host.access.log file.
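Instead of refreshing by hand, you can generate a burst of test traffic from the shell:

```
for i in $(seq 1 5); do
    curl -s -o /dev/null http://192.168.186.128/   # each request appends one line to host.access.log
    curl -s -o /dev/null http://192.168.186.129/
done
```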
```
[root@hadoop-master logs]# cat host.access.log
……
192.168.186.1 - - [18/Aug/2015:22:59:00 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:00:21 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:06:38 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:15:52 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
192.168.186.1 - - [18/Aug/2015:23:16:52 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36" "-"
[root@hadoop-master logs]#
```
Open Kibana and the nginx access logs from both machines are displayed. (The displayed time is off because the VM's time zone differs from the physical machine's; this does not affect anything.) At this point you should see a page like the following:

Postscript:
Problems I ran into during the installation:
1. Starting elasticsearch and logstash requires a JDK, preferably version 1.8.
2. Elasticsearch behavior differs across versions. This tutorial uses 1.7, which is started with ./bin/elasticsearch start and defaults to 1 GB of heap; newer versions are started with ./bin/elasticsearch and default to 2 GB.
3. If elasticsearch reports a JVM out-of-memory error on startup, reduce its heap size. Version 1.7 takes it on the command line: ./bin/elasticsearch -Xmx70m -Xms70m. On 5.0 (tested), edit the $Elasticsearch_HOME/config/jvm.options file:
```
-Xms512m
-Xmx512m
```
4. The input and output configuration syntax also varies between logstash versions.
Output block for 1.5.3:
```
elasticsearch {
    embedded => false
    protocol => "http"
    host => "localhost"
    port => "9200"
}
```
Output block for 2.1.0:
```
elasticsearch {
    hosts => ["localhost:9200"]
}
```