Elasticsearch: storage and search. Logstash: collection. Kibana: visualization, a display platform designed specifically for ES.
Architecture diagram:
Environment preparation

IP              Hostname       OS
192.168.56.11   linux-node1    CentOS 7
192.168.56.12   linux-node2    CentOS 7

Production-environment requirements analysis
Access logs: Apache access log, Nginx access log, Tomcat
Error logs: error log, Java logs (need multiline handling)
System logs: /var/log/*, syslog
Runtime logs: written by applications (file plugin)
Network logs: firewall, switch, and router logs
(1) Standardization:
    Log location: /data/logs/
    Log format: JSON
    Naming convention: access_log, error_log, runtime_log
    Log rotation: daily or hourly
    Raw log file handling: rsync to NAS, then delete files older than three days.
(2) Tooling: how to use Logstash for collection.
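The raw-log handling step above (archive via rsync, then delete files older than three days) can be sketched in Python. This is a minimal illustration, not the author's script; the directory and three-day cutoff are taken from the standard above:

```python
import os
import time

def purge_old_logs(log_dir, days=3):
    """Delete files under log_dir whose mtime is older than `days` days.
    Assumes they have already been archived (e.g. rsync'd to NAS)."""
    cutoff = time.time() - days * 86400
    removed = []
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

In production this would run from cron after the rsync job has confirmed the copy succeeded.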
On 192.168.56.11 we analyze the Apache log and the Elasticsearch log: Logstash collects the Apache access log and the ES log (a Java log) and writes both to Redis.
[root@linux-node1 /etc/logstash/conf.d]# cat apache.conf
input {
  file {
    path => "/var/log/httpd/access_log"    # Apache log path
    start_position => "beginning"          # collect from the beginning of the file
    type => "apache-accesslog"             # log type
  }
  file {
    path => "/var/log/elasticsearch/myes.log"    # ES log; the actual path is set in your ES config
    type => "es-log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["                     # a "[" marks the start of a new Java log event
      negate => true
      what => "previous"
    }
  }
}
output {
  if [type] == "apache-accesslog" {
    redis {
      host => "192.168.56.12"
      port => "6379"
      db => "6"
      data_type => "list"
      key => "apache-accesslog"
    }
  }
  if [type] == "es-log" {
    redis {
      host => "192.168.56.12"
      port => "6379"
      db => "6"
      data_type => "list"
      key => "es-log"
    }
  }
}
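The multiline codec above merges a Java stack trace into a single event: any line that does not start with "[" (`negate => true`) is attached to the previous line (`what => "previous"`). A minimal Python sketch of that grouping logic (an illustration of the idea, not Logstash's actual implementation):

```python
import re

def group_multiline(lines, pattern=r"^\["):
    """Merge lines that do NOT match pattern into the previous event,
    mimicking codec multiline { negate => true, what => "previous" }."""
    events = []
    for line in lines:
        if re.match(pattern, line) or not events:
            events.append(line)           # a line starting with "[" opens a new event
        else:
            events[-1] += "\n" + line     # continuation line (e.g. stack-trace frame)
    return events
```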
On 192.168.56.12, read the logs from Redis and write them to Elasticsearch.

At the same time, use Logstash to collect syslog (delivered over the network) and write it to ES; note that port 514 must be open.

We need to modify the rsyslog configuration file.
[root@linux-node1 /etc/logstash/conf.d]# tail -n 3 /etc/rsyslog.conf
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
*.* @@192.168.56.12:514
# ### end of the forwarding rule ###
[root@linux-node1 /etc/logstash/conf.d]#
Check that the port is listening:
[root@linux-node2 conf.d]# netstat -lntp|grep 514
tcp6       0      0 :::514        :::*        LISTEN      43148/java
[root@linux-node2 conf.d]# netstat -lnup|grep 514
udp6       0      0 :::514        :::*                    43148/java
[root@linux-node2 conf.d]#
The port is listening normally.

The configuration file on 192.168.56.12 is as follows:
[root@linux-node2 conf.d]# cat indexer.conf
input {
  redis {
    type => "apache-accesslog"
    host => "192.168.56.12"
    port => "6379"
    db => "6"
    data_type => "list"
    key => "apache-accesslog"
  }
  syslog {
    type => "system-syslog"    # collect syslog
    port => 514                # listen on port 514
  }
  redis {
    type => "es-log"
    host => "192.168.56.12"
    port => "6379"
    db => "6"
    data_type => "list"
    key => "es-log"
  }
}
filter {
  if [type] == "apache-accesslog" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }    # parse the Apache log with grok
    }
  }
}
output {
  if [type] == "apache-accesslog" {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "apache-accesslog-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "es-log" {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "es-log-%{+YYYY.MM}"
    }
  }
  if [type] == "system-syslog" {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "system-syslog-%{+YYYY.MM}"
    }
  }
}
[root@linux-node2 conf.d]#
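The `%{COMBINEDAPACHELOG}` grok pattern in the filter splits an Apache combined-format line into named fields (clientip, timestamp, verb, response, and so on). A simplified Python regex approximating what it extracts (the real grok pattern is more permissive about each field):

```python
import re

# Simplified approximation of grok's %{COMBINEDAPACHELOG} pattern.
COMBINED = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) (?P<httpversion>[^"]+)" '
    r'(?P<response>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

def parse_combined(line):
    """Return the named fields as a dict, or None if the line doesn't match."""
    m = COMBINED.match(line)
    return m.groupdict() if m else None
```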
Note: since many log types are being collected, keep each type name and its index name consistent so they are easy to tell apart.
Note: if you use Redis lists as the ELK message queue, you must monitor the length of every list key. For example, fetch the length with `llen key_name` and monitor it with Zabbix. Under normal conditions, logs written to Redis are consumed by Logstash almost immediately; if a list suddenly grows to, say, 100,000 entries, something has definitely failed, so set a threshold in Zabbix and alert on it.
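The threshold check described above can be sketched as a small function. This is an illustration only: in practice the lengths would come from `redis-cli -n 6 llen apache-accesslog` (db 6 as in the configs above) or a Redis client, and the alert would be raised by Zabbix rather than Python:

```python
def check_backlogs(key_lengths, threshold=100000):
    """key_lengths: {list_key: LLEN value} for each redis list used as a queue.
    Returns the keys whose backlog has reached the alert threshold,
    i.e. where logstash has likely stopped consuming."""
    return [key for key, length in key_lengths.items() if length >= threshold]
```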