Introduction to the ELK Logging System
ELK, the open-source real-time log analysis platform, solves the problems described above. It is made up of three open-source tools: Elasticsearch, Logstash, and Kibana. Official website: https://www.elastic.co/products
1. Elasticsearch is an open-source distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.
2. Logstash is a fully open-source tool that collects and filters your logs and stores them for later use (for example, searching).
3. Kibana is also a free, open-source tool. It provides a friendly web interface for analyzing the logs handled by Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.
Preparing the Installation Environment
First, download the required software packages.
Official website: https://www.elastic.co
https://artifacts.elastic.co/downloads/logstash/logstash-5.3.1.tar.gz
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.1.tar.gz
https://artifacts.elastic.co/downloads/kibana/kibana-5.3.1-linux-x86_64.tar.gz
Installing and Configuring the Java Environment
JDK version: jdk-8u144-linux-x64.tar.gz

[root@centos7-1 ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@centos7-1 ~]# uname -r
3.10.0-693.el7.x86_64
[root@centos7-1 ~]# tar zxf jdk-8u144-linux-x64.tar.gz -C /usr/local/
[root@centos7-1 ~]# ln -s /usr/local/jdk1.8.0_144 /usr/local/jdk
[root@centos7-1 ~]# cat >>/etc/profile <<'EOF'
export JAVA_HOME=/usr/local/jdk
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
EOF
[root@centos7-1 ~]# source /etc/profile
[root@centos7-1 ~]# java -version
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
Installing Elasticsearch
Modify system parameters
[root@centos7-1 config]# vim /etc/sysctl.conf        # add the following line
vm.max_map_count=655360
[root@centos7-1 config]# sysctl -p
vm.max_map_count = 655360
[root@centos7-1 config]# tail -5 /etc/security/limits.conf
*    soft    nofile    65536
*    hard    nofile    131072
*    soft    nproc     65536
*    hard    nproc     131072
# End of file
[root@centos7-1 config]# vim /etc/security/limits.d/20-nproc.conf        # add the following line
elk    soft    nproc    65536
Create the user and directories
[root@centos7-1 config]# useradd elk
[root@centos7-1 config]# mkdir /elk/data /elk/logs -p
[root@centos7-1 config]# chown -R elk.elk /elk/
[root@centos7-1 config]# chown -R elk.elk /usr/local/elasticsearch/
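With the elk user created, you can optionally confirm that the file-descriptor and process limits configured above apply to it; both values below should come back as 65536 (a quick sanity check, not strictly required):

[root@centos7-1 config]# su - elk -c 'ulimit -n; ulimit -u'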
Installation and configuration
[root@centos7-1 ~]# tar zxf elasticsearch-5.3.1.tar.gz -C /usr/local/
[root@centos7-1 ~]# ln -s /usr/local/elasticsearch-5.3.1 /usr/local/elasticsearch
[root@centos7-1 ~]# cd /usr/local/elasticsearch/config/
Modify the configuration file
[root@centos7-1 config]# egrep -v "^#|^$" elasticsearch.yml
cluster.name: myelk        # cluster name
node.name: centos7-1
path.data: /elk/data
path.logs: /elk/logs
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["host1", "host2"]
Start the service
[root@centos7-1 config]# su - elk
[elk@centos7-1 ~]$ who
root     pts/0        2017-10-13 22:22 (10.0.0.1)
[elk@centos7-1 ~]$ cd /usr/local/elasticsearch/bin/
[elk@centos7-1 bin]$ ./elasticsearch &
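Once Elasticsearch has started, a quick way to confirm it is serving requests is to query port 9200 directly (assuming the host's address is 10.0.0.5, the same address used for Kibana below); it should return a short JSON document with the node name, cluster name (myelk), and version (5.3.1):

[elk@centos7-1 bin]$ curl http://10.0.0.5:9200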
Installing Logstash
[root@centos7-1 ~]# tar zxf logstash-5.3.1.tar.gz -C /usr/local/
[root@centos7-1 ~]# /usr/local/logstash-5.3.1/bin/logstash -e 'input { stdin { } } output { stdout {} }'
Sending Logstash's logs to /usr/local/logstash-5.3.1/logs which is now configured via log4j2.properties
[2017-10-16T01:39:36,983][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/local/logstash-5.3.1/data/queue"}
[2017-10-16T01:39:37,181][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"a2e3b22a-4785-42f6-a073-f7fad4d60a44", :path=>"/usr/local/logstash-5.3.1/data/uuid"}
[2017-10-16T01:39:37,623][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-10-16T01:39:37,700][INFO ][logstash.pipeline        ] Pipeline main started
The stdin plugin is now waiting for input:
[2017-10-16T01:39:38,042][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
hello
2017-10-16T05:39:45.692Z centos7-1 hello
Create the configuration file
[root@centos7-1 logstash-5.3.1]# cd config/
[root@centos7-1 config]# ll
total 20
-rw-rw-r-- 1 root root 1738 Apr 17 12:07 jvm.options
-rw-rw-r-- 1 root root 3958 Apr 17 12:07 log4j2.properties
-rw-rw-r-- 1 root root 4433 Apr 17 12:07 logstash.yml
-rw-rw-r-- 1 root root 1701 Apr 17 12:07 startup.options
[root@centos7-1 config]# vim logstash.conf
input {
    stdin { }
}
output {
    stdout {
        codec => rubydebug
    }
}
Logstash uses input and output sections to define where log data comes from and where it goes. In this example, input defines a plugin called stdin and output defines a plugin called stdout. Whatever characters we type, Logstash echoes them back in a structured format; here the output is stdout, and the codec parameter specifies how Logstash formats that output.
[root@centos7-1 config]# /usr/local/logstash-5.3.1/bin/logstash -f /usr/local/logstash-5.3.1/config/logstash.conf
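In practice the input is usually a log file rather than standard input. A minimal sketch of such a configuration (the path /var/log/messages here is only an illustrative choice) keeps the same input/output structure as above:

input {
    file {
        path => "/var/log/messages"      # log file to collect (illustrative path)
        start_position => "beginning"    # read the file from the start on first run
    }
}
output {
    stdout {
        codec => rubydebug               # print each event in a readable structure
    }
}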
Installing Kibana
[root@centos7-1 ~]# tar zxf kibana-5.3.1-linux-x86_64.tar.gz -C /usr/local/
[root@centos7-1 ~]# cd /usr/local/kibana-5.3.1-linux-x86_64/config/
[root@centos7-1 config]# vim kibana.yml
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "10.0.0.5"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://10.0.0.5:9200"
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"
Start the service
[root@centos7-1 config]# /usr/local/kibana-5.3.1-linux-x86_64/bin/kibana &
[root@centos7-1 config]# lsof -i :5601
COMMAND   PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
node    11535  root   12u  IPv4  30703      0t0  TCP centos7-1:esmagent (LISTEN)
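Before opening a browser you can also confirm from the command line that Kibana answers on port 5601; any HTTP response here means the service is up:

[root@centos7-1 config]# curl -I http://10.0.0.5:5601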
Log in to the web interface
Testing the connection between Kibana and Elasticsearch
With all the components installed, we need to test whether they can work together, that is, whether logs can be collected, stored, and displayed normally.
[root@centos7-1 config]# cd /usr/local/logstash-5.3.1/config/
[root@centos7-1 config]# vim logstash.conf
input {
    stdin { }
}
output {
    elasticsearch {
        action => "index"
        hosts  => "10.0.0.5:9200"
        index  => "logstash-%{+YYYY-MM}"
    }
}
[root@centos7-1 config]# /usr/local/logstash-5.3.1/bin/logstash -f /usr/local/logstash-5.3.1/config/logstash.conf
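After typing a few test lines into the running pipeline, you can check whether the index has appeared in Elasticsearch with the standard _cat API; the listing should include an index matching the logstash-%{+YYYY-MM} pattern configured above (for example logstash-2017-10) next to the .kibana index:

[root@centos7-1 ~]# curl 'http://10.0.0.5:9200/_cat/indices?v'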
Log in to the web interface to check whether logs are being generated.