ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. A fourth component, Filebeat, has since been added: a lightweight log-collection agent that uses few resources, making it well suited to gathering logs on each server and forwarding them to Logstash; it is the officially recommended shipper.
Elasticsearch is an open-source distributed search engine providing three core functions: collecting, analyzing, and storing data. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.
Logstash is primarily a tool for collecting, analyzing, and filtering logs, and supports many ways of ingesting data. It usually runs in a client/server arrangement: the client side is installed on each host whose logs need collecting, while the server side filters and transforms the logs received from the nodes and then forwards them to Elasticsearch.
Kibana is likewise open source and free. It provides a friendly web interface for analyzing the logs that Logstash and Elasticsearch handle, helping you aggregate, analyze, and search important log data.
Filebeat belongs to the Beats family, which at the time of writing comprises four lightweight data shippers.
Official documentation:

- Filebeat: https://www.elastic.co/cn/products/beats/filebeat
- Logstash: https://www.elastic.co/cn/products/logstash
- Kibana: https://www.elastic.co/cn/products/kibana
- Elasticsearch: https://www.elastic.co/cn/products/elasticsearch
- Elasticsearch Chinese community: https://elasticsearch.cn/
In a typical log-analysis scenario you can get the information you want by running grep or awk directly against the log files. At larger scale, however, this approach is inefficient and raises hard questions: how do you archive a huge volume of logs, what do you do when text search becomes too slow, and how do you query across multiple dimensions? What is needed is centralized log management that collects and aggregates the logs from every server. The usual solution is to build a centralized log-collection system so that the logs from all nodes can be collected, managed, and accessed in one place.
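As a concrete illustration of the grep/awk approach described above, the snippet below searches a sample log for error lines. The log file and its contents are made up purely for this example:

```shell
# Hypothetical sample log, created only for this illustration
cat > /tmp/app.log <<'EOF'
2018-09-01 10:00:01 INFO  service started
2018-09-01 10:00:05 ERROR connection refused
2018-09-01 10:00:09 INFO  retrying
EOF

# Find error lines, then pull out just the date and time with awk
grep ERROR /tmp/app.log | awk '{print $1, $2}'
```

This works fine on a single host; the pain starts when the same search has to run across dozens of servers, which is exactly the problem centralized collection solves.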
Large systems are usually deployed as distributed architectures, with different service modules on different servers. When a problem occurs, you generally have to use the key information the problem exposes to locate the specific server and module; building a centralized log system makes this kind of troubleshooting much faster.
A complete centralized log system must cover log collection, transport, storage, analysis, and access.
ELK provides a complete solution to this, entirely with open-source software whose components work together seamlessly, efficiently covering a wide range of use cases. It is currently one of the mainstream choices for a log system.
主机名 (Hostname) | OS | IP address | Services |
---|---|---|---|
es | CentOS 7.4 | 192.168.96.85 | elasticsearch 6.4.0, kibana 6.4.0, rsyslog |
nginx | CentOS 7.4 | 192.168.96.60 | elasticsearch 6.4.0, logstash-6.4.0 |
httpd | CentOS 7.4 | 192.168.96.86 | elasticsearch 6.4.0, filebeat-6.4 |
client | Windows 10 | 192.168.96.2 | web browser |
The firewall and SELinux are disabled on all of the servers above:

```shell
setenforce 0
systemctl stop firewalld
```
Three different log-collection methods are used below; the official documentation recommends Filebeat because it is lightweight and efficient.
vim /etc/hosts

```
192.168.96.85 es
192.168.96.86 httpd
192.168.96.60 nginx
```
```shell
# Import the signing key
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

# Create the es repository
vim /etc/yum.repos.d/elasticsearch.repo
```

```
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```

```shell
# Install the es package
yum -y install elasticsearch
```
vim /etc/elasticsearch/elasticsearch.yml

```yaml
cluster.name: es-server
node.name: master
node.master: true
node.data: true
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.96.85", "192.168.96.86", "192.168.96.60"]
```
```shell
scp /etc/elasticsearch/elasticsearch.yml httpd:/etc/elasticsearch/
scp /etc/elasticsearch/elasticsearch.yml nginx:/etc/elasticsearch/
```
On the httpd server:

```yaml
# vim /etc/elasticsearch/elasticsearch.yml
node.name: httpd
node.master: false
```

On the nginx server:

```yaml
# vim /etc/elasticsearch/elasticsearch.yml
node.name: nginx
node.master: false
```
```shell
# es server (start the master first, then the other nodes):
systemctl enable elasticsearch.service
systemctl start elasticsearch.service

# nginx server:
systemctl enable elasticsearch.service
systemctl start elasticsearch.service

# httpd server:
systemctl enable elasticsearch.service
systemctl start elasticsearch.service
```
http://192.168.96.85:9200/_cluster/health?pretty
http://192.168.96.85:9200/_cluster/state?pretty
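On a live cluster the health URL above returns a JSON document. The snippet below inspects a sample response of the kind you should expect; the JSON is illustrative, not captured from a real node:

```shell
# What a healthy 3-node cluster's /_cluster/health response looks like
# (abridged, with illustrative values)
cat > /tmp/health.json <<'EOF'
{
  "cluster_name" : "es-server",
  "status" : "green",
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3
}
EOF

# "green" means every primary and replica shard is allocated;
# number_of_nodes should reach 3 once all three es hosts have joined
grep -E '"status"|"number_of_nodes"' /tmp/health.json
```

If the status stays `yellow` or `red`, or the node count is short, check that the non-master nodes can reach the master on port 9300.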
At this point the es cluster is fully deployed.
yum -y install kibana
vim /etc/kibana/kibana.yml

```yaml
server.port: 5601
server.host: "192.168.96.85"
elasticsearch.url: "http://192.168.96.85:9200"
logging.dest: /var/log/kibana.log
```
```shell
touch /var/log/kibana.log
chmod 777 /var/log/kibana.log
```

(777 is the quick fix for a lab setup; on a long-lived system it is cleaner to chown the file to the kibana user instead.)
```shell
systemctl enable kibana
systemctl start kibana
```
```
[root@es ~]# netstat -tunlp | grep 5601
tcp   0   0 192.168.96.85:5601   0.0.0.0:*   LISTEN   2597/node
```
yum install logstash -y
vim /etc/rsyslog.conf
At line 91, forward all logs to the logstash listener (`@@` sends over TCP; a single `@` would use UDP):

```
*.* @@192.168.96.85:10514
```
systemctl restart rsyslog
vim /etc/logstash/conf.d/syslog.conf
```
input {
  syslog {
    type => "system-syslog"
    port => 10514
  }
}
output {
  elasticsearch {
    hosts => ["192.168.96.85:9200"]        # es server address
    index => "system-syslog-%{+YYYY.MM}"   # index name
  }
}
```
```shell
/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
```
```shell
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-x86_64.rpm
rpm -ivh filebeat-6.4.0-x86_64.rpm
```
vim /etc/filebeat/filebeat.yml
```yaml
# comment out the following line:
#enabled: false
paths:
  - /var/log/messages            # path of the log to ship
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.96.85:9200"]  # point at the es server
```
```shell
systemctl enable filebeat
systemctl start filebeat
```
ps aux | grep filebeat
curl '192.168.96.85:9200/_cat/indices?v'
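The `_cat/indices` call should now list an index for each pipeline that has shipped data. The snippet below works against a mocked-up sample of that output; the index names and counts are illustrative:

```shell
# Illustrative _cat/indices output, saved to a file for inspection
cat > /tmp/indices.txt <<'EOF'
health status index                      pri rep docs.count
green  open   system-syslog-2018.09      5   1        1200
green  open   filebeat-6.4.0-2018.09.01  5   1         350
EOF

# Confirm that an index from each collector is present (prints 2)
grep -c -E 'system-syslog|filebeat' /tmp/indices.txt
```

If the filebeat index is missing, check `systemctl status filebeat` and confirm the host can reach the es server on port 9200.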
vim /etc/logstash/conf.d/nginx.conf
```
input {
  file {
    path => "/var/log/logstash/elk_access.log"
    start_position => "beginning"
    type => "nginx"
  }
}
filter {
  grok {
    match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}" }
  }
  geoip {
    source => "clientip"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["192.168.96.85:9200"]       # es server address
    index => "nginx-test-%{+YYYY.MM.dd}"
  }
}
```
/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
vim /etc/nginx/conf.d/elk.conf
```
server {
    listen 80;
    server_name www.test.com;
    location / {
        proxy_pass http://192.168.96.85:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    access_log /var/log/logstash/elk_access.log main2;
}
```
The `main2` format referenced above must be defined in the `http` block of /etc/nginx/nginx.conf:

```
log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent "$http_referer" '
                 '"$http_user_agent" "$upstream_addr" $request_time';
```
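To see what the grok filter will receive, here is a hypothetical access-log line written in the main2 field order, with awk used to pick apart the space-separated fields (the request and addresses are made up for the example):

```shell
# A made-up log line in main2 order:
# $http_host $remote_addr - $remote_user [$time_local] "$request" ...
line='www.test.com 192.168.96.2 - - [01/Sep/2018:10:00:00 +0800] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0" "192.168.96.85:5601" 0.003'

# Field 1 is $http_host, field 2 is $remote_addr -- the value the grok
# pattern captures as "clientip" and hands to the geoip filter
echo "$line" | awk '{print $1, $2}'
```

If the log format and the grok pattern ever drift apart, logstash tags the events with `_grokparsefailure`, so checking a sample line like this is a quick sanity test.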
```shell
systemctl enable logstash
systemctl start logstash
```
Restart the nginx service:
systemctl restart nginx
Start logstash collecting the nginx logs:
/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf