For logs, the most common needs are collection, storage, search, and visualization, and the open-source community has a matching project for each: Logstash (collection), Elasticsearch (storage + search), and Kibana (visualization). The combination of the three is called the ELK Stack, i.e. the Elasticsearch, Logstash, Kibana technology stack.
Environment: CentOS Linux release 7.6.1810 (Core)
Service | IP address | Hostname |
---|---|---|
elasticsearch | 10.201.1.145 | k8s-m1 |
elasticsearch | 10.201.1.146 | k8s-n1 |
logstash | 10.201.1.145 | k8s-m1 |
kibana | 10.201.1.146 | k8s-n1 |
```
yum install -y java
```
```
rpm -ivh elasticsearch-2.3.5.rpm
sudo systemctl daemon-reload
systemctl enable elasticsearch.service
rpm -ql elasticsearch
```
```
grep '^[a-z]' /etc/elasticsearch/elasticsearch.yml
cluster.name: myes                  # cluster name; must be identical on all nodes
node.name: k8s-m1                   # node name, usually the hostname; must be unique per node
path.data: /data/es-data            # data directory
path.logs: /var/log/elasticsearch   # log directory
bootstrap.mlockall: true            # lock memory so it is never swapped out
network.host: 10.201.1.145          # this host's IP
http.port: 9200                     # service port
```
```
grep '^[a-z]' /etc/elasticsearch/elasticsearch.yml
cluster.name: myes
node.name: k8s-n1
path.data: /data/es-data
path.logs: /var/log/elasticsearch
bootstrap.mlockall: true            # lock memory so it is never swapped out
network.host: 10.201.1.146
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.201.1.145", "10.201.1.146"]  # advertise via unicast so the other es nodes can discover this one
```
```
mkdir -p /data/es-data
chown -R elasticsearch:elasticsearch /data/es-data
```
```
/etc/init.d/elasticsearch start
tail -f /var/log/elasticsearch/myes.log
netstat -lntpu | grep 9200
```
Install the head plugin
```
/usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
```
Open the plugin in a browser to check its status:
http://10.201.1.145:9200/_plugin/head/
If it looks like the screenshot below, the installation succeeded.
Install the kopf plugin
```
/usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
```
Open the plugin in a browser to check its status:
http://10.201.1.145:9200/_plugin/kopf/
If it looks like the screenshot below, the installation succeeded.
Logstash requires a Java runtime, which was already installed above for Elasticsearch.
```
rpm -ivh logstash-2.3.4-1.noarch.rpm
rpm -ql logstash
```
Verify from the command line:
```
# plain stdin/stdout pipeline
/opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
# same, but pretty-print events with the rubydebug codec
/opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug} }'
# send stdin into es, creating a date-based index
/opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["10.201.1.145"] index => "logstash-%{+YYYY.MM.dd}"} }'
```
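The `%{+YYYY.MM.dd}` in the index name is a Joda-time date pattern that Logstash expands per event, so each day's logs land in a separate index. A rough `date(1)` sketch of what today's index name would look like:

```shell
# approximate the logstash sprintf date pattern with date(1)
date +"logstash-%Y.%m.%d"
```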
Verify with a config file (the default directory Logstash reads from is /etc/logstash/conf.d/):
```
cat file.conf
input{
  file{
    path => ["/var/log/messages", "/var/log/secure"]
    type => "system-log"
    start_position => "beginning"
  }
}
filter{
}
output{
  elasticsearch {
    hosts => ["10.201.1.145:9200"]
    index => "system-log-%{+YYYY.MM}"
  }
}
```
```
cat /etc/logstash/conf.d/file.conf
input{
  file{                                   # the file input collects local files
    path => ["/var/log/messages", "/var/log/secure"]  # paths to collect
    type => "system-log"                  # type tag, used by the output conditionals below
    start_position => "beginning"         # collect from the beginning of the file
  }
  file{
    path => "/var/log/elasticsearch/myes.log"
    type => "es-log"
    start_position => "beginning"
    codec => multiline{                   # the multiline codec merges lines; well suited to Java logs
      pattern => "^\["                    # regex to match; adjust to your own Java log format
      negate => true
      what => "previous"                  # lines NOT matching the regex are merged into the previous event
    }
  }
}
filter{
}
output{
  if [type] == "system-log" {             # conditionals route different log types to different indices
    elasticsearch {
      hosts => ["10.201.1.145:9200"]
      index => "system-log-%{+YYYY.MM}"
    }
  }
  if [type] == "es-log" {
    elasticsearch {
      hosts => ["10.201.1.145:9200"]
      index => "es-log-%{+YYYY.MM}"
    }
  }
}
```
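To see what the multiline codec does with `pattern => "^\["`, `negate => true`, `what => "previous"`: any line that does not start with `[` (for example, an indented Java stack-trace line) is appended to the previous event. A small sketch with a hypothetical log excerpt, counting events the way the codec would split them:

```shell
# hypothetical ES/Java log excerpt: only lines starting with "[" begin a new event
cat <<'EOF' > /tmp/es-sample.log
[2019-01-01 10:00:00,000][INFO ][node] starting ...
[2019-01-01 10:00:01,000][WARN ][monitor] java.lang.OutOfMemoryError
    at org.elasticsearch.Foo.bar(Foo.java:42)
    at org.elasticsearch.Baz.qux(Baz.java:7)
EOF
# negate=true + what=previous: non-matching lines merge into the previous event,
# so these four physical lines become two logstash events
grep -c '^\[' /tmp/es-sample.log   # prints 2
```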
First, change nginx's log format to JSON:
```
log_format access_log_json '{"user_ip":"$http_x_real_ip","lan_ip":"$remote_addr","log_time":"$time_iso8601","user_req":"$request","http_code":"$status","body_bytes_sent":"$body_bytes_sent","req_time":"$request_time","user_ua":"$http_user_agent"}';
```
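Before pointing Logstash's `json` codec at the new log, it is worth checking that a line in this format really is valid JSON. A quick sketch with a hypothetical sample line (all field values made up):

```shell
# hypothetical access_log_json line with made-up field values
line='{"user_ip":"-","lan_ip":"10.0.0.1","log_time":"2019-01-01T10:00:00+08:00","user_req":"GET / HTTP/1.1","http_code":"200","body_bytes_sent":"612","req_time":"0.001","user_ua":"curl/7.29.0"}'
# python3 -m json.tool exits non-zero on invalid JSON
echo "$line" | python3 -m json.tool > /dev/null && echo "valid JSON"
```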
Write the config file:
```
cat /etc/logstash/conf.d/nginx.conf
input{
  file {
    path => "/var/log/nginx/access.log_json"
    codec => "json"
  }
}
filter{
}
output{
  elasticsearch {
    hosts => ["10.201.1.145:9200"]
    index => "nginx-access-log-%{+YYYY.MM.dd}"
  }
}
```
Edit the rsyslog config on the host whose logs are to be collected (`@@` forwards over TCP; a single `@` would use UDP):
```
[root@k8s-m1 ~]# tail -2 /etc/rsyslog.conf
*.* @@10.201.1.146:514
# ### end of the forwarding rule ###
```
Write the collector config file:
```
[root@k8s-n1 ~]# cat rsyslog.conf
input{
  syslog {
    type => "system-syslog"
    port => 514
  }
}
filter{
}
output{
  elasticsearch {
    hosts => ["10.201.1.146:9200"]
    index => "system-syslog-%{+YYYY.MM}"
  }
}
```
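One caveat with the config above: binding port 514 requires Logstash to run as root, since ports below 1024 are privileged. A common workaround (an assumption here, not part of the original setup) is to listen on an unprivileged port and point the rsyslog forwarding rule at it instead:

```
# logstash side: unprivileged port, no root needed
input{
  syslog {
    type => "system-syslog"
    port => 5514
  }
}
```

with the matching rsyslog rule `*.* @@10.201.1.146:5514` on the sending host.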
Write the config file:
```
[root@k8s-n1 ~]# cat tcp.conf
input{
  tcp {
    port => 6666
    mode => "server"
    type => "tcp"
  }
}
output{
  stdout {
    codec => rubydebug
  }
}
```
From another machine, send messages to verify (a few methods are listed below):
```
yum -y install nc
echo "lxd" | nc 10.201.1.146 6666
nc 10.201.1.146 6666 < /etc/resolv.conf
echo "123" > /dev/tcp/10.201.1.146/6666
```
```
[root@k8s-m1 ~]# cat apache.conf
input {
  file {
    path => "/var/log/httpd/access_log"
    type => "apache_log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["10.201.1.145:9200"]
    index => "apache-log-%{+YYYY.MM.dd}"
  }
}
```
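`%{COMBINEDAPACHELOG}` is a stock grok pattern that matches Apache's combined log format (the common format plus referrer and user agent). A rough shell sketch of the same shape, using a hypothetical log line and a simplified regex (a stand-in for illustration, not the real grok pattern):

```shell
# hypothetical combined-format line; the regex is a simplified stand-in for the grok pattern
line='127.0.0.1 - - [01/Jan/2019:10:00:00 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0"'
echo "$line" | grep -Eq '^[0-9.]+ \S+ \S+ \[[^]]+\] "[^"]*" [0-9]{3} [0-9-]+ "[^"]*" "[^"]*"$' \
  && echo "matches combined format"
```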
Run it in the foreground, then open es to view the collected logs:
http://10.201.1.145:9200/_plugin/head/
```
/opt/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf
```
Kibana is an open-source analytics and visualization platform designed for Elasticsearch. With Kibana you can search, view, and interact with the data stored in Elasticsearch indices, and easily perform advanced data analysis and present it as charts.
```
rpm -ivh kibana-4.5.4-1.x86_64.rpm
rpm -ql kibana
```
```
grep '^[a-z]' /opt/kibana/config/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://10.201.1.145:9200"
kibana.index: ".kibana"
```
```
/etc/init.d/kibana start
netstat -lntpu | grep 5601
```
Open http://10.201.1.146:5601 in a browser.
Add the indices that were created in es.
View the collected log information.