Our company's earlier ELK logging system worked well, so the developers are now asking to have all production logs fed into ELK. With logs from more than 300 machines, a single server clearly cannot cope, so an ELK cluster architecture is needed.
Topology notes
1. Filebeat: a lightweight log shipper.
2. Alibaba Cloud Redis: a self-hosted Redis is not as easy to scale as the Alibaba Cloud managed service.
3. Logstash: filters the log data; since Logstash is fairly resource-hungry, it runs on separate machines from ES.
4. ES: stores the data, as a two-node cluster.
5. Kibana/Nginx: Kibana's performance requirements are low, but it has no built-in access control, so for safety it listens on 127.0.0.1 and is fronted by an Nginx proxy.
Totals:
Redis: 1 instance (4 GB, single node)
ECS: 4 instances (2 cores, 8 GB each)
Domain: log.ops.****.com
(Some readers suggested running Redis in Docker, but given Docker's performance overhead, and since Alibaba Cloud Redis is not expensive, I went with Alibaba Cloud Redis.)
As shown above, the Redis connection address is only reachable from the internal network. To let every machine write to Redis, I set the whitelist to 0.0.0.0/0 so that all internal addresses can access it.
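Before wiring anything up, it is worth confirming that a machine on the internal network can actually reach the instance. A minimal check, assuming redis-cli is installed on the client (add -a <password> if your instance requires auth; the hostname keeps the masked value used throughout this post):
[root@node-01 ~]# redis-cli -h r-****.redis.rds.aliyuncs.com -p 6379 ping
PONG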
1. Buying the servers will not be covered here.
2. Alibaba Cloud servers need little initialization; what matters is the security settings, then adding the machines to monitoring and the jump server.
(1) Set up the Java environment; it must be version 8, as version 7 may produce warnings.
[root@Ops-Elk-ES-01 ~]# yum -y install java-1.8.0
[root@Ops-Elk-ES-01 ~]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
(2) Install Elasticsearch
[root@Ops-Elk-ES-01 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@Ops-Elk-ES-01 ~]# cat /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@Ops-Elk-ES-01 ~]# yum install elasticsearch
(3) Configure the Elasticsearch cluster
[root@Ops-Elk-ES-01 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: es-cluster            # cluster name; must match on all nodes
node.name: "node-1"                 # node name; must be unique per node
path.data: /data/elk/data           # data directory
path.logs: /data/elk/logs           # log directory
bootstrap.memory_lock: true         # lock the heap in RAM, never swap (5.x name for the old bootstrap.mlockall)
network.host: 192.168.8.32          # bind address (this node's IP)
http.port: 9200                     # enable HTTP on port 9200
discovery.zen.ping.unicast.hosts: ["192.168.8.32", "192.168.8.33"]   # cluster nodes for discovery
[root@Ops-Elk-ES-01 ~]# mkdir -p /data/elk/{data,logs}
[root@Ops-Elk-ES-01 ~]# chown -R elasticsearch:elasticsearch /data/elk
[root@Ops-Elk-ES-01 ~]# systemctl start elasticsearch
For ES-02, only node.name and the IP address need to change.
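Note: with bootstrap.memory_lock: true, the elasticsearch user must be allowed to lock memory, otherwise ES may refuse to start and log a "memory is not locked" error. On an RPM/systemd install this is typically handled with a unit override; a sketch:
[root@Ops-Elk-ES-01 ~]# mkdir -p /etc/systemd/system/elasticsearch.service.d
[root@Ops-Elk-ES-01 ~]# cat > /etc/systemd/system/elasticsearch.service.d/override.conf <<'EOF'
[Service]
LimitMEMLOCK=infinity
EOF
[root@Ops-Elk-ES-01 ~]# systemctl daemon-reload
[root@Ops-Elk-ES-01 ~]# systemctl restart elasticsearch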
(4) Check the Elasticsearch cluster status
[root@Ops-Elk-ES-01 ~]# curl -XGET 'http://192.168.8.32:9200/_cat/nodes?v'
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.8.32            3          95   0    0.00    0.02     0.05 mdi       *      node-1
192.168.8.33            3          96   0    0.12    0.09     0.07 mdi       -      node-2
Operations APIs:
1. Cluster health: http://192.168.8.32:9200/_cluster/health?pretty
2. Node stats: http://192.168.8.32:9200/_nodes/process?pretty
3. Shard status: http://192.168.8.32:9200/_cat/shards
4. Index shard store info: http://192.168.8.32:9200/index/_shard_stores?pretty
5. Index stats: http://192.168.8.32:9200/index/_stats?pretty
6. Index metadata: http://192.168.8.32:9200/index?pretty
(Here "index" is a placeholder for an actual index name.)
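Of these, the health endpoint is the one worth scripting against. A quick check (the green status and node count below assume both nodes have joined; output truncated):
[root@Ops-Elk-ES-01 ~]# curl -s 'http://192.168.8.32:9200/_cluster/health?pretty'
{
  "cluster_name" : "es-cluster",
  "status" : "green",
  "number_of_nodes" : 2,
  ...
}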
(1) Set up the Java environment; it must be version 8, as version 7 may produce warnings.
[root@Ops-Elk-Logstash-01 ~]# yum -y install java-1.8.0
[root@Ops-Elk-Logstash-01 ~]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
(2) Install Logstash
[root@Ops-Elk-Logstash-01 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@Ops-Elk-Logstash-01 ~]# cat /etc/yum.repos.d/logstash.repo
[logstash-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@Ops-Elk-Logstash-01 ~]# yum install logstash -y
Logstash is now installed. In our pipeline it pulls the logs out of Redis, filters them, and writes them to ES; Logstash examples follow below.
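Before starting a pipeline it is a good habit to syntax-check it first. A sketch using the tomcat.conf written in the example further down (paths are the 5.x RPM defaults):
[root@Ops-Elk-Logstash-01 ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t -f /etc/logstash/conf.d/tomcat.conf
Configuration OK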
(1) Set up the Java environment; it must be version 8, as version 7 may produce warnings.
[root@Ops-Elk-Kibana-01 ~]# yum -y install java-1.8.0
[root@Ops-Elk-Kibana-01 ~]# java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
(2) Install Kibana
[root@Ops-Elk-Kibana-01 ~]# rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
[root@Ops-Elk-Kibana-01 ~]# cat /etc/yum.repos.d/kibana.repo
[kibana-5.x]
name=Kibana repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@Ops-Elk-Kibana-01 ~]# yum install kibana
(3) Configure Kibana
[root@Ops-Elk-Kibana-01 ~]# grep "^[a-z]" /etc/kibana/kibana.yml
server.port: 5601
server.host: "127.0.0.1"
elasticsearch.url: "http://192.168.8.32:9200"
kibana.index: ".kibana"
[root@Ops-Elk-Kibana-01 ~]# systemctl start kibana
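Since Kibana only listens on 127.0.0.1, a local curl is the quickest way to confirm it is up (the exact status line may be a 200 or a redirect to /app/kibana depending on the version):
[root@Ops-Elk-Kibana-01 ~]# curl -sI http://127.0.0.1:5601 | head -1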
(4) Add an Nginx reverse proxy
[root@Ops-Elk-Kibana-01 ~]# yum -y install nginx httpd-tools
[root@Ops-Elk-Kibana-01 ~]# cd /etc/nginx/conf.d/
[root@Ops-Elk-Kibana-01 conf.d]# htpasswd -cm /etc/nginx/kibana-user zhanghe
New password:
Re-type new password:
Adding password for user zhanghe
[root@Ops-Elk-Kibana-01 conf.d]# cat elk.ops.qq.com.conf
server {
    listen 80;
    server_name elk.ops.qq.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/kibana-user;

    location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
[root@Ops-Elk-Kibana-01 conf.d]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@Ops-Elk-Kibana-01 conf.d]# systemctl start nginx
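To verify the proxy and the basic auth before touching DNS, you can curl the local Nginx with a spoofed Host header; without credentials you should get a 401, with them a response from Kibana (the password is whatever you set with htpasswd above):
[root@Ops-Elk-Kibana-01 conf.d]# curl -s -o /dev/null -w '%{http_code}\n' -H 'Host: elk.ops.qq.com' http://127.0.0.1/
401
[root@Ops-Elk-Kibana-01 conf.d]# curl -s -o /dev/null -w '%{http_code}\n' -u zhanghe:<password> -H 'Host: elk.ops.qq.com' http://127.0.0.1/
200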
(5) Access Kibana
(1) Install Filebeat
[root@node-01 ~]# cat /etc/yum.repos.d/filebeat.repo
[elastic-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
[root@node-01 ~]# yum -y install filebeat
Here is a simple example.
Log collection flow:
Tomcat_filebeat -> Redis -> Logstash -> ES -> Kibana
(1) Configure Filebeat on the two Tomcat machines to write to Redis
[root@Tomcat-01 ~]# cat /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /usr/tomcat/apache-tomcat-7.0.78/logs/catalina.out
  document_type: tomcat-01
  multiline.pattern: '^2017-0'    # lines not starting with the date (e.g. stack traces)
  multiline.negate: true          # are appended to the previous event
  multiline.match: after
output.redis:
  hosts: ["r-****.redis.rds.aliyuncs.com:6379"]
  db: 0
  timeout: 5
  key: "tomcat-01"
[root@Tomcat-01 ~]# systemctl start filebeat
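To confirm events are actually reaching Redis, check the length of the list Filebeat pushes to; this can be run from any host with redis-cli (the count shown is illustrative, and it will drop back toward 0 once Logstash starts consuming):
[root@Tomcat-01 ~]# redis-cli -h r-****.redis.rds.aliyuncs.com llen tomcat-01
(integer) 128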
(2) On the Logstash machine, create a config file that reads the data out of Redis and writes it to ES
[root@Ops-Elk-Logstash-01 conf.d]# cat tomcat.conf
input {
  redis {
    type      => "tomcat-01"
    host      => "r-****.redis.rds.aliyuncs.com"
    port      => "6379"
    db        => "0"
    data_type => "list"
    key       => "tomcat-01"
  }
  redis {
    type      => "tomcat-02"
    host      => "r-****.redis.rds.aliyuncs.com"
    port      => "6379"
    db        => "0"
    data_type => "list"
    key       => "tomcat-02"
  }
}
output {
  if [type] == "tomcat-01" {
    elasticsearch {
      hosts => ["es01:9200","es02:9200"]
      index => "tomcat-01-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "tomcat-02" {
    elasticsearch {
      hosts => ["es01:9200","es02:9200"]
      index => "tomcat-02-%{+YYYY.MM.dd}"
    }
  }
}
[root@Ops-Elk-Logstash-01 conf.d]# systemctl restart logstash
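Once Logstash is running, the daily indices should appear in ES within a minute or so; a quick way to confirm:
[root@Ops-Elk-ES-01 ~]# curl -s 'http://192.168.8.32:9200/_cat/indices?v' | grep tomcat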
(3) Add the ES index in Kibana
Log collection flow:
Nginx_filebeat -> Redis -> Logstash -> ES -> Kibana
(1) Change the Nginx log format to JSON
Format 1:
log_format access2 '{"@timestamp":"$time_iso8601",'
                   '"host":"$server_addr",'
                   '"clientip":"$remote_addr",'
                   '"size":$body_bytes_sent,'
                   '"responsetime":$request_time,'
                   '"upstreamtime":"$upstream_response_time",'
                   '"upstreamhost":"$upstream_addr",'
                   '"http_host":"$host",'
                   '"url":"$request",'
                   '"domain":"$host",'
                   '"xff":"$http_x_forwarded_for",'
                   '"referer":"$http_referer",'
                   #'"user_agent":"$http_user_agent",'
                   '"status":"$status"}';
Format 2:
log_format access_log_json '{"user_ip":"$http_x_real_ip",'
                           '"lan_ip":"$remote_addr",'
                           '"log_time":"$time_iso8601",'
                           '"user_req":"$request",'
                           '"http_code":"$status",'
                           '"body_bytes_sent":"$body_bytes_sent",'
                           '"req_time":"$request_time",'
                           '"user_ua":"$http_user_agent"}';
Apply the log format:
access_log /var/www/logs/access.log access2;
Then restart Nginx.
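A quick way to apply and sanity-check the change (the prompt hostname is illustrative; python -m json.tool fails loudly on invalid JSON):
[root@Nginx-01 ~]# nginx -t
[root@Nginx-01 ~]# systemctl restart nginx
[root@Nginx-01 ~]# tail -n1 /var/www/logs/access.log | python -m json.tool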
(2) Configure Filebeat on the Nginx machine to write the logs to Redis
[root@Nginx-01 ~]# cat /etc/filebeat/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/www/logs/access.log
  document_type: nginx-01
  # the JSON access log is one event per line, so no multiline settings are needed here
output.redis:
  hosts: ["r-****.redis.rds.aliyuncs.com:6379"]
  db: 0
  timeout: 5
  key: "nginx-01"
[root@Nginx-01 ~]# systemctl start filebeat
(3) On the Logstash machine, create a config file that reads the data out of Redis and writes it to ES
[root@Ops-Elk-Logstash-01 conf.d]# cat nginx.conf
input {
  redis {
    type      => "nginx-01"
    host      => "r-****.redis.rds.aliyuncs.com"
    port      => "6379"
    db        => "0"
    data_type => "list"
    key       => "nginx-01"
  }
}
output {
  elasticsearch {
    hosts => ["es01:9200","es02:9200"]
    index => "logstash-nginx-s4-access-01-%{+YYYY.MM.dd}"
  }
}
[root@Ops-Elk-Logstash-01 conf.d]# systemctl restart logstash
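One caveat worth flagging: Filebeat ships each access-log line as a string in the event's message field, so the JSON keys from the Nginx format are not yet top-level fields in ES. A minimal sketch of a json filter that would parse them (not part of the original config):
filter {
  json {
    source => "message"   # parse the JSON access-log line into top-level event fields
  }
}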
(4) Add the ES index in Kibana
Ideally, you then build analysis charts in Kibana on top of the logs ELK collects. That's it for this walkthrough.