As everyone knows, ELK is a log-collection suite, so I won't spend much time introducing it here.
I drew a rough architecture diagram, shown below:
This setup uses three nodes. The OS is CentOS 6.6, Elasticsearch is 2.3.5, Logstash is 2.4.0, Kibana is 4.5.4-1, and Nginx is 1.8.1.
192.168.3.56  ES01 + logstash01 + kibana + redis + nginx
192.168.3.49  ES02 + logstash02
192.168.3.57  ES03
1. Install the Java environment on all three nodes
# yum install -y java java-1.8.0-openjdk-devel
# vim /etc/profile.d/java.sh
export JAVA_HOME=/usr
# source /etc/profile.d/java.sh
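To confirm the JDK is installed and the environment variable took effect, a quick optional check:
# java -version
# echo $JAVA_HOME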
2. Synchronize time on all three nodes
# ntpdate pool.ntp.org
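ntpdate only adjusts the clock once. If you want the three nodes to stay in sync, one option (my addition, not part of the original steps) is a cron entry on each node, assuming ntpdate lives at /usr/sbin/ntpdate as it does on CentOS 6:
# crontab -e
*/30 * * * * /usr/sbin/ntpdate pool.ntp.org > /dev/null 2>&1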
3. Install the Elasticsearch cluster. Configuring the cluster is simple: the three nodes just need to share the same cluster name. The RPM packages were downloaded from the official site in advance.
Node 1, ES01:
# yum install -y elasticsearch-2.3.5.rpm
# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: oupenges
node.name: es01
network.host: 192.168.3.56
discovery.zen.ping.unicast.hosts: ["192.168.3.56", "192.168.3.49", "192.168.3.57"]
Node 2, ES02:
# yum install -y elasticsearch-2.3.5.rpm
# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: oupenges
node.name: es02
network.host: 192.168.3.49
discovery.zen.ping.unicast.hosts: ["192.168.3.56", "192.168.3.49", "192.168.3.57"]
Node 3, ES03:
# yum install -y elasticsearch-2.3.5.rpm
# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: oupenges
node.name: es03
network.host: 192.168.3.57
discovery.zen.ping.unicast.hosts: ["192.168.3.56", "192.168.3.49", "192.168.3.57"]
Start the service on each node:
# service elasticsearch start
# chkconfig elasticsearch on
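An optional quick check that each node really came up is to verify that the HTTP (9200) and transport (9300) ports are listening:
# netstat -tnlp | grep -E '9200|9300'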
Check the cluster status through the cluster API:
# curl -XGET 'http://192.168.3.56:9200/_cluster/health?pretty=true'
{
  "cluster_name" : "oupenges",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 56,
  "active_shards" : 112,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
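The _cat API also gives a per-node summary, which is handy before the head plugin is installed; the master column marks the elected master:
# curl -XGET 'http://192.168.3.56:9200/_cat/nodes?v'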
4. Install the head plugin on all three ES nodes
# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
Access head in a browser (http://192.168.3.56:9200/_plugin/head/):
This screenshot shows the state after all components were installed; I won't paste any more head screenshots later on.
A star marks the master node.
A circle marks a slave node.
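If you prefer the command line over the head UI, the elected master can also be read from the _cat API:
# curl -XGET 'http://192.168.3.56:9200/_cat/master?v'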
5. Install logstash01 on node 1
# yum install logstash-2.4.0.noarch.rpm
Verify Logstash from the command line:
stdin --> stdout
# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => "rubydebug" } }'
Settings: Default pipeline workers: 12
Pipeline main started
hello
{
       "message" => "hello",
      "@version" => "1",
    "@timestamp" => "2017-06-20T03:09:21.113Z",
          "host" => "uy-s-167"
}
stdin --> Elasticsearch
# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.3.56:9200"] index => "test" } }'
Settings: Default pipeline workers: 12
Pipeline main started
hello
hi opera
The timestamps and content show that the two entries in the red boxes are the two messages I just entered.
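Since the head screenshot may not render here, an equivalent command-line check that the two messages landed in the test index is:
# curl -XGET 'http://192.168.3.56:9200/test/_search?pretty'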
6. Install Kibana
# yum install -y kibana-4.5.4-1.x86_64.rpm
# vim /opt/kibana/config/kibana.yml
elasticsearch.url: "http://192.168.3.56:9200"
# service kibana start
# chkconfig kibana on
Access http://192.168.3.56:5601 in a browser.
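If the page does not come up, an optional sanity check from the shell is to confirm Kibana is listening and answering HTTP:
# netstat -tnlp | grep 5601
# curl -I http://192.168.3.56:5601      # should return HTTP headers (200 or a redirect)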
7. Install Redis
# yum install -y redis
# vim /etc/redis.conf
daemonize yes
bind 192.168.3.56
appendonly yes
# service redis start
# chkconfig redis on
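A quick check that Redis is reachable on the bound address; -h is needed because Redis is not listening on 127.0.0.1:
# redis-cli -h 192.168.3.56 ping
PONG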
8. Install Nginx, proxy Kibana through it, and add basic authentication
# wget http://nginx.org/download/nginx-1.8.1.tar.gz
# tar xvf nginx-1.8.1.tar.gz
# yum groupinstall -y "Development tools"
# cd nginx-1.8.1/
# ./configure --prefix=/usr/local/nginx --sbin-path=/usr/local/nginx/sbin/nginx --conf-path=/usr/local/nginx/conf/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock --user=nginx --group=nginx --with-http_ssl_module --with-http_stub_status_module --with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client/ --http-proxy-temp-path=/var/tmp/nginx/proxy/ --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi --http-scgi-temp-path=/var/tmp/nginx/scgi --with-pcre
# make && make install
# mkdir -pv /var/tmp/nginx/client/
# /usr/local/nginx/sbin/nginx
# vim /usr/local/nginx/conf/nginx.conf
Add a server block inside the http block:
server {
    listen 8080;
    server_name 192.168.3.56;                                     # this host
    auth_basic "Restricted Access";
    auth_basic_user_file /usr/local/nginx/conf/htpasswd.users;    # basic authentication
    location / {
        proxy_pass http://192.168.3.56:5601;                      # proxy to Kibana
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
# yum install -y httpd-tools
# htpasswd -bc /usr/local/nginx/conf/htpasswd.users admin admin
# cat /usr/local/nginx/conf/htpasswd.users
admin:TvypNSDg6V3Rc
# /usr/local/nginx/sbin/nginx -t
# /usr/local/nginx/sbin/nginx -s reload
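To confirm that both the proxy and the authentication work, an optional check with the admin/admin account created above:
# curl -I http://192.168.3.56:8080/                  # expect 401 Unauthorized
# curl -u admin:admin -I http://192.168.3.56:8080/   # expect an answer from Kibana (200 or a redirect)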
9. Change the Nginx log format to JSON
# vim /usr/local/nginx/conf/nginx.conf
log_format access1 '{"@timestamp":"$time_iso8601",'
                   '"host":"$server_addr",'
                   '"clientip":"$remote_addr",'
                   '"size":$body_bytes_sent,'
                   '"responsetime":$request_time,'
                   '"upstreamtime":"$upstream_response_time",'
                   '"upstreamhost":"$upstream_addr",'
                   '"http_host":"$host",'
                   '"url":"$uri",'
                   '"domain":"$host",'
                   '"xff":"$http_x_forwarded_for",'
                   '"referer":"$http_referer",'
                   '"status":"$status"}';
access_log /var/log/nginx/access.log access1;
# /usr/local/nginx/sbin/nginx -t
# /usr/local/nginx/sbin/nginx -s reload
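To verify that each access-log line really is valid JSON, an optional check is to hit the site once and pipe the newest line through Python's JSON parser (present by default on CentOS 6):
# curl -s -u admin:admin http://192.168.3.56:8080/ > /dev/null
# tail -1 /var/log/nginx/access.log | python -m json.tool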
10. Install Filebeat on the machine whose logs need to be collected, i.e. the Nginx server
# yum install -y filebeat-1.2.3-x86_64.rpm
# mv /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.bak
# vim /etc/filebeat/filebeat.yml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/messages
      input_type: log
      document_type: nginxs1-system-message
    -
      paths:
        - /var/log/nginx/access.log
      input_type: log
      document_type: nginxs1-access-log
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["192.168.3.56:5044"]
  file:
    path: "/tmp/"
    filename: filebeat.txt
shipper:
logging:
  to_files: true
  files:
    path: /tmp/mybeat
# service filebeat start
# chkconfig filebeat on
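Because the configuration above also keeps a local file output under /tmp, you can check that Filebeat is picking up events before the Logstash side is even configured:
# tail -1 /tmp/filebeat.txt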
11. Configure logstash01 to receive the logs shipped by Filebeat and output them to Redis
# vim /etc/logstash/conf.d/nginx.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
}
output {
  if [type] == "nginxs1-system-message" {
    redis {
      data_type => "list"
      key => "nginxs1-system-message"
      host => "192.168.3.56"
      port => "6379"
      db => "0"
    }
  }
  if [type] == "nginxs1-access-log" {
    redis {
      data_type => "list"
      key => "nginxs1-access-log"
      host => "192.168.3.56"
      port => "6379"
      db => "0"
    }
  }
  file {
    path => "/tmp/nginx-%{+YYYY-MM-dd}messages.gz"
  }
}
# /etc/init.d/logstash configtest
# service logstash restart
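To see whether events are actually reaching Redis, you can check the length of the two list keys; they will keep growing until logstash02 starts consuming them in step 13:
# redis-cli -h 192.168.3.56 -n 0 llen nginxs1-access-log
# redis-cli -h 192.168.3.56 -n 0 llen nginxs1-system-message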
12. Install logstash02 on node 2
# yum install logstash-2.4.0.noarch.rpm
13. Configure logstash02 to read the logs from Redis and output them to Elasticsearch
# vim /etc/logstash/conf.d/redis-to-es.conf
input {
  redis {
    host => "192.168.3.56"
    port => "6379"
    db => "0"
    key => "nginxs1-system-message"
    data_type => "list"
    batch_count => 1
  }
  redis {
    host => "192.168.3.56"
    port => "6379"
    db => "0"
    key => "nginxs1-access-log"
    data_type => "list"
    codec => "json"
    batch_count => 1
  }
}
output {
  if [type] == "nginxs1-system-message" {
    elasticsearch {
      hosts => ["192.168.3.56:9200"]
      index => "nginxs1-system-message-%{+YYYY.MM.dd}"
      manage_template => true
      flush_size => 2000
      idle_flush_time => 10
    }
  }
  if [type] == "nginxs1-access-log" {
    elasticsearch {
      hosts => ["192.168.3.56:9200"]
      index => "logstash-nginxs1-access-log-%{+YYYY.MM.dd}"
      manage_template => true
      flush_size => 2000
      idle_flush_time => 10
    }
  }
}
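Once this pipeline is running, the daily indices should show up in Elasticsearch; a quick check:
# curl -s -XGET 'http://192.168.3.56:9200/_cat/indices?v' | grep nginxs1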
14. Log in to Kibana and configure it
After creating index patterns for the indices defined in step 13 (for example logstash-nginxs1-access-log-*), you can see the Nginx logs in Discover.
In Visualize you can build all kinds of charts; I won't go into detail here.
Here is a very simple dashboard I put together: