Deploying and using ELK

1. Set up 8 virtual machines: es[1:5], kibana, logstash, and web, with IPs 192.168.1.61-68.

2. Configure the IP addresses and hostnames.

3. Deploy Elasticsearch with Ansible so that its web interface becomes reachable. The deployment playbook (YAML) is below:

---
- hosts: es
  remote_user: root
  tasks:
    - copy:
        src: local.repo
        dest: /etc/yum.repos.d/local.repo
        owner: root
        group: root
        mode: 0644
    - name: install elasticsearch
      yum:
        name: java-1.8.0-openjdk,elasticsearch
        state: installed
    - template:
        src: elasticsearch.yml
        dest: /etc/elasticsearch/elasticsearch.yml
        owner: root
        group: root
        mode: 0644
      notify: reload elasticsearch
      tags: esconf
    - service:
        name: elasticsearch
        enabled: yes
  handlers:
    - name: reload elasticsearch
      service:
        name: elasticsearch
        state: restarted
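For reference, the playbook targets a host group named es. The original inventory file is not shown, so the following is only a minimal sketch, assuming the default /etc/ansible/hosts location and the es[1:5] naming from step 1:

[es]
es[1:5]

Connectivity can be checked with an ad-hoc ping before running the playbook:

[root@room9pc01 ~]# ansible es -m ping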

4. The /etc/hosts file to modify:

192.168.1.61 es1
192.168.1.62 es2
192.168.1.63 es3
192.168.1.64 es4
192.168.1.65 es5
192.168.1.66 kibana
192.168.1.67 logstash

5. It is enough for the yum repository to contain the required packages and their dependencies.

6. The elasticsearch.yml template to modify:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: nsd1810
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: {{ansible_hostname}}
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.mlockall: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["es1", "es2", "es3"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true

7. Elasticsearch is now set up; verify it by opening http://192.168.1.61:9200, which returns JSON.
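The same check can be done from the command line with the standard Elasticsearch REST API:

[root@room9pc01 ~]# curl http://192.168.1.61:9200                              # basic node info as JSON
[root@room9pc01 ~]# curl 'http://192.168.1.61:9200/_cluster/health?pretty'    # cluster status: green/yellow/red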

8. Install plugins

A plugin can only be used on the machine where it is installed (here the plugins go on es5).

1) Plugins can be installed directly from a remote URI:

[root@es5 ~]# cd /usr/share/elasticsearch/bin
[root@es5 bin]# ./plugin install \
ftp://192.168.1.254/elk/elasticsearch-head-master.zip        // install the head plugin
[root@es5 bin]# ./plugin install \
ftp://192.168.1.254/elk/elasticsearch-kopf-master.zip        // install the kopf plugin
[root@es5 bin]# ./plugin install \
ftp://192.168.1.254/elk/bigdesk-master.zip                   // install the bigdesk plugin
[root@es5 bin]# ./plugin list                                // list the installed plugins
Installed plugins in /usr/share/elasticsearch/plugins:
    - head
    - kopf
    - bigdesk

2) Access the head plugin:

[root@room9pc01 ~]# firefox http://192.168.1.65:9200/_plugin/head

3) Access the kopf plugin:

[root@room9pc01 ~]# firefox http://192.168.1.65:9200/_plugin/kopf

4) Access the bigdesk plugin:

[root@room9pc01 ~]# firefox http://192.168.1.65:9200/_plugin/bigdesk
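A quick non-browser check is also possible through Elasticsearch's _cat API, which lists the plugins installed on each node:

[root@room9pc01 ~]# curl http://192.168.1.65:9200/_cat/plugins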

 

9. Install Kibana

1) On another host, set the IP to 192.168.1.66, configure the yum repository, and change the hostname.

2) Install Kibana:

[root@kibana ~]# yum -y install kibana
[root@kibana ~]# rpm -qc kibana
/opt/kibana/config/kibana.yml
[root@kibana ~]# vim /opt/kibana/config/kibana.yml
 2  server.port: 5601
    // If the port is changed to 80, Kibana appears to start, but ss shows nothing
    // listening on port 80; the port is effectively hard-coded in the service,
    // so only 5601 can be used.
 5  server.host: "0.0.0.0"                        // address the server listens on
15  elasticsearch.url: http://192.168.1.61:9200   // where to query; pick any node in the cluster
23  kibana.index: ".kibana"                       // the index Kibana creates for itself
26  kibana.defaultAppId: "discover"               // default page when Kibana opens
53  elasticsearch.pingTimeout: 1500               // ping timeout
57  elasticsearch.requestTimeout: 30000           // request timeout
64  elasticsearch.startupTimeout: 5000            // startup timeout
[root@kibana ~]# systemctl restart kibana
[root@kibana ~]# systemctl enable kibana
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /usr/lib/systemd/system/kibana.service.
[root@kibana ~]# ss -antup | grep 5601            // verify the listening port

3) Access Kibana from a browser:

[root@kibana ~]# firefox 192.168.1.66:5601

4) Click Status to check whether the installation succeeded; all-green check marks mean it did.
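If you prefer the command line, Kibana 4.x also exposes status information over HTTP; assuming this version provides the /api/status endpoint, something like:

[root@kibana ~]# curl -s http://192.168.1.66:5601/api/status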

5) Browsing the cluster through the head plugin now shows the .kibana index, as in the screenshot: [screenshot omitted]

[root@es5 ~]# firefox http://192.168.1.65:9200/_plugin/head/        # the plugins are installed on es5

10. Install the JDK and Logstash
[root@logstash ~]# yum -y install java-1.8.0-openjdk
[root@logstash ~]# yum -y install logstash
[root@logstash ~]# java -version
11. Write your own logstash.conf file; reference code is given below. The input, output, and filter plugins are documented on the Elastic site at https://www.elastic.co/guide/en/logstash/current/index.html (options marked "yes" there are required, and each option's id and example are listed together). Note that this config contains many test inputs and outputs and calls a predefined pattern (a macro). To locate the pattern definition:
[root@logstash ~]# cd /opt/logstash/vendor/bundle/ \
jruby/1.9/gems/logstash-patterns-core-2.0.5/patterns/
[root@logstash patterns]# vim grok-patterns        // search for COMBINEDAPACHELOG
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}
That is the path to the pattern files. To feed test data, write lines into /tmp/a.log, or send data with echo > /dev/tcp/192.168.1.67/8888. The syslog section can also be used; add the following to /etc/rsyslog.conf:

local0.info    @192.168.1.67:514
// either one @ or two @@ works: a single @ means UDP, @@ means TCP

Then run logger -p local0.info -t nds "001 elk". All of the operations above are collected and processed by Logstash, and the parsed results can be seen in its output. The pattern is invoked through the grok filter (the regex is complex and painful to write by hand, which is why the predefined pattern is used).
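To try the pattern quickly without a config file, Logstash's -e flag accepts the configuration on the command line. A minimal sketch (the sample log line here is made up, but matches the Apache combined format):

[root@logstash ~]# echo '127.0.0.1 - - [11/Dec/2018:10:00:00 +0800] "GET / HTTP/1.1" 200 12 "-" "curl/7.29.0"' | \
/opt/logstash/bin/logstash -e 'input{ stdin{} } filter{ grok{ match => ["message", "%{COMBINEDAPACHELOG}"] } } output{ stdout{ codec => "rubydebug" } }'

The full reference logstash.conf follows: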

input{
    stdin{ codec => "json" }
    beats{
        port => 5044
    }
    file {
        path => ["/tmp/a.log","/tmp/b.log"]
        sincedb_path => "/var/lib/logstash/sincedb"
        start_position => "beginning"
        type => "testlog"
    }
    tcp {
        host => "0.0.0.0"
        port => "8888"
        type => "tcplog"
    }
    udp {
        host => "0.0.0.0"
        port => "8888"
        type => "udplog"
    }
    syslog {
        type => "syslog"
    }
}
filter{
    grok {
        match => ["message", "%{COMBINEDAPACHELOG}"]
    }
}
output{
    stdout{
        codec => "rubydebug"
    }
    elasticsearch {
        hosts => ["es1", "es2", "es3"]
        index => "weblog"
        flush_size => 2000
        idle_flush_time => 10
    }
}

12. Install the Apache service, then use filebeat to collect the Apache server's logs and store them in Elasticsearch.

1) Install filebeat on the host where Apache was installed earlier:

[root@web ~]# yum -y install filebeat
[root@web ~]# vim /etc/filebeat/filebeat.yml
paths:
    - /var/log/httpd/access_log          // path to the log; the dash plus space is YAML list syntax
document_type: apachelog                 // document type
#elasticsearch:                          // comment this section out
#  hosts: ["localhost:9200"]             // comment out
logstash:                                // uncomment
  hosts: ["192.168.1.67:5044"]           // uncomment; IP of the logstash host
[root@web ~]# systemctl start filebeat

13. Then make sure the beats port configuration is present in /etc/logstash/logstash.conf (filebeat is a small shipper for Logstash: each server can send its data to the Logstash server automatically without having Logstash installed itself), as in the fragment below.
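The relevant fragment is the beats block that already appears in the reference config above; as a minimal standalone sketch:

input {
    beats {
        port => 5044        # port that filebeat ships events to
    }
}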

14. Once everything is configured, parse the data by running the following on the logstash host:

[root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf
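The config can also be syntax-checked before starting; in Logstash 2.x the flag is --configtest, short form -t (the version is an assumption, consistent with the 2.0.5 patterns path above):

[root@logstash ~]# /opt/logstash/bin/logstash -f /etc/logstash/logstash.conf --configtest
Configuration OK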

You can then run netstat -antup | grep 5044 and see two entries for port 5044 (netstat -lntup | grep 5044 shows only one, since -l lists only listening sockets; the second entry is the established connection from filebeat).

15. After the web server is visited, Logstash collects the data, the weblog index becomes visible in the Elasticsearch web UI, and entering weblog on the Kibana page shows a bar chart of the requests.

The weblog chart looks similar to the following (the names will differ): [screenshot omitted]

The .kibana chart looks like this: [screenshot omitted]

With too little data no pie chart can be drawn, and the charts may appear empty; refresh the web page a good number of times to generate more hits, for example with the loop below.
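A sketch for generating traffic, assuming the web host received 192.168.1.68 from the numbering in step 1:

[root@room9pc01 ~]# for i in {1..100}; do curl -s http://192.168.1.68/ > /dev/null; done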

 

The logstash.conf above parses httpd logs. For nginx or any other service you would call that service's pattern instead; an if statement can route by type, as in the following code, which is meant to provide some ideas:

input{
    stdin{ codec => "json" }
    beats{
        port => 5044
    }
    file {
        path => ["/tmp/a.log","/tmp/b.log"]
        sincedb_path => "/var/lib/logstash/sincedb"
        start_position => "beginning"
        type => "testlog"
    }
    tcp {
        host => "0.0.0.0"
        port => "8888"
        type => "tcplog"
    }
    udp {
        host => "0.0.0.0"
        port => "8888"
        type => "udplog"
    }
    syslog {
        type => "syslog"
    }
}
filter{
    if [type] == "httplog" {
        grok {
            match => ["message", "%{COMBINEDAPACHELOG}"]
        }
    }
}
output{
    stdout{
        codec => "rubydebug"
    }
    if [type] == "httplog" {
        elasticsearch {
            hosts => ["es1", "es2", "es3"]
            index => "weblog"
            flush_size => 2000
            idle_flush_time => 10
        }
    }
}
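As an aside on the nginx case: grok can load extra patterns from a custom directory via its patterns_dir option. A minimal sketch, where /opt/logstash/patterns and MY_NGINX_LOG are hypothetical names you would define yourself in a pattern file:

filter{
    if [type] == "nginxlog" {
        grok {
            patterns_dir => ["/opt/logstash/patterns"]    # directory holding custom pattern files
            match => ["message", "%{MY_NGINX_LOG}"]       # hypothetical custom pattern
        }
    }
}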

The overall ELK flow: clients visit the web server; the web server's filebeat sends the data to Logstash; Logstash parses it and forwards it to Elasticsearch for indexing and storage; Kibana then pulls the data from Elasticsearch and presents it on the web as charts.
