The architecture originally called for ELK to collect logs, but after a period of testing we dropped Logstash: its performance was poor and it consumed a lot of CPU and memory. Why not use a Flume agent to collect logs directly? This came down to a practical requirement. As is well known, Flume's directory collection (the spooling directory source) cannot handle files that are still changing, and once a file has been transferred it renames it with a .COMPLETED suffix. Whether collecting application logs or system logs, we did not want the original log files modified. In the end we collected logs with logstash-forwarder, a more lightweight collector written in Go. Its functionality is narrow; at present it can only be used to collect files.
1. Install logstash-forwarder.
https://github.com/elasticsearch/logstash-forwarder
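A minimal setup sketch (the server address, certificate path, and log paths below are placeholders, not values from this deployment): build the binary with Go, write a config.json describing where to ship logs and which files to watch, then start the forwarder.

git clone https://github.com/elasticsearch/logstash-forwarder.git
cd logstash-forwarder
go build -o logstash-forwarder
cat > config.json <<'EOF'
{
  "network": {
    "servers": [ "collector.example.com:5043" ],
    "ssl ca": "./logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    { "paths": [ "/var/log/*.log" ], "fields": { "type": "syslog" } }
  ]
}
EOF
./logstash-forwarder -config ./config.json

Unlike Flume's spooling directory source, the forwarder tails files in place and never renames them.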
2. Installing Flume is simple: just extract the package.
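A download-and-extract sketch, assuming Flume 1.6.0 (any flume-ng release installs the same way):

wget http://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz
tar zxf apache-flume-1.6.0-bin.tar.gz
cd apache-flume-1.6.0-bin
bin/flume-ng version        # verify the installation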
3. Install Elasticsearch. For this test only a single ES node is deployed.
Download https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.2.tar.gz
tar zxf elasticsearch-1.3.2.tar.gz
cd elasticsearch-1.3.2/config
You should see two files, elasticsearch.yml and logging.yml; create them if they are missing.
Edit elasticsearch.yml (vi elasticsearch.yml) and set the cluster name:
cluster.name: elasticsearch
Start Elasticsearch: bin/elasticsearch
Install the elasticsearch-head plugin: bin/plugin -install mobz/elasticsearch-head
The head UI is then available at http://master:9200/_plugin/head/
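A quick sanity check that the single node is up, using standard Elasticsearch REST endpoints:

curl http://master:9200                          # should return node and version info as JSON
curl http://master:9200/_cluster/health?pretty   # "status" will be yellow on a single node once replicated indices exist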
4. Install Kibana
Download: https://www.elastic.co/products/kibana
Extract kibana-4.1.1-linux-x64.tar.gz under Apache or Tomcat and point elasticsearch_url at the ES node:
cd kibana-4.1.1-linux-x64
vi config/kibana.yml
elasticsearch_url: "http://master:9200"
Start Kibana: bin/kibana
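To keep Kibana running in the background and confirm it is serving (Kibana 4 listens on port 5601 by default; the host name master is this setup's):

nohup bin/kibana > kibana.log 2>&1 &
curl -I http://master:5601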
5. Architecture diagram
In testing, a single agent reached an ingest rate of roughly 15,000 events/s.
During load testing, large batches sent by the agent could drive Flume to OOM, so the Flume JVM startup parameters were raised:
vi bin/flume-ng
JAVA_OPTS="-Xms2048m -Xmx2048m"
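Alternatively, the heap can be set in conf/flume-env.sh, which the flume-ng launcher sources from the --conf directory at startup (a sketch based on the template Flume ships):

cp conf/flume-env.sh.template conf/flume-env.sh
echo 'export JAVA_OPTS="-Xms2048m -Xmx2048m"' >> conf/flume-env.sh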
Configuration of the central Flume node:
# The configuration file needs to define the sources,
# the channels and the sinks.
# Sources, channels and sinks are defined per agent,
# in this case called 'a1'
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1

# sink group
a1.sinkgroups = g1

# For each one of the sources, the type is defined
a1.sources.r1.type = http
a1.sources.r1.bind = 192.168.137.118
a1.sources.r1.port = 5858
a1.sources.r1.channels = c1

# The channel can be defined as follows.
a1.channels.c1.type = SPILLABLEMEMORY
a1.channels.c1.checkpointDir = /home/hadoop/.flume/channel1/file-channel/checkpoint
a1.channels.c1.dataDirs = /home/hadoop/.flume/channel1/file-channel/data
a1.channels.c1.keep-alive = 30

# Each sink's type must be defined
# k1 sink
a1.sinks.k1.channel = c1
a1.sinks.k1.type = avro
# connect to CollectorMainAgent
a1.sinks.k1.hostname = 192.168.137.119
a1.sinks.k1.port = 5858

# k2 sink
a1.sinks.k2.channel = c1
a1.sinks.k2.type = avro
# connect to CollectorBackupAgent
a1.sinks.k2.hostname = 192.168.137.120
a1.sinks.k2.port = 5858

a1.sinkgroups.g1.sinks = k1 k2
# load_balance type
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = ROUND_ROBIN
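To run this agent and smoke-test the http source (the file name conf/a1.properties is an assumption; the JSON body follows the format the http source's default JSONHandler expects):

# save the configuration above as conf/a1.properties, then:
bin/flume-ng agent --conf conf --conf-file conf/a1.properties --name a1 -Dflume.root.logger=INFO,console

# post a test event to the http source:
curl -X POST -H 'Content-Type: application/json' \
  -d '[{"headers":{"host":"web01"},"body":"test event"}]' \
  http://192.168.137.118:5858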
References:
http://blog.qiniu.com/archives/3928
http://mp.weixin.qq.com/s?__biz=MzA5OTAyNzQ2OA==&mid=207036526&idx=1&sn=b0de410e0d1026cd100ac2658e093160&scene=23&srcid=10228P1jGvZC20dC2FGAdoqh#rd