Configuration example (stdout with the rubydebug codec):
output {
    stdout {
        codec => rubydebug
    }
}
The name codec is a contraction of the two words coder/decoder. Logstash is not merely an input | filter | output pipeline; it is really an input | decode | filter | encode | output pipeline, and codecs are what decode and encode events.
In short, on the way in, a codec decodes the raw input into events of the appropriate format; on the way out, a codec encodes events into the desired output format.
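The decode/encode split can be sketched in Ruby (Logstash itself runs on JRuby). This is an illustrative toy, not the actual codec plugin API: `decode` and `encode` here are hypothetical stand-ins.

```ruby
require 'json'

# Toy codec: a decoder turns raw input into an event (a hash),
# an encoder serializes an event back to an output format.
def decode(line)
  { "message" => line.chomp }   # plain-text input -> event
end

def encode(event)
  JSON.generate(event)          # event -> JSON on the way out
end

event = decode("hello logstash\n")
puts encode(event)              # => {"message":"hello logstash"}
```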
Demo:
input { stdin {} }
output {
    stdout {
        codec => rubydebug
    }
}
Start it: bin/logstash -f /usr/local/elk/logstash-5.5.2/conf/template/stdout.conf
Result:
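With the rubydebug codec, each line typed at the console comes back as a pretty-printed Ruby hash, roughly like the following (timestamp and host will of course differ on your machine):

```
{
    "@timestamp" => 2018-04-10T00:44:11.000Z,
      "@version" => "1",
          "host" => "hadoop01",
       "message" => "hello"
}
```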
Using a log-collection system to gather data scattered across hundreds of servers into centralized storage on one central server is the most basic operations requirement.
Requirement: write the collected data into log files, separated by business line and by collection date (which day the data was collected).
input { stdin {} }
output {
    file {
        path => "/home/angel/logstash-5.5.2/logs/stdout/mobile-collection/%{+YYYY-MM-dd}-%{host}.txt"
        codec => line {
            format => "%{message}"
        }
        gzip => true
    }
}
Start it:
bin/logstash -f /home/angel/servers/logstash-5.5.2/logstash_conf/stdout_file.conf
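The path template above mixes two kinds of sprintf-style references: `%{+YYYY-MM-dd}` is formatted from the event's timestamp, while `%{host}` is read from the event's fields. A rough Ruby sketch of that expansion (the `interpolate` helper and its crude Joda-to-strftime mapping are simplifications for illustration, not Logstash's implementation):

```ruby
require 'time'

# Expand %{+FORMAT} from the event timestamp and %{field} from event fields.
def interpolate(template, event)
  template.gsub(/%\{([^}]+)\}/) do
    ref = Regexp.last_match(1)
    if ref.start_with?('+')
      # crude Joda-time -> strftime mapping, enough for YYYY-MM-dd
      fmt = ref[1..-1].gsub('YYYY', '%Y').gsub('MM', '%m').gsub('dd', '%d')
      event['@timestamp'].strftime(fmt)
    else
      event[ref].to_s
    end
  end
end

event = { '@timestamp' => Time.utc(2018, 4, 10), 'host' => 'hadoop01' }
puts interpolate('/logs/%{+YYYY-MM-dd}-%{host}.txt', event)
# => /logs/2018-04-10-hadoop01.txt
```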
Logstash can also sink the collected events directly into Elasticsearch.
input { stdin {} }
output {
    elasticsearch {
        hosts => ["hadoop01:9200"]
        index => "logstash-%{+YYYY.MM.dd}" # the index name under which events are stored in Elasticsearch; naming matters, since we will very likely query later: include the date, and use the type to mark different businesses, so that querying a given day's data becomes a type + time range query
        flush_size => 20000 # bulk-submit to ES only once this many events have accumulated (default 500)
        idle_flush_time => 10 # flush at least every N seconds; together flush_size and idle_flush_time send batches by size or by time, reducing Logstash's network IO
        user => elastic
        password => changeme
    }
}
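flush_size and idle_flush_time together implement a "flush by size or by time" policy. A simplified in-memory sketch of that policy (this is not the plugin's actual code; `BatchBuffer` is invented for illustration):

```ruby
# Buffer events; flush when the batch is full OR the buffer has been
# idle past the timeout -- the flush_size / idle_flush_time semantics.
class BatchBuffer
  attr_reader :flushes

  def initialize(flush_size:, idle_flush_time:)
    @flush_size = flush_size
    @idle_flush_time = idle_flush_time
    @buffer = []
    @last_flush = Time.now
    @flushes = []
  end

  def push(event)
    @buffer << event
    flush if @buffer.size >= @flush_size ||
             Time.now - @last_flush >= @idle_flush_time
  end

  def flush
    return if @buffer.empty?
    @flushes << @buffer      # stands in for one bulk request to ES
    @buffer = []
    @last_flush = Time.now
  end
end

buf = BatchBuffer.new(flush_size: 3, idle_flush_time: 10)
5.times { |i| buf.push("event-#{i}") }
buf.flush                    # final flush, e.g. on shutdown
p buf.flushes.map(&:size)    # => [3, 2]
```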
Start it: bin/logstash -f /usr/local/elk/logstash-5.5.2/conf/template/stdout_es.conf
Enter six lines of data at the console:
192.168.77.1 - - [10/Apr/2018:00:44:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 505 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.2 - - [10/Apr/2018:00:45:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 460 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.3 - - [10/Apr/2018:00:46:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 510 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.4 - - [10/Apr/2018:00:47:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 112 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.5 - - [10/Apr/2018:00:48:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 455 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.6 - - [10/Apr/2018:00:49:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 653 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
Redis output configuration:
input { stdin {} }
output {
    redis {
        host => "hadoop01"
        data_type => "list"
        db => 2
        port => "6379"
        key => "logstash-chan-%{+yyyy.MM.dd}"
    }
}
Tuning the redis sink:
• Batching (only when data_type is list)
o batch: set to true to store a whole batch of events with a single rpush command
o default false: one rpush command stores one event
o when true, one rpush carries batch_events events
o batch_events: how many events per rpush
o default: 50
o batch_timeout: the maximum number of seconds one rpush batch may wait
o default: 5s
• Congestion protection (only when data_type is list)
o congestion_interval: how often to run a congestion check
o default: 1s
o set to 0 to check before every rpush
o congestion_threshold: the maximum number of items the list may hold
o default 0: congestion checking disabled
o once the list reaches congestion_threshold, pushes block until other consumers drain the list
o purpose: protect redis from running out of memory (OOM)
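The effect of batch => true is simply fewer, fatter RPUSH commands. A sketch of how batch_events splits an event stream into commands (in-memory stand-in; a real RPUSH of course goes to redis):

```ruby
# With batch enabled, up to batch_events values travel in ONE RPUSH.
events = (1..120).map { |i| "event-#{i}" }
batch_events = 50

commands = events.each_slice(batch_events).map do |slice|
  ["RPUSH", "logstash-chan", *slice]   # one command, many values
end

# values carried per RPUSH command:
p commands.map { |c| c.size - 2 }      # => [50, 50, 20]
```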
Start redis, then feed data into the Logstash console:
192.168.77.1 - - [10/Apr/2018:00:44:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 505 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.2 - - [10/Apr/2018:00:45:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 460 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.3 - - [10/Apr/2018:00:46:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 510 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.4 - - [10/Apr/2018:00:47:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 112 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.5 - - [10/Apr/2018:00:48:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 455 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
192.168.77.6 - - [10/Apr/2018:00:49:11 +0800] "POST /api/metrics/vis/data HTTP/1.1" 200 653 "http://hadoop01/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36"
Authenticate on redis and check whether the data has been stored:
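A typical check from redis-cli might look like this (an interactive session against the hadoop01 server configured above; the key's date suffix depends on the day you ran the demo, so `<date>` is a placeholder):

```
redis-cli -h hadoop01 -p 6379
> AUTH <password>                    # only if requirepass is set on this server
> SELECT 2                           # the output wrote to db 2
> KEYS logstash-chan-*               # locate the day's key
> LLEN logstash-chan-<date>          # should report 6 stored events
> LRANGE logstash-chan-<date> 0 -1   # dump the raw events
```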