Prerequisite: a Redis cluster is already set up, with a single shared access password.
The architecture is filebeat --> Redis cluster --> logstash --> elasticsearch; filebeat's output and logstash's input need to be changed.
filebeat host: 192.168.80.108
Redis cluster host: 192.168.80.107 (a pseudo-cluster, with all nodes on one machine)
```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/openresty/nginx/logs/host.access.log
  fields:
    log_source: messages
- type: log
  enabled: true
  paths:
    - /usr/local/openresty/nginx/logs/error.log
  fields:
    log_source: secure

output.redis:
  # List of Redis cluster node addresses
  hosts: ["192.168.80.107:7001","192.168.80.107:7002","192.168.80.107:7003","192.168.80.107:7004","192.168.80.107:7005","192.168.80.107:7006","192.168.80.107:7007","192.168.80.107:7008"]
  # The Redis key (list) to write to
  key: messages_secure
  password: foobar2000
  # Cluster mode only allows database 0; any other value raises an error
  db: 0
```
Log in:
```sh
# -h host, -p port, -c cluster mode, -a password
/elk/redis/redis-4.0.1/src/redis-cli -h 192.168.80.107 -c -p 7001 -a foobar2000
```
Check the data:
```
redis 127.0.0.1:7000[0]> keys *                    # this key appearing means filebeat's data has reached the Redis cluster
1) "messages_secure"
redis 127.0.0.1:7000[0]> llen messages_secure      # check the list length
(integer) 2002
redis 127.0.0.1:7000[0]> lindex messages_secure 0  # inspect one entry
```
Alternatively, inspect it with the RedisDesktopManager GUI client.
One issue observed: the Redis cluster shows two messages_secure keys holding identical data; this still needs further investigation.
```
input {
  redis { host => "192.168.80.107"  port => 7001  password => "foobar2000"  data_type => "list"  key => "messages_secure"  db => 0 }
  redis { host => "192.168.80.107"  port => 7002  password => "foobar2000"  data_type => "list"  key => "messages_secure"  db => 0 }
  redis { host => "192.168.80.107"  port => 7003  password => "foobar2000"  data_type => "list"  key => "messages_secure"  db => 0 }
  redis { host => "192.168.80.107"  port => 7004  password => "foobar2000"  data_type => "list"  key => "messages_secure"  db => 0 }
  redis { host => "192.168.80.107"  port => 7005  password => "foobar2000"  data_type => "list"  key => "messages_secure"  db => 0 }
  redis { host => "192.168.80.107"  port => 7006  password => "foobar2000"  data_type => "list"  key => "messages_secure"  db => 0 }
  redis { host => "192.168.80.107"  port => 7007  password => "foobar2000"  data_type => "list"  key => "messages_secure"  db => 0 }
  redis { host => "192.168.80.107"  port => 7008  password => "foobar2000"  data_type => "list"  key => "messages_secure"  db => 0 }
  redis { batch_count => 1  host => "192.168.80.107"  port => 7001  password => "foobar2000"  data_type => "list"  key => "messages_secure"  db => 0 }
}

# Output to elasticsearch, creating a different index per log source
output {
  if [fields][log_source] == "messages" {
    elasticsearch {
      hosts    => ["http://192.168.80.104:9200", "http://192.168.80.105:9200", "http://192.168.80.106:9200"]
      index    => "messages-%{+YYYY.MM.dd}"
      user     => "elastic"
      password => "elkstack123456"
    }
  }
  if [fields][log_source] == "secure" {
    elasticsearch {
      hosts    => ["http://192.168.80.104:9200", "http://192.168.80.105:9200", "http://192.168.80.106:9200"]
      index    => "secure-%{+YYYY.MM.dd}"
      user     => "elastic"
      password => "elkstack123456"
    }
  }
}
```
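The output conditionals route each event into a date-stamped index according to its [fields][log_source] value. As a sketch of that routing decision (the `index_for` function and its arguments are illustrative names, not Logstash API):

```python
from datetime import date

def index_for(event, day):
    """Mirror the logstash output conditionals: pick an index name per log_source."""
    src = event.get("fields", {}).get("log_source")
    if src == "messages":
        return f"messages-{day:%Y.%m.%d}"   # index => "messages-%{+YYYY.MM.dd}"
    if src == "secure":
        return f"secure-{day:%Y.%m.%d}"     # index => "secure-%{+YYYY.MM.dd}"
    return None  # matches neither conditional, so neither elasticsearch output fires

print(index_for({"fields": {"log_source": "secure"}}, date(2019, 7, 1)))  # → secure-2019.07.01
```

Note that an event whose log_source is neither value matches no conditional and is silently not indexed, which is worth keeping in mind when adding new inputs.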
Notes:
In the redis input, host is a string and cannot take a list, so every cluster node's address must be listed in its own redis block.
If only one Redis cluster node's address is configured, the errors below appear and logstash cannot pull any data from the Redis cluster:
```
Redis connection problem {:exception=>#<Redis::CommandError: CROSSSLOT Keys in request don't hash to the same slot>}
Redis connection problem {:exception=>#<Redis::CommandError: MOVED 7928 192.168.80.107:7002>}
```
If all the cluster nodes' addresses are listed, the same two messages still appear, but logstash does pull data from the Redis cluster.
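The MOVED and CROSSSLOT replies follow from how Redis Cluster shards keys: every key is hashed with CRC-16/XMODEM modulo 16384 slots, and each slot is owned by exactly one master, so a client that asks the wrong node is redirected to the owner (the MOVED reply above names slot 7928 and the node on port 7002). A minimal sketch of the slot computation, including {…} hash-tag handling (this is general Redis Cluster behavior, not something specific to this setup):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster hashes keys with."""
    crc = 0
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    """Map a key to one of Redis Cluster's 16384 slots; a non-empty {tag} is hashed instead."""
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:  # hash tag must be non-empty
            key = key[start + 1:end]
    return crc16_xmodem(key) % 16384

print(hash_slot(b"messages_secure"))  # a single fixed slot, hence a single owning master
```

Because messages_secure always hashes to the same slot, only the one master owning that slot holds the list; pointing logstash at any other node just yields MOVED redirections.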
A follow-on problem: because the Redis cluster stores two messages_secure keys, logstash pulls two identical copies of each event, so the data shipped to Elasticsearch is also duplicated; in Kibana every record appears twice.
The root cause is that filebeat writes duplicate data into the Redis cluster, pending the resolution of the issue above.
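Until the duplicate writes are fixed upstream, one common workaround (a sketch, not part of the original setup) is to make indexing idempotent: have logstash compute a fingerprint of each event and use it as the Elasticsearch document _id, so the second identical copy overwrites the first instead of creating a new document. The fingerprint filter and the elasticsearch output's document_id option are standard Logstash features; the key value here is an arbitrary placeholder:

```
filter {
  fingerprint {
    source => "message"
    target => "[@metadata][fingerprint]"
    method => "SHA1"
    key    => "dedup-key"   # with a key set, the filter computes an HMAC
  }
}
output {
  elasticsearch {
    hosts       => ["http://192.168.80.104:9200"]
    index       => "messages-%{+YYYY.MM.dd}"
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```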
The host parameter takes a string and does not support a list.
This input will read events from a Redis instance; it supports both Redis channels and lists. The list command (BLPOP) used by Logstash is supported in Redis v1.3.1+, and the channel commands used by Logstash are found in Redis v1.3.8+. While you may be able to make these Redis versions work, the best performance and stability will be found in more recent stable versions. Versions 2.6.0+ are recommended.
For more information about Redis, see http://redis.io/
Note: if you use the batch_count setting, you must use Redis version 2.6.0 or newer; anything older does not support the operations used by batching.
This plugin supports the following configuration options plus the Common Options described later.
| Setting | Input type | Required |
|---|---|---|
| batch_count | number | No |
| data_type | string, one of ["list", "channel", "pattern_channel"] | Yes |
| db | number | No |
| host | string | No |
| key | string | Yes |
| password | password | No |
| port | number | No |
| ssl | boolean | No |
| threads | number | No |
| timeout | number | No |
Also see Common Options for a list of options supported by all input plugins.
batch_count (number, default: 125)
The number of events to return from Redis using EVAL.

data_type (string, one of ["list", "channel", "pattern_channel"]; required)
Specify either list or channel. If data_type is list, then we will BLPOP the key. If data_type is channel, then we will SUBSCRIBE to the key. If data_type is pattern_channel, then we will PSUBSCRIBE to the key.

db (number, default: 0)
The Redis database number.

host (string, default: "127.0.0.1")
The hostname of your Redis server.

key (string, required)
The name of a Redis list or channel.

password (password, no default)
Password to authenticate with. There is no authentication by default.

port (number, default: 6379)
The port to connect on.

ssl (boolean, default: false)
Enable SSL support.

threads (number, default: 1)

timeout (number, default: 5)
Initial connection timeout in seconds.
The following configuration options are supported by all input plugins:
| Setting | Input type | Required |
|---|---|---|
| add_field | hash | No |
| codec | codec | No |
| enable_metric | boolean | No |
| id | string | No |
| tags | array | No |
| type | string | No |
add_field (hash, default: {})
Add a field to an event.

codec (codec, default: "plain")
The codec used for input data. Input codecs are a convenient method for decoding your data before it enters the input, without needing a separate filter in your Logstash pipeline.

enable_metric (boolean, default: true)
Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

id (string, no default)
Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 redis inputs. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
```
input {
  redis {
    id => "my_plugin_id"
  }
}
```
tags (array, no default)
Add any number of arbitrary tags to your event. This can help with processing later.
type (string, no default)
Add a type field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can also use the type to search for it in Kibana.
If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server.