Sink groups allow multiple sinks to be organized into a single entity. Sink processors provide the ability to load-balance across all sinks in the group, or to fail over from one sink to another when a sink fails.
Simply put, one source corresponds to one sink group, i.e., multiple sinks. This looks much like the replicating/multiplexing case in Section 6, except that here the concern is reliability and performance: failover and load balancing.
Here is the official configuration:
| Property Name | Default | Description |
| --- | --- | --- |
| sinks | – | Space-separated list of sinks that are participating in the group |
| processor.type | default | The component type name, needs to be default, failover or load_balance |
From the processor.type parameter you can see there are three processor types: default, failover, and load_balance. Note that, per the official docs, custom processors are not yet supported.
Here is the example from the official documentation:
a1.sinkgroups=g1
a1.sinkgroups.g1.sinks=k1 k2
a1.sinkgroups.g1.processor.type=load_balance
The Default Sink Processor accepts only a single sink. The user is not forced to create a processor for a single sink; many earlier examples worked this way, so there is not much more to say about it.
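As a minimal sketch of that default case (component names are illustrative), a single sink simply binds to a channel, with no sink group or processor declared:
a1.sources = r1
a1.channels = c1
a1.sinks = k1
# logger sink bound directly to its channel; the default processor is used implicitly
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1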
The Failover Sink Processor maintains a prioritized list of sinks, set up via configuration, guaranteeing that every event is processed as long as any sink is available.
Failover works by relegating failed sinks to a pool, where they are assigned a cool-down period during which the sink does nothing; once a sink successfully sends an event, it is restored to the live pool.
To use it, set the sink group's processor type to failover and assign a priority to every sink. All priority numbers must be unique; pay particular attention to this. In addition, an upper bound on the failover time can be set via the maxpenalty property.
Here is the official configuration:
| Property Name | Default | Description |
| --- | --- | --- |
| sinks | – | Space-separated list of sinks that are participating in the group |
| processor.type | default | The component type name, needs to be failover |
| processor.priority.<sinkName> | – | Priority value. <sinkName> must be one of the sink instances associated with the current sink group; a larger value means higher priority |
| processor.maxpenalty | 30000 | Upper bound of the failover penalty for a failed sink (in millis) |
Here is the example from the official documentation:
a1.sinkgroups=g1
a1.sinkgroups.g1.sinks=k1 k2
a1.sinkgroups.g1.processor.type=failover
a1.sinkgroups.g1.processor.priority.k1=5
a1.sinkgroups.g1.processor.priority.k2=10
a1.sinkgroups.g1.processor.maxpenalty=10000
Here we first declare a sink group, then add two sinks, k1 and k2, with priorities 5 and 10 respectively; the processor's maxpenalty is set to 10 seconds (the default is 30 seconds).
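For intuition on what maxpenalty bounds, here is a sketch based on my reading of the Flume 1.x FailoverSinkProcessor source (treat the exact schedule as an assumption, not documented behavior): a sink is benched for min(maxpenalty, 2^N * 1000) ms after its N-th consecutive failure, so with this setting the waits grow to 2s, 4s, 8s, then stay capped at 10s.
# assumed penalty schedule: min(maxpenalty, 2^N * 1000) ms after N consecutive failures
a1.sinkgroups.g1.processor.maxpenalty = 10000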
Here is a test example:
# Configuration file: failover_sink_case13.conf
# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 5
a1.sinkgroups.g1.processor.priority.k2 = 10
a1.sinkgroups.g1.processor.maxpenalty = 10000
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 50000
a1.sources.r1.host = 192.168.233.128
a1.sources.r1.channels = c1 c2
# Describe the sinks
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = 192.168.233.129
a1.sinks.k1.port = 50000
a1.sinks.k2.type = avro
a1.sinks.k2.channel = c2
a1.sinks.k2.hostname = 192.168.233.130
a1.sinks.k2.port = 50000
# Use channels which buffer events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# c2 must be configured as well, otherwise the agent fails to start (k2 is bound to it)
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100
Here we set up two channels and two sinks; the failover settings are copied straight from the official example. We also need agents on the receiving side of the two sinks; we reuse the two sink agent configurations from the replication case in Section 6.
Here is the configuration of the first receiving agent:
# Configuration file: replicate_sink1_case11.conf
# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1
# Describe/configure the source
a2.sources.r1.type = avro
a2.sources.r1.channels = c1
a2.sources.r1.bind = 192.168.233.129
a2.sources.r1.port = 50000
# Describe the sink
a2.sinks.k1.type = logger
a2.sinks.k1.channel = c1
# Use a channel which buffers events in memory
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100
Here is the configuration of the second receiving agent:
# Configuration file: replicate_sink2_case11.conf
# Name the components on this agent
a3.sources = r1
a3.sinks = k1
a3.channels = c1
# Describe/configure the source
a3.sources.r1.type = avro
a3.sources.r1.channels = c1
a3.sources.r1.bind = 192.168.233.130
a3.sources.r1.port = 50000
# Describe the sink
a3.sinks.k1.type = logger
a3.sinks.k1.channel = c1
# Use a channel which buffers events in memory
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100
# Run the commands
Start the two receiving agents first; if you start the sending agent first, it will report that it cannot find its sink bindings, because the two receiving agents are not up yet.
flume-ng agent -c conf -f conf/replicate_sink1_case11.conf -n a2 -Dflume.root.logger=INFO,console
flume-ng agent -c conf -f conf/replicate_sink2_case11.conf -n a3 -Dflume.root.logger=INFO,console
Then start the sending agent:
flume-ng agent -c conf -f conf/failover_sink_case13.conf -n a1 -Dflume.root.logger=INFO,console
Once everything is up, open another terminal and send data to the listening port:
echo "hello failoversink" | nc 192.168.233.128 50000
# Check the console output on the receiving agents' terminals
Since k1's priority is 5 and k2's is 10, events are delivered to k2 while it is running normally, and the data shows up on k2's receiving agent console.
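As an aside, priorities are "larger value wins"; to make k1 the active sink instead, swap the values, as in this sketch against the same group:
a1.sinkgroups.g1.processor.priority.k1 = 10
a1.sinkgroups.g1.processor.priority.k2 = 5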
Now kill the k2 receiving agent's process and send data to the listening port again:
echo "hello close k2" | nc 192.168.233.128 50000
The sending agent fails to deliver the event to k2, and so puts k2 on the failover list.
Since k1 is still running, it receives the data instead.
Then restart the k2 receiving agent and send more data to the listening port:
echo "hello open k2 again" | nc 192.168.233.128 50000
The data is delivered normally again; the Failover Sink Processor test is complete.
The Load Balancing Sink Processor provides the ability to load-balance flow over multiple sinks. It distributes the load using either a round_robin or random selection mechanism; round_robin is the default, but this can be overridden via configuration. You can also implement your own selection mechanism by extending the AbstractSinkSelector class.
When invoked, the selector picks the next sink to call according to the configured selection policy.
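As a sketch of that extension point (com.example.flume.MySinkSelector is a hypothetical class name), the processor.selector property also accepts the fully-qualified class name of a class that extends AbstractSinkSelector:
a1.sinkgroups.g1.processor.type = load_balance
# hypothetical custom selector, supplied by its FQCN
a1.sinkgroups.g1.processor.selector = com.example.flume.MySinkSelector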
Here is the official configuration:
| Property Name | Default | Description |
| --- | --- | --- |
| processor.sinks | – | Space-separated list of sinks that are participating in the group |
| processor.type | default | The component type name, needs to be load_balance |
| processor.backoff | false | Should failed sinks be backed off exponentially. |
| processor.selector | round_robin | Selection mechanism. Must be either round_robin, random or FQCN of custom class that inherits from AbstractSinkSelector |
| processor.selector.maxTimeOut | 30000 | Used by backoff selectors to limit exponential backoff (in milliseconds) |
Here is the example from the official documentation:
a1.sinkgroups=g1
a1.sinkgroups.g1.sinks=k1 k2
a1.sinkgroups.g1.processor.type=load_balance
a1.sinkgroups.g1.processor.backoff=true
a1.sinkgroups.g1.processor.selector=random
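Since backoff is enabled in that example, the exponential backoff applied to failed sinks can additionally be capped via the selector timeout from the table above; a sketch (the value shown is just the documented default):
a1.sinkgroups.g1.processor.selector.maxTimeOut = 30000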
This is much the same as the failover setup.
Here is a test example:
# Configuration file: load_sink_case14.conf
# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = round_robin
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 50000
a1.sources.r1.host = 192.168.233.128
a1.sources.r1.channels = c1
# Describe the sinks
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = 192.168.233.129
a1.sinks.k1.port = 50000
a1.sinks.k2.type = avro
a1.sinks.k2.channel = c1
a1.sinks.k2.hostname = 192.168.233.130
a1.sinks.k2.port = 50000
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
Note that since this test is about load balancing, a single channel is used as the transport here, with both sinks draining it. For the receiving agents behind the two sinks, we reuse the receiving agent configurations from the failover test.
# Run the commands
Start the two receiving agents first; if you start the sending agent first, it will report that it cannot find its sink bindings, because the two receiving agents are not up yet.
flume-ng agent -c conf -f conf/replicate_sink1_case11.conf -n a2 -Dflume.root.logger=INFO,console
flume-ng agent -c conf -f conf/replicate_sink2_case11.conf -n a3 -Dflume.root.logger=INFO,console
Then start the sending agent:
flume-ng agent -c conf -f conf/load_sink_case14.conf -n a1 -Dflume.root.logger=INFO,console
Once everything is up, open another terminal and send data to the listening port:
echo "loadbanlancetest1" | nc 192.168.233.128 50000
echo "loadbantest2" | nc 192.168.233.128 50000
echo "loadban test3"| nc 192.168.233.128 50000
echo "loadbantest4" | nc 192.168.233.128 50000
echo "loadbantest5" | nc 192.168.233.128 50000
# Check the console output on the receiving agents' terminals
Here k1 received 3 of the events and k2 received 2.
Since the load-balancing selector we chose is round_robin, the agent rotates between the sinks, which is why the five events split almost evenly between k1 and k2.
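To watch the rotation over a larger sample, a quick shell loop against the same listener works (same host and port as above):
for i in $(seq 1 10); do echo "loadbalance test $i" | nc 192.168.233.128 50000; done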
The Sink Processors test is complete.