In this post, I'll walk through integrating Flume with Kafka.
In production, you first need to be clear about the respective roles of the streaming data collection framework Flume and of Kafka:
So a pipeline we commonly use looks like this:
Online data --> Flume --> Kafka --> Flume (add or remove stages as the scenario requires) --> HDFS
```java
package com.buwenbuhuo.flume.interceptor;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

import java.util.List;

/**
 * @author 卜温不火
 * @create 2020-05-07 18:57
 * com.buwenbuhuo.flume.interceptor - the name of the target package where the new class or interface will be created.
 * kafka0506 - the name of the current project.
 */
public class Customlnterceptor implements Interceptor {

    @Override
    public void initialize() {
    }

    @Override
    public Event intercept(Event event) {
        // Route by the first byte of the event body: digits go to topic
        // "number", lowercase letters to topic "letter"; anything else is
        // left without a "topic" header.
        if (event.getBody()[0] >= '0' && event.getBody()[0] <= '9') {
            event.getHeaders().put("topic", "number");
        } else if (event.getBody()[0] >= 'a' && event.getBody()[0] <= 'z') {
            event.getHeaders().put("topic", "letter");
        }
        return event;
    }

    @Override
    public List<Event> intercept(List<Event> events) {
        for (Event event : events) {
            intercept(event);
        }
        return events;
    }

    @Override
    public void close() {
    }

    public static class Builder implements Interceptor.Builder {

        @Override
        public Interceptor build() {
            return new Customlnterceptor();
        }

        @Override
        public void configure(Context context) {
        }
    }
}
```
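The routing rule above keys off only the first byte of the event body. As a quick sanity check, the same decision logic can be exercised without any Flume dependency. This is a minimal sketch; `RoutingRuleDemo` and its `routeTopic` helper are hypothetical names that only mirror the branch inside `intercept`:

```java
import java.nio.charset.StandardCharsets;
import java.util.Optional;

public class RoutingRuleDemo {

    // Mirrors the interceptor's branch: digits -> "number",
    // lowercase letters -> "letter", anything else gets no topic header
    // (the Kafka sink then falls back to its configured default topic).
    static Optional<String> routeTopic(byte[] body) {
        byte first = body[0];
        if (first >= '0' && first <= '9') {
            return Optional.of("number");
        } else if (first >= 'a' && first <= 'z') {
            return Optional.of("letter");
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(routeTopic("123hello".getBytes(StandardCharsets.UTF_8)).orElse("first")); // number
        System.out.println(routeTopic("abc456".getBytes(StandardCharsets.UTF_8)).orElse("first"));   // letter
        System.out.println(routeTopic("HELLO".getBytes(StandardCharsets.UTF_8)).orElse("first"));    // first
    }
}
```

Note that uppercase input falls through both branches, so such events carry no `topic` header and land in the sink's default topic.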
```shell
[bigdata@hadoop002 job]$ vim nc-kafka.conf

# define
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# source
a1.sources.r1.type = netcat
a1.sources.r1.bind = hadoop002
a1.sources.r1.port = 44444
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.buwenbuhuo.flume.interceptor.Customlnterceptor$Builder

# sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = hadoop002:9092,hadoop003:9092,hadoop004:9092
a1.sinks.k1.kafka.topic = first
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1

# channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# bind
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```
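Before starting the agent, make sure the two topics the interceptor routes to actually exist (automatic topic creation may be disabled on your cluster). Assuming Kafka 2.2+, where `kafka-topics.sh` accepts `--bootstrap-server`, they could be created roughly like this; the partition and replication counts are illustrative, not from the original post:

```shell
# Create the two topics the interceptor routes to (counts are illustrative)
bin/kafka-topics.sh --create --bootstrap-server hadoop002:9092 \
  --topic number --partitions 1 --replication-factor 1
bin/kafka-topics.sh --create --bootstrap-server hadoop002:9092 \
  --topic letter --partitions 1 --replication-factor 1
```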
[bigdata@hadoop002 flume]$ bin/flume-ng agent -n a1 -c conf/ -f job/nc-kafka.conf
[bigdata@hadoop003 kafka]$ bin/kafka-console-consumer.sh --bootstrap-server hadoop002:9092 --topic number
[bigdata@hadoop004 kafka]$ bin/kafka-console-consumer.sh --bootstrap-server hadoop002:9092 --topic letter
[bigdata@hadoop003 module]$ nc hadoop002 44444
As you can see, the final result matches what we expected, so the experiment succeeded: lines starting with a digit show up in the number topic and lines starting with a lowercase letter in the letter topic, because the topic header set by the interceptor overrides the default kafka.topic = first configured on the sink.
That's all for this share,
^ _ ^ ❤️ ❤️ ❤️
Writing these posts takes effort, and your support is what keeps me going. After hitting like, don't forget to follow me!