Here is the second original article in the Kafka series that Guide哥 promised you. To keep the content up to date, I have also published the related articles on GitHub! Address: https://github.com/Snailclimb/springboot-kafka
Related reading: Getting Started! A Plain-Language Introduction to Kafka!
Prerequisite: Docker is already installed on your machine.
Main contents:
The demos below use a single-node Kafka; I recommend building the single-node setup first while you learn.
The Docker setup for the basic Kafka environment below comes from the open-source project https://github.com/simplestep... . Of course, you can also follow the official one: https://github.com/wurstmeist... .
Create a file named `zk-single-kafka-single.yml` with the following content:
```yaml
version: '2.1'

services:
  zoo1:
    image: zookeeper:3.4.9
    hostname: zoo1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888
    volumes:
      - ./zk-single-kafka-single/zoo1/data:/data
      - ./zk-single-kafka-single/zoo1/datalog:/datalog

  kafka1:
    image: confluentinc/cp-kafka:5.3.1
    hostname: kafka1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - ./zk-single-kafka-single/kafka1/data:/var/lib/kafka/data
    depends_on:
      - zoo1
```
Run the following command to complete the setup (it will automatically download and run one ZooKeeper and one Kafka instance):
```bash
docker-compose -f zk-single-kafka-single.yml up
```
If you need to stop the Kafka-related containers, run:
```bash
docker-compose -f zk-single-kafka-single.yml down
```
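Once the containers are up, you can quickly check that the broker is reachable. Below is a minimal sketch using the Java `AdminClient` from kafka-clients; the class name `SingleNodeCheck` and this check itself are my own illustrative additions, not part of the article's project, and it assumes the broker is on `localhost:9092` as configured above.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

import java.util.Properties;
import java.util.Set;

public class SingleNodeCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // The external listener address exposed by the compose file above
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Listing topics is a cheap way to confirm the broker is reachable
            Set<String> topics = admin.listTopics().names().get();
            System.out.println("Broker reachable. Topics: " + topics);
        }
    }
}
```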
The Docker setup for the Kafka cluster below (one ZooKeeper node plus three Kafka brokers) likewise comes from the open-source project https://github.com/simplestep... .
Create a file named `zk-single-kafka-multiple.yml` with the following content:
```yaml
version: '2.1'

services:
  zoo1:
    image: zookeeper:3.4.9
    hostname: zoo1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888
    volumes:
      - ./zk-single-kafka-multiple/zoo1/data:/data
      - ./zk-single-kafka-multiple/zoo1/datalog:/datalog

  kafka1:
    image: confluentinc/cp-kafka:5.4.0
    hostname: kafka1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
    volumes:
      - ./zk-single-kafka-multiple/kafka1/data:/var/lib/kafka/data
    depends_on:
      - zoo1

  kafka2:
    image: confluentinc/cp-kafka:5.4.0
    hostname: kafka2
    ports:
      - "9093:9093"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka2:19093,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 2
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
    volumes:
      - ./zk-single-kafka-multiple/kafka2/data:/var/lib/kafka/data
    depends_on:
      - zoo1

  kafka3:
    image: confluentinc/cp-kafka:5.4.0
    hostname: kafka3
    ports:
      - "9094:9094"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka3:19094,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 3
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
    volumes:
      - ./zk-single-kafka-multiple/kafka3/data:/var/lib/kafka/data
    depends_on:
      - zoo1
```
Run the following command to set up an environment with 1 ZooKeeper node and 3 Kafka brokers:
```bash
docker-compose -f zk-single-kafka-multiple.yml up
```
If you need to stop the Kafka-related containers, run:
```bash
docker-compose -f zk-single-kafka-multiple.yml down
```
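To confirm that all three brokers actually joined the cluster, you can describe the cluster with the `AdminClient`. The sketch below is again my own illustrative addition (class name `ClusterCheck` is an assumption), using the three external listener addresses from the compose file:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

import java.util.Collection;
import java.util.Properties;

public class ClusterCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Any one broker is enough to bootstrap; listing all three adds redundancy
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                "localhost:9092,localhost:9093,localhost:9094");
        try (AdminClient admin = AdminClient.create(props)) {
            Collection<Node> nodes = admin.describeCluster().nodes().get();
            // A healthy cluster from the compose file above reports three brokers
            for (Node node : nodes) {
                System.out.println("Broker " + node.id() + " at " + node.host() + ":" + node.port());
            }
        }
    }
}
```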
In most cases we rarely use Kafka from the command line, but the basic operations are worth knowing.
1. Enter the Kafka container (Kafka ships with a set of official command-line tools; note that the container name may differ depending on your compose project directory):
```bash
docker exec -ti docker_kafka1_1 bash
```
2. List all topics:
```bash
root@kafka1:/# kafka-topics --describe --zookeeper zoo1:2181
```
3. Create a topic:
```bash
root@kafka1:/# kafka-topics --create --topic test --partitions 3 --zookeeper zoo1:2181 --replication-factor 1
Created topic test.
```
We created a topic named test with 3 partitions and a replication factor of 1 (a programmatic equivalent using the Java AdminClient is sketched after this walkthrough).
4. Subscribe a consumer to the topic:
```bash
root@kafka1:/# kafka-console-consumer --bootstrap-server localhost:9092 --topic test
```
We have subscribed to the topic named test; the consumer now blocks and waits for new messages.
5. Send a message to the topic from a producer:
```bash
root@kafka1:/# kafka-console-producer --broker-list localhost:9092 --topic test
>send hello from console -producer
>
```
We used the `kafka-console-producer` command to send a message to the topic named test, with the content "send hello from console -producer".
At this point, you will see that the consumer successfully received the message:
```bash
root@kafka1:/# kafka-console-consumer --bootstrap-server localhost:9092 --topic test
send hello from console -producer
```
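The topic-creation step above can also be done from Java. Here is a minimal sketch with the `AdminClient` (the class name `TopicCreator` is my own addition, not part of the article's project) that creates the same test topic as the CLI command:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class TopicCreator {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Same as the CLI command above: topic "test", 3 partitions, replication factor 1
            NewTopic topic = new NewTopic("test", 3, (short) 1);
            admin.createTopics(Collections.singletonList(topic)).all().get();
            System.out.println("Created topic test.");
        }
    }
}
```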
This is a ZooKeeper visualization plugin for IDEA, and it is very handy! With it, we can visually inspect the data stored in ZooKeeper.
It looks like this in practice:
<img src="https://my-blog-to-use.oss-cn-beijing.aliyuncs.com/2019-11/zookeeper-kafka.jpg" style="zoom:50%;" />
How to use it: install the plugin from IDEA's plugin marketplace, then configure your ZooKeeper connection address.
This is a Kafka visualization and management plugin for IDEA, which gives us a number of handy features.
It looks like this in practice:
<img src="https://my-blog-to-use.oss-cn-beijing.aliyuncs.com/2019-11/kafkalytic.jpg" style="zoom:50%;" />
How to use it: install the plugin from IDEA's plugin marketplace, then configure your Kafka broker address.
Code address: https://github.com/Snailclimb...
Step 1: Create a new Maven project
Step 2: Add the relevant dependency to `pom.xml`
```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.2.0</version>
</dependency>
```
Step 3: Initialize the consumer and producer
The `KafkaConstants` constants class defines some commonly used Kafka configuration constants:
```java
public class KafkaConstants {

    public static final String BROKER_LIST = "localhost:9092";
    public static final String CLIENT_ID = "client1";
    public static final String GROUP_ID_CONFIG = "consumerGroup1";

    private KafkaConstants() {
    }
}
```
`ProducerCreator` contains a `createProducer()` method that returns a `KafkaProducer` object:
```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

/**
 * @author shuang.kou
 */
public class ProducerCreator {

    public static Producer<String, String> createProducer() {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, KafkaConstants.BROKER_LIST);
        properties.put(ProducerConfig.CLIENT_ID_CONFIG, KafkaConstants.CLIENT_ID);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        return new KafkaProducer<>(properties);
    }
}
```
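If you need stronger delivery guarantees, the producer config can be extended. The variant below is a minimal sketch; the class name `ReliableProducerCreator` and the `acks`/`retries` values are my own illustrative additions, not part of the article's project:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ReliableProducerCreator {

    public static Producer<String, String> createProducer() {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, KafkaConstants.BROKER_LIST);
        properties.put(ProducerConfig.CLIENT_ID_CONFIG, KafkaConstants.CLIENT_ID);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Illustrative reliability settings (assumptions, not in the original project):
        properties.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for all in-sync replicas to acknowledge
        properties.put(ProducerConfig.RETRIES_CONFIG, 3);  // retry transient send failures
        return new KafkaProducer<>(properties);
    }
}
```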
`ConsumerCreator` contains a `createConsumer()` method that returns a `KafkaConsumer` object:
```java
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.Properties;

public class ConsumerCreator {

    public static Consumer<String, String> createConsumer() {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KafkaConstants.BROKER_LIST);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, KafkaConstants.GROUP_ID_CONFIG);
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        return new KafkaConsumer<>(properties);
    }
}
```
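One setting worth knowing: by default, a new consumer group only sees messages produced after it first joins. The variant below is a sketch (the class name `TunedConsumerCreator` and the added value are my own illustrative additions) that starts from the earliest available offset when the group has no committed offset:

```java
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.Properties;

public class TunedConsumerCreator {

    public static Consumer<String, String> createConsumer() {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KafkaConstants.BROKER_LIST);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, KafkaConstants.GROUP_ID_CONFIG);
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Illustrative addition: read from the beginning when the group has no committed offset
        properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return new KafkaConsumer<>(properties);
    }
}
```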
Step 4: Send and consume messages
The producer sends a message:
```java
private static final String TOPIC = "test-topic";

Producer<String, String> producer = ProducerCreator.createProducer();
ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC, "hello, Kafka!");
try {
    // send the message and block until the broker acknowledges it
    RecordMetadata metadata = producer.send(record).get();
    System.out.println("Record sent to partition " + metadata.partition()
            + " with offset " + metadata.offset());
} catch (ExecutionException | InterruptedException e) {
    System.out.println("Error in sending record");
    e.printStackTrace();
}
producer.close();
```
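Calling `get()` on the returned future makes the send synchronous. If you do not want to block, `KafkaProducer.send` also accepts a callback. Here is a minimal sketch (an illustration I am adding, not part of the article's code; the class name `AsyncSendExample` is an assumption):

```java
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AsyncSendExample {
    public static void main(String[] args) {
        Producer<String, String> producer = ProducerCreator.createProducer();
        ProducerRecord<String, String> record = new ProducerRecord<>("test-topic", "hello, Kafka!");
        // The callback runs on the producer's I/O thread once the broker responds
        producer.send(record, (metadata, exception) -> {
            if (exception != null) {
                exception.printStackTrace();
            } else {
                System.out.println("Record sent to partition " + metadata.partition()
                        + " with offset " + metadata.offset());
            }
        });
        producer.close(); // close() flushes any records still in flight
    }
}
```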
The consumer consumes messages:
```java
Consumer<String, String> consumer = ConsumerCreator.createConsumer();
// subscribe once, outside the polling loop (re-subscribing on every iteration is unnecessary)
consumer.subscribe(Collections.singletonList(TOPIC));
// poll for messages in a loop
while (true) {
    ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(1000));
    for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
        System.out.println("Consumer consume message:" + consumerRecord.value());
    }
}
```
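The `while (true)` loop above never exits. A common shutdown pattern, sketched below under the assumption that shutdown is triggered by a JVM shutdown hook (this class is my own addition, not the article's code), is to call `wakeup()` from another thread, which makes `poll()` throw a `WakeupException`:

```java
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.errors.WakeupException;

import java.time.Duration;
import java.util.Collections;

public class GracefulConsumer {
    public static void main(String[] args) {
        Consumer<String, String> consumer = ConsumerCreator.createConsumer();
        consumer.subscribe(Collections.singletonList("test-topic"));
        final Thread mainThread = Thread.currentThread();
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            consumer.wakeup(); // the only consumer method safe to call from another thread
            try {
                mainThread.join(); // wait for the polling loop to close the consumer
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }));
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("Consumer consume message:" + record.value());
                }
            }
        } catch (WakeupException e) {
            // expected on shutdown; fall through to close the consumer cleanly
        } finally {
            consumer.close();
        }
    }
}
```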
Step 5: Test
Run the program, and the console prints:
```
Record sent to partition 0 with offset 20
Consumer consume message:hello, Kafka!
```
Other open-source projects by the author that you may find useful: