tar -xzf kafka_2.11-0.10.1.1.tgz
cd kafka_2.11-0.10.1.1
Kafka depends on ZooKeeper, so start ZooKeeper first; here we start a single-instance zk service.
bin/zookeeper-server-start.sh config/zookeeper.properties &
Start the Kafka server:
bin/kafka-server-start.sh config/server.properties
Create a topic named "test" with one partition and one replica:
bin/kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic test
The list command shows the topics that have been created:
bin/kafka-topics.sh --list --zookeeper 127.0.0.1:2181
test
Kafka ships with a simple command-line producer that reads messages from a file or from standard input and sends them to the server. By default, each line is sent as a separate message.
Run the producer and type some messages in the console; they are sent to the server:
bin/kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic test
This is a message
This is another message
Each Enter pushes one record; Ctrl+C exits the producer.
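The line-per-message behavior is easy to script: piping text into the producer works exactly like typing it. As a stand-in that needs no broker, the line splitting can be sketched like this:

```shell
# Each newline-terminated line becomes one record, just as each Enter
# in the console producer pushes one message.
printf 'This is a message\nThis is another message\n' | while IFS= read -r line; do
  echo "record: $line"
done
```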
Kafka also provides a command-line consumer that reads messages and writes them to standard output:
bin/kafka-console-consumer.sh --zookeeper 127.0.0.1:2181 --topic test --from-beginning
This is a message
This is another message
If you open two terminals, messages typed into the producer show up in the consumer almost immediately.
Above we ran a single broker; now start a cluster of three brokers on machines A, B and C.
Machine A: 192.168.56.129; machine B: 192.168.56.131; machine C: 192.168.56.132
Install Kafka on A, B and C as described above.
A single ZooKeeper instance runs on machine A (192.168.56.129:2181).
Machine A configuration:
broker.id=0
advertised.host.name=192.168.56.129
zookeeper.connect=localhost:2181
Edit the Kafka configuration file on machine B:
broker.id=1
advertised.host.name=192.168.56.131
zookeeper.connect=192.168.56.129:2181
Edit the Kafka configuration file on machine C (broker.id must be unique per broker):
broker.id=2
advertised.host.name=192.168.56.132
zookeeper.connect=192.168.56.129:2181
To avoid the "Should not set log end offset on partition" exception, set the following in each broker's server.properties, using that machine's own IP:
advertised.host.name=192.168.xxx
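Putting the pieces together, the three server.properties files differ only in the lines below (a sketch; all other settings keep their defaults):

```properties
# machine A (192.168.56.129)
broker.id=0
advertised.host.name=192.168.56.129
zookeeper.connect=localhost:2181

# machine B (192.168.56.131)
broker.id=1
advertised.host.name=192.168.56.131
zookeeper.connect=192.168.56.129:2181

# machine C (192.168.56.132)
broker.id=2
advertised.host.name=192.168.56.132
zookeeper.connect=192.168.56.129:2181
```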
On machine A, start ZooKeeper:
bin/zookeeper-server-start.sh config/zookeeper.properties &
Start the Kafka broker on A, B and C in turn:
bin/kafka-server-start.sh config/server.properties &
Create a topic with replication factor 3:
bin/kafka-topics.sh --create --zookeeper 192.168.56.129:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
bin/kafka-topics.sh --describe --zookeeper 192.168.56.129:2181 --topic my-replicated-topic
Topic:my-replicated-topic	PartitionCount:1	ReplicationFactor:3	Configs:
	Topic: my-replicated-topic	Partition: 0	Leader: 0	Replicas: 0,1,2	Isr: 0,1,2
Leader: the node that handles all reads and writes for the partition; the leader is elected from among the replica nodes.
Replicas: lists all replica nodes, whether or not they are currently in service.
Isr: the in-sync replicas, i.e. the replica nodes that are currently alive and caught up with the leader.
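For scripting, these fields can be pulled out of the --describe output with standard text tools. A minimal sketch using a copy of the sample partition line above (so no broker is needed):

```shell
# One partition line as printed by kafka-topics.sh --describe
describe_line='Topic: my-replicated-topic Partition: 0 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2'

# Extract the leader broker id and the in-sync replica list
leader=$(echo "$describe_line" | grep -o 'Leader: [0-9]*' | awk '{print $2}')
isr=$(echo "$describe_line" | grep -o 'Isr: [0-9,]*' | awk '{print $2}')
echo "leader=$leader isr=$isr"
```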
Note: although 3 brokers were running, the Isr kept showing only one node and messages could not be written; this turned out to be a firewall problem.
For example, with Replicas: 0,1,2 but Isr: 0, the producer reported:
Error while fetching metadata with correlation id 11 : {my-replicated-topic2=LEADER_NOT_AVAILABLE}
The fix is to open port 9092 on every machine:
vi /etc/sysconfig/iptables
service iptables restart
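On a CentOS 6 style system, the rule added to /etc/sysconfig/iptables could look like the following (a sketch; the rule must come before any catch-all REJECT line, and other distributions use different firewall tools):

```
# Allow inbound connections to Kafka's default port 9092
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9092 -j ACCEPT
```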
As in the single-broker case, pick any machine as the producer writing to my-replicated-topic, and another machine as the consumer reading those messages.
bin/kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic my-replicated-topic
bin/kafka-console-consumer.sh --zookeeper 192.168.56.129:2181 --from-beginning --topic my-replicated-topic
When we inspected the topic above, broker.id=0 was the Leader; now kill that leader node.
Within a few seconds the cluster recovers; see the logs on the other brokers:
[2017-04-06 16:52:07,601] WARN [ReplicaFetcherThread-0-0], Error in fetch kafka.server.ReplicaFetcherThread$FetchRequest@11acd8a (kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to 192.168.56.129:9092 (id: 0 rack: null) failed
	at kafka.utils.NetworkClientBlockingOps$.awaitReady$1(NetworkClientBlockingOps.scala:83)
	at kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:93)
	at kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:248)
	at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:238)
	at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
	at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:118)
	at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:103)
	at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
[2017-04-06 16:52:08,091] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2017-04-06 16:52:08,093] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
[2017-04-06 16:52:08,094] INFO 1 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2017-04-06 16:52:08,312] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions my-replicated-topic2-0 (kafka.server.ReplicaFetcherManager)
[2017-04-06 16:52:08,314] INFO [ReplicaFetcherThread-0-0], Shutting down (kafka.server.ReplicaFetcherThread)
[2017-04-06 16:52:08,315] INFO [ReplicaFetcherThread-0-0], Stopped (kafka.server.ReplicaFetcherThread)
[2017-04-06 16:52:08,316] INFO [ReplicaFetcherThread-0-0], Shutdown completed (kafka.server.ReplicaFetcherThread)
[2017-04-06 16:52:08,363] INFO New leader is 1 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
[2017-04-06 16:52:14,877] INFO Partition [my-replicated-topic,0] on broker 1: Shrinking ISR for partition [my-replicated-topic,0] from 0,1,2 to 1,2 (kafka.cluster.Partition)
Check the topic again:
[root@localhost kafka_2.11-0.10.1.1]# bin/kafka-topics.sh --describe --zookeeper 192.168.56.129:2181 --topic my-replicated-topic2
Topic:my-replicated-topic2	PartitionCount:1	ReplicationFactor:3	Configs:
	Topic: my-replicated-topic2	Partition: 0	Leader: 1	Replicas: 0,1,2	Isr: 1,2
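The shrunken Isr can be detected mechanically by comparing the Replicas and Isr lists. A sketch using a copy of the output above (no broker needed):

```shell
# Partition line after the leader was killed
line='Topic: my-replicated-topic2 Partition: 0 Leader: 1 Replicas: 0,1,2 Isr: 1,2'

replicas=$(echo "$line" | grep -o 'Replicas: [0-9,]*' | awk '{print $2}')
isr=$(echo "$line" | grep -o 'Isr: [0-9,]*' | awk '{print $2}')

# Count entries in each comma-separated list
n_replicas=$(echo "$replicas" | tr ',' '\n' | wc -l)
n_isr=$(echo "$isr" | tr ',' '\n' | wc -l)

if [ "$n_isr" -lt "$n_replicas" ]; then
  echo "under-replicated: Isr=$isr of Replicas=$replicas"
fi
```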
With the two remaining machines, one producing and one consuming, everything still works.
This follows the approach at http://www.aboutyun.com/thread-12882-1-1.html, but the cluster is demonstrated on multiple machines rather than multiple ports on a single machine.