The following walks through a three-machine setup (scaling to four or more machines works the same way; just adjust the config files accordingly).
Download Kafka 0.9.0.1 from http://apache.fayea.com/kafka/0.9.0.1/, copy it to all three servers, and unpack it.
Edit config/server.properties on each server. Each broker needs a unique broker.id and must advertise its own address; ZooKeeper listens on its default client port 2181, matching the commands used below.
183 server:
broker.id=0
host.name=132.228.28.183
advertised.host.name=132.228.28.183
zookeeper.connect=132.228.28.183:2181,132.228.28.184:2181,132.228.28.185:2181
184 server:
broker.id=1
host.name=132.228.28.184
advertised.host.name=132.228.28.184
zookeeper.connect=132.228.28.183:2181,132.228.28.184:2181,132.228.28.185:2181
185 server:
broker.id=2
host.name=132.228.28.185
advertised.host.name=132.228.28.185
zookeeper.connect=132.228.28.183:2181,132.228.28.184:2181,132.228.28.185:2181
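Since the three files differ only in broker.id and the two host fields, the per-broker lines can be generated with a small script instead of edited by hand on each machine. This is a sketch under assumptions: the output file names (server-0.properties and so on) are made up for illustration, and it emits only the four override lines shown above, to be merged into each server's full config/server.properties.

```shell
# Generate the per-broker override lines shown above.
# File names server-<id>.properties are illustrative, not Kafka's own.
hosts="132.228.28.183 132.228.28.184 132.228.28.185"

# Build the shared zookeeper.connect string: host:2181 pairs joined by commas.
zk=$(printf '%s:2181,' $hosts); zk=${zk%,}

id=0
for host in $hosts; do
  cat > "server-$id.properties" <<EOF
broker.id=$id
host.name=$host
advertised.host.name=$host
zookeeper.connect=$zk
EOF
  id=$((id + 1))
done
```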
On the 183 server, append to /etc/hosts:
132.228.28.183 dsjtest01
On the 184 server, append to /etc/hosts:
132.228.28.184 dsjtest02
On the 185 server, append to /etc/hosts:
132.228.28.185 dsjtest03
Go into Kafka's bin directory; the following must be done on all three servers.
Start ZooKeeper: ./zookeeper-server-start.sh ../config/zookeeper.properties &
Start Kafka: ./kafka-server-start.sh ../config/server.properties &
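Before creating topics, it can be worth checking that all three brokers have registered in ZooKeeper. A sketch using the zookeeper-shell.sh script that ships in the same bin directory (it only works against the running cluster, and assumes ZooKeeper's default client port 2181):

```shell
# List the broker ids registered under /brokers/ids; with all three
# brokers up, this should contain 0, 1 and 2.
echo "ls /brokers/ids" | ./zookeeper-shell.sh 132.228.28.183:2181
```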
Create a topic named "test" on 183:
./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
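To confirm the topic was created and see its partition assignment, kafka-topics.sh also has a --describe mode (again run against the live cluster). Note that with three brokers available you could instead pass --replication-factor 3 at creation time, so the topic tolerates broker failures.

```shell
# Verify the topic exists and show its partition/leader/replica assignment.
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
```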
Produce messages on 184 and send them to Kafka:
./kafka-console-producer.sh --broker-list 132.228.28.183:9092 --topic test
Type into the terminal: hello kafka
Consume the messages on 185:
./kafka-console-consumer.sh --zookeeper 132.228.28.183:2181 --topic test --from-beginning
The consumer receives the message: hello kafka