CentOS 7 usually ships with its own OpenJDK, but we generally use the Oracle JDK, so the bundled Java has to be removed first.
Remove the pre-installed JDK; a single command deletes it all: #rpm -e --nodeps `rpm -qa | grep java`
Check with java -version that it has been removed.
Download JDK 8 on Windows.
Upload it to CentOS over SSH; the DOS command is: pscp D:\jdk-8u201-linux-x64.tar.gz root@192.168.75.129:/home/heibao
Extract it and move the files to the /opt/java directory:
sudo tar -vxzf jdk-8u201-linux-x64.tar.gz
sudo mv jdk1.8.0_201 /opt/java
Configure the Java environment variables:
vim /etc/profile
#Append the following at the end of the profile file
export JAVA_HOME=/opt/java
export PATH=$JAVA_HOME/bin:$PATH
#Make the variables take effect immediately
source /etc/profile
#After adding them, run java -version to check whether the JDK installed successfully
java -version
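If the installation succeeded, the first line of the output should now report the Oracle JDK, similar to:
java version "1.8.0_201"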
Download ZooKeeper to the machine with wget:
sudo wget https://mirrors.cnnic.cn/apache/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz
Extract it and move the files to the /opt/zookeeper directory:
# Extract zookeeper
sudo tar -zxvf zookeeper-3.4.13.tar.gz
# Move zookeeper to the /opt/zookeeper directory
sudo mv zookeeper-3.4.13 /opt/zookeeper
Edit the ZooKeeper configuration (this example uses 3 nodes).
# Create the data/log directory (one per node)
sudo mkdir -p /opt/zookeeper/data_logs/zookeeper1
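The other two directories for this three-node example are created the same way:
sudo mkdir -p /opt/zookeeper/data_logs/zookeeper2
sudo mkdir -p /opt/zookeeper/data_logs/zookeeper3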
# Copy zoo_sample.cfg and name the copy zoo1.cfg
sudo cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo1.cfg
# Edit zoo1.cfg as follows
tickTime=2000
dataDir=/opt/zookeeper/data_logs/zookeeper1
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.75.129:2888:3888
server.2=192.168.75.129:2889:3889
server.3=192.168.75.129:2890:3890
Create the config files for the other two nodes, zoo2.cfg and zoo3.cfg (use as many nodes as you like).
Their contents follow; the only differences are dataDir and clientPort:
tickTime=2000
dataDir=/opt/zookeeper/data_logs/zookeeper2
clientPort=2182
initLimit=5
syncLimit=2
server.1=192.168.75.129:2888:3888
server.2=192.168.75.129:2889:3889
server.3=192.168.75.129:2890:3890
********
tickTime=2000
dataDir=/opt/zookeeper/data_logs/zookeeper3
clientPort=2183
initLimit=5
syncLimit=2
server.1=192.168.75.129:2888:3888
server.2=192.168.75.129:2889:3889
server.3=192.168.75.129:2890:3890
Notes on the settings above:
tickTime: the heartbeat and timeout unit, in milliseconds
dataDir: where ZooKeeper stores snapshots of its in-memory state; in production, watch this directory's disk usage
clientPort: the port ZooKeeper listens on for client connections (nodes on the same machine need different ports; nodes on different machines may use the same or different ports); defaults to 2181
initLimit: the maximum number of ticks a follower may take to connect to the leader at startup (here 5 * tickTime = 10 seconds), after which it is considered timed out
syncLimit: the maximum time, in ticks, for a follower to sync with the leader
server.X=host:port1:port2: X must be a globally unique number and must match the number in that node's myid file; host can be a domain name/hostname/IP; followers use port1 to connect to the leader, and port2 is used for leader election
Create a myid file for each node. It lives in the directory set by dataDir in that node's config file (e.g. zoo1.cfg) and contains nothing but the number X:
sudo vim /opt/zookeeper/data_logs/zookeeper1/myid
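Alternatively, a quick way to write all three myid files (a sketch, assuming the three dataDir paths above):
echo 1 | sudo tee /opt/zookeeper/data_logs/zookeeper1/myid
echo 2 | sudo tee /opt/zookeeper/data_logs/zookeeper2/myid
echo 3 | sudo tee /opt/zookeeper/data_logs/zookeeper3/myid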
Configure the ZooKeeper environment variables
sudo vim /etc/profile
#Add the following
export ZOOKEEPER_HOME=/opt/zookeeper
export PATH=$ZOOKEEPER_HOME/bin:$PATH
#Make the variables take effect immediately
source /etc/profile
Start ZooKeeper
java -cp zookeeper-3.4.13.jar:lib/slf4j-api-1.7.25.jar:lib/slf4j-log4j12-1.7.25.jar:lib/log4j-1.2.17.jar:conf org.apache.zookeeper.server.quorum.QuorumPeerMain conf/zoo1.cfg
#If the nodes are on separate machines, you can start the service with the simpler command: bin/zkServer.sh start conf/zoo1.cfg
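For the single-machine pseudo-cluster built here, the other two nodes are started with the same command, each pointing at its own config file (assuming you run from /opt/zookeeper):
java -cp zookeeper-3.4.13.jar:lib/slf4j-api-1.7.25.jar:lib/slf4j-log4j12-1.7.25.jar:lib/log4j-1.2.17.jar:conf org.apache.zookeeper.server.quorum.QuorumPeerMain conf/zoo2.cfg
java -cp zookeeper-3.4.13.jar:lib/slf4j-api-1.7.25.jar:lib/slf4j-log4j12-1.7.25.jar:lib/log4j-1.2.17.jar:conf org.apache.zookeeper.server.quorum.QuorumPeerMain conf/zoo3.cfg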
Check the cluster status
bin/zkServer.sh status conf/zoo1.cfg
bin/zkServer.sh status conf/zoo2.cfg
bin/zkServer.sh status conf/zoo3.cfg
If it worked, the output shows whether each node is the leader or a follower.
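For example, the status output of a healthy node ends with a line such as:
Mode: follower
(the elected leader reports Mode: leader instead)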
Download Kafka to the machine with wget
sudo wget http://mirrors.tuna.tsinghua.edu.cn/apache/kafka/2.1.1/kafka_2.12-2.1.1.tgz
Extract it and move the files to the /opt/kafka directory
# Extract
sudo tar -zxvf kafka_2.12-2.1.1.tgz
# Move
sudo mv kafka_2.12-2.1.1 /opt/kafka
Edit the Kafka cluster configuration
#Create the log directories (three nodes)
cd /opt/kafka
mkdir -p data_logs/kafka1
mkdir -p data_logs/kafka2
mkdir -p data_logs/kafka3
#Make three copies of the config file
sudo cp config/server.properties config/server1.properties
sudo cp config/server.properties config/server2.properties
sudo cp config/server.properties config/server3.properties
#The three config files are as follows; each broker needs its own broker.id, listener port, and log.dirs
# server1.properties
broker.id=0
delete.topic.enable=true
listeners=PLAINTEXT://192.168.75.129:9092
log.dirs=/opt/kafka/data_logs/kafka1
zookeeper.connect=192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183
unclean.leader.election.enable=false
zookeeper.connection.timeout.ms=6000
# server2.properties
broker.id=1
delete.topic.enable=true
listeners=PLAINTEXT://192.168.75.129:9093
log.dirs=/opt/kafka/data_logs/kafka2
zookeeper.connect=192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183
unclean.leader.election.enable=false
zookeeper.connection.timeout.ms=6000
# server3.properties
broker.id=2
delete.topic.enable=true
listeners=PLAINTEXT://192.168.75.129:9094
log.dirs=/opt/kafka/data_logs/kafka3
zookeeper.connect=192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183
unclean.leader.election.enable=false
zookeeper.connection.timeout.ms=6000
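Before starting the brokers, it can be worth confirming that the three files really differ only where intended; one way to check (assuming the paths above):
grep -E '^(broker\.id|listeners|log\.dirs)=' config/server1.properties config/server2.properties config/server3.properties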
Configure the Kafka environment variables
sudo vim /etc/profile
#Add the following:
export KAFKA_HOME=/opt/kafka
export PATH=$KAFKA_HOME/bin:$PATH
#Make the variables take effect immediately
source /etc/profile
In /etc/hosts, comment out the 127.0.0.1 entry; otherwise ZooKeeper may refuse connections (the exact cause is unclear).
vim /etc/hosts
#Comment out the first line, 127.0.0.1:
#127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Start Kafka. After startup you can check the information in server.log under the Kafka logs directory (path: logs/server.log)
bin/kafka-server-start.sh -daemon config/server1.properties
bin/kafka-server-start.sh -daemon config/server2.properties
bin/kafka-server-start.sh -daemon config/server3.properties
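To confirm that all three brokers came up, you can list the Java processes (jps ships with the JDK) and tail the log mentioned above:
jps -l
tail logs/server.log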
Test topic creation and deletion
bin/kafka-topics.sh --create --zookeeper 192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183 --topic test-topic --partitions 3 --replication-factor 3
Verify the created topic
bin/kafka-topics.sh --zookeeper 192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183 --list
bin/kafka-topics.sh --zookeeper 192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183 --describe --topic test-topic
Delete the created topic
bin/kafka-topics.sh --delete --zookeeper 192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183 --topic test-topic
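Listing the topics again should show that test-topic is gone; because delete.topic.enable=true was set above, the topic is actually deleted rather than merely marked for deletion:
bin/kafka-topics.sh --zookeeper 192.168.75.129:2181,192.168.75.129:2182,192.168.75.129:2183 --list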
Test message production and consumption (using the bundled console scripts)
(open one terminal as the producer) bin/kafka-console-producer.sh --broker-list 192.168.75.129:9092,192.168.75.129:9093,192.168.75.129:9094 --topic test-topic
(open another terminal as the consumer) bin/kafka-console-consumer.sh --bootstrap-server 192.168.75.129:9092,192.168.75.129:9093,192.168.75.129:9094 --topic test-topic --from-beginning
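Each line typed at the producer's prompt should then appear in the consumer terminal, for example:
> hello kafka
> hello zookeeper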
Producer throughput test
bin/kafka-producer-perf-test.sh --topic test-topic --num-records 5000 --record-size 200 --throughput -1 --producer-props bootstrap.servers=192.168.75.129:9092,192.168.75.129:9093,192.168.75.129:9094 acks=-1
Consumer throughput test
bin/kafka-consumer-perf-test.sh --broker-list 192.168.75.129:9092,192.168.75.129:9093,192.168.75.129:9094 --messages 5000 --topic test-topic