1. Install and configure the JDK (required on all three servers)
1) Remove the existing JDK
yum -y remove java
2) Create a user
useradd kafka
3) Install and configure
tar -xvf jdk-8u181-linux-x64.tar.gz
chown -R kafka.kafka jdk1.8.0_181
mv jdk1.8.0_181 /usr/local/jdk
4) Set environment variables
su - kafka
vim ~/.bash_profile
export JAVA_HOME=/usr/local/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
source ~/.bash_profile
5) Check the current JDK version
su - kafka
[kafka@localhost ~]$ java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
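The environment-variable step above can be made idempotent, so re-running the setup does not append duplicate entries to ~/.bash_profile. A minimal sketch; the `append_jdk_env` helper name and the `# jdk-env` marker comment are my own conventions, not part of this guide:

```shell
# append_jdk_env FILE: add the JDK variables to FILE exactly once,
# guarded by a marker line so repeated runs are no-ops.
append_jdk_env() {
  local profile="$1"
  if ! grep -q '# jdk-env' "$profile" 2>/dev/null; then
    cat >> "$profile" <<'EOF'
# jdk-env
export JAVA_HOME=/usr/local/jdk
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
EOF
  fi
}
```

Run `append_jdk_env ~/.bash_profile && source ~/.bash_profile` as the kafka user; a second run changes nothing.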
2. Install and configure ZooKeeper
1) Download the package
https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz
2) Install and configure (ZooKeeper is required on all three servers)
tar -xvf zookeeper-3.4.13.tar.gz
mkdir /app
mv zookeeper-3.4.13 /app/zk
mkdir -p /app/zk/data
cp /app/zk/conf/zoo_sample.cfg /app/zk/conf/zoo_sample.cfg.bak20180903
cp /app/zk/conf/zoo_sample.cfg /app/zk/conf/zoo.cfg
mkdir -p /app/zk/data/zookeeper/
chown -R kafka.kafka /app/zk
Configuration notes:
server.id=host:port:port identifies each ZooKeeper server. As part of the ensemble, every server must know about the others, and it reads that information from the "server.id=host:port:port" entries. In each server's data directory (the directory set by dataDir), create a file named myid whose single line contains that server's own id; for example, server "1" writes "1" into its myid file. The id must be unique within the ensemble and lie between 1 and 255. In such an entry, the host part (an IP address here) is the server's address; the first port is the one followers use to connect to the leader, and the second port is used for leader election. So three ports matter in a cluster setup: clientPort=2181, 2888, and 3888.
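The id rules above can be checked before startup. A minimal sketch, assuming the paths used in this guide; the `check_myid` helper is hypothetical, not part of ZooKeeper:

```shell
# check_myid CFG MYID_FILE: verify the id is numeric, within 1..255,
# and that zoo.cfg contains a matching server.<id>= entry.
check_myid() {
  local cfg="$1" idfile="$2" id
  id=$(cat "$idfile")
  case "$id" in
    ''|*[!0-9]*) echo "myid must be numeric" >&2; return 1 ;;
  esac
  if [ "$id" -lt 1 ] || [ "$id" -gt 255 ]; then
    echo "myid $id out of range 1-255" >&2
    return 1
  fi
  grep -q "^server\.$id=" "$cfg" || { echo "no server.$id entry in $cfg" >&2; return 1; }
}
```

For example, `check_myid /app/zk/conf/zoo.cfg /app/zk/data/zookeeper/myid` returns non-zero if the node's id does not appear in the ensemble list.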
grep -n ^[a-Z] /app/zk/conf/zoo.cfg
2:tickTime=2000
5:initLimit=10
8:syncLimit=5
12:dataDir=/app/zk/data/zookeeper
13:dataLogDir=/app/zk/data/logs
15:clientPort=2181
18:maxClientCnxns=60
26:autopurge.snapRetainCount=3
29:autopurge.purgeInterval=1
30:server.1=172.16.8.246:2888:3888
31:server.2=172.16.8.247:2888:3888
32:server.3=172.16.8.248:2888:3888
Edit /app/zk/bin/zkServer.sh so the log directory is taken from dataLogDir and created if missing:
vim /app/zk/bin/zkServer.sh
125 ZOO_LOG_DIR="$($GREP "^[[:space:]]*dataLogDir" "$ZOOCFG" | sed -e 's/.*=//')"
126 if [ ! -w "$ZOO_LOG_DIR" ] ; then
127 mkdir -p "$ZOO_LOG_DIR"
128 fi
Before starting the ZooKeeper service, create a myid file on each of the three ZooKeeper nodes:
su - kafka
Node 1:
echo 1 > /app/zk/data/zookeeper/myid
[kafka@localhost ~]$ cat /app/zk/data/zookeeper/myid
1
Node 2:
echo 2 > /app/zk/data/zookeeper/myid
[kafka@localhost ~]$ cat /app/zk/data/zookeeper/myid
2
Node 3:
echo 3 > /app/zk/data/zookeeper/myid
[kafka@localhost ~]$ cat /app/zk/data/zookeeper/myid
3
Start the ZooKeeper service (run on each of the three nodes):
[kafka@localhost ~]$ /app/zk/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /app/zk/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
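The per-node myid step can be wrapped in a small helper so the same script runs on all three nodes with only the id changing. A sketch; `init_myid` is a hypothetical name:

```shell
# init_myid DATADIR ID: create the data directory (e.g.
# /app/zk/data/zookeeper) and write the node's id into myid.
init_myid() {
  local datadir="$1" id="$2"
  mkdir -p "$datadir"
  printf '%s\n' "$id" > "$datadir/myid"
}
```

On node 1 run `init_myid /app/zk/data/zookeeper 1`, on node 2 pass 2, and so on.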
[kafka@localhost ~]$ ps -ef|grep zookeeper kafka 2053 1 7 20:36 pts/0 00:00:00 /usr/local/jdk/bin/java -Dzookeeper.log.dir=/app/zk/data/logs -Dzookeeper.root.logger=INFO,CONSOLE -cp /app/zk/bin/../build/classes:/app/zk/bin/../build/lib/*.jar:/app/zk/bin/../lib/slf4j-log4j12-1.7.25.jar:/app/zk/bin/../lib/slf4j-api-1.7.25.jar:/app/zk/bin/../lib/netty-3.10.6.Final.jar:/app/zk/bin/../lib/log4j-1.2.17.jar:/app/zk/bin/../lib/jline-0.9.94.jar:/app/zk/bin/../lib/audience-annotations-0.5.0.jar:/app/zk/bin/../zookeeper-3.4.13.jar:/app/zk/bin/../src/java/lib/*.jar:/app/zk/bin/../conf:/app/zk/bin/../build/classes:/app/zk/bin/../build/lib/*.jar:/app/zk/bin/../lib/slf4j-log4j12-1.7.25.jar:/app/zk/bin/../lib/slf4j-api-1.7.25.jar:/app/zk/bin/../lib/netty-3.10.6.Final.jar:/app/zk/bin/../lib/log4j-1.2.17.jar:/app/zk/bin/../lib/jline-0.9.94.jar:/app/zk/bin/../lib/audience-annotations-0.5.0.jar:/app/zk/bin/../zookeeper-3.4.13.jar:/app/zk/bin/../src/java/lib/*.jar:/app/zk/bin/../conf:.:/usr/local/jdk/lib/dt.jar:/usr/local/jdk/lib/tools.jar -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /app/zk/bin/../conf/zoo.cfg
Check the ZooKeeper role on each of the three nodes
Node 1
[kafka@localhost ~]$ /app/zk/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /app/zk/bin/../conf/zoo.cfg
Mode: follower
Node 2
[kafka@localhost ~]$ /app/zk/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /app/zk/bin/../conf/zoo.cfg
Mode: follower
Node 3
[kafka@localhost ~]$ /app/zk/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /app/zk/bin/../conf/zoo.cfg
Mode: leader
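When scripting the status check above, it helps to poll until zkServer.sh reports a Mode line, since a freshly started node needs a moment before the quorum forms. A sketch; the `wait_for_mode` helper is my own convention and takes the status command as an argument:

```shell
# wait_for_mode CMD [RETRIES]: run CMD once per second until its output
# contains a "Mode:" line (leader or follower), or give up after RETRIES.
wait_for_mode() {
  local cmd="$1" retries="${2:-30}" i=0
  while [ "$i" -lt "$retries" ]; do
    if $cmd 2>/dev/null | grep -q '^Mode:'; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}
```

For example, `wait_for_mode "/app/zk/bin/zkServer.sh status" 30` blocks until this node has joined the ensemble or 30 seconds pass.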
3. Install and configure Kafka
Download the package
http://mirrors.shu.edu.cn/apache/kafka/2.0.0/kafka_2.12-2.0.0.tgz
tar -xvf kafka_2.12-2.0.0.tgz
mv kafka_2.12-2.0.0 /app/kafka
chown -R kafka.kafka /app/kafka
cp /app/kafka/config/server.properties /app/kafka/config/server.properties.bak20180903
[root@localhost app]# grep -n ^[a-Z] /app/kafka/config/server.properties
21:broker.id=1
22:delete.topic.enable=true
31:listeners=PLAINTEXT://172.16.8.246:9092
42:num.network.threads=3
45:num.io.threads=8
48:socket.send.buffer.bytes=102400
51:socket.receive.buffer.bytes=102400
54:socket.request.max.bytes=104857600
60:log.dirs=/app/kafka/data
65:num.partitions=1
69:num.recovery.threads.per.data.dir=1
74:offsets.topic.replication.factor=1
75:transaction.state.log.replication.factor=1
76:transaction.state.log.min.isr=1
90:log.flush.interval.messages=10000
93:log.flush.interval.ms=1000
103:log.retention.hours=168
107:log.retention.bytes=1073741824
110:log.segment.bytes=1073741824
114:log.retention.check.interval.ms=300000
123:zookeeper.connect=172.16.8.246:2181,172.16.8.247:2181,172.16.8.248:2181
126:zookeeper.connection.timeout.ms=6000
136:group.initial.rebalance.delay.ms=0
Copy the installation directory to the other two nodes
scp -r kafka 172.16.8.247:/app/
scp -r kafka 172.16.8.248:/app/
chown -R kafka.kafka /app/kafka
Adjust the relevant parameters on each of the three servers:
Node 1 (172.16.8.246):
vim /app/kafka/config/server.properties
broker.id=1
listeners=PLAINTEXT://172.16.8.246:9092
Node 2 (172.16.8.247):
vim /app/kafka/config/server.properties
broker.id=2
listeners=PLAINTEXT://172.16.8.247:9092
Node 3 (172.16.8.248):
vim /app/kafka/config/server.properties
broker.id=3
listeners=PLAINTEXT://172.16.8.248:9092
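The three per-node edits differ only in broker.id and the listener IP, so they can be stamped from one template with sed instead of editing by hand. A sketch; `render_broker_conf` is a hypothetical helper, and port 9092 follows this guide:

```shell
# render_broker_conf TEMPLATE ID IP OUT: copy a server.properties
# template, replacing broker.id and the PLAINTEXT listener address.
render_broker_conf() {
  local template="$1" id="$2" ip="$3" out="$4"
  sed -e "s/^broker\.id=.*/broker.id=$id/" \
      -e "s|^listeners=.*|listeners=PLAINTEXT://$ip:9092|" \
      "$template" > "$out"
}
```

On node 2, for example: `render_broker_conf /app/kafka/config/server.properties 2 172.16.8.247 /tmp/server.properties`, then move the result into place.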
Start the Kafka service on each node:
su - kafka
nohup /app/kafka/bin/kafka-server-start.sh /app/kafka/config/server.properties >/dev/null 2>&1 &
Create a test topic:
su - kafka
/app/kafka/bin/kafka-topics.sh --create --zookeeper 172.16.8.246:2181,172.16.8.247:2181,172.16.8.248:2181 --replication-factor 1 --partitions 1 --topic qas
Run the following on each of the three servers; if the output is qas, the cluster is working:
[kafka@localhost app]$ /app/kafka/bin/kafka-topics.sh --list --zookeeper 172.16.8.246:2181,172.16.8.247:2181,172.16.8.248:2181
qas
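The long --zookeeper connect string repeated in the commands above can be built from a host list to avoid typos. A sketch; `zk_connect` is my own helper name:

```shell
# zk_connect PORT HOST...: join hosts into the comma-separated
# host:port list expected by the --zookeeper flag.
zk_connect() {
  local port="$1"; shift
  local out="" h
  for h in "$@"; do
    out="${out:+$out,}$h:$port"
  done
  printf '%s\n' "$out"
}
```

For example: `/app/kafka/bin/kafka-topics.sh --list --zookeeper "$(zk_connect 2181 172.16.8.246 172.16.8.247 172.16.8.248)"`.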