Kafka Operations: Filling the Potholes (repost)

  • Scope: everything here applies to Kafka 0.9.0.1 only.
  • Although billed as "operations", this is really about troubleshooting specific problems.
  • Most of the solutions were found by searching the web; I merely collected them here.
  • Some of the fixes are not necessarily general; be careful before applying them in production.
  • Corrections and additions are very welcome.
  • Contents:
    1. Replicas fail to fetch messages from the leader
    2. Broker connections to the ZooKeeper cluster occasionally drop and reconnect
    3. Broker restarts take a very long time
    4. Disallowing unclean leader election causes a broker to be forcibly shut down
    5. Replicas fetch data from the wrong partition leader
    6. __consumer_offsets logs cannot be cleaned
    7. GC problems
    8. Deploying ZooKeeper and Kafka
    9. Monitoring matters
    10. Flood of exceptions: Attempted to decrease connection count for address with no connections
    11. Newer client SDKs talking to an older Kafka send requests the broker does not support
    12. Frequent full GC
    13. Swap usage on the machine

Replicas fail to fetch messages from the leader
  • Symptom: a topic in the cluster originally had a single replica. After increasing it to two replicas, some partitions never fetched data from the leader, so the newly added replicas never showed up in the ISR (a quick way to spot such partitions is sketched at the end of this section).
  • Log analysis:
[2017-09-20 19:37:05,265] ERROR Found invalid messages during fetch for partition [xxxx,87] offset 1503297 error Message is corrupt (stored crc = 286782282, computed crc = 400317671) (kafka.server.ReplicaFetcherThread)
[2017-09-20 19:37:05,458] ERROR Found invalid messages during fetch for partition [xxxx,75] offset 1501373 error Message found with corrupt size (0) in shallow iterator (kafka.server.ReplicaFetcherThread)
[2017-09-20 19:37:07,455] ERROR [ReplicaFetcherThread-0-5], Error due to (kafka.server.ReplicaFetcherThread)
kafka.common.KafkaException: error processing data for partition [xxxx,87] offset 1503346
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:147)
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:122)
        at scala.Option.foreach(Option.scala:257)
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:122)
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:120)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply$mcV$sp(AbstractFetcherThread.scala:120)
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:120)
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:120)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
        at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:118)
        at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:93)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
Caused by: java.lang.RuntimeException: Offset mismatch: fetched offset = 1503346, log end offset = 1503297.
        at kafka.server.ReplicaFetcherThread.processPartitionData(ReplicaFetcherThread.scala:110)
        at kafka.server.ReplicaFetcherThread.processPartitionData(ReplicaFetcherThread.scala:42)
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:138)
  • Solution:
    1. This is a bug in Kafka 0.9.0.1: "ReplicaFetcherThread stopped after ReplicaFetcherThread received a corrupted message".
    2. Upgrade to a newer version, or apply the simple fix the reporter describes in that issue to work around the problem.
  • Digging deeper:
    The bug is actually triggered by the following error:
    ERROR Found invalid messages during fetch for partition [qssnews_download,87] offset 1503297 error Message is corrupt (stored crc = 286782282, computed crc = 400317671) (kafka.server.ReplicaFetcherThread)
    When the bug fired, the corresponding broker machine happened to have developed several bad blocks on its disk, but I cannot be completely sure the CRC error was caused by them. There is also a related Kafka issue: "Replication issues".
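As an aside, a quick way to spot partitions whose followers have fallen out of the ISR is the --under-replicated-partitions flag of the stock topic tool. A minimal sketch, assuming a ZooKeeper ensemble at zk1:2181 (a placeholder address):

    # List partitions whose ISR is smaller than the replica set
    bin/kafka-topics.sh --zookeeper zk1:2181 --describe --under-replicated-partitions

    # Inspect one topic's replica assignment and ISR in detail
    bin/kafka-topics.sh --zookeeper zk1:2181 --describe --topic xxxx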
Broker connections to the ZooKeeper cluster occasionally drop and reconnect
  • Symptom: brokers re-establish their ZooKeeper session from time to time.
  • Log analysis: the broker log reports ZooKeeper connection timeouts, or that it cannot read any data from ZooKeeper.
  • Solution: increase the broker's ZooKeeper session timeout. This does not eliminate the problem entirely, but it helps a lot (see the config sketch after this list).
  • Digging deeper:
    1. The Kafka cluster we run is actually fairly stable, yet this ZooKeeper timeout problem remains a real puzzle.
       On startup a broker registers an ephemeral node in ZooKeeper to announce that it is online. Once the session times out, that ephemeral node is deleted, which is equivalent to the broker going offline and inevitably shakes the whole cluster; see "KafkaController分析8-broker挂掉" for details.
    2. Why the ZooKeeper sessions time out was never pinned down precisely; it appears to be related to many factors, such as disk I/O, CPU load, GC pauses, and so on.
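A minimal sketch of the broker settings involved, assuming server.properties is the active config file; the values are illustrative, not a recommendation:

    # server.properties (Kafka 0.9.x broker)
    # Raise the ZooKeeper session/connection timeouts so a transient stall (I/O, GC)
    # does not immediately expire the broker's ephemeral registration.
    zookeeper.session.timeout.ms=30000
    zookeeper.connection.timeout.ms=30000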
Broker restarts take a very long time
  • Symptom: restarting a broker is extremely time-consuming.
  • Log analysis: on restart the broker reloads all log segments and rebuilds their indexes.
  • Solution: this happens when the broker was not shut down gracefully, e.g. it was stopped with kill -9.
  • Digging deeper:
    1. To stop a broker, use the script Kafka ships with so that it shuts down gracefully (see the sketch after this list).
    2. When shutting a broker down, make sure the ZooKeeper ensemble it uses is healthy; otherwise the graceful shutdown may not complete.
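A minimal sketch of the graceful-shutdown side, assuming the broker is run from the standard distribution scripts; controlled.shutdown.enable already defaults to true in 0.9, and num.recovery.threads.per.data.dir only matters when recovering after an unclean shutdown:

    # server.properties
    controlled.shutdown.enable=true          # let the broker hand off leadership before exiting
    num.recovery.threads.per.data.dir=4      # speeds up segment recovery if an unclean shutdown does happen

    # Stop the broker with the bundled script (sends SIGTERM, not SIGKILL)
    bin/kafka-server-stop.sh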
Disallowing unclean leader election causes a broker to be forcibly shut down
  • Symptom: monitoring reports that one broker in the cluster has gone down.
  • Log analysis:
    [2016-02-25 00:29:39,236] FATAL [ReplicaFetcherThread-0-1], Halting because log truncation is not allowed for topic test, Current leader 1's latest offset 0 is less than replica 2's latest offset 151 (kafka.server.ReplicaFetcherThread)
  • Solution: the cluster had unclean.leader.election.enable=false set, which sends the broker down the following code path:
if (leaderEndOffset < replica.logEndOffset.messageOffset) {
  // Prior to truncating the follower's log, ensure that doing so is not disallowed by the configuration for unclean leader election.
  // This situation could only happen if the unclean election configuration for a topic changes while a replica is down. Otherwise,
  // we should never encounter this situation since a non-ISR leader cannot be elected if disallowed by the broker configuration.
  if (!LogConfig.fromProps(brokerConfig.originals, AdminUtils.fetchEntityConfig(replicaMgr.zkUtils,
      ConfigType.Topic, topicAndPartition.topic)).uncleanLeaderElectionEnable) {
    // Log a fatal error and shutdown the broker to ensure that data loss does not unexpectedly occur.
    fatal("...")
    Runtime.getRuntime.halt(1)
  }

The call to Runtime.getRuntime.halt(1) simply kills the process on the spot.
See the Kafka issue: Unclean leader election and "Halting because log truncation is not allowed". The topic-level override the code above reads is sketched below.
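For completeness, unclean.leader.election.enable can also be set per topic, which is exactly what the code above fetches via AdminUtils.fetchEntityConfig. A minimal sketch of inspecting and temporarily changing it with the stock tooling, assuming a placeholder ensemble zk1:2181; whether enabling unclean election is acceptable depends on how much data loss you can tolerate:

    # Show any per-topic overrides, including unclean.leader.election.enable
    bin/kafka-configs.sh --zookeeper zk1:2181 --entity-type topics --entity-name test --describe

    # Temporarily allow unclean election for this topic only (accepts possible data loss)
    bin/kafka-configs.sh --zookeeper zk1:2181 --entity-type topics --entity-name test \
        --alter --add-config unclean.leader.election.enable=true

    # Revert once the partition has a healthy leader again
    bin/kafka-configs.sh --zookeeper zk1:2181 --entity-type topics --entity-name test \
        --alter --delete-config unclean.leader.election.enable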

Replicas fetch data from the wrong partition leader
  • Symptom: several machines in the cluster raised disk-space alarms one after another. The space turned out to be taken by Kafka logs, and the broker logs contained large numbers of:
WARN [Replica Manager on Broker 3]: While recording the replica LEO, the partition [orderservice.production,0] hasn't been created. (kafka.server.ReplicaManager)

and

ERROR [ReplicaFetcherThread-0-58], Error for partition [reptest,0] to broker 58:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
  • Log analysis:
    Combining the logs above with the topic's current replica assignment and ISR, it was clear that a replica was fetching data from the wrong partition leader, which in theory should never happen. How we got there:
    1. Earlier, one machine in the cluster died because of a hardware failure, and I wanted to delete one partition that lived on it. Since Kafka itself does not support deleting a partition, I removed that partition's entry directly from the /brokers/topics/[topic] node in ZooKeeper, but the Kafka controller does not handle a shrinking partition count; see "KafkaController分析".
    2. To force the deletion of that partition I then also reassigned other partitions.
    3. After that I deleted the /controller ephemeral node in ZooKeeper as well.
    4. By that point even I had lost track of what state things were in.
    5. Then the broken machine was repaired and came back online, and the problem appeared.
  • Solution: restarted every broker once.
  • Digging deeper:
    1. The root cause was never fully confirmed; by the time the problem was noticed, the earlier Kafka debug logs had already been deleted.
    2. There is a similar Kafka issue: "can't create as many partitions as brokers exists".
    3. Avoid hand-editing Kafka-related nodes in ZooKeeper as much as possible; prefer the stock tooling (see the sketch after this list).
    4. It might be worth adding a "delete partition" feature to the Kafka source; it would not be too hard.
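A minimal sketch of the safer, tool-based way to inspect and move replicas instead of editing ZooKeeper by hand, again using a placeholder zk1:2181 ensemble and a reassignment plan in a file named reassign.json:

    # Check where each partition's replicas and leader currently live
    bin/kafka-topics.sh --zookeeper zk1:2181 --describe --topic orderservice.production

    # Move replicas with the reassignment tool instead of touching /brokers/topics/... directly
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 --reassignment-json-file reassign.json --execute
    bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 --reassignment-json-file reassign.json --verify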
__consumer_offsets logs cannot be cleaned
  • Symptom: disk-space alarms on several machines; one partition of __consumer_offsets had grown to tens of gigabytes.
  • Log analysis: the older logs had already been cleaned away, so nothing useful was left. To debug this I packed up the index and log files of that partition, copied them to a test cluster, restarted the broker there, and found the following:
[2017-09-30 10:49:36,126] ERROR [kafka-log-cleaner-thread-0], Error due to (kafka.log.LogCleaner)
java.lang.IllegalArgumentException: requirement failed: 138296566648 messages in segment __consumer_offsets-5/00000000000000000000.log but offset map can fit only 5033164. You can increase log.cleaner.dedupe.buffer.size or decrease log.cleaner.threads
        at scala.Predef$.require(Predef.scala:219)
        at kafka.log.Cleaner$$anonfun$buildOffsetMap$4.apply(LogCleaner.scala:584)
        at kafka.log.Cleaner$$anonfun$buildOffsetMap$4.apply(LogCleaner.scala:580)
        at scala.collection.immutable.Stream$StreamWithFilter.foreach(Stream.scala:570)
        at kafka.log.Cleaner.buildOffsetMap(LogCleaner.scala:580)
        at kafka.log.Cleaner.clean(LogCleaner.scala:322)
        at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:230)
        at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:208)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
  • Problem analysis:
    Reading the LogCleaner source shows that for the segment 00000000000000000000.log, segment.nextOffset() - segment.baseOffset exceeded maxDesiredMapSize, which terminated the log-cleaner thread, so the partition was never cleaned. That should not be possible, should it?!
val segmentSize = segment.nextOffset() - segment.baseOffset
require(segmentSize <= maxDesiredMapSize, "%d messages in segment %s/%s but offset map can fit only %d. You can increase log.cleaner.dedupe.buffer.size or decrease log.cleaner.threads"
  .format(segmentSize, log.name, segment.log.file.getName, maxDesiredMapSize))
if (map.size + segmentSize <= maxDesiredMapSize)
  offset = buildOffsetMapForSegment(log.topicAndPartition, segment, map)
else
  full = true
  • Solution: I could not think of anything better, so I brutally deleted 00000000000000000000.log and 00000000000000000000.index, removed the corresponding entry from cleaner-offset-checkpoint, and restarted the broker; log compaction then started working again (the config and the manual steps are sketched after this list).
  • Digging deeper:
    As for why segment.nextOffset() - segment.baseOffset of this log segment exceeded maxDesiredMapSize: my guess is that some application was committing offsets to this partition manually and without any throttling, pushing 8-9 MB of commits per second.
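The error message itself names the tunable involved. A minimal sketch of the configuration side plus the manual cleanup described above, assuming the log directory is /data/kafka-logs (a placeholder path); deleting segment files by hand is a last resort and loses that partition's compacted history:

    # server.properties: give the cleaner a larger dedupe buffer (value is illustrative)
    log.cleaner.dedupe.buffer.size=536870912
    log.cleaner.threads=1

    # Last-resort manual cleanup, with the broker stopped:
    rm /data/kafka-logs/__consumer_offsets-5/00000000000000000000.log
    rm /data/kafka-logs/__consumer_offsets-5/00000000000000000000.index
    # then remove the __consumer_offsets-5 entry from /data/kafka-logs/cleaner-offset-checkpoint
    # and restart the broker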
GC problems
  • Symptom: monitoring reports a broker as down, and there is no registration node for that broker in ZooKeeper.
  • Log analysis:
    1. The broker log reports ZooKeeper connection timeouts or an inability to read anything from ZooKeeper; the symptom is the same as in "Broker connections to the ZooKeeper cluster occasionally drop and reconnect" above.
    2. The broker's GC log shows very long collections at the corresponding times. During such a stop-the-world pause every broker thread is frozen, so of course the heartbeat to ZooKeeper cannot be kept up either.
  • Solution: no real fix yet. GC is a big headache; I searched around and found nothing that reliably works. My own expertise here is limited, so if you have a good approach, please leave a comment. Thanks! (The JVM settings we start from are sketched after this list.)
  • Addendum: on this topic, Dr. Zhuang's talk is worth watching: "OS 形成的长时间非典型 JVM GC 停顿:深度分析和解决".
  • The stop-the-world pauses caused by slow GC lead to many follow-on problems; we have also seen them cause OOM and a saturated listen backlog.
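For reference, a minimal sketch of the JVM settings to start from when chasing long pauses, passed via the environment variables that kafka-run-class.sh honors; the heap size is illustrative and the G1 flags are essentially the script's own defaults, so they need tuning per workload:

    # Environment for bin/kafka-server-start.sh; kafka-run-class.sh picks these up if set.
    # kafka-server-start.sh already writes GC logs to $LOG_DIR/kafkaServer-gc.log by default,
    # which is what lets you correlate pauses with ZooKeeper session expirations.
    export KAFKA_HEAP_OPTS="-Xms6g -Xmx6g"
    export KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 \
        -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC -Djava.awt.headless=true"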
Deploying ZooKeeper and Kafka
  • If ZooKeeper and the Kafka brokers are deployed on the same machines, put their data and log directories on different disks to avoid competing for disk I/O.
  • Kafka is very sensitive to ZooKeeper hiccups, so ZooKeeper is best deployed on dedicated machines to keep it stable.
  • Avoid heavy write traffic to ZooKeeper; every ZooKeeper write is ultimately forwarded to the ZooKeeper leader.
  • If ZooKeeper and the brokers are co-located and there is a lot of ZooKeeper write traffic (for example an older Storm version committing offsets to ZooKeeper), ZooKeeper's disk I/O climbs. Starting ZooKeeper with zookeeper.forceSync=no lowers its write I/O, but the setting has side effects of its own, so be careful before using it in production (see the sketch after this list).
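A minimal sketch of what that looks like, assuming ZooKeeper is started via zkServer.sh and picks up extra JVM flags from a conf/java.env file (a common but not universal layout); the side effect to weigh is that with forceSync=no a ZooKeeper crash can lose the most recent transactions:

    # conf/java.env for ZooKeeper: skip the fsync of the transaction log on every commit.
    # Trades durability on crash for much lower write I/O -- use with care in production.
    export SERVER_JVMFLAGS="-Dzookeeper.forceSync=no"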
Monitoring matters
  • Live end-to-end monitoring: create a dedicated topic on the cluster and have a monitor write to it continuously, alerting when writes fail or write latency crosses a threshold. This canary is genuinely useful; it has caught almost every incident first.
  • Basic host monitoring: CPU, disk I/O, network traffic, file descriptors, connection counts, and so on.
  • Topic traffic monitoring: watch per-topic produce and consume traffic, especially sudden spikes, so the offending client can be found quickly. The data can be pulled from Kafka's JMX metrics and displayed/alerted on with Grafana (see the sketch after this list).
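A minimal sketch of pulling one such metric with the JmxTool bundled with Kafka, assuming the broker was started with JMX_PORT=9999 and that 'mytopic' and 'broker-host' are placeholders:

    # Per-topic inbound byte rate, sampled every 5 seconds
    bin/kafka-run-class.sh kafka.tools.JmxTool \
        --object-name 'kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=mytopic' \
        --jmx-url service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi \
        --reporting-interval 5000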
Flood of exceptions: Attempted to decrease connection count for address with no connections
  • Symptom: a disk-space alarm on one broker's machine; server.log had grown very large.
  • Log analysis: the log was being flooded with entries like:
[2016-10-13 00:00:00,495] ERROR Processor got uncaught exception. (kafka.network.Processor)
java.lang.IllegalArgumentException: Attempted to decrease connection count for address with no connections, address: /xxx.xxx.xxx.xxx
        at kafka.network.ConnectionQuotas$$anonfun$9.apply(SocketServer.scala:565)
        at kafka.network.ConnectionQuotas$$anonfun$9.apply(SocketServer.scala:565)
        at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
        at scala.collection.AbstractMap.getOrElse(Map.scala:59)
        at kafka.network.ConnectionQuotas.dec(SocketServer.scala:564)
        at kafka.network.Processor$$anonfun$run$13.apply(SocketServer.scala:450)
        at kafka.network.Processor$$anonfun$run$13.apply(SocketServer.scala:445)
        at scala.collection.Iterator$class.foreach(Iterator.scala:742)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at kafka.network.Processor.run(SocketServer.scala:445)
        at java.lang.Thread.run(Thread.java:745)
Newer client SDKs talking to an older Kafka send requests the broker does not support
  • Symptom: the log contains a large number of entries like:
[2017-10-12 16:52:38,141] ERROR Processor got uncaught exception. (kafka.network.Processor)
java.lang.ArrayIndexOutOfBoundsException: 18
        at org.apache.kafka.common.protocol.ApiKeys.forId(ApiKeys.java:68)
        at org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:39)
        at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:79)
        at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:426)
        at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
        at scala.collection.Iterator$class.foreach(Iterator.scala:742)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at kafka.network.Processor.run(SocketServer.scala:421)
        at java.lang.Thread.run(Thread.java:745)
  • Analysis:
    1. The Kafka version in use is 0.9.0.1, whose highest supported request id is 16. Id 18 is the ApiVersions request introduced by newer Kafka clients, hence the exception.
    2. Following the code into SocketServer:
try {
  val channel = selector.channel(receive.source)
  val session = RequestChannel.Session(new KafkaPrincipal(KafkaPrincipal.USER_TYPE, channel.principal.getName),
    channel.socketAddress)
  val req = RequestChannel.Request(processor = id, connectionId = receive.source, session = session,
    buffer = receive.payload, startTimeMs = time.milliseconds, securityProtocol = protocol)
  requestChannel.sendRequest(req)
} catch {
  case e @ (_: InvalidRequestException | _: SchemaException) =>
    // note that even though we got an exception, we can assume that receive.source is valid.
    // Issues with constructing a valid receive object were handled earlier
    error("Closing socket for " + receive.source + " because of error", e)
    isClose = true
    close(selector, receive.source)
}

The exception is not handled where the request is processed, so it propagates to the outer try...catch and the processor moves straight on to the next round of selector.poll(300). That poll clears all previously received requests, so in this situation some requests can go unprocessed, which looks like a fairly serious problem.

  • Fix:
    1. A simple patch:
selector.completedReceives.asScala.foreach { receive =>
  var isClose = false
  try {
    val channel = selector.channel(receive.source)
    val session = RequestChannel.Session(new KafkaPrincipal(KafkaPrincipal.USER_TYPE, channel.principal.getName),
      channel.socketAddress)
    val req = RequestChannel.Request(processor = id, connectionId = receive.source, session = session,
      buffer = receive.payload, startTimeMs = time.milliseconds, securityProtocol = protocol)
    requestChannel.sendRequest(req)
  } catch {
    case e @ (_: InvalidRequestException | _: SchemaException) =>
      // note that even though we got an exception, we can assume that receive.source is valid.
      // Issues with constructing a valid receive object were handled earlier
      error("Closing socket for " + receive.source + " because of error", e)
      isClose = true
      close(selector, receive.source)
    case e: ArrayIndexOutOfBoundsException =>
      error("NotSupport Request | Closing socket for " + receive.source + " because of error", e)
      isClose = true
      close(selector, receive.source)
  }
  if (!isClose) {
    selector.mute(receive.source)
  }
}
    2. There is also a related Kafka issue, "Broker does not disconnect client on unknown request", whose fix is considerably larger.
Frequent full GC
  • Symptom: a Kafka broker stops doing any work; nothing is written to the logs and the whole process hangs.
  • Analysis: kafkaServer-gc.log shows repeated full GCs with no memory being reclaimed, which points to a memory leak.
    We found "SocketServer inflightResponses collection leaks memory on client disconnect": inflightResponses caches responses that still need to be sent (or have not finished sending), and each response also holds a reference to its request, so under heavy request traffic it pins a lot of memory.
    In the 0.9.0.1 code, entries are removed from inflightResponses only in completedSends; nothing removes them in disconnected or close.
  • Fix:
    1. The brute-force option: drop the inflightResponses collection altogether, at the cost of breaking the metrics that depend on it.
    2. The cleaner option: follow the latest Kafka code and also remove entries in disconnected and close.
Swap usage on the machine
  • Use machines with plenty of memory and disable swap (see the commands sketched below).
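A minimal sketch of disabling swap on Linux; if turning it off entirely is not an option, lowering vm.swappiness is the usual compromise:

    # Turn swap off now; remove swap entries from /etc/fstab to keep it off across reboots
    swapoff -a

    # If swap must stay enabled, at least make the kernel very reluctant to use it
    sysctl -w vm.swappiness=1
    echo "vm.swappiness=1" >> /etc/sysctl.conf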

Kafka源码分析-汇总 (the author's Kafka source-code analysis series)

Author: 扫帚的影子  Link: https://www.jianshu.com/p/d2cbaae38014  Source: 简书 (Jianshu). Copyright belongs to the author; for any form of reproduction, please contact the author for permission and credit the source.