Preface: "Letters from Readers" is a Q&A column opened by this blog, aimed at helping more readers solve the HBase-related problems they commonly run into at work. We will do our best to solve these problems, or at least post a call for help on your behalf; the hope is that this becomes a small platform where people help each other. If you have a question, just leave a message in the backend of the official account, and if you have a good solution, please don't keep it to yourself. Everyone is sincerely welcome to discuss solutions in the comments section and speak their mind freely. The problem you help someone else solve today may be the answer to one you run into tomorrow.
While restarting an HBase cluster, all of the RegionServer nodes came up successfully, but the HMaster would not start. The error log is as follows:
unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Packet len4745468 is out of range!
    at org.apache.zookeeper.ClientCnxnSocket.readLength(ClientCnxnSocket.java:112)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:79)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2020-04-02 22:31:08,673 ERROR [hadoop01:16000.activeMasterManager] zookeeper.RecoverableZooKeeper: ZooKeeper getChildren failed after 4 attempts
2020-04-02 22:31:08,674 FATAL [hadoop01:16000.activeMasterManager] master.HMaster: Failed to become active master
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/region-in-transition
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren(RecoverableZooKeeper.java:295)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenNoWatch(ZKUtil.java:513)
    at org.apache.hadoop.hbase.master.AssignmentManager.processDeadServersAndRegionsInTransition(AssignmentManager.java:519)
    at org.apache.hadoop.hbase.master.AssignmentManager.joinCluster(AssignmentManager.java:494)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:748)
    at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:184)
    at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1729)
    at java.lang.Thread.run(Thread.java:748)
Looking at the error log, only ZooKeeper seems to be involved. The keywords are [ZooKeeper.getChildren | Packet | out of range | ConnectionLoss for /hbase/region-in-transition].
As we know, when the HBase Master restarts it has to do a lot of initialization work and interact with ZooKeeper znodes: registering, updating, and reading metadata and node state, and so on. From those keywords we can roughly guess what happened: when ZooKeeper executed getChildren on region-in-transition, the packet exceeded the allowed range, the connection was lost, and the master "Failed to become active master".
So what exactly is a Packet? A quick search of Baidu gives the following answer:
In ZooKeeper, a Packet is the smallest unit of the communication protocol, i.e. a data packet. Packets are used for network transport between client and server; anything that needs to be transmitted must be wrapped into a Packet object.
So there is a limit on the packet length when reading a znode. The natural next step is to check whether ZooKeeper has a parameter that can be tuned for this, and it turns out there is one: jute.maxbuffer. Quoting the official documentation on this parameter:
(Java system property: jute.maxbuffer)
This option can only be set as a Java system property. There is no zookeeper prefix on it. It specifies the maximum size of the data that can be stored in a znode. The default is 0xfffff, or just under 1M. If this option is changed, the system property must be set on all servers and clients otherwise problems will arise. This is really a sanity check. ZooKeeper is designed to store data on the order of kilobytes in size.
There is also another way of putting it:
Note that this parameter does not only take effect when it is set on both the server and the client at the same time. In practice, setting it on the client side controls the size of data read from the server (outgoingBuffer), while setting it on the server side controls the size of data written in from the client (incomingBuffer).
The relevant code is as follows:
protected final ByteBuffer lenBuffer = ByteBuffer.allocateDirect(4);
protected ByteBuffer incomingBuffer = lenBuffer;

protected void readLength() throws IOException {
    int len = incomingBuffer.getInt();
    if (len < 0 || len >= ClientCnxn.packetLen) {
        throw new IOException("Packet len" + len + " is out of range!");
    }
    incomingBuffer = ByteBuffer.allocate(len);
}

public static final int packetLen = Integer.getInteger("jute.maxbuffer", 4096 * 1024);
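For reference, the default here is 4096 * 1024 = 4,194,304 bytes, and the length from the log, 4,745,468, fails the len >= ClientCnxn.packetLen check, which is exactly the IOException we see in the stack trace above.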
So why was it reading such a large packet? Given the keyword /hbase/region-in-transition mentioned above (the znode that tracks regions waiting to be assigned) and the scale of the cluster (120,000+ regions), our guess is that there were simply too many regions, which made the /hbase/region-in-transition znode too large, so the HMaster exceeded the limit and failed while reading it. We also found a related issue in the HBase JIRA:
Cluster with too many regions cannot withstand some master failover scenarios
https://issues.apache.org/jira/browse/HBASE-4246
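A quick back-of-the-envelope check supports the guess: 4,745,468 bytes divided by 120,000 regions is roughly 40 bytes per child, which is about what a 32-character encoded region name plus per-entry serialization overhead costs. The child count can also be confirmed directly without tripping the same error, because it is carried in the znode's Stat rather than in a getChildren() reply. A minimal sketch (the quorum address zk1:2181 is a placeholder and RitSizeCheck is just an illustrative class name):

import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class RitSizeCheck {
    public static void main(String[] args) throws Exception {
        // connect with a throwaway watcher; adjust the quorum address to your cluster
        ZooKeeper zk = new ZooKeeper("zk1:2181", 30000, event -> { });
        try {
            Stat stat = zk.exists("/hbase/region-in-transition", false);
            if (stat == null) {
                System.out.println("znode does not exist");
            } else {
                // each child is roughly an encoded region name plus a 4-byte length prefix
                int children = stat.getNumChildren();
                System.out.printf("children=%d, estimated getChildren reply ~%d bytes%n",
                        children, children * 40L);
            }
        } finally {
            zk.close();
        }
    }
}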
Most of the time we are not the first to step into a given puddle. The problem you help someone else solve today may be the answer to one you run into tomorrow. That is also why this column, "Letters from Readers", exists in the first place: to spread and share knowledge better.
Of course, /region-in-transition is not the only znode with this problem; nodes such as /unassigned can run into the same thing. The solutions can be summarized as follows:
Solution 1: clean up the historical garbage data in the znode
The idea is to bring the znode's data size back down, hopefully below the red line.
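Which entries count as garbage is entirely cluster-specific, so the sketch below (deleteStaleChildren and the list it takes are hypothetical) only shows the ZooKeeper-API mechanics of removing children you have already determined to be stale; note that an admin client listing the children of the oversized node would first need its own jute.maxbuffer raised.

import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class RitCleanup {
    // Illustrative only: which children are safe to delete depends on your cluster state.
    public static void deleteStaleChildren(ZooKeeper zk, List<String> staleEncodedNames)
            throws KeeperException, InterruptedException {
        for (String name : staleEncodedNames) {
            String path = "/hbase/region-in-transition/" + name;
            try {
                zk.delete(path, -1);  // version -1 = delete regardless of znode version
                System.out.println("deleted " + path);
            } catch (KeeperException.NoNodeException e) {
                // already gone, nothing to do
            }
        }
    }
}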
Solution 2: increase the jute.maxbuffer parameter
# Client side
$ vim $ZOOKEEPER_HOME/bin/zkCli.sh
# add a -Djute.maxbuffer=<buffer_size> option
"$JAVA" "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" "-Djute.maxbuffer=1073741824" \
     -cp "$CLASSPATH" $CLIENT_JVMFLAGS $JVMFLAGS \
     org.apache.zookeeper.ZooKeeperMain "$@"

# Server side
$ vim $ZOOKEEPER_HOME/conf/zoo.cfg
# add a jute.maxbuffer=<buffer_size> entry
jute.maxbuffer=1073741824
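Note that the client actually hitting the limit in this incident is the HMaster's own ZooKeeper connection, so the system property also has to reach the HBase master JVM; one way (assuming a standard hbase-env.sh) is to append -Djute.maxbuffer=<buffer_size> to HBASE_MASTER_OPTS and restart the master.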
Raising this parameter carries some risk: as noted above, ZooKeeper is designed to store data on the order of kilobytes.
方案三:使用层次结构(来自社区评论区)
该方案是经过区域ID的前缀将·
/hbase/region-in-transition
目录分片。例如,区域1234567890abcdef
将位于/hbase/region-in-transition/1234/1234567890abcdef
中。所以,咱们必须进行遍历才能得到完整列表。
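The idea is easiest to see in code. Below is a minimal sketch of such a sharded layout (purely an illustration of the proposal from the JIRA comments, not how HBase itself stores these znodes; ShardedRit and its methods are made up for the example), using the first 4 characters of the encoded region name as the shard:

import java.util.ArrayList;
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ShardedRit {
    private static final String ROOT = "/hbase/region-in-transition";

    // shard by the first 4 characters of the encoded region name
    static String shardPath(String encodedRegionName) {
        return ROOT + "/" + encodedRegionName.substring(0, 4);
    }

    static void createEntry(ZooKeeper zk, String encodedRegionName, byte[] data)
            throws KeeperException, InterruptedException {
        String shard = shardPath(encodedRegionName);
        try {
            zk.create(shard, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        } catch (KeeperException.NodeExistsException e) {
            // shard already created, nothing to do
        }
        zk.create(shard + "/" + encodedRegionName, data,
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    }

    // the price of sharding: listing everything now takes a two-level traversal,
    // but no single getChildren() reply grows without bound
    static List<String> listAll(ZooKeeper zk) throws KeeperException, InterruptedException {
        List<String> all = new ArrayList<>();
        for (String shard : zk.getChildren(ROOT, false)) {
            all.addAll(zk.getChildren(ROOT + "/" + shard, false));
        }
        return all;
    }
}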
Please credit the source when reposting! You are welcome to follow my WeChat official account 【HBase工作笔记】 (HBase Work Notes).