HBase: Problems Encountered and Solutions

1. ZooKeeper fails to start

Error log

Starting ZooKeeper fails with the following error:

java.net.NoRouteToHostException: No route to host
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:402)
        at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:840)
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:762)
2015-05-19 10:26:26,983 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@849] - Notification time out: 800

Solution

The root cause is that the firewall had not been disabled on the ZooKeeper cluster nodes.
The error persisted even after running the following command:
systemctl stop iptables.service
A closer look showed that CentOS 7.0 uses firewalld as its default firewall, so the following commands are needed to disable it:
systemctl stop firewalld.service      # stop firewalld
systemctl disable firewalld.service   # keep firewalld from starting at boot
After disabling the firewall on every node, restart the ZooKeeper processes and the error goes away.
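To confirm the fix on each node, check that firewalld is down and that the peers' ZooKeeper ports answer. A minimal sketch, assuming a peer named node2 (a placeholder) and the default ZooKeeper ports 2181 (client), 2888 (quorum), and 3888 (leader election):

systemctl status firewalld.service   # should now report "inactive (dead)"
for port in 2181 2888 3888; do
    timeout 1 bash -c "echo > /dev/tcp/node2/$port" && echo "port $port reachable"
done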

2. RegionServer process dies

Error log

Caused by: java.io.IOException: Couldn't set up IO streams
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:786)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
        at org.apache.hadoop.ipc.Client.call(Client.java:1438)
        ... 60 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:713)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:779)

Solution

The "OutOfMemoryError: unable to create new native thread" does not mean the heap is exhausted; it means the operating system refused to create another thread, usually because the per-user process limit (nproc) is too low. Raise the nproc and nofile limits in both of the following files (entries in limits.d override limits.conf, so the cap must be raised in both):

/etc/security/limits.conf
/etc/security/limits.d/90-nproc.conf

* hard    nproc   65536
* soft    nproc   65536
* hard    nofile  65536
* soft    nofile  65536
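After editing the two files, open a fresh session as the user that runs the RegionServer and confirm the new limits are active, then restart the RegionServer. A minimal check, assuming the RegionServer runs as a user named hbase (a placeholder):

su - hbase    # limits are applied at login, so a new session is required
ulimit -u     # max user processes; should now print 65536
ulimit -n     # max open files; should now print 65536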

3. Increasing the number of Thrift server threads

Log messages

2015-06-05 12:46:37,756 INFO  [thrift-worker-6] client.AsyncProcess: #79, waiting for 72000  actions to finish
2015-06-05 12:46:37,756 INFO  [thrift-worker-9] client.AsyncProcess: #79, waiting for 48908  actions to finish
2015-06-05 12:46:37,855 INFO  [thrift-worker-8] client.AsyncProcess: #79, waiting for 72000  actions to finish
2015-06-05 12:46:38,198 INFO  [thrift-worker-2] client.AsyncProcess: #1, waiting for 78000  actions to finish
2015-06-05 12:46:38,762 INFO  [thrift-worker-13] client.AsyncProcess: #79, waiting for 72000  actions to finish
2015-06-05 12:46:39,547 INFO  [thrift-worker-0] client.AsyncProcess: #17, waiting for 78000  actions to finish
2015-06-05 12:47:55,612 INFO  [thrift-worker-9] client.AsyncProcess: #79, waiting for 108000  actions to finish
2015-06-05 12:47:55,912 INFO  [thrift-worker-6] client.AsyncProcess: #79, waiting for 114000  actions to finish

Solution

Increase the number of Thrift server worker threads:

hbase-daemon.sh start thrift --threadpool -m 200 -w 500

The startup log in the logs directory under the HBase home directory then shows:

INFO  [main] thrift.ThriftServerRunner: starting TBoundedThreadPoolServer on /0.0.0.0:9090; min worker threads=200, max worker threads=500, max queued requests=1000

In the command above, --threadpool selects the bounded thread-pool server implementation, -m sets the minimum number of worker threads, and -w sets the maximum, as the "min worker threads=200, max worker threads=500" in the log line confirms.
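To double-check that a running Thrift server picked up the new pool sizes, you can also grep its log directly; the file name pattern below follows the usual hbase-<user>-thrift-<host>.log convention but may differ in your setup:

grep TBoundedThreadPoolServer $HBASE_HOME/logs/hbase-*-thrift-*.log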

4. ZooKeeper uses too much memory

Diagnosis

  • Run jps to find the PID of the QuorumPeerMain process
  • Run jmap -heap PID to inspect the process's memory usage; the output is as follows:
Attaching to process ID 6801, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 24.71-b01

using thread-local object allocation.
Parallel GC with 18 thread(s)

Heap Configuration:
   MinHeapFreeRatio = 0
   MaxHeapFreeRatio = 100
   MaxHeapSize      = 32126271488 (30638.0MB)
   NewSize          = 1310720 (1.25MB)
   MaxNewSize       = 17592186044415 MB
   OldSize          = 5439488 (5.1875MB)
   NewRatio         = 2
   SurvivorRatio    = 8
   PermSize         = 21757952 (20.75MB)
   MaxPermSize      = 85983232 (82.0MB)
   G1HeapRegionSize = 0 (0.0MB)

Heap Usage:
PS Young Generation
Eden Space:
   capacity = 537919488 (513.0MB)
   used     = 290495976 (277.0385513305664MB)
   free     = 247423512 (235.9614486694336MB)
   54.00361624377513% used
From Space:
   capacity = 89128960 (85.0MB)
   used     = 0 (0.0MB)
   free     = 89128960 (85.0MB)
   0.0% used
To Space:
   capacity = 89128960 (85.0MB)
   used     = 0 (0.0MB)
   free     = 89128960 (85.0MB)
   0.0% used
PS Old Generation
   capacity = 1431306240 (1365.0MB)
   used     = 0 (0.0MB)
   free     = 1431306240 (1365.0MB)
   0.0% used
PS Perm Generation
   capacity = 22020096 (21.0MB)
   used     = 9655208 (9.207923889160156MB)
   free     = 12364888 (11.792076110839844MB)
   43.847256615048366% used

3259 interned Strings occupying 265592 bytes.
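The striking value above is a MaxHeapSize of roughly 30 GB even though nothing was configured: by default HotSpot caps the maximum heap at about 1/4 of physical RAM, so on a large-memory machine an untuned ZooKeeper can reserve a huge heap. Two generic commands (not ZooKeeper-specific) show the node's RAM and the default the JVM derives from it:

free -g                                                    # physical memory on the node
java -XX:+PrintFlagsFinal -version | grep -i maxheapsize   # default MaxHeapSize the JVM computed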

Solution

Method 1

The Heap Configuration block shows that the configured heap is very large. To shrink it, modify the zkServer.sh script to lower MaxHeapSize, as follows:
* Open zkServer.sh in the bin directory of the ZooKeeper installation.
* At around line 49 of zkServer.sh, add: JVMPARAM="-Xms1000M -Xmx1000M -Xmn512M"
* Then modify the launch command at around lines 109-110 of zkServer.sh.

Before the change:

nohup "$JAVA" "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \
 -cp "$CLASSPATH" $JVMFLAGS $ZOOMAIN "$ZOOCFG" > "$_ZOO_DAEMON_OUT" 2>&1 < /dev/null &

After the change (the JVMPARAM variable defined at line 49 is appended after JVMFLAGS):

nohup "$JAVA" "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \
-cp "$CLASSPATH" $JVMFLAGS $JVMPARAM $ZOOMAIN "$ZOOCFG" > "$_ZOO_DAEMON_OUT" 2>&1 < /dev/null &
  • Restart the ZooKeeper process
  • Run jmap -heap PID to inspect memory usage after the change:
Attaching to process ID 6207, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 24.71-b01

using thread-local object allocation.
Parallel GC with 18 thread(s)

Heap Configuration:
   MinHeapFreeRatio = 0
   MaxHeapFreeRatio = 100
   MaxHeapSize      = 1048576000 (1000.0MB)
   NewSize          = 536870912 (512.0MB)
   MaxNewSize       = 536870912 (512.0MB)
   OldSize          = 5439488 (5.1875MB)
   NewRatio         = 2
   SurvivorRatio    = 8
   PermSize         = 21757952 (20.75MB)
   MaxPermSize      = 85983232 (82.0MB)
   G1HeapRegionSize = 0 (0.0MB)

Heap Usage:
PS Young Generation
Eden Space:
   capacity = 402653184 (384.0MB)
   used     = 104690912 (99.84103393554688MB)
   free     = 297962272 (284.1589660644531MB)
   26.000269254048664% used
From Space:
   capacity = 67108864 (64.0MB)
   used     = 0 (0.0MB)
   free     = 67108864 (64.0MB)
   0.0% used
To Space:
   capacity = 67108864 (64.0MB)
   used     = 0 (0.0MB)
   free     = 67108864 (64.0MB)
   0.0% used
PS Old Generation
   capacity = 511705088 (488.0MB)
   used     = 0 (0.0MB)
   free     = 511705088 (488.0MB)
   0.0% used
PS Perm Generation
   capacity = 22020096 (21.0MB)
   used     = 8878832 (8.467514038085938MB)
   free     = 13141264 (12.532485961914062MB)
   40.321495419456845% used

2697 interned Strings occupying 216760 bytes.
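The output confirms the change took effect: MaxHeapSize is now 1000 MB and NewSize/MaxNewSize are 512 MB, matching -Xms1000M -Xmx1000M -Xmn512M. Because the JVM honors the last occurrence of a repeated flag, appending $JVMPARAM after $JVMFLAGS lets it override anything JVMFLAGS already set. For a quicker spot-check than a full jmap dump, something like the following also works:

jinfo -flag MaxHeapSize $(jps | awk '/QuorumPeerMain/{print $1}')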

Method 2

Open the zookeeper/bin/zkEnv.sh file; at around lines 49-52 of zkEnv.sh you will find:

if [ -f "$ZOOCFGDIR/java.env" ]
then
    . "$ZOOCFGDIR/java.env"
fi

This block shows that ZooKeeper sources a separate JVM settings file, zookeeper/conf/java.env, if it exists.
A fresh installation does not include java.env, so create it yourself:
* vim java.env
* Give java.env the following content:

#!/bin/sh
# heap size MUST be modified according to cluster environment
export JVMFLAGS="-Xms512m -Xmx1024m $JVMFLAGS"
  • Restart the ZooKeeper process
  • Run jmap -heap PID to inspect memory usage after the change:
[hadoop@hadoop202 conf]$ jmap -heap  10151
Attaching to process ID 10151, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 24.71-b01

using thread-local object allocation.
Parallel GC with 18 thread(s)

Heap Configuration:
   MinHeapFreeRatio = 0
   MaxHeapFreeRatio = 100
   MaxHeapSize      = 1073741824 (1024.0MB)
   NewSize          = 1310720 (1.25MB)
   MaxNewSize       = 17592186044415 MB
   OldSize          = 5439488 (5.1875MB)
   NewRatio         = 2
   SurvivorRatio    = 8
   PermSize         = 21757952 (20.75MB)
   MaxPermSize      = 85983232 (82.0MB)
   G1HeapRegionSize = 0 (0.0MB)

Heap Usage:
PS Young Generation
Eden Space:
   capacity = 135266304 (129.0MB)
   used     = 43287608 (41.28227996826172MB)
   free     = 91978696 (87.71772003173828MB)
   32.00176741725715% used
From Space:
   capacity = 22020096 (21.0MB)
   used     = 0 (0.0MB)
   free     = 22020096 (21.0MB)
   0.0% used
To Space:
   capacity = 22020096 (21.0MB)
   used     = 0 (0.0MB)
   free     = 22020096 (21.0MB)
   0.0% used
PS Old Generation
   capacity = 358088704 (341.5MB)
   used     = 0 (0.0MB)
   free     = 358088704 (341.5MB)
   0.0% used
PS Perm Generation
   capacity = 22020096 (21.0MB)
   used     = 8887040 (8.475341796875MB)
   free     = 13133056 (12.524658203125MB)
   40.358770461309526% used

2699 interned Strings occupying 217008 bytes.
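Note one difference from Method 1: java.env here sets only -Xms/-Xmx, so NewSize and MaxNewSize stay at the JVM defaults (hence the odd MaxNewSize value above). If you also want to pin the young generation, extend the exported flags, for example (a hypothetical variant, not from the original article):

export JVMFLAGS="-Xms512m -Xmx1024m -Xmn256m $JVMFLAGS"   # adds -Xmn on top of the example above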

5. A bad hosts file keeps the HBase cluster from starting

Error logs

HMaster error log

2015-08-27 02:23:40,681 INFO  [hadoop115:16020.activeMasterManager] master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 3149329 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms.
2015-08-27 02:23:42,186 INFO  [hadoop115:16020.activeMasterManager] master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 3150833 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms.
2015-08-27 02:23:43,690 INFO  [hadoop115:16020.activeMasterManager] master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 3152338 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms.
2015-08-27 02:23:45,195 INFO  [hadoop115:16020.activeMasterManager] master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 3153843 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms.
2015-08-27 02:23:46,699 INFO  [hadoop115:16020.activeMasterManager] master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 3155347 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms.
2015-08-27 02:23:48,204 INFO  [hadoop115:16020.activeMasterManager] master.ServerManager: Waiting f

RegionServer error log

2015-08-27 14:21:22,770 WARN  [regionserver/hadoop116/127.0.0.1:16020] regionserver.HRegionServer: error telling master we are up
com.google.protobuf.ServiceException: java.net.SocketException: Invalid argument
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:231)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
        at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:8277)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2132)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:826)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Invalid argument
        at sun.nio.ch.Net.connect0(Native Method)
        at sun.nio.ch.Net.connect(Net.java:465)
        at sun.nio.ch.Net.connect(Net.java:457)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:670)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:403)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:709)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:880)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:849)
        at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1173)
        at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)

Solution

Check /etc/hosts. The line below is the culprit: it maps the hostname hadoop116 to 127.0.0.1, which is why the RegionServer (note the regionserver/hadoop116/127.0.0.1:16020 thread name in the log above) reports the loopback address to the master:

127.0.0.1 hadoop116 localhost.localdomain localhost4 localhost4.localdomain4

Change it to the following, restart the cluster, and the problem is resolved:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
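Before restarting, the name resolution can be verified directly; hadoop116 should now resolve to the machine's real NIC address rather than the loopback:

getent hosts hadoop116   # should no longer print 127.0.0.1
hostname -i              # should likewise print the real address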

The same problem is described in the Stack Overflow question "HBase RegionServer: error telling master we are up":
check your /etc/hosts file, if there is something like

127.0.0.1 localhost yourhost

change it to

127.0.0.1 localhost
192.168.1.1 yourhost
