Common Hadoop Errors

1. Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown

2016-01-05 23:03:32,967 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [192.168.10.31:8485, 192.168.10.32:8485, 192.168.10.33:8485], stream=null))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
192.168.10.31:8485: Call From bdata4/192.168.10.34 to bdata1:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.10.33:8485: Call From bdata4/192.168.10.34 to bdata3:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.10.32:8485: Call From bdata4/192.168.10.34 to bdata2:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createNewUniqueEpoch(QuorumJournalManager.java:182)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnfinalizedSegments(QuorumJournalManager.java:436)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$8.apply(JournalSet.java:624)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:621)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1394)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1151)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1658)
        at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
        at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1536)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1335)
        at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
        at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)

2016-01-05 23:03:32,968 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1

Cause:

When we run start-dfs.sh, the default start order is namenode > datanode > journalnode > zkfc. If the JournalNodes and the NameNodes are not started on the same machine, network latency can easily prevent the NN from connecting to the JNs, so the election cannot complete, and the freshly started active NameNode suddenly dies, leaving only a standby. The NN does have a retry mechanism at startup to wait for the JNs to come up, but the number of retries is limited; on a poor network the retries can be exhausted before the JNs are reachable, and startup still fails.

A: Manually start the NameNode that went down (the former active one). This skips the network-latency wait for the JournalNodes; once both NameNodes are connected to the JournalNodes and the election completes, the failure does not recur.

B: Start the JournalNodes first, then run start-dfs.sh.

C: Raise the NN's retry count (or timeout) for connecting to the JNs, so that normal startup delays and network latency are tolerated.

Add the following to hdfs-site.xml. It controls how many times the NN retries the connection to the JNs; the default is 10 retries at 1000 ms each, so on a slow network it needs to be increased. Here it is set to 30:

    <property>
        <name>ipc.client.connect.max.retries</name>
        <value>30</value>
    </property>
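If raising the retry count alone is not enough, the wait between attempts can also be lengthened. This is an addition to the original post, a sketch only: `ipc.client.connect.retry.interval` (milliseconds, default 1000) sets the pause between connection retries, placed alongside the property above.

```xml
<property>
    <name>ipc.client.connect.retry.interval</name>
    <value>2000</value>
</property>
```

With 30 retries at 2000 ms each, the NN waits up to about a minute for the JournalNodes to come up.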

2. org.apache.hadoop.security.AccessControlException: Permission denied

On the master node, add the following to hdfs-site.xml:

    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
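Setting dfs.permissions to false turns off permission checking for the whole cluster. A lighter-weight alternative (not from the original post, a sketch under stated assumptions) is to have the Windows client identify itself as the user that owns the HDFS directories. With simple (non-Kerberos) authentication, the Hadoop client honors the `HADOOP_USER_NAME` environment variable or system property; the user name `bdata` below is an assumption.

```java
public class HdfsUserExample {
    public static void main(String[] args) {
        // Must be set before the first FileSystem/UserGroupInformation call;
        // with simple auth, the Hadoop client then acts as this user.
        // "bdata" is an assumed user that owns the target HDFS paths.
        System.setProperty("HADOOP_USER_NAME", "bdata");
        System.out.println(System.getProperty("HADOOP_USER_NAME"));
    }
}
```

This only affects the one client JVM, so the cluster's permission checks stay enabled for everyone else.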

This disables permission checking. The reason: when configuring Eclipse on a Windows machine to connect to the Hadoop cluster, the map/reduce connection reported this error, and turning off permission checks resolves it.


3. Jobs log the warning: [org.apache.hadoop.security.ShellBasedUnixGroupsMapping]-[WARN] got exception trying to get groups for user bdata

On the master node, add the following to hdfs-site.xml:

    <property>
        <name>dfs.web.ugi</name>
        <value>bdata,supergroup</value>
    </property>
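A note added here, hedged: `dfs.web.ugi` is the old Hadoop 1.x key; on Hadoop 2.x the static web-UI user is normally configured in core-site.xml via `hadoop.http.staticuser.user` (default `dr.who`), along the lines of:

```xml
<property>
    <name>hadoop.http.staticuser.user</name>
    <value>bdata</value>
</property>
```

Creating a real OS user `bdata` on the NameNode host also makes the group lookup succeed and silences the warning.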
