Hadoop tutorial - 2 - Installing Hadoop    2015-3-23

Tools:

Xshell

Installation packages:

hadoop-2.6.0.tar.gz -> 2.4.1  http://archive.apache.org/dist/hadoop/core/hadoop-2.4.1/

 

----------5/19/2017----------start

https://archive.apache.org/dist/hadoop/common/hadoop-2.5.0/hadoop-2.5.0.tar.gz

wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u112-b15/jdk-8u112-linux-x64.tar.gz

wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" https://archive.apache.org/dist/hadoop/common/hadoop-2.5.0/hadoop-2.5.0.tar.gz

----------5/19/2017----------end

 

jdk-7u9-linux-i586.tar.gz

Packages used later:

hbase-0.94.2.tar.gz

hive-0.9.0.tar.gz

pig-0.10.0.tar.gz

zookeeper-3.4.3.tar.gz

 

Add the hadoop user and group

groupadd hadoop

useradd hadoop -g hadoop

Switch to the hadoop user:

su hadoop

Exit back to root:

exit
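
A quick sanity check that the account exists (a sketch; the exact uid/gid will differ per system):

id hadoop                          # should list the hadoop user and its hadoop group
grep hadoop /etc/passwd /etc/group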

 

 

 

JDK installation (run as root)

Plan A: install the rpm

Plan B: just extract the tarball

mkdir /usr/java

tar -zxvf jdk-7u9-linux-i586.tar.gz -C /usr/java

Create a symlink:

ln -s /usr/java/jdk1.7.0_09 /usr/java/jdk

Configure environment variables:

Edit /etc/profile (vi /etc/profile) and append at the end:

export JAVA_HOME=/usr/java/jdk

export PATH=$JAVA_HOME/bin:$PATH

Apply the changes: source /etc/profile

Verify with echo $PATH and java -version
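
A minimal check that the symlinked JDK is the one being picked up (assuming the paths above):

which java       # expect /usr/java/jdk/bin/java, since $JAVA_HOME/bin is first in PATH
java -version    # expect a 1.7.0_09 build for jdk-7u9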

 

 

--------------------------------------------------------------------------

SSH and passwordless login

Install the SSH client:

yum -y install openssh-clients

=> At this point you can clone the virtual machine

ssh master

Generate a key pair with an empty passphrase:

ssh-keygen -t rsa

cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

(Afterwards you can push the public key to other machines with ssh-copy-id 192.168.137.44)
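
If ssh master still prompts for a password, the usual culprit is the permissions on ~/.ssh; a hedged fix-up sketch:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh master    # should now log in without a password prompt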

 

--------------------------------------------------------------------------

 

Clone the virtual machine

Clone -> Full clone

vi /etc/sysconfig/network-scripts/ifcfg-eth0

Set HWADDR to the clone's real MAC address (visible under Settings -> Network in the VM manager)

DEVICE="eth1"
HWADDR=...
IPADDR=192.168.56.3

Rename eth0 to eth1:

mv /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1

Restart the network interface.

You can clone as many virtual machines as you need by repeating the steps above.
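
On CentOS 6 style guests the jump from eth0 to eth1 happens because udev remembers the original MAC; a hedged alternative that keeps the clone on eth0 (paths are the usual defaults, adjust if yours differ):

rm -f /etc/udev/rules.d/70-persistent-net.rules   # drop the cached MAC-to-name mapping
vi /etc/sysconfig/network-scripts/ifcfg-eth0      # set HWADDR to the clone's real MAC, pick a unique IPADDR
reboot                                            # udev regenerates the rules and the NIC comes back up as eth0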

 

---------------------------------------------------------------
 

Installing Hadoop

下载地址 http://archive.apache.org/dist/hadoop/core/stable

Extract:

tar -zxvf hadoop-2.6.0.tar.gz -C /opt/   # the older convention was /usr/local; nowadays usually /opt

mv /opt/hadoop-2.6.0 /opt/hadoop   # rename for convenience

chown -R hadoop:hadoop /opt/hadoop   # give the hadoop user ownership of the directory

su hadoop   # do the remaining configuration as the hadoop user

Config 0:

vi /etc/profile

export JAVA_HOME=/usr/java/jdk

export HADOOP_HOME=/opt/hadoop

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

source /etc/profile
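
A quick check that the variables took effect (a sketch; output depends on your exact install):

echo $HADOOP_HOME
hadoop version    # should report the 2.6.0 build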

 

 

Config 1:

hadoop-env.sh

export JAVA_HOME=/usr/java/jdk
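
In Hadoop 2.x the files in Configs 1-5 all live under $HADOOP_HOME/etc/hadoop; a minimal sketch of this edit, assuming the /opt/hadoop install path used above:

vi /opt/hadoop/etc/hadoop/hadoop-env.sh
# replace the default line
#   export JAVA_HOME=${JAVA_HOME}
# with an explicit path, because the inherited variable is not always visible to the daemons:
export JAVA_HOME=/usr/java/jdk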

 

Config 2: vim core-site.xml (use a hostname rather than an IP if possible)

<configuration>

    <!-- Address of the HDFS NameNode -->

    <property>

        <name>fs.defaultFS</name>

        <value>hdfs://192.168.137.2:9000</value>

    </property>

    <!-- Directory for files Hadoop generates at runtime -->

    <property>

        <name>hadoop.tmp.dir</name>

        <value>/opt/hadoop/tmp</value>

    </property>

</configuration>
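
The note on Config 2 prefers hostnames over IPs; for that to work, every node must resolve the name. A minimal /etc/hosts sketch (master matches yarn-site.xml below; the second entry is a hypothetical extra node at the IP used earlier for ssh-copy-id):

192.168.137.2    master
192.168.137.44   slave1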

 

Config 3: hdfs-site.xml

<configuration>

<!-- Number of replicas HDFS keeps for each block -->

    <property>

        <name>dfs.replication</name>

        <value>1</value>

    </property>

</configuration>

 

Config 4: mv mapred-site.xml.template mapred-site.xml

<configuration>

<!-- Run MapReduce on YARN -->

    <property>

        <name>mapreduce.framework.name</name>

        <value>yarn</value>

    </property>

</configuration>

 

Config 5: yarn-site.xml

<configuration>

    <!-- Auxiliary service the NodeManager uses to serve shuffle data -->

    <property>

        <name>yarn.nodemanager.aux-services</name>

        <value>mapreduce_shuffle</value>

    </property>

    <!-- Hostname of the YARN ResourceManager -->

    <property>

        <name>yarn.resourcemanager.hostname</name>

        <value>master</value>

    </property>

</configuration>

 

hadoop-env.sh: set JAVA_HOME (the same edit as Config 1 above).

 

Initialize (format) HDFS

hdfs namenode -format

This creates the tmp directory configured as hadoop.tmp.dir.
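
A hedged way to confirm the format succeeded, assuming the hadoop.tmp.dir set in core-site.xml above:

ls /opt/hadoop/tmp/dfs/name/current    # should contain VERSION and an fsimage_* file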

 

Start Hadoop

./start-all.sh
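
start-all.sh is deprecated in Hadoop 2.x; an equivalent sketch that starts the two layers separately (assuming the /opt/hadoop path used above):

cd /opt/hadoop/sbin
./start-dfs.sh     # NameNode, DataNode, SecondaryNameNode
./start-yarn.sh    # ResourceManager, NodeManager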

 

Verify with jps; you should see the following processes:

    ResourceManager

    NodeManager

    NameNode

    Jps

    SecondaryNameNode

    DataNode

 

Verify via the web UIs: http://192.168.137.2:50070

http://192.168.137.2:50070/dfsnodelist.jsp?whatNodes=LIVE

http://192.168.137.2:50075/browseDirectory.jsp?dir=%2F&go=go&namenodeInfoPort=50070&nnaddr=192.168.137.2%3A9000

http://192.168.137.2:8088

If the pages cannot be reached, stop the firewall: service iptables stop
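
A hedged end-to-end smoke test once the UIs respond (the examples jar path and version should match your install):

hdfs dfs -mkdir -p /user/hadoop
hdfs dfs -put /etc/profile /user/hadoop/
hdfs dfs -ls /user/hadoop
hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 10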

 

 

Error:

Could not get the namenode ID of this node.

The relevant property is dfs.ha.namenode.id (see hdfs-default.xml inside hadoop-hdfs-2.6.0.jar).

Background: http://blog.csdn.net/chenpingbupt/article/details/7922004

  // From the Hadoop HDFS source: resolves which namenode ID this node is.
  public static String getNameNodeId(Configuration conf, String nsId) {
    // 1. If dfs.ha.namenode.id is set explicitly, use it.
    String namenodeId = conf.getTrimmed(DFS_HA_NAMENODE_ID_KEY);
    if (namenodeId != null) {
      return namenodeId;
    }
    
    // 2. Otherwise, match a local address against the suffixed
    //    dfs.namenode.rpc-address.<nameservice>.<namenodeId> keys.
    String suffixes[] = DFSUtil.getSuffixIDs(conf, DFS_NAMENODE_RPC_ADDRESS_KEY,
        nsId, null, DFSUtil.LOCAL_ADDRESS_MATCHER);
    if (suffixes == null) {
      String msg = "Configuration " + DFS_NAMENODE_RPC_ADDRESS_KEY + 
          " must be suffixed with nameservice and namenode ID for HA " +
          "configuration.";
      throw new HadoopIllegalArgumentException(msg);
    }
    
    return suffixes[1];
  }

DFS_HA_NAMENODE_ID_KEY = "dfs.ha.namenode.id";

DFS_NAMENODE_RPC_ADDRESS_KEY = "dfs.namenode.rpc-address";

 

Make sure iptables is stopped first, then:

0 Check all the configuration files on every machine

1 Check whether any configuration file is missing

2 Check that passwordless SSH between the machines works

 

=> In this case the cause was the NameNode being configured on the wrong machine

 

 

Tips:

Copying files between machines

Example: scp ./id_rsa.pub root@10.28.8.20:/home/hadoop
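
The same works for whole directories; e.g., a hedged way to push the configured install to another node (the IP is the second node used earlier, adjust to yours):

scp -r /opt/hadoop root@192.168.137.44:/opt/    # then chown -R hadoop:hadoop /opt/hadoop on the target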
