Fully Distributed Hadoop Cluster Installation

1. Preparation

1. Installation packages:

hadoop-2.8.1.tar.gz download: http://hadoop.apache.org/releases.html#Download

JDK 1.8 download: http://www.oracle.com/technetwork/java/javase/downloads/index.html

2. Environment Setup

1. Static IP configuration

https://my.oschina.net/u/1765168/blog/1571584
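
A minimal sketch, assuming CentOS 7 and an interface named ens33 (both assumptions; adjust for your system). Edit /etc/sysconfig/network-scripts/ifcfg-ens33:

BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.101      # placeholder; give each node its own address
NETMASK=255.255.255.0
GATEWAY=192.168.1.1       # placeholder gateway

Then apply the change:

systemctl restart network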

2. Passwordless SSH login

http://www.javashuo.com/article/p-hrnvzsgv-dd.html
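
The goal is that the master can reach every node, itself included, without a password. A sketch of the usual steps, run on Hadoop1:

ssh-keygen -t rsa              # accept the defaults at every prompt
ssh-copy-id root@Hadoop1       # authorize the local machine too
ssh-copy-id root@Hadoop2
ssh-copy-id root@Hadoop3
ssh Hadoop2                    # verify: should log in without a password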

3. JDK installation

Extract the archive: tar -zxvf jdk1.8.0.tar.gz

Configure the environment variables:

export JAVA_HOME=/opt/soft/jdk1.8
export PATH=$JAVA_HOME/bin:$PATH
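
These exports typically go in /etc/profile (or ~/.bashrc) so they survive a re-login; reload and verify:

source /etc/profile
java -version      # should report version 1.8.0_xxx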

3. Hadoop Installation

1. Extract the downloaded package:

tar -zxvf hadoop-2.8.1.tar.gz

2. Configure the environment variables:

export HADOOP_HOME=/opt/soft/hadoop-2.8.1
export PATH=$PATH:$HADOOP_HOME/bin
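
As with the JDK, append these lines to /etc/profile, then reload and verify:

source /etc/profile
hadoop version     # should report Hadoop 2.8.1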

3. Edit the configuration files

Location: your_hadoop_dir/etc/hadoop

(1) hadoop-env.sh

export JAVA_HOME=/opt/soft/jdk1.8

This must match the JAVA_HOME set in the system environment variables.

(2) core-site.xml

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/soft/hadoop-2.8.1/tmp</value>
        <final>true</final>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://Hadoop1:9000</value>
        <final>true</final>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>

(3) hdfs-site.xml

    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/usr/local/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/usr/local/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>Hadoop1:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>

(4) mapred-site.xml

Note: a stock 2.8.1 install ships only mapred-site.xml.template; copy it to mapred-site.xml before editing.

    <property>    
        <name>mapreduce.framework.name</name>    
        <value>yarn</value>    
    </property>

(5) yarn-site.xml

    <property>    
        <name>yarn.resourcemanager.address</name>    
        <value>Hadoop1:18040</value>    
    </property>    
    <property>    
        <name>yarn.resourcemanager.scheduler.address</name>    
        <value>Hadoop1:18030</value>    
    </property>    
    <property>    
        <name>yarn.resourcemanager.webapp.address</name>    
        <value>Hadoop1:18088</value>    
    </property>    
    <property>    
        <name>yarn.resourcemanager.resource-tracker.address</name>    
        <value>Hadoop1:18025</value>    
    </property>    
    <property>    
        <name>yarn.resourcemanager.admin.address</name>    
        <value>Hadoop1:18141</value>    
    </property>    
    <property>    
        <name>yarn.nodemanager.aux-services</name>    
        <value>mapreduce_shuffle</value>    
    </property>    
    <property>    
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>    
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>    
    </property> 

(6) slaves

List the worker (DataNode) hostnames, one per line:

Hadoop2
Hadoop3
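
The hostnames Hadoop1/Hadoop2/Hadoop3 used throughout these files must resolve identically on every node; add them to /etc/hosts on all three machines (the addresses below are the placeholders from the static-IP step):

192.168.1.101   Hadoop1
192.168.1.102   Hadoop2
192.168.1.103   Hadoop3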

(7) Copy the configured Hadoop directory to the other two machines

scp -r /opt/soft/hadoop-2.8.1 root@Hadoop2:/opt/soft/
scp -r /opt/soft/hadoop-2.8.1 root@Hadoop3:/opt/soft/

4. Startup

1. Format the NameNode

hdfs namenode -format
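
Run this once, on Hadoop1 only; on success the output should include a line like "... has been successfully formatted." Do not repeat it later, since reformatting an existing cluster destroys the HDFS metadata.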

2. Start the cluster

./sbin/start-all.sh
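
To confirm the daemons started, run jps on each machine; given the configuration above, you should see roughly:

jps    # on Hadoop1: NameNode, SecondaryNameNode, ResourceManager
       # on Hadoop2/Hadoop3: DataNode, NodeManager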

3. Check node status

hdfs dfsadmin -report

4. Web UI: http://your_ip:50070/ (the HDFS NameNode page; the YARN ResourceManager page is at http://your_ip:18088/, per the yarn-site.xml above)
