I recently planned to try installing Hadoop 3.0 on a server that already has a working Hadoop 2.6, installed by the root user. Since I don't want to delete the existing Hadoop 2.6, the plan is to create two new users, hadoop2.6 and hadoop3.0; move the Hadoop 2.6 installation from /usr/local/hadoop/ into the hadoop2.6 user's home directory; delete the environment variables originally configured in /etc/profile; and set them instead in the .bashrc file in the hadoop2.6 user's home directory. That way each user works with its own version.
master  116.57.56.220
slave1  116.57.86.221
slave2  116.57.86.222
slave3  116.57.86.223
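Assuming the node names above should resolve on every machine, the usual step is to map them in /etc/hosts on each host. A sketch of the entries (IPs are the ones listed above):

```
116.57.56.220  master
116.57.86.221  slave1
116.57.86.222  slave2
116.57.86.223  slave3
```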
sudo useradd -d /home/hadoop3.0 -m hadoop3.0   # -d sets the home directory path, -m creates it
passwd hadoop3.0   # set the password
After switching to hadoop3.0, the prompt showed only `$:` and some shell constructs did not work — the new account had not been given bash as its login shell. Its /etc/passwd entry should end with /bin/bash:
hadoop3.0:x:1002:1002::/home/hadoop3.0:/bin/bash
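One way to confirm and fix this is to read the login shell (field 7 of /etc/passwd) and change it with usermod if it is not /bin/bash — a sketch; the usermod/chsh commands need root and are left commented out here:

```shell
# Print the login shell recorded for hadoop3.0 (field 7 of /etc/passwd).
awk -F: '$1 == "hadoop3.0" { print $7 }' /etc/passwd

# If it is not /bin/bash, change it with either of:
#   sudo usermod -s /bin/bash hadoop3.0
#   sudo chsh -s /bin/bash hadoop3.0
```

Re-login (or `su - hadoop3.0`) afterwards so the new shell takes effect.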
# Allow members of group sudo to execute any command
%sudo     ALL=(ALL:ALL) ALL
hadoop3.0 ALL=(ALL) ALL
export JAVA_HOME=/usr/local/java/jdk1.8.0_101   # Hadoop 3.0 requires Java 8
export HADOOP_HOME=~/usr/local/hadoop/hadoop-3.0.0-alpha1
export JRE_HOME=${JAVA_HOME}/jre
export HIVE_HOME=~/usr/local/hive/hive-1.2.1
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:${HIVE_HOME}/lib
export SCALA_HOME=~/usr/local/scala/scala-2.10.5
export SPARK_HOME=~/usr/local/spark/spark-2.0.1-bin-hadoop2.7
export SQOOP_HOME=~/usr/local/sqoop/sqoop-1.4.6
export HBASE_HOME=~/usr/local/hbase/hbase-1.0.1.1
export PATH=${SPARK_HOME}/bin:${SCALA_HOME}/bin:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${SQOOP_HOME}/bin:${HADOOP_HOME}/lib:${HIVE_HOME}/bin:${HBASE_HOME}/bin:$PATH
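After saving .bashrc, the variables only take effect once the file is reloaded (`source ~/.bashrc`) or on the next login. A small self-contained check that the PATH prepend behaves as intended — the HADOOP_HOME value below is the install path used above:

```shell
# In a real session you would reload the per-user environment first:
#   source ~/.bashrc

# Self-contained check: prepending puts $HADOOP_HOME/bin on PATH.
HADOOP_HOME=/home/hadoop3.0/usr/local/hadoop/hadoop-3.0.0-alpha1
PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH
case ":$PATH:" in
  *":${HADOOP_HOME}/bin:"*) echo "hadoop bin on PATH" ;;
  *)                        echo "hadoop bin missing"  ;;
esac
```

On the real machine, `which hadoop` and `hadoop version` are the quickest end-to-end checks.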
ssh-keygen -t rsa   # generates the private key id_rsa and the public key id_rsa.pub
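For start-dfs.sh and start-yarn.sh to reach the slaves, the public key also has to be appended to each slave's authorized_keys so the master can log in without a password. A sketch using ssh-copy-id, with the hostnames from the node list above (each run prompts for the hadoop3.0 password once):

```shell
# Append the local public key to ~/.ssh/authorized_keys on every slave.
for host in slave1 slave2 slave3; do
  ssh-copy-id "hadoop3.0@${host}"
done

# Verify passwordless login afterwards, e.g.:
#   ssh hadoop3.0@slave1 hostname
```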
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:///home/hadoop3.0/usr/local/hadoop/hadoop-3.0.0-alpha1/tmp</value>
  </property>
</configuration>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop3.0/usr/local/hadoop/hadoop-3.0.0-alpha1/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hadoop3.0/usr/local/hadoop/hadoop-3.0.0-alpha1/hdfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>slave1:9001</value>
  </property>
</configuration>
slave1
slave2
slave3
cp mapred-site.xml.template mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>
      /home/hadoop3.0/usr/local/hadoop/hadoop-3.0.0-alpha1/etc/hadoop,
      /home/hadoop3.0/usr/local/hadoop/hadoop-3.0.0-alpha1/share/hadoop/common/*,
      /home/hadoop3.0/usr/local/hadoop/hadoop-3.0.0-alpha1/share/hadoop/common/lib/*,
      /home/hadoop3.0/usr/local/hadoop/hadoop-3.0.0-alpha1/share/hadoop/hdfs/*,
      /home/hadoop3.0/usr/local/hadoop/hadoop-3.0.0-alpha1/share/hadoop/hdfs/lib/*,
      /home/hadoop3.0/usr/local/hadoop/hadoop-3.0.0-alpha1/share/hadoop/mapreduce/*,
      /home/hadoop3.0/usr/local/hadoop/hadoop-3.0.0-alpha1/share/hadoop/mapreduce/lib/*,
      /home/hadoop3.0/usr/local/hadoop/hadoop-3.0.0-alpha1/share/hadoop/yarn/*,
      /home/hadoop3.0/usr/local/hadoop/hadoop-3.0.0-alpha1/share/hadoop/yarn/lib/*
    </value>
  </property>
</configuration>
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8040</value>
  </property>
</configuration>
export JAVA_HOME=/usr/local/java/jdk1.8.0_101
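Once all the configuration files are edited on the master, the same tree normally has to exist on every slave. A sketch that copies the whole installation with scp, using the paths and hostnames from above (assumes passwordless SSH is already set up):

```shell
# Copy the configured Hadoop directory to each slave, preserving layout.
for host in slave1 slave2 slave3; do
  scp -r ~/usr/local/hadoop/hadoop-3.0.0-alpha1 \
      "hadoop3.0@${host}:~/usr/local/hadoop/"
done
```

Each slave's ~/.bashrc also needs the same environment-variable block as the master's.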
1. Format the NameNode
hdfs namenode -format
If $HADOOP_HOME/bin has not been added to PATH, run it from the $HADOOP_HOME directory instead:
bin/hdfs namenode -format
2. Start DFS and YARN
start-dfs.sh
start-yarn.sh
If $HADOOP_HOME/sbin has not been added to PATH, run them from the $HADOOP_HOME directory instead:
sbin/start-dfs.sh
sbin/start-yarn.sh
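A quick way to confirm the daemons came up is `jps` on each node. With the configuration above, the master should show NameNode and ResourceManager, each slave DataNode and NodeManager, and slave1 additionally SecondaryNameNode (since dfs.namenode.secondary.http-address points there). A sketch of a check on the master; exact process lists can vary:

```shell
# Report whether each expected master daemon appears in jps output.
for daemon in NameNode ResourceManager; do
  if jps | grep -q "$daemon"; then
    echo "$daemon: running"
  else
    echo "$daemon: NOT running"
  fi
done
```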
hdfs dfs -mkdir /user
hdfs dfs -mkdir /user/hduser
hdfs dfs -mkdir /user/hduser/input
hdfs dfs -put etc/hadoop/*.xml /user/hduser/input
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-alpha1.jar grep /user/hduser/input output 'dfs[a-z.]+'
hdfs dfs -get output output
cat output/*