1. Environment
Virtualization software: VMware Workstation 10
Virtual machine configuration:
RHEL Server release 6.5 (Santiago), kernel 2.6.32-431.el6.x86_64
CPU: 4 cores, memory: 4 GB, disk: 50 GB
2. Prerequisites
1: Use the RHEL 6.5 ISO as a local yum repository
2: hadoop-2.2.0-src.tar.gz (the Hadoop source tarball)
3: Install JDK 1.6.0_43
4: Install and configure Apache Maven 3.0.5 (apache-maven-3.0.5-bin.tar.gz)
(BUILDING.txt in the source tree calls for Maven 3.0; Hadoop builds with Maven from the 2.0 line onward, having previously used Ant.)
Unpack it and configure the environment variables, then verify:
mvn -version
5: Install and configure apache-ant-1.9.3-bin.zip (download the binary distribution; Ant is needed to build FindBugs)
Unpack it and configure the environment variables, then verify:
ant -version
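The "unpack and configure environment variables" steps for Maven and Ant can be sketched as a small idempotent script. The install paths follow Appendix 1; PROFILE points at a scratch file purely for demonstration, where in real use it would be /etc/profile or ~/.bash_profile:

```shell
#!/bin/sh
# Sketch of the environment-variable setup for Maven and Ant.
# Paths under /home/soft are the ones assumed in Appendix 1.
set -e

PROFILE=./profile.demo
: > "$PROFILE"                      # start from an empty demo file

# append_once FILE LINE: append LINE only if FILE does not already contain
# it, so re-running the setup never duplicates PATH entries.
append_once() {
    grep -qxF "$2" "$1" || echo "$2" >> "$1"
}

append_once "$PROFILE" 'export M2_HOME=/home/soft/maven'
append_once "$PROFILE" 'export PATH=$PATH:$M2_HOME/bin'
append_once "$PROFILE" 'export ANT_HOME=/home/soft/apache-ant-1.9.3'
append_once "$PROFILE" 'export PATH=$PATH:$ANT_HOME/bin'
append_once "$PROFILE" 'export M2_HOME=/home/soft/maven'   # repeat is a no-op

cat "$PROFILE"
```

After sourcing the real profile, `mvn -version` and `ant -version` confirm both tools resolve.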
6: Download and install cmake-2.8.12.1:
tar -zxvf cmake-2.8.12.1.tar.gz
cd cmake-2.8.12.1
./bootstrap
make
make install
Verify the installation:
cmake --version (if it prints the version number, the install is correct)
7: Download, build, and install findbugs-2.0.2-source.zip
http://sourceforge.jp/projects/sfnet_findbugs/releases/
Build and install it with Ant: change into the unpacked directory and run ant directly.
If FindBugs is not built this way, compilation later fails with:
hadoop-common-project/hadoop-common/${env.FINDBUGS_HOME}/src/xsl/default.xsl doesn't exist. -> [Help 1]
If it is not installed at all, compilation fails with:
Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-common: An Ant BuildException has occured
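Both failures above are cryptic because mvn expands ${env.FINDBUGS_HOME} literally when the variable is unset. A small guard script (a sketch; the guard.log file name is illustrative) gives a readable message before the build starts:

```shell
#!/bin/sh
# Guard sketch: check FINDBUGS_HOME before launching the long mvn build,
# instead of hitting the default.xsl error quoted above an hour in.
# The result is written to guard.log so it can be inspected afterwards.
FINDBUGS_HOME=${FINDBUGS_HOME:-}
if [ -n "$FINDBUGS_HOME" ] && [ -d "$FINDBUGS_HOME" ]; then
    echo "FINDBUGS_HOME ok: $FINDBUGS_HOME" > guard.log
else
    echo "FINDBUGS_HOME unset or not a directory; run ant in the FindBugs tree and export FINDBUGS_HOME" > guard.log
fi
cat guard.log
```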
8: Install zlib-devel
It is not installed by default:
yum install zlib-devel
Without it, compilation fails with:
Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (make) on project hadoop-common
9: Build and install protobuf-2.5.0
yum install gcc-c++ (without it, the configure step fails)
./configure
make
make check
make install
Verify the installation:
protoc --version (if it prints the version number, the install is correct)
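Before starting the long build in the next section, it is worth confirming that every tool installed above is actually on PATH, since a missing one only surfaces deep into compilation. A pre-flight sketch (findings go to precheck.log):

```shell
#!/bin/sh
# Pre-flight sketch: check the whole toolchain from this section up front.
# Tool names are the upstream command names; nothing Hadoop-specific yet.
: > precheck.log
for tool in java mvn ant cmake protoc gcc; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "found:   $tool" >> precheck.log
    else
        echo "missing: $tool" >> precheck.log
    fi
done
cat precheck.log
```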
3. Building the Hadoop 2.2 source
1: Change into the unpacked hadoop-2.2.0 source directory
2: Build with mvn. This step needs network access; how long it takes depends on your bandwidth.
mvn clean package -Pdist,native -DskipTests -Dtar
2.1. Create binary distribution without native code and without documentation:
$ mvn package -Pdist -DskipTests -Dtar
2.2. Create binary distribution with native code and with documentation:
$ mvn package -Pdist,native,docs -DskipTests -Dtar
2.3. Create source distribution:
$ mvn package -Psrc -DskipTests
2.4. Create source and binary distributions with native code and documentation:
$ mvn package -Pdist,native,docs,src -DskipTests -Dtar
2.5. Create a local staging version of the website (in /tmp/hadoop-site):
$ mvn clean site; mvn site:stage -DstagingDirectory=/tmp/hadoop-site
3: The built distribution is placed under hadoop-2.2.0-src/hadoop-dist/target/:
hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0/
4. Installing Hadoop in single-node pseudo-distributed mode
1: Set up passwordless SSH
ssh-keygen -t dsa (or ssh-keygen -t rsa -P ""; with -P "" a single Enter completes it, without it three Enters are needed)
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
If a password is still requested, run:
chmod 600 ~/.ssh/authorized_keys
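The chmod matters because sshd (with its default StrictModes) silently ignores authorized_keys when permissions are too loose. A sketch of the full permission fix-up, run here against a scratch directory instead of the real ~/.ssh:

```shell
#!/bin/sh
# Permission fix-up behind the passwordless-SSH step: ~/.ssh must be 700
# and authorized_keys 600 or sshd will keep asking for a password.
# ./ssh-demo stands in for ~/.ssh for demonstration.
set -e
demo=./ssh-demo
mkdir -p "$demo"
touch "$demo/authorized_keys"
chmod 700 "$demo"
chmod 600 "$demo/authorized_keys"
stat -c '%a %n' "$demo" "$demo/authorized_keys"   # show resulting octal modes (GNU stat)
```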
2: Copy the built hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0 to /data/hadoop/
3: Create a symbolic link: ln -s hadoop-2.2.0 hadoop2
4: Add the following variables to the user's .bash_profile:
export HADOOP_HOME=/data/hadoop/hadoop2
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
5: Create the data.dir and namenode.dir directories:
mkdir hdfs
mkdir namenode
chmod -R 755 hdfs
6: Edit hadoop-env.sh:
export JAVA_HOME=/usr/java/jdk1.6.0_43
export HADOOP_HOME=/data/hadoop/hadoop2
7: Edit core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml; their contents are given in Appendix 2
8: Format the HDFS filesystem
Run: hadoop namenode -format (in Hadoop 2.x this spelling is deprecated in favor of hdfs namenode -format, but both work)
9: Start HDFS and YARN
start-dfs.sh
start-yarn.sh
10: Verify startup
jps
If the following six processes are listed, startup succeeded:
53244 ResourceManager
53083 SecondaryNameNode
52928 DataNode
53640 Jps
52810 NameNode
53348 NodeManager
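The check in step 10 can be scripted rather than eyeballed. A sketch, fed here with a captured jps listing (the sample above) where in real use jps.out would come from `jps > jps.out`:

```shell
#!/bin/sh
# Verify that all five Hadoop daemons (jps itself is the sixth entry)
# appear in the jps output; report any that are missing.
cat > jps.out <<'EOF'
53244 ResourceManager
53083 SecondaryNameNode
52928 DataNode
53640 Jps
52810 NameNode
53348 NodeManager
EOF
ok=1
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    # anchor on " name$" so NameNode does not also match SecondaryNameNode
    grep -q " $d\$" jps.out || { echo "not running: $d"; ok=0; }
done
if [ "$ok" -eq 1 ]; then echo "all daemons running" > status.txt
else echo "startup incomplete" > status.txt; fi
cat status.txt
```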
5. Running the bundled wordcount example
hadoop fs -mkdir /tmp
hadoop fs -mkdir /tmp/input
hadoop fs -put /usr/hadoop/test.txt /tmp/input
cd /data/hadoop/hadoop2/share/hadoop/mapreduce
hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /tmp/input /tmp/output
If the job runs correctly, Hadoop is installed and configured properly.
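As a sanity check, what the job computes can be mimicked in plain shell. A sketch; test.txt below is a tiny stand-in for the file uploaded with hadoop fs -put, and the output has the same word/count shape the job writes to /tmp/output/part-r-00000:

```shell
#!/bin/sh
# Plain-shell equivalent of wordcount for a small local file.
set -e
printf 'hello world\nhello hadoop\n' > test.txt
# split on spaces, sort, count duplicates, then print "word count"
tr -s ' ' '\n' < test.txt | sort | uniq -c | awk '{print $2" "$1}' > wc.out
cat wc.out
```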
6. Appendix 1:
Environment variables (set in /etc/profile; after editing, run source /etc/profile to apply)
#java set
export JAVA_HOME=/usr/java/jdk1.6.0_43
export JRE_HOME=/usr/java/jdk1.6.0_43/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
#maven set
export M2_HOME=/home/soft/maven
export PATH=$PATH:$M2_HOME/bin
#ant
export ANT_HOME=/home/soft/apache-ant-1.9.3
export PATH=$PATH:$ANT_HOME/bin
#findbugs
export FINDBUGS_HOME=/home/soft/findbugs-2.0.2
export PATH=$PATH:$FINDBUGS_HOME/bin
Appendix 2: contents of core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml
core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://vdata.kt:8020</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.name.dir</name>
<value>/data/hadoop/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/data/hadoop/hdfs</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>