I have installed standalone versions of several Hadoop releases before, mainly to study the Hadoop source code. Now I want to get a deeper understanding of the whole Hadoop ecosystem, so I decided to set up a fully distributed Hadoop cluster. Since the latest Hadoop release is currently 2.6, that is the version installed and used here.
Preparation:
1. A laptop with 4 GB of RAM running Windows 7 (a bare-bones budget setup)
2. Tool: VMware Workstation
3. Virtual machines: three CentOS 6.5 (64-bit) VMs in total, one master and two slaves
With the CentOS system installed on the master host, start with that node.

1. System environment setup (configure the master node first)
1.1 Change the hostname
NETWORKING=yes
HOSTNAME=master
NTPSERVERARGS=iburst
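These settings live in /etc/sysconfig/network on CentOS 6. To apply the new hostname immediately instead of waiting for the reboot in step 1.4, you can also run as root:
# hostname master
# hostname    // should now print: master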
1.2 Map the hostname to its IP address (hosts file)
Add: 192.168.111.131 master
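The two slave nodes will need entries here as well so that all hosts can resolve each other. A sketch with assumed values (the slave1/slave2 hostnames and the .132/.133 addresses are placeholders; substitute your VMs' actual values):
192.168.111.132 slave1
192.168.111.133 slave2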
1.3 Turn off the firewall
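On CentOS 6 the firewall is iptables; a typical way to stop it now and keep it from coming back after the reboot (run as root):
# service iptables stop
# chkconfig iptables off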
1.4 Reboot the system
#reboot

2. Install the JDK
1. Download the JDK from: http://www.Oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
2. Upload it to the virtual machine
3. Extract the JDK
#mkdir opt
#tar -zxvf jdk-7u79-linux-x64.tar.gz -C opt/    //extract into opt so the JAVA_HOME path below matches
4. Add Java to the environment variables
#vim /etc/profile
//add at the end of the file
export JAVA_HOME=/home/master/opt/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin
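Reload the profile and check that the JDK is picked up:
# source /etc/profile
# java -version    // should report java version "1.7.0_79"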
3. Set up passwordless SSH login
$ ssh-keygen -t rsa    (press Enter four times)
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
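If the passwordless login does not work right away, a common cause is the permissions on the key file; tighten them and test the login:
$ chmod 600 ~/.ssh/authorized_keys
$ ssh master    // should log in without asking for a password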
$ cat ~/.ssh/authorized_keys    //view the rsa key

4. Install Hadoop 2.6.0
First, extract the Hadoop archive into the opt directory.
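Assuming the standard release archive hadoop-2.6.0.tar.gz has been uploaded to the master user's home directory:
$ tar -zxvf hadoop-2.6.0.tar.gz -C /home/master/opt/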
4.1 Configure Hadoop
4.1.1 Configure hadoop-env.sh
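In Hadoop 2.x the configuration files edited below live under etc/hadoop inside the installation directory:
$ vim /home/master/opt/hadoop-2.6.0/etc/hadoop/hadoop-env.sh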
Change JAVA_HOME to the location configured earlier:
export JAVA_HOME=/home/master/opt/jdk1.7.0_79
4.1.2 Configure core-site.xml
Add the following content:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/master/opt/hadoop-2.6.0/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
</configuration>
4.1.3 Configure hdfs-site.xml
Add the following content:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/master/opt/hadoop-2.6.0/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/master/opt/hadoop-2.6.0/dfs/data</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>h1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:50090</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
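The tmp, name, and data directories referenced in core-site.xml and hdfs-site.xml do not exist yet. Formatting the NameNode and starting the daemons will normally create them, but creating them up front avoids surprises:
$ mkdir -p /home/master/opt/hadoop-2.6.0/tmp /home/master/opt/hadoop-2.6.0/dfs/name /home/master/opt/hadoop-2.6.0/dfs/data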
4.1.4 Configure mapred-site.xml
Add the following content:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <final>true</final>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>master:50030</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>http://master:9001</value>
    </property>
</configuration>
4.1.5 Configure yarn-site.xml
Add the following content:
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
4.2 Add Hadoop to the environment variables (append to /etc/profile as before)
export HADOOP_HOME=/home/master/opt/hadoop-2.6.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
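As with JAVA_HOME, reload the profile and check that the hadoop command resolves:
# source /etc/profile
# hadoop version    // should report Hadoop 2.6.0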
4.3 Format the NameNode
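With $HADOOP_HOME/bin on the PATH from section 4.2, the usual command is:
$ hdfs namenode -format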
4.4 Start Hadoop
First start HDFS:
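Since $HADOOP_HOME/sbin was added to the PATH in section 4.2, the startup script can be called directly:
$ start-dfs.sh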
Then start YARN:
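Likewise for YARN:
$ start-yarn.sh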
4.5 Verify that the daemons started
Run jps on the master; output similar to the following indicates success:
2871 ResourceManager
3000 Jps
2554 NameNode
2964 NodeManager
2669 DataNode
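Besides jps, the web UIs are a quick sanity check (50070 is the default NameNode HTTP port in Hadoop 2.x; 8088 is the yarn.resourcemanager.webapp.address configured above):
http://master:50070    // NameNode / HDFS status
http://master:8088     // YARN ResourceManager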
At this point the Hadoop setup on the master node is complete; running on this single node it is effectively a pseudo-distributed cluster until the slaves are configured!