1) Hadoop cluster setup

Operating system environment

    CentOS 7.2

Network environment

hostname    ip               role
hadoop001   192.168.252.164  HDFS: NameNode, DataNode, SecondaryNameNode; YARN: ResourceManager, NodeManager
hadoop002   192.168.252.165  HDFS: DataNode; YARN: NodeManager
hadoop003   192.168.252.166  HDFS: DataNode; YARN: NodeManager

Software packages:

    jdk-7u55-linux-x64.tar.gz

    hadoop-2.6.4.tar.gz

 

1. Preparation

1.1 Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

1.2 Disable SELinux

vi /etc/selinux/config

   SELINUX=disabled
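The change in /etc/selinux/config only takes effect after a reboot; to stop enforcement immediately for the current session as well:

setenforce 0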

1.3 Configure the network

vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
TYPE=Ethernet
BOOTPROTO=static
NAME=eno16777736
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.252.164
NETMASK=255.255.255.0
GATEWAY=192.168.252.1
systemctl restart network
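Each node uses its own IPADDR from the table above (192.168.252.165 on hadoop002, 192.168.252.166 on hadoop003), and the interface name may differ per machine. A quick check that the address came up:

ip addr show eno16777736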

1.4 Set the hostname

vi /etc/sysconfig/network

HOSTNAME=hadoop001
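On CentOS 7 this file is no longer the authoritative place for the hostname; hostnamectl sets it directly and persistently (use the matching name on each node):

hostnamectl set-hostname hadoop001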

1.5 Configure hosts

vi /etc/hosts
192.168.252.164 hadoop001
192.168.252.165 hadoop002
192.168.252.166 hadoop003
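The same three entries are needed in /etc/hosts on every node. With root password access they can be pushed from hadoop001, for example:

scp /etc/hosts hadoop002:/etc/hosts
scp /etc/hosts hadoop003:/etc/hosts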

1.6 Set up SSH mutual trust

Generate the key pair (this creates id_rsa and id_rsa.pub under ~/.ssh):

ssh-keygen -t rsa

Copy the public key (in the ~/.ssh directory):

cp id_rsa.pub authorized_keys

After this has been done on every node, merge the authorized_keys files from all nodes and overwrite each node's authorized_keys with the merged file, as sketched below.
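One way to do the merge from hadoop001 (password prompts are expected here, since the trust is not yet in place):

ssh hadoop002 "cat ~/.ssh/id_rsa.pub" >> ~/.ssh/authorized_keys
ssh hadoop003 "cat ~/.ssh/id_rsa.pub" >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop002:~/.ssh/
scp ~/.ssh/authorized_keys hadoop003:~/.ssh/

Afterwards, ssh hadoop002 (and hadoop003) from any node should log in without a password.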

1.7 Install the JDK

Extract the JDK under /usr, matching the JAVA_HOME used below:

tar zxvf jdk-7u55-linux-x64.tar.gz -C /usr

Configure the Java environment variables

vi ~/.bashrc
export JAVA_HOME=/usr/jdk1.7.0_55
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source ~/.bashrc
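Confirm the JDK is picked up (it should report version 1.7.0_55):

java -version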

 

2. Set up node one

2.1 Extract Hadoop (under /opt)

tar zxvf hadoop-2.6.4.tar.gz
mv hadoop-2.6.4 hadoop

2.2 Configure environment variables

vi /etc/profile
export JAVA_HOME=/usr/jdk1.7.0_55
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile
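A quick sanity check that the Hadoop binaries are on the PATH (it should report 2.6.4):

hadoop version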

2.3 Edit the configuration files (in /opt/hadoop/etc/hadoop)

core-site.xml

<property>
  <name>fs.default.name</name>
  <value>hdfs://hadoop001:9000</value>
</property>

hdfs-site.xml

<property>
  <name>dfs.name.dir</name>
  <value>/usr/local/data/namenode</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/usr/local/data/datanode</value>
</property>

<property>
  <name>dfs.tmp.dir</name>
  <value>/usr/local/data/tmp</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>

mapred-site.xml
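In the Hadoop 2.6.4 distribution this file ships only as a template, so create it first (in /opt/hadoop/etc/hadoop):

cp mapred-site.xml.template mapred-site.xml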

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

yarn-site.xml

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>hadoop001</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

slaves

hadoop001
hadoop002
hadoop003

 

 

3. Set up nodes two and three

3.1 Copy the hadoop directory to nodes two and three (run from /opt on hadoop001)

scp -r hadoop 192.168.252.165:/opt
scp -r hadoop 192.168.252.166:/opt

3.2 Copy the environment variables file

scp -r profile 192.168.252.165:/etc
scp -r profile 192.168.252.166:/etc
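On hadoop002 and hadoop003, load the new variables into the current shell:

source /etc/profile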

3.3 Create the data directory (on all three nodes)

mkdir /usr/local/data
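This directory must exist on every node, including hadoop001, since the NameNode, DataNode, and tmp paths in hdfs-site.xml all live under it. From hadoop001 the remote ones can be created over SSH:

mkdir -p /usr/local/data
ssh hadoop002 mkdir -p /usr/local/data
ssh hadoop003 mkdir -p /usr/local/data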

 

4. Start the cluster

4.1 Format HDFS (run once, on hadoop001)

    hdfs namenode -format

4.2 Start the HDFS cluster (on hadoop001)

    start-dfs.sh

4.3 Verify

    Use jps on each node, or the NameNode web UI on port 50070. Expected daemons (a one-liner for checking all nodes follows this list):

        hadoop001: NameNode, DataNode, SecondaryNameNode

        hadoop002: DataNode

        hadoop003: DataNode
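One way to check all three nodes at once, relying on the SSH trust from step 1.6:

    for h in hadoop001 hadoop002 hadoop003; do echo "== $h =="; ssh $h 'source /etc/profile; jps'; done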

4.4 Start YARN

    start-yarn.sh

4.5 Verify:

    Use jps or the ResourceManager web UI on port 8088:

    hadoop001: ResourceManager, NodeManager

    hadoop002: NodeManager

    hadoop003: NodeManager
