Hadoop Deployment (4): Deploying Hadoop on CentOS 7 (Pseudo-Distributed)


Test Environment

Linux version: CentOS 7, 64-bit

Hadoop version: hadoop-2.7.3


Installing CentOS 7

Hadoop Deployment (1): Installing a Linux System in a VMware Virtual Machine


Configuring the Java Environment

Hadoop Deployment (2): Installing a Java Environment on Linux


Configuring Standalone Hadoop

Hadoop Deployment (3): Deploying Hadoop on CentOS 7 (Standalone)


Configuring Passwordless SSH Login

Log in as the hadoop user and perform the following steps:

1. Generate a public/private key pair. Enter the command below and press Enter at every prompt:

ssh-keygen -t rsa

The output looks like this:

[hadoop@master100 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:nx33fWWuccTRMo+zMzr/v4bMenWYFoPuFmOucqdwp90 hadoop@master100
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                .|
|             .o..|
|            . o*.|
|        S  .. +=*|
|         . o=o+O+|
|        . ++=+B B|
|        .o.+*B B.|
|         o+*=oEo=|
+----[SHA256]-----+

2. Enable passwordless login to this machine by copying the contents of id_rsa.pub into authorized_keys:

ssh-copy-id localhost

[hadoop@master100 ~]$ ssh-copy-id localhost
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/hadoop/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@localhost's password:    # enter the hadoop user's password here

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'localhost'"
and check to make sure that only the key(s) you wanted were added.
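
If ssh-copy-id is not available on your system, the same result can be achieved by hand (a short sketch; run as the hadoop user):

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys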

A new file, authorized_keys, is created in the ~/.ssh directory, as shown below:

[hadoop@master100 ~]$ cd ~/.ssh
[hadoop@master100 .ssh]$ ll
total 16
-rw-------. 1 hadoop hadoop  398 Oct  3 19:47 authorized_keys
-rw-------. 1 hadoop hadoop 1679 Oct  3 19:46 id_rsa
-rw-r--r--. 1 hadoop hadoop  398 Oct  3 19:46 id_rsa.pub
-rw-r--r--. 1 hadoop hadoop  171 Oct  3 19:42 known_hosts

3. With the steps above complete, passwordless local SSH login should work. Run the command below; if it prints a last-login line without asking for a password, it succeeded:

ssh localhost

This is what a successful passwordless login looks like:

[hadoop@master100 ~]$ ssh localhost
Last login: Wed Oct  3 19:45:11 2018 from 192.168.33.2
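
If ssh localhost still prompts for a password, the usual culprit is directory permissions: sshd ignores authorized_keys when ~/.ssh or the file itself is group- or world-writable. A quick fix:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys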

To learn more, refer to the following post or search online for related material:

Configuring Passwordless SSH Login on Linux (Multi-Host Interconnection)


Modifying the Hadoop Configuration Files

1. Change into the /usr/local/hadoop/etc/hadoop/ directory, which contains the configuration files listed below. If a file you need to modify does not exist, create it; vi creates a new file on save.

[hadoop@master100 ~]$ cd /usr/local/hadoop/etc/hadoop/

[hadoop@master100 hadoop]$ ll
total 152
-rw-r--r--. 1 hadoop hadoop  4436 Aug 18  2016 capacity-scheduler.xml
-rw-r--r--. 1 hadoop hadoop  1335 Aug 18  2016 configuration.xsl
-rw-r--r--. 1 hadoop hadoop   318 Aug 18  2016 container-executor.cfg
-rw-r--r--. 1 hadoop hadoop   774 Aug 18  2016 core-site.xml
-rw-r--r--. 1 hadoop hadoop  3589 Aug 18  2016 hadoop-env.cmd
-rw-r--r--. 1 hadoop hadoop  4224 Aug 18  2016 hadoop-env.sh
-rw-r--r--. 1 hadoop hadoop  2598 Aug 18  2016 hadoop-metrics2.properties
-rw-r--r--. 1 hadoop hadoop  2490 Aug 18  2016 hadoop-metrics.properties
-rw-r--r--. 1 hadoop hadoop  9683 Aug 18  2016 hadoop-policy.xml
-rw-r--r--. 1 hadoop hadoop   775 Aug 18  2016 hdfs-site.xml
-rw-r--r--. 1 hadoop hadoop  1449 Aug 18  2016 httpfs-env.sh
-rw-r--r--. 1 hadoop hadoop  1657 Aug 18  2016 httpfs-log4j.properties
-rw-r--r--. 1 hadoop hadoop    21 Aug 18  2016 httpfs-signature.secret
-rw-r--r--. 1 hadoop hadoop   620 Aug 18  2016 httpfs-site.xml
-rw-r--r--. 1 hadoop hadoop  3518 Aug 18  2016 kms-acls.xml
-rw-r--r--. 1 hadoop hadoop  1527 Aug 18  2016 kms-env.sh
-rw-r--r--. 1 hadoop hadoop  1631 Aug 18  2016 kms-log4j.properties
-rw-r--r--. 1 hadoop hadoop  5511 Aug 18  2016 kms-site.xml
-rw-r--r--. 1 hadoop hadoop 11237 Aug 18  2016 log4j.properties
-rw-r--r--. 1 hadoop hadoop   931 Aug 18  2016 mapred-env.cmd
-rw-r--r--. 1 hadoop hadoop  1383 Aug 18  2016 mapred-env.sh
-rw-r--r--. 1 hadoop hadoop  4113 Aug 18  2016 mapred-queues.xml.template
-rw-r--r--. 1 hadoop hadoop   758 Aug 18  2016 mapred-site.xml.template
-rw-r--r--. 1 hadoop hadoop    10 Aug 18  2016 slaves
-rw-r--r--. 1 hadoop hadoop  2316 Aug 18  2016 ssl-client.xml.example
-rw-r--r--. 1 hadoop hadoop  2268 Aug 18  2016 ssl-server.xml.example
-rw-r--r--. 1 hadoop hadoop  2191 Aug 18  2016 yarn-env.cmd
-rw-r--r--. 1 hadoop hadoop  4567 Aug 18  2016 yarn-env.sh
-rw-r--r--. 1 hadoop hadoop   690 Aug 18  2016 yarn-site.xml

2. Edit the core-site.xml file:

[hadoop@master100 hadoop]$ vi core-site.xml

The configuration is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

	<property>
		<name>hadoop.tmp.dir</name>
		<value>file:/usr/local/hadoop/tmp</value>
		<description>Base directory for files Hadoop generates at runtime</description>
	</property>
	<property>
		<name>fs.defaultFS</name>
		<value>hdfs://localhost:9000</value>
		<description>Address and port on which the HDFS NameNode listens</description>
	</property>

</configuration>
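
To confirm that Hadoop picks this file up, you can query the effective value of the property (assuming the Hadoop bin directory is already on the PATH, as set up in the standalone post):

hdfs getconf -confKey fs.defaultFS
# should print: hdfs://localhost:9000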

3. Edit hdfs-site.xml:

vi hdfs-site.xml

The configuration is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<!-- This file holds HDFS-specific settings.
The default block replication must be changed: HDFS normally keeps 3 replicas
of every block, but in pseudo-distributed mode there is only one DataNode,
so the replication factor has to be reduced to 1 or Hadoop will report errors. -->

<configuration>

	<property>
		<name>dfs.replication</name>
		<value>1</value>
		<description>Number of replicas HDFS keeps of each block; the default is 3</description>
	</property>
	<property>
		<name>dfs.namenode.name.dir</name>
		<value>file:/usr/local/hadoop/hadoopdata/namenode</value>
		<description>Directory where the NameNode stores its metadata</description>
	</property>
	<property>
		<name>dfs.datanode.data.dir</name>
		<value>file:/usr/local/hadoop/hadoopdata/datanode</value>
		<description>Directory where the DataNode stores its blocks</description>
	</property>

</configuration>
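
The NameNode and DataNode directories referenced above do not exist yet. Creating them now is strictly optional, since the format step and the daemons create them on first use, but doing it up front as the hadoop user guarantees the ownership is correct:

mkdir -p /usr/local/hadoop/hadoopdata/namenode
mkdir -p /usr/local/hadoop/hadoopdata/datanode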

4. Edit mapred-site.xml:

The /usr/local/hadoop/etc/hadoop directory does not contain a mapred-site.xml, but it ships with the template mapred-site.xml.template; just copy it to mapred-site.xml:

[hadoop@master100 hadoop]$ cp mapred-site.xml.template mapred-site.xml

[hadoop@master100 hadoop]$ vi mapred-site.xml

The configuration is as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<!-- This file holds the configuration for MapReduce jobs. In Hadoop 2.x there is no JobTracker; here we point the MapReduce framework at YARN. -->

<configuration>

	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
		<description>Run MapReduce on YARN</description>
	</property>
	
</configuration>
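
Optional, and not required for this walkthrough: once Hadoop is up, the MapReduce job history server can be started with the script shipped in Hadoop 2.7's sbin directory, so finished jobs remain inspectable:

mr-jobhistory-daemon.sh start historyserver
# its web UI listens on port 19888 by default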

5. Edit yarn-site.xml:

[hadoop@master100 hadoop]$ vi yarn-site.xml
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->

	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
		<description>How reducers fetch map output during the shuffle</description>
	</property>

</configuration>
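
Once YARN is running (started in a later section), you can verify that the property took effect, since each Hadoop daemon serves its live configuration over HTTP at /conf:

curl -s http://localhost:8088/conf | grep aux-services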

6. To prevent errors at runtime, also edit the hadoop-env.sh file:

[hadoop@master100 hadoop]$ vi hadoop-env.sh

Find the line export JAVA_HOME=${JAVA_HOME}, comment it out with a leading #, and replace ${JAVA_HOME} with the explicit JDK path, like this:

#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/java/jdk1.8.0_181-amd64
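
If you are unsure of the exact JDK path on your machine, this one-liner resolves it by following the java symlink (assuming java is on the PATH; note it may point at the jre subdirectory of the JDK, which also works):

dirname $(dirname $(readlink -f $(which java)))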

I have uploaded the configuration files above to: https://github.com/PengShuaixin/hadoop-2.7.3_centos7

You can download them, upload them to your Linux machine, and overwrite the original files to apply the configuration.


Disabling the Firewall

CentOS 7 uses firewalld as its firewall, which differs from CentOS 6.

Check the firewall status:

systemctl status firewalld

Output like the following means it is running:

[hadoop@master100 hadoop]$ systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2018-10-03 19:35:22 CST; 1h 49min ago
     Docs: man:firewalld(1)
 Main PID: 724 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─724 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Stop the firewall:

systemctl stop firewalld

After stopping it, the status looks like this:

[hadoop@master100 hadoop]$ systemctl stop firewalld
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ===
Authentication is required to manage system services or units.
Authenticating as: root
Password: # enter the root account's password here
==== AUTHENTICATION COMPLETE ===
[hadoop@master100 hadoop]$ systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Wed 2018-10-03 21:27:34 CST; 40s ago
     Docs: man:firewalld(1)
  Process: 724 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 724 (code=exited, status=0/SUCCESS)

Disable the firewall from starting at boot:

systemctl disable firewalld

The process looks like this:

[hadoop@master100 hadoop]$ systemctl disable firewalld

==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-unit-files ===
Authentication is required to manage system service or unit files.
Authenticating as: root
Password: # enter the root password
==== AUTHENTICATION COMPLETE ===
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
==== AUTHENTICATING FOR org.freedesktop.systemd1.reload-daemon ===
Authentication is required to reload the systemd state.
Authenticating as: root
Password: # enter the root password
==== AUTHENTICATION COMPLETE ===

# Check the enabled-at-boot state to confirm it is now disabled
[hadoop@master100 hadoop]$ systemctl is-enabled firewalld.service
disabled
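
If you would rather keep firewalld running, for instance on a machine that is not a throwaway VM, an alternative sketch is to open only the main ports this walkthrough touches (run as root; a full cluster uses more ports than these):

firewall-cmd --permanent --add-port=9000/tcp    # HDFS NameNode RPC
firewall-cmd --permanent --add-port=50070/tcp   # NameNode web UI
firewall-cmd --permanent --add-port=8088/tcp    # ResourceManager web UI
firewall-cmd --reload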

For more firewalld operations, see:

Using firewalld on CentOS 7 to Open and Close the Firewall and Ports


Initializing HDFS

Once Hadoop is configured, format the NameNode:

[hadoop@master100 hadoop]$ hdfs namenode -format

Output like the following indicates that the format succeeded:

[hadoop@master100 hadoop]$ hdfs namenode -format
18/10/03 21:05:52 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master100/192.168.33.100
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.3
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:... (long jar listing omitted)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG:   java = 1.8.0_181
************************************************************/
18/10/03 21:05:52 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/10/03 21:05:52 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-54d602ae-8c60-49f4-bcb9-8a687ba97501
18/10/03 21:05:52 INFO namenode.FSNamesystem: No KeyProvider found.
18/10/03 21:05:52 INFO namenode.FSNamesystem: fsLock is fair:true
18/10/03 21:05:52 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/10/03 21:05:52 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/10/03 21:05:52 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18/10/03 21:05:52 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Oct 03 21:05:52
18/10/03 21:05:52 INFO util.GSet: Computing capacity for map BlocksMap
18/10/03 21:05:52 INFO util.GSet: VM type       = 64-bit
18/10/03 21:05:52 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
18/10/03 21:05:52 INFO util.GSet: capacity      = 2^21 = 2097152 entries
18/10/03 21:05:52 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18/10/03 21:05:52 INFO blockmanagement.BlockManager: defaultReplication         = 1
18/10/03 21:05:52 INFO blockmanagement.BlockManager: maxReplication             = 512
18/10/03 21:05:52 INFO blockmanagement.BlockManager: minReplication             = 1
18/10/03 21:05:52 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
18/10/03 21:05:52 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/10/03 21:05:52 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
18/10/03 21:05:52 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
18/10/03 21:05:52 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
18/10/03 21:05:52 INFO namenode.FSNamesystem: supergroup          = supergroup
18/10/03 21:05:52 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/10/03 21:05:52 INFO namenode.FSNamesystem: HA Enabled: false
18/10/03 21:05:52 INFO namenode.FSNamesystem: Append Enabled: true
18/10/03 21:05:52 INFO util.GSet: Computing capacity for map INodeMap
18/10/03 21:05:52 INFO util.GSet: VM type       = 64-bit
18/10/03 21:05:52 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
18/10/03 21:05:52 INFO util.GSet: capacity      = 2^20 = 1048576 entries
18/10/03 21:05:52 INFO namenode.FSDirectory: ACLs enabled? false
18/10/03 21:05:52 INFO namenode.FSDirectory: XAttrs enabled? true
18/10/03 21:05:52 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
18/10/03 21:05:52 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/10/03 21:05:53 INFO util.GSet: Computing capacity for map cachedBlocks
18/10/03 21:05:53 INFO util.GSet: VM type       = 64-bit
18/10/03 21:05:53 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
18/10/03 21:05:53 INFO util.GSet: capacity      = 2^18 = 262144 entries
18/10/03 21:05:53 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18/10/03 21:05:53 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/10/03 21:05:53 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
18/10/03 21:05:53 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/10/03 21:05:53 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/10/03 21:05:53 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/10/03 21:05:53 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/10/03 21:05:53 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/10/03 21:05:53 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/10/03 21:05:53 INFO util.GSet: VM type       = 64-bit
18/10/03 21:05:53 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
18/10/03 21:05:53 INFO util.GSet: capacity      = 2^15 = 32768 entries
18/10/03 21:05:53 INFO namenode.FSImage: Allocated new BlockPoolId: BP-938082284-192.168.33.100-1538571953311
18/10/03 21:05:53 INFO common.Storage: Storage directory /usr/local/hadoop/hadoopdata/namenode has been successfully formatted.
18/10/03 21:05:53 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop/hadoopdata/namenode/current/fsimage.ckpt_0000000000000000000 using no compression
18/10/03 21:05:53 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop/hadoopdata/namenode/current/fsimage.ckpt_0000000000000000000 of size 353 bytes saved in 0 seconds.
18/10/03 21:05:53 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/10/03 21:05:53 INFO util.ExitUtil: Exiting with status 0
18/10/03 21:05:53 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master100/192.168.33.100
************************************************************/
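
One caution: hdfs namenode -format assigns a new clusterID. If you ever re-run it on this machine, clear the old DataNode data and the tmp directory first, or the DataNode will refuse to start because its stored clusterID no longer matches (paths match the configuration above):

# only when re-formatting an existing installation
rm -rf /usr/local/hadoop/hadoopdata/datanode/*
rm -rf /usr/local/hadoop/tmp/*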


Starting Hadoop

Run the command:

start-all.sh

# Or start Hadoop with the following two commands instead; start-all.sh effectively just runs these two
start-dfs.sh
start-yarn.sh

The following output appears:

[hadoop@master100 hadoop]$ start-all.sh

# The command above prints the following
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-master100.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-master100.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:smnjN2nAXE9l/kx1fQ2r3KBQt14TMjgaVlh+65clSrA.
ECDSA key fingerprint is MD5:75:f5:c8:d0:06:c2:c8:0d:25:8b:b9:d5:47:8c:3a:f5.

# when you reach this prompt, type yes and press Enter
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-master100.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-master100.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-master100.out

Check the running processes:

jps

If the following processes appear, Hadoop was installed and started successfully:

[hadoop@master100 hadoop]$ jps
6566 NameNode
7399 Jps
6664 DataNode
7016 ResourceManager
7114 NodeManager
6862 SecondaryNameNode
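
As a quick smoke test, create a home directory in HDFS and copy a file into it (paths assume the layout used throughout this post):

hdfs dfs -mkdir -p /user/hadoop
hdfs dfs -put /usr/local/hadoop/etc/hadoop/core-site.xml /user/hadoop
hdfs dfs -ls /user/hadoop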

You can open the following pages to view the NameNode and ResourceManager web UIs:

192.168.33.100:50070

192.168.33.100:8088
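
To exercise the full YARN/MapReduce path, you can also run one of the example jobs bundled with Hadoop, for instance the Monte Carlo pi estimator (the jar path matches the hadoop-2.7.3 layout used here; adjust it if your version differs):

hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 2 10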


Stopping Hadoop

[hadoop@master100 sbin]$ stop-all.sh

This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop
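
Afterwards, jps should list only the Jps process itself; if a NameNode, DataNode, or YARN daemon is still shown, it did not shut down cleanly:

jps
# expect only a single line, something like: NNNN Jps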