This is a fairly long post, so let's skip the small talk and get straight to the point.
Like any Hadoop installation, it can run in one of three modes: standalone, pseudo-distributed, or fully distributed.
This article covers the fully distributed mode, on CentOS 6.5 with hadoop-2.6.5.
First, change the hostname. Check the current value and edit /etc/sysconfig/network:
[root@localhost ~]# hostname
localhost.localdomain
[root@localhost ~]# vi /etc/sysconfig/network
[root@localhost ~]# hostname
localhost.localdomain
Change it to:
NETWORKING=yes
HOSTNAME=hadoop10
Because this method only takes effect after a reboot, checking the hostname again shows no change. I don't want to reboot here, so I use the temporary-change command directly:
[root@localhost ~]# hostname hadoop10
[root@localhost ~]# hostname
hadoop10
(This temporary change is lost after a reboot, which is why we also edited the config file above.)
Change the hostname on each of the 4 machines in turn.
The /etc/hosts file has nothing to do with the hostname change above. It has to be present on every node in the cluster so that each node knows which hostname maps to which IP; it effectively acts as a local DNS.
Open it with vi and add the following:
192.168.10.10 hadoop10
192.168.10.11 hadoop11
192.168.10.12 hadoop12
192.168.10.13 hadoop13
Modify /etc/hosts on each of the 4 machines in turn.
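Rather than editing the file by hand four times, a quick sketch (assuming root SSH access to the other nodes; you will be asked for passwords, since key-based login is only set up in a later step) is to push it from one node:
for h in hadoop11 hadoop12 hadoop13; do
  scp /etc/hosts root@$h:/etc/hosts   # copy the hosts file to each remaining node
done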
Turn off the firewall on every node so the cluster ports are reachable:
[root@hadoop11 ~]# chkconfig iptables off
[root@localhost ~]# chkconfig --list iptables
iptables        0:off   1:off   2:off   3:off   4:off   5:off   6:off
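chkconfig only disables iptables for future boots; to stop the running firewall immediately on CentOS 6 (without rebooting) you can also run:
service iptables stop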
Next, check whether SSH works:
[root@localhost ~]# ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is d1:40:d3:50:c8:2d:af:d4:a0:d4:cb:9f:6d:8d:ed:2f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
root@localhost's password:
Last login: Tue Sep 17 01:11:07 2019 from 192.168.10.1
If you see output like the above, SSH is already installed. If not, install it with:
yum install openssh-server -y
[root@hadoop10 ~]# cd ~/.ssh
[root@hadoop10 .ssh]# ls
known_hosts
Initially there is only one file in this directory; it records the public keys of machines that ssh has already connected to.
[root@hadoop10 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
df:db:71:2b:a7:59:96:95:88:cd:0d:7e:25:85:f1:0d root@hadoop10
The key's randomart image is:
+--[ RSA 2048]----+
|              Eo.|
|              .+.|
|             .. +|
|            = +.o|
|         S . = +.|
|          . . . o|
|          . . .+.|
|             +++.|
|            .o=. |
+-----------------+
Just press Enter all the way through; no other input is needed.
If you ls the directory again, you can now see the public and private keys:
[root@hadoop10 .ssh]# ls
id_rsa  id_rsa.pub  known_hosts
Generate a key pair on each of the 4 machines in turn.
First, append this machine's public key to the authorized_keys file:
[root@hadoop10 .ssh]# cat id_rsa.pub >> authorized_keys
[root@hadoop10 .ssh]# ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts
Then send authorized_keys to all the other nodes:
[root@hadoop10 .ssh]# scp authorized_keys root@hadoop12:~/.ssh
The authenticity of host 'hadoop12 (192.168.10.12)' can't be established.
RSA key fingerprint is 43:68:54:4e:85:ed:ac:30:7c:b2:a1:48:02:b9:67:57.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop12,192.168.10.12' (RSA) to the list of known hosts.
root@hadoop12's password:
authorized_keys
You may also need to set the permissions of authorized_keys to 644 (see the commands below).
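As a sketch, that usually means running the following on each node (sshd is picky about permissions; 600 also works for authorized_keys):
chmod 700 ~/.ssh                    # the .ssh directory must not be group- or world-writable
chmod 644 ~/.ssh/authorized_keys    # or 600; sshd refuses looser permissions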
At this point you can test that this node logs in to the other nodes without being asked for a password:
[root@hadoop10 .ssh]# ssh hadoop12
Last login: Tue Sep 17 01:49:50 2019 from localhost
[root@hadoop12 ~]# exit
logout
Connection to hadoop12 closed.
The login succeeds, and then we exit.
Repeat the steps above on each of the 4 machines in turn.
In the end, authorized_keys contains the public keys of all 4 nodes:
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAuFkD0t6HZM/H7pyqjqBnrnF+4wr2gI8p4wjCDdN8smAH8ujLviUAK0rE1Gh8bcXtWSjLmFLOf1oQwrCvtWnP4q9+enFwgqFFLEkQvT5jRbKrJImYWpafGimOlO5hb1jPZKrxpRZlMy9LFzLnfr5aJ+fESE2sSrTwlXbfXm0w1xhBKzoo5JZq8xIvzYXYQ8qyaTRFd2+EZbZKJ0CgVw83hKjiq9bjrbqtEg2oo8FdQwi4SNZ6d4jozhw54J8nCk8YduVneYoFSf1gmdwUcMb2iyGUfMRrhK3k0vUxBZKsfrG9aS4P4Gzd/CVGtMlqEWVldyTS9vmORHNAHEFqdyVI/w== root@hadoop10
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA7pZA4t2E00jJtotZeFST+HWXrAtzfjGFBvDkpnqwoYs1cEjsr8Ez2XjWbcdGBqbEFNohTWUh0dpfQHyWcT2fun10aRJ9GyYuebzSJm5BWT06PKWB5QavqNtdmqNTSzEfNXGjyvaV8PbfFA8kfIeaiq0/uTwTrtjcLHmN9ENm1NjJqibZxNSNJnQGXJs7Gj6ujIXrVmr//G9OqS97ZM5slgHw68F7azvpCfzHBsJu3QTZYL96WRUSRXHH8GteRMtBYVlRzg7N1gU+YKx4fMXjEk7xu/p8ub5IG5kClCIU+mR+Z0VNReGVP3n4GZuE/Fa3OMerESUs6i/GWczNbA2cSQ== root@hadoop11
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqL/aQhVUd4B7VsfnzOFEXFQJX/rV1obelijX6M/eVns2IlpxB54UUgYoAet97Xew5vc31tAAbURW8zS4CAJujKWKFnAB/R2UIzLww6CxahsTqrsPkj89SiLl3Q4SsBDC49hULfbd5AxuEdq/v0XIFT2jsbpaUtWQ2pF5HxzkhpnrpEbcwHjc14GfM1cFtyPcR3XXZC4P+scaLGgdn8I3So0k6ENqo7LfQ7y2/FNQMXtKxObfO0j7bESsNWQxPGwolXdVeBO4VEYIrYH/6/gPdOxtNGe2gCnr8MM8z7eElLXy1cF5wTddv6vCdBv9bl5H3/BHtUrJ+/5/XjkkyRVECw== root@hadoop12
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAvOn53kK/2uoDBKKq/0LQhJ63S34K6lnksgAGJYWTugx57TxroRvms2DkdrV3EKhlIzVkpE3Xzrx4hyOFHXfnfAdsrvj22zgsPx4cNxM0Tmx6ELwCpcLPF381lDjEc5/7MEqQB+wV07tjAZAXOl5wETLLO269iHvbX3oEZ3Q62xq52BLoKCkBunk5C0lVDHAhKtzBp1XTntixircUIxpNWWduhoUwiaTrUrki8gEyC2O/Hm9Wq6h2RyC7SvH8jaAZoC9UUso50TitD10J5bhdeg8iYnhb/wUJZ5zhkwSJuj8H4j8huCo5j/eX7sPXe/3eKnVlpEz/PX0/8eAQYJY6SQ== root@hadoop13
The end result is that every machine can log in to every other machine without a password.
方法不少,可自行百度
First, check whether Java is already installed:
yum list installed |grep java
Use yum to list the available versions and install one:
yum -y list java*
yum -y install java-1.8.0-openjdk*
Check the version:
[root@node .ssh]# java -version
openjdk version "1.8.0_181"
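The absolute JDK path will be needed later for hadoop-env.sh; one way to find it for the yum-installed OpenJDK (a sketch) is:
readlink -f $(which java)   # strip the trailing /jre/bin/java (or /bin/java) to get JAVA_HOME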
Download Hadoop from the Apache download page. Be careful not to download the tar package whose name contains src (that is the source release, not the binaries), or you will run into trouble.
Just extract it.
Then set the environment variables and test whether the installation succeeded:
[root@hadoop10 lib]# vi /etc/profile
[root@hadoop10 lib]# source /etc/profile
[root@hadoop10 lib]# hadoop
Usage: hadoop [--config confdir] COMMAND
       where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
  credential           interact with credential providers
                       Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings
 or
  CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.
This shows the installation succeeded.
The environment variables are set as follows:
export HADOOP_HOME=/usr/lib/hadoop-2.6.5
export PATH=.:$HADOOP_HOME/bin:$PATH
Install Hadoop on each of the 4 machines in turn.
Note: performing only the Hadoop installation step above already gives you standalone mode. Yes, that single step is all it takes. Let's give standalone mode a quick test here.
Go into the Hadoop root directory, create an input folder, put a file log.txt into input, and then run the word-count job from the root directory:
[root@hadoop10 lib]# cd hadoop-2.6.5
[root@hadoop10 hadoop-2.6.5]# ls
bin  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share
[root@hadoop10 hadoop-2.6.5]# mkdir input
[root@hadoop10 input]# ls
log.txt
[root@hadoop10 hadoop-2.6.5]# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount input output
19/09/17 23:07:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/09/17 23:07:20 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
19/09/17 23:07:20 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
19/09/17 23:07:20 INFO input.FileInputFormat: Total input paths to process : 1
19/09/17 23:07:20 INFO mapreduce.JobSubmitter: number of splits:1
19/09/17 23:07:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1348719737_0001
19/09/17 23:07:21 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
19/09/17 23:07:21 INFO mapreduce.Job: Running job: job_local1348719737_0001
19/09/17 23:07:21 INFO mapred.LocalJobRunner: OutputCommitter set in config null
19/09/17 23:07:21 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
19/09/17 23:07:21 INFO mapred.LocalJobRunner: Waiting for map tasks
19/09/17 23:07:21 INFO mapred.LocalJobRunner: Starting task: attempt_local1348719737_0001_m_000000_0
19/09/17 23:07:21 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
19/09/17 23:07:21 INFO mapred.MapTask: Processing split: file:/usr/lib/hadoop-2.6.5/input/log.txt:0+183
19/09/17 23:07:21 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
19/09/17 23:07:21 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
19/09/17 23:07:21 INFO mapred.MapTask: soft limit at 83886080
...
19/09/17 23:07:22 INFO mapreduce.Job: map 100% reduce 100%
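Standalone mode has no HDFS, so the result lands in the local output directory; assuming the job finished, the part file below follows the usual MapReduce naming convention:
cat output/part-r-00000   # one word and its count per line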
All of Hadoop's configuration files live under $HADOOP_HOME/etc/hadoop (here, /usr/lib/hadoop-2.6.5/etc/hadoop).
In hadoop-env.sh, change the JAVA_HOME setting to an absolute path (see the sketch below).
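A minimal sketch of that change, assuming the yum-installed OpenJDK; the exact directory depends on the package version, so check it with readlink -f $(which java):
# etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk   # replace with your actual absolute JDK path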
In core-site.xml, set the NameNode address for HDFS,
and set the directory where Hadoop stores its temporary files at runtime:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop10:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-2.6.5/data/tmp</value>
    </property>
</configuration>
If hadoop.tmp.dir is not set, the default location is /tmp/hadoop-${username}.
In hdfs-site.xml, set the HDFS replication factor (the default is 3; note that with only 4 DataNodes a value of 5 can never be fully satisfied) and the SecondaryNameNode address:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>5</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop10:50090</value>
    </property>
</configuration>
Rename mapred-site.xml.template to mapred-site.xml (it seems things may also work without this step, but renaming is the usual practice):
mv mapred-site.xml.template mapred-site.xml
Edit the MapReduce configuration file (in Hadoop 1.x this is where the JobTracker address and port went; in Hadoop 2.x we instead point MapReduce at YARN):
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
This tells MapReduce to run on YARN.
In the slaves file, delete the existing content and list the hostname of every node, one per line; this is what lets a single command start the whole cluster:
hadoop10
hadoop11
hadoop12
hadoop13
Note: the file must not contain extra spaces or blank lines.
In yarn-env.sh, likewise change JAVA_HOME to an absolute path (you can also try leaving this step out).
In yarn-site.xml, set the ResourceManager host and the shuffle service for reducers:
<configuration>
    <!-- the address of the YARN master (ResourceManager) -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop10</value>
    </property>
    <!-- how reducers fetch data -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
Finally, distribute the configuration directory to the other nodes, e.g.:
scp -r /usr/lib/hadoop-2.6.5/etc/hadoop root@hadoop13:/usr/lib/hadoop-2.6.5/etc/
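A sketch for pushing the configuration to all the remaining nodes in one go (assuming the same install path on every machine):
for h in hadoop11 hadoop12 hadoop13; do
  scp -r /usr/lib/hadoop-2.6.5/etc/hadoop root@$h:/usr/lib/hadoop-2.6.5/etc/
done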
The cluster is now set up. Before storing any data, format the filesystem first so we start with a clean slate; the format step also creates the metadata directories HDFS needs.
Note: format only before the very first start; later starts do not need (and should not repeat) the format.
Check which node the NameNode is configured on (core-site.xml), then run the following command on that node:
bin/hdfs namenode -format
Start the NameNode on a single node:
[root@hadoop10 hadoop-2.6.5]# sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-namenode-hadoop10.out
[root@hadoop10 hadoop-2.6.5]# jps
3877 NameNode
3947 Jps
Start the DataNode on a single node:
[root@hadoop10 hadoop-2.6.5]# sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-datanode-hadoop10.out
[root@hadoop10 hadoop-2.6.5]# jps
3877 NameNode
4060 Jps
3982 DataNode
Start the DataNode on each of the other nodes in the same way.
Starting HDFS this way is tedious, and you will notice the SecondaryNameNode was never started at all, so Hadoop provides other ways to start things.
Start the whole HDFS layer of the cluster in one step: NameNode, DataNodes, and SecondaryNameNode.
[root@hadoop10 hadoop-2.6.5]# sbin/start-dfs.sh
19/09/18 18:37:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop10]
hadoop10: starting namenode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-namenode-hadoop10.out
hadoop10: starting datanode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-datanode-hadoop10.out
hadoop13: starting datanode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-datanode-hadoop13.out
hadoop12: starting datanode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-datanode-hadoop12.out
hadoop11: starting datanode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-datanode-hadoop11.out
Starting secondary namenodes [hadoop10]
hadoop10: starting secondarynamenode, logging to /usr/lib/hadoop-2.6.5/logs/hadoop-root-secondarynamenode-hadoop10.out
19/09/18 18:37:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@hadoop10 hadoop-2.6.5]# jps
6162 NameNode
6258 DataNode
6503 Jps
6381 SecondaryNameNode
Likewise, check which node YARN is configured on (yarn-site.xml), then run the following on that node:
[root@hadoop10 hadoop-2.6.5]# sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/lib/hadoop-2.6.5/logs/yarn-root-resourcemanager-hadoop10.out
hadoop10: starting nodemanager, logging to /usr/lib/hadoop-2.6.5/logs/yarn-root-nodemanager-hadoop10.out
hadoop13: starting nodemanager, logging to /usr/lib/hadoop-2.6.5/logs/yarn-root-nodemanager-hadoop13.out
hadoop11: starting nodemanager, logging to /usr/lib/hadoop-2.6.5/logs/yarn-root-nodemanager-hadoop11.out
hadoop12: starting nodemanager, logging to /usr/lib/hadoop-2.6.5/logs/yarn-root-nodemanager-hadoop12.out
[root@hadoop10 hadoop-2.6.5]# jps
6162 NameNode
6770 NodeManager
6258 DataNode
7012 Jps
6668 ResourceManager
6381 SecondaryNameNode
Both the ResourceManager and the NodeManagers are started.
YARN can also be started daemon by daemon (see the sketch below).
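A sketch of the per-daemon commands (yarn-daemon.sh sits in sbin/ next to hadoop-daemon.sh):
sbin/yarn-daemon.sh start resourcemanager   # on the ResourceManager node
sbin/yarn-daemon.sh start nodemanager       # on each NodeManager node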
Still a hassle, so Hadoop also provides a one-command start and stop:
sbin/start-all.sh
sbin/stop-all.sh
However, these scripts are not recommended officially and are deprecated in newer versions.
Use the NameNode's IP to reach the web UIs:
Port 50070 serves the HDFS web UI: http://192.168.10.10:50070
Port 8088 serves the YARN/MapReduce web UI: http://192.168.10.10:8088
Create a directory in the HDFS filesystem; there are two equivalent ways:
bin/hdfs dfs -mkdir -p /usr/input/yanshw
bin/hadoop fs -mkdir -p /usr/input/yanshw
The new directory can be seen remotely in the HDFS web UI.
Upload a file:
bin/hadoop fs -put README.txt /usr/input/yanshw
The uploaded file can be viewed remotely in the web UI as well.
Run the job:
You must specify both the input and the output; the output directory must not already exist.
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount /usr/input/yanshw /usr/output/yanshw
View the result remotely in the web UI, or from the command line as sketched below.
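A sketch of checking the result from the shell (part-r-00000 is the standard MapReduce output file name):
bin/hadoop fs -ls /usr/output/yanshw
bin/hadoop fs -cat /usr/output/yanshw/part-r-00000   # the word counts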
A natural question: where does that directory actually live on disk? In theory it should be under hadoop.tmp.dir, and indeed it is, but it is buried quite deep.
You can cat the block file directly and see that its content is exactly the file we uploaded;
this file is small, so it occupies only one block; a large file would be split into multiple blocks, each looking like the one above;
we can cat >> all the blocks into a single file to recover what we uploaded (a sketch follows).
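A sketch of locating and reassembling the blocks, assuming the hadoop.tmp.dir set in core-site.xml above; the BP-*/subdir* layers are how the DataNode lays out its block pool on disk:
# find the block files (the .meta files are checksums, not data)
find /opt/module/hadoop-2.6.5/data/tmp/dfs/data -name 'blk_*' ! -name '*.meta'
# concatenate them in block order to rebuild the original file (block names here are illustrative)
cat blk_1073741825 blk_1073741826 >> restored.txt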
hadoop fs supports most of the familiar Linux file commands in the form hadoop fs -<command>.
For example:
$ hdfs dfs -ls /
$ hdfs dfs -mkdir /user/hduser
$ hdfs dfs -put /home/hduser/input.txt /user/hduser
$ hdfs dfs -get input.txt /home/hduser
Although we have deployed a working Hadoop cluster above, the layout is not ideal, because we put the NameNode, SecondaryNameNode, and ResourceManager all on one server;
that puts a lot of pressure on the server, and the resources available to each of the 3 components are squeezed;
so before building a cluster it is best to plan the layout first, along the lines of the sketch below.
Put the three core components on 3 different servers; the smallest such cluster needs only 3 servers.
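One possible layout, purely as an illustration (not the only valid plan):
hadoop10: NameNode, DataNode, NodeManager
hadoop11: ResourceManager, DataNode, NodeManager
hadoop12: SecondaryNameNode, DataNode, NodeManager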
1. jps not found
jps lists the running Java processes.
If the jps command cannot be found, Java is not set up properly; you need to set the Java environment variables (see below).
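A sketch of what to check, assuming a JDK install (jps ships with the JDK, not the JRE):
echo $JAVA_HOME                     # should point at the JDK directory
ls $JAVA_HOME/bin/jps               # jps lives in the JDK's bin directory
export PATH=$JAVA_HOME/bin:$PATH    # add it to PATH, e.g. in /etc/profile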
2. DataNode will not start after a restart
The first setup usually works, but after a restart the DataNode fails to come up; the reason is that the DataNode is no longer recognized by the NameNode.
When the NameNode is formatted, it generates two identifiers: a blockPoolId and a clusterId;
when a DataNode joins, it picks up these two identifiers as proof that it belongs to this NameNode, which is what ties the cluster together;
once the NameNode is reformatted, the two identifiers change;
but the DataNode still shows up with the old identifiers, so it is naturally turned away.
Fix: delete the data on every node, i.e. the tmp directory (including the NameNode's data), reformat, and start again (see the sketch below).
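A sketch of that cleanup, assuming the hadoop.tmp.dir configured above and that losing everything currently in HDFS is acceptable:
sbin/stop-all.sh                              # stop everything first
rm -rf /opt/module/hadoop-2.6.5/data/tmp/*    # run on every node; this wipes all HDFS data
bin/hdfs namenode -format                     # on the NameNode only
sbin/start-dfs.sh
sbin/start-yarn.sh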
3. Every operation prints the following warning
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
You can ignore it; it is only a warning. If you really want to get rid of it, look up how to install or rebuild the native Hadoop libraries for your platform.
References:
https://www.cnblogs.com/laov/p/3421479.html (hadoop 1.2.1)
https://blog.csdn.net/baidu_28997655/article/details/81586418 (hadoop 2.6.5)
https://blog.csdn.net/qq285016127/article/details/80501418 (hadoop 2.6.4)
https://www.cnblogs.com/xia520pi/archive/2012/05/16/2503949.html#_label3_0 (very detailed write-up)