Our company is preparing to upgrade its Spark environment, mainly because the Spark and Hadoop versions in production are quite old. Exactly which versions to upgrade to has not been decided yet and needs further testing and analysis. That task requires a big-data development environment, so I am recording the setup process here. I do not yet fully understand why some of these settings are needed (I know the what but not the why); once I understand them better I will write a follow-up post.
Following the setup from the previous post, I found that a JDK is already installed on CentOS 7. You can check with the command below; as shown, it reports OpenJDK 1.8.0_121:
```
[kejun@localhost ~]$ java -version
openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-b13)
OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)
```
If it is not installed, that is no problem either; installing the JDK through yum is straightforward:
```
yum search java | grep jdk
yum install java-1.8.0-openjdk
```
The next step is to configure the Java environment variables. Because this JDK was installed by the system as a default, finding its directory takes a little digging. The two commands below track it down: the first shows what /usr/bin/java links to, and the second follows that link one level further:
```
[kejun@localhost ~]$ ll /usr/bin/java
lrwxrwxrwx. 1 root root 22 3月 16 17:11 /usr/bin/java -> /etc/alternatives/java
[kejun@localhost ~]$ ll /etc/alternatives/java
lrwxrwxrwx. 1 root root 73 3月 16 17:11 /etc/alternatives/java -> /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.121-0.b13.el7_3.x86_64/jre/bin/java
```
This shows that /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.121-0.b13.el7_3.x86_64 is the system's default JDK directory. With the JDK directory known, the environment variables are set as follows:
```
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.121-0.b13.el7_3.x86_64
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH
```
There are three ways to apply these Java environment variables:
The first is to run the commands above directly in the shell; that only affects the current shell session, so it is not recommended.
The second is to edit ~/.bash_profile and insert the export lines at the spot marked below:
```
PATH=$PATH:$HOME/.local/bin:$HOME/bin
# insert the export lines here
export PATH
```
This change only applies to the current user. Since the Hadoop setup may require switching to a separate hadoop user, this method is not a good fit either.
The third is to edit /etc/profile, which applies to all users. This is best suited to a single-person development environment, which is exactly what we have, so this is the approach we take.
```
vi /etc/profile
source /etc/profile
```
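For reference, here is a minimal sketch of what that /etc/profile edit amounts to, reusing the export lines found above; the heredoc append is just an alternative to editing the file in vi, and it must be run as root:

```
# Append the JDK variables to /etc/profile (run as root), then reload and verify.
cat >> /etc/profile <<'EOF'
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.121-0.b13.el7_3.x86_64
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH
EOF

source /etc/profile
echo $JAVA_HOME   # should print the JDK directory
java -version     # should still report 1.8.0_121
```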
That completes the JDK setup. Next up is installing Hadoop.
The installation references are the posts "CentOS 6.5 hadoop 2.7.3 集群环境搭建" and "Linux Hadoop2.7.3 安装(单机模式) 一".
Hadoop has three installation modes: standalone, pseudo-distributed, and fully distributed.
Since I have no spare servers to test with, I chose the standalone installation.
```
cd /tmp
wget http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
tar -zxvf hadoop-2.7.3.tar.gz
cp -R /tmp/hadoop-2.7.3 /usr/hadoop
```
Then, as root, append the Hadoop environment variables to /etc/profile:

```
sudo -i
vim /etc/profile

# append to /etc/profile:
export HADOOP_HOME=/usr/hadoop
export JAVA_LIBRARY_PATH=/usr/hadoop/lib/native
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
```
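A quick sanity check (an extra step of mine, not in the original notes) is to reload the profile and ask Hadoop for its version:

```
source /etc/profile
hadoop version   # should report Hadoop 2.7.3
which hadoop     # should resolve to /usr/hadoop/bin/hadoop
```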
Hadoop relies on SSH even in single-node mode, so first check whether it is already installed:

```
[root@localhost ~]# rpm -qa | grep ssh
openssh-clients-6.6.1p1-33.el7_3.x86_64
openssh-6.6.1p1-33.el7_3.x86_64
openssh-server-6.6.1p1-33.el7_3.x86_64
libssh2-1.4.3-10.el7_2.1.x86_64
```
If SSH is not found, it too can be installed with yum:
```
yum install openssh-clients
yum install openssh-server
```
Next, run the following commands in order; they set up passwordless SSH login for the current user:
```
ssh localhost
cd ~/.ssh/
ssh-keygen -t dsa
cat id_dsa.pub >> authorized_keys
```
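If `ssh localhost` still prompts for a password afterwards, overly loose permissions on ~/.ssh are a common cause on CentOS 7; the following tightening step is an addition of mine, not part of the original sequence:

```
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh localhost   # should now log in without asking for a password
```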
Then, as root, adjust the SSH daemon configuration:
```
sudo -i
vim /etc/ssh/sshd_config

# make sure these lines are enabled in sshd_config:
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

service sshd restart
```
Later, while configuring Hadoop, you will run into an SSH error: The authenticity of host 0.0.0.0 can't be established.
It can be resolved with the following command:

```
ssh -o StrictHostKeyChecking=no 0.0.0.0
```
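Alternatively (an optional tweak, not from the original steps), the same behaviour can be made persistent for this host in ~/.ssh/config:

```
# ~/.ssh/config -- avoids passing -o on every invocation
Host 0.0.0.0 localhost
    StrictHostKeyChecking no
```

The file should be private (`chmod 600 ~/.ssh/config`) or sshd may ignore it.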
With Hadoop unpacked and SSH working, the next step is to edit Hadoop's configuration files.
In /usr/hadoop/etc/hadoop/core-site.xml:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://0.0.0.0:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop/temp</value>
  </property>
</configuration>
```
And in hdfs-site.xml:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
```
Format the NameNode:

```
/usr/hadoop/bin/hdfs namenode -format
```
```
/usr/hadoop/sbin/start-dfs.sh   # start HDFS
/usr/hadoop/sbin/stop-dfs.sh    # stop HDFS
```
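As an extra check of my own, jps (shipped with the JDK; on CentOS it may require the java-1.8.0-openjdk-devel package) should list the HDFS daemons after start-dfs.sh:

```
jps
# expected on this single-node setup (PIDs will differ):
#   NameNode
#   DataNode
#   SecondaryNameNode
```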
Once HDFS is started, open http://localhost:50070/dfshealth.html#tab-datanode; if that page loads, the configuration is working.
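The same DataNode information can also be checked from the command line, which is handy on a headless server; this is an extra verification step, not part of the original walkthrough:

```
hdfs dfsadmin -report   # should list one live datanode
```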
## Map-Reduce Example
To verify that the standalone Hadoop setup is correct, let's run a Map-Reduce example that counts the words in a text file.
The configuration is as follows:
```
cd /usr/hadoop/etc/hadoop
mv mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
```

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```
Then edit yarn-site.xml:

```
vim yarn-site.xml
```

```xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```
Create a small input file and upload it to HDFS:

```
[root@localhost hadoop]# cd /usr/hadoop
[root@localhost hadoop]# vi words.txt
[root@localhost hadoop]# cat words.txt
wo xiang shuo shen me shen me dou bu jue de yi qie yi qie dou shi xu huan de hahaha
[root@localhost hadoop]# cd bin
[root@localhost bin]# hadoop fs -put /usr/hadoop/words.txt /
```
```
[root@localhost bin]# start-yarn.sh
[root@localhost bin]# hadoop jar /usr/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount hdfs://localhost:9000/words.txt hdfs://localhost:9000/out3
```
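The result can also be read straight from HDFS once the job finishes; this assumes the default single-reducer output name part-r-00000:

```
hadoop fs -ls /out3
hadoop fs -cat /out3/part-r-00000   # word counts, one word per line
```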
When the job finishes, open http://localhost:50070/explorer.html#/ to browse the result.
If anything goes wrong, turning on debug logging on the console makes troubleshooting much easier:

```
export HADOOP_ROOT_LOGGER=DEBUG,console
```
Others online have reported similar problems, but the cause is not always the same. In my case the culprit was that JAVA_LIBRARY_PATH was not set; that line is already included in the Hadoop environment configuration shown earlier.
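To confirm whether the native library is actually being picked up, Hadoop provides a checknative subcommand; this verification step is my addition, not part of the original notes:

```
hadoop checknative -a   # reports whether the native hadoop library and codecs are loaded
```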
Another issue I hit was that the DataNode did not start. Hadoop writes its logs under /usr/hadoop/logs:

```
[root@localhost logs]# ls
hadoop-root-datanode-localhost.localdomain.log             hadoop-root-datanode-localhost.localdomain.out
hadoop-root-datanode-localhost.localdomain.out.1           hadoop-root-datanode-localhost.localdomain.out.2
hadoop-root-datanode-localhost.localdomain.out.3           hadoop-root-datanode-localhost.localdomain.out.4
hadoop-root-datanode-localhost.localdomain.out.5           hadoop-root-namenode-localhost.localdomain.log
hadoop-root-namenode-localhost.localdomain.out             hadoop-root-namenode-localhost.localdomain.out.1
hadoop-root-namenode-localhost.localdomain.out.2           hadoop-root-namenode-localhost.localdomain.out.3
hadoop-root-namenode-localhost.localdomain.out.4           hadoop-root-namenode-localhost.localdomain.out.5
hadoop-root-secondarynamenode-localhost.localdomain.log    hadoop-root-secondarynamenode-localhost.localdomain.out
hadoop-root-secondarynamenode-localhost.localdomain.out.1  hadoop-root-secondarynamenode-localhost.localdomain.out.2
hadoop-root-secondarynamenode-localhost.localdomain.out.3  hadoop-root-secondarynamenode-localhost.localdomain.out.4
hadoop-root-secondarynamenode-localhost.localdomain.out.5  SecurityAuth-root.audit
userlogs                                                   yarn-root-nodemanager-localhost.localdomain.log
yarn-root-nodemanager-localhost.localdomain.out            yarn-root-nodemanager-localhost.localdomain.out.1
yarn-root-nodemanager-localhost.localdomain.out.2          yarn-root-nodemanager-localhost.localdomain.out.3
yarn-root-resourcemanager-localhost.localdomain.log        yarn-root-resourcemanager-localhost.localdomain.out
yarn-root-resourcemanager-localhost.localdomain.out.1      yarn-root-resourcemanager-localhost.localdomain.out.2
yarn-root-resourcemanager-localhost.localdomain.out.3
```
Checking the log showed that the DataNode would not start because the NameNode had been formatted more than once: the DataNode kept the clusterID from the first format, while a later format gave the NameNode a new clusterID, so the two no longer matched. The exact error was:
```
java.io.IOException: Incompatible clusterIDs in /usr/hadoop/temp/dfs/data: namenode clusterID = CID-9754ec5b-c309-4b36-89ce-f00de7285927; datanode clusterID = CID-d05d2a3a-4fe7-4de4-a53a-6960403696cc
```
Fixing this is simple: edit the VERSION file under the DataNode directory:
```
[root@localhost hadoop]# cd /usr/hadoop/temp/dfs/data/current/
[root@localhost current]# ls
BP-1948380787-127.0.0.1-1490148702615  VERSION  BP-2135918609-127.0.0.1-1490082190396
[root@localhost current]# vim VERSION
```

```
storageID=DS-34e30373-c8e1-4bfa-a5c7-84fd43ac99e2
clusterID=#change this to the namenode clusterID#
cTime=0
datanodeUuid=ae0dae79-7de1-4207-8151-af6a6b86079d
storageType=DATA_NODE
layoutVersion=-56
```
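On a disposable test machine there is a blunter alternative (my addition, not from the original notes): stop HDFS, wipe the DataNode storage directory, and start again, so the DataNode re-registers with the current clusterID. All HDFS data is lost, so only do this when the data does not matter:

```
/usr/hadoop/sbin/stop-dfs.sh
rm -rf /usr/hadoop/temp/dfs/data    # wipes DataNode storage -- test box only
/usr/hadoop/sbin/start-dfs.sh       # DataNode recreates its storage with the current clusterID
```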