This post covers setting up a Hadoop cluster and running Flink on YARN.
The goal is to set up Hadoop MapReduce (YARN) and HDFS.
Download the Hadoop 2.7.7 binary package and extract it locally; this guide assumes it lives in /usr/hadoop/hadoop-2.7.7. Then set the following environment variables:
#HADOOP VARIABLES START
export HADOOP_INSTALL=/usr/hadoop/hadoop-2.7.7
export HADOOP_HOME=$HADOOP_INSTALL
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END
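A quick way to confirm the variables took effect (assuming they were added to a shell profile such as ~/.bashrc, which the steps above do not specify):

$ source ~/.bashrc   # reload the profile; adjust if you used a different file
$ hadoop version     # should print "Hadoop 2.7.7" if PATH was set correctly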
Run the command:

$ ssh localhost
If the login prompts for a password (you may also see a message like "Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts."), configure passwordless SSH as follows:
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa  # skip this step if you already have a key pair
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
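To confirm it worked, ssh into localhost again; it should open a shell without asking for a password:

$ ssh localhost   # no password prompt expected now
$ exit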
$ cd hadoop-2.7.7
$ vim etc/hadoop/core-site.xml

Edit core-site.xml so that it contains:
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/hadoop/hadoop-2.7.7/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
$ vim etc/hadoop/hdfs-site.xml

Edit hdfs-site.xml so that it contains:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/hadoop/hadoop-2.7.7/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/hadoop/hadoop-2.7.7/tmp/dfs/data</value>
    </property>
</configuration>
Note: the official docs only configure fs.defaultFS and dfs.replication, which is enough to start the cluster. However, if hadoop.tmp.dir is not set, the default temporary directory is /tmp/hadoop-hadoop, and since /tmp may be wiped on reboot, you would have to re-run the namenode format every time.
$ vim etc/hadoop/hadoop-env.sh

JAVA_HOME must be declared explicitly here, even if it is already set in the environment; otherwise startup fails with "JAVA_HOME is not set and could not be found."

## Change this to your JDK home directory
export JAVA_HOME=/opt/jdk/jdk1.8
$ bin/hdfs namenode -format
$ sbin/start-dfs.sh
Once HDFS starts successfully, the web UI is available at http://localhost:50070/. Running jps should show three processes: DataNode, NameNode, and SecondaryNameNode. If NameNode is missing, check for a port conflict, then change the port in the fs.defaultFS entry of core-site.xml and try again.
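As a quick smoke test you can create a directory in HDFS and list it (the path below is just an example):

$ bin/hdfs dfs -mkdir -p /user/test   # /user/test is an arbitrary example path
$ bin/hdfs dfs -ls /user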
Next, configure MapReduce to run on YARN:

$ vim etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
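Note: the Hadoop 2.7 binary distribution ships this file only as a template, so if etc/hadoop/mapred-site.xml does not exist yet, copy it first:

$ cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml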
$ vim etc/hadoop/yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
Start YARN:
$ sbin/start-yarn.sh
Once it is up, the ResourceManager web UI is available at http://localhost:8088/.
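You can also verify from the command line; jps should now additionally show ResourceManager and NodeManager:

$ jps                  # expect ResourceManager and NodeManager among the processes
$ bin/yarn node -list  # should list one node in RUNNING state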
At this point the Hadoop pseudo-distributed cluster is up and running.
The Flink build must match your Hadoop version, otherwise you will hit errors; here we download Apache Flink 1.7.2 with Hadoop® 2.7 for Scala 2.11. Extract it to flink-1.7.2, then simply run:
$ flink-1.7.2/bin/flink run -m yarn-cluster -yn 2 ../my-flink-project-0.1.jar
Here yarn-cluster means the job runs on a Flink cluster deployed on YARN, -yn 2 requests two YARN containers (TaskManagers), and my-flink-project-0.1.jar is your own Flink program.
After submitting, you can watch the YARN application run in the ResourceManager UI at http://localhost:8088/.
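If you do not have a jar of your own at hand, the batch WordCount example bundled with the Flink distribution makes a convenient sanity check (the path below matches the 1.7.2 layout):

$ flink-1.7.2/bin/flink run -m yarn-cluster -yn 2 flink-1.7.2/examples/batch/WordCount.jar

With no arguments, WordCount runs on built-in sample data and prints the word counts to stdout.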