Summer Vacation, Part 2: Building and Testing a Docker-Based Hadoop Distributed Cluster

I actually started this article back in April. At the time I was taking part in a data mining competition, and on the advice of a guru from the School of Computer Science we decided to do deep learning with TensorFlow, running it on our own Hadoop distributed cluster.

We were feeling pretty full of ourselves at the time: with no prior experience, we set out to build our own big data platform and apply an AI framework to the problem, all within one month.

The result was predictable: GG~~~~ (we only got Hadoop stood up... in the end we went back to honestly writing crawlers).

Back then we built everything on virtual machines, which effectively meant running 17 copies of CentOS 7 on 17 machines. This time we use Docker to package the environment.

I. Technology Stack

Docker 1.12.6

CentOS 7

JDK 1.8.0_121

Hadoop 2.7.3: distributed computing framework

ZooKeeper 3.4.9: distributed application coordination service

HBase 1.2.4: distributed storage database

Spark 2.0.2: distributed big data computing engine

Python 2.7.13

TensorFlow 1.0.1: machine learning framework

II. Setting Up the Environment and Building the Image

1. Pull the base image: docker pull centos

2. Start a container: docker run -it -d --name hadoop centos

3. Enter the container: docker exec -it hadoop /bin/bash

4. Install Java (these big data tools need JDK support, and some of the components are themselves written in Java). I install it under /usr.
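
A minimal install sketch, assuming the JDK tarball (here jdk-8u121-linux-x64.tar.gz, an assumed file name) has already been copied into the container:

# unpack the JDK into /usr/java (tarball name is an assumption)
mkdir -p /usr/java
tar -zxvf /root/jdk-8u121-linux-x64.tar.gz -C /usr/java
ls /usr/java    # should now contain jdk1.8.0_121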

Configure environment variables in /etc/profile:

#config java
export JAVA_HOME=/usr/java/jdk1.8.0_121
export JRE_HOME=/usr/java/jdk1.8.0_121/jre
export CLASSPATH=$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
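
To check that the variables took effect (same paths as configured above):

source /etc/profile
java -version    # should report version 1.8.0_121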

5. Install Hadoop (http://hadoop.apache.org/releases.html). I install it under /usr/local/.
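
A minimal download-and-unpack sketch, assuming the 2.7.3 binary tarball from the Apache archive (the exact URL is an assumption; see the releases page above):

curl -O https://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
tar -zxvf hadoop-2.7.3.tar.gz -C /usr/local
mv /usr/local/hadoop-2.7.3 /usr/local/hadoop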

Configure environment variables in /etc/profile:

#config hadoop
export HADOOP_HOME=/usr/local/hadoop/
export PATH=$HADOOP_HOME/bin:$PATH
export PATH=$PATH:$HADOOP_HOME/sbin
#hadoop log file location
export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

Run source /etc/profile to make the environment variables take effect.

Edit the configuration files under /usr/local/hadoop/etc/hadoop/:

(1) slaves (add the DataNode hosts)

Slave1
Slave2

(2) core-site.xml

<configuration>
      <property>
          <name>hadoop.tmp.dir</name>
          <value>file:/usr/local/hadoop/tmp</value>
          <description>Abase for other temporary directories.</description>
      </property>
      <property>
          <name>fs.defaultFS</name>
          <value>hdfs://Master:9000</value>
      </property>
</configuration>

(3) hdfs-site.xml

<configuration>
       <property>
                <name>dfs.namenode.secondary.http-address</name>
               <value>Master:9001</value>
       </property>
     <property>
             <name>dfs.namenode.name.dir</name>
             <value>file:/usr/local/hadoop/dfs/name</value>
       </property>
      <property>
              <name>dfs.datanode.data.dir</name>
              <value>file:/usr/local/hadoop/dfs/data</value>
       </property>
       <property>
               <name>dfs.replication</name>
               <value>2</value>
        </property>
        <property>
                 <name>dfs.webhdfs.enabled</name>
                  <value>true</value>
         </property>
</configuration>

(4) Create mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>Master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>Master:19888</value>
    </property>
</configuration>

(5) yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>Master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>Master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>Master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>Master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>Master:8088</value>
    </property>
</configuration>

6. Install ZooKeeper (https://zookeeper.apache.org/). I install it under /usr/local/.

Configure environment variables in /etc/profile:

#config zookeeper
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf

(1) /usr/local/zookeeper/conf/zoo.cfg

initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/usr/local/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
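
Note that the zoo.cfg above only describes a standalone instance. For the three-node ensemble used later (Master, Slave1, Slave2), ZooKeeper also needs a server list and a per-node myid file; a minimal sketch, assuming the default 2888/3888 quorum ports:

# append the ensemble definition to zoo.cfg on every node
cat >> /usr/local/zookeeper/conf/zoo.cfg <<'EOF'
tickTime=2000
server.1=Master:2888:3888
server.2=Slave1:2888:3888
server.3=Slave2:2888:3888
EOF
# each node writes its own id into dataDir
mkdir -p /usr/local/zookeeper/data
echo 1 > /usr/local/zookeeper/data/myid    # use 2 on Slave1, 3 on Slave2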

7. Install HBase (http://hbase.apache.org/). I install it under /usr/local/.

(1) /usr/local/hbase/conf/hbase-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_121
export HBASE_MANAGES_ZK=false

(2) hbase-site.xml

<configuration>
        <property>
                <name>hbase.rootdir</name>
                <value>hdfs://Master:9000/hbase</value>
        </property>

        <property>
                <name>hbase.zookeeper.property.clientPort</name>
                <value>2181</value>
        </property>
        <property>
                <name>zookeeper.session.timeout</name>
                <value>120000</value>
        </property>
        <property>
                <name>hbase.zookeeper.quorum</name>
                <value>Master,Slave1,Slave2</value>
        </property>
        <property>
                <name>hbase.tmp.dir</name>
                <value>/usr/local/hbase/data</value>
        </property>
        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>
</configuration>

(3) core-site.xml

<configuration>
      <property>
          <name>hadoop.tmp.dir</name>
          <value>file:/usr/local/hadoop/tmp</value>
          <description>Abase for other temporary directories.</description>
      </property>
      <property>
          <name>fs.defaultFS</name>
          <value>hdfs://Master:9000</value>
      </property>
</configuration>

(4) hdfs-site.xml

<configuration>

    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>

(5) regionservers (these are my three nodes)

Master #namenode
Slave1 #datanode01
Slave2 #datanode02

8. Install Spark (http://spark.apache.org/). I install it under /usr/local/.

Configure environment variables:

#config spark
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

(1) cp ./conf/slaves.template ./conf/slaves

Add the worker nodes to slaves:

Slave1
Slave2

(2) spark-env.sh

export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_MASTER_IP=10.211.1.129
export JAVA_HOME=/usr/java/jdk1.8.0_121

9. If you want to train models with TensorFlow: pip install tensorflow
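
The CentOS base image does not ship pip, so it has to be installed first; a sketch, with the TensorFlow version pinned to match the stack above:

yum install -y epel-release
yum install -y python-pip
pip install --upgrade pip
pip install tensorflow==1.0.1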

At this point our NameNode (Master) is fully configured.

10. Exit the container: exit

Build the image: docker commit edcabfcd69ff vitoyan/hadoop

Publish it: docker push vitoyan/hadoop

You can take a look at it on Docker Hub.

III. Testing

To make this fully distributed, we still need to add more nodes (more containers or hosts).

One NameNode controls multiple DataNodes.

1. Install SSH and net-tools: yum install openssh-server net-tools openssh-clients -y
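
There is no systemd inside the container, so sshd has to be given host keys and started by hand; a minimal sketch:

ssh-keygen -A      # generate the missing host keys under /etc/ssh
/usr/sbin/sshd     # start the SSH daemon (it forks into the background)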

2. Generate a key pair: ssh-keygen -t rsa

3. Append the public key to the remote host (container): ssh-copy-id -i ~/.ssh/id_rsa.pub root@10.211.1.129 (this lets the two containers reach each other without a password, which is a prerequisite for a Hadoop cluster).

4. On the host, look up the hadoop container's IP: docker exec hadoop hostname -i (then exchange public keys between the containers in the same way).
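
Alternatively, docker inspect on the host returns the same address (assuming the default bridge network and a container named hadoop):

docker inspect -f '{{.NetworkSettings.IPAddress}}' hadoop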

5. Change the hostnames to Master, Slave1, Slave2, Slave3, ... so the containers can be told apart.
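
When starting the additional containers from the image committed above, the hostname can be set directly on docker run; a sketch, assuming the image vitoyan/hadoop:

docker run -it -d --name Slave1 --hostname Slave1 vitoyan/hadoop
docker run -it -d --name Slave2 --hostname Slave2 vitoyan/hadoop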

6. Add the following to /etc/hosts in every container:

10.211.1.129 Master
10.211.1.130 Slave1
10.211.1.131 Slave2
10.102.25.3  Slave3
10.102.25.4  Slave4
10.102.25.5  Slave5
10.102.25.6  Slave6
10.102.25.7  Slave7
10.102.25.8  Slave8
10.102.25.9  Slave9
10.102.25.10 Slave10
10.102.25.11 Slave11
10.102.25.12 Slave12
10.102.25.13 Slave13
10.102.25.14 Slave14
10.102.25.15 Slave15
10.102.25.16 Slave16

7. The Hadoop configuration for each Slave can simply be copied over and then adjusted to the corresponding hostname.
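
With passwordless SSH in place, that copy is one scp per Slave; a sketch for Slave1 (repeat for the others):

scp -r /usr/local/hadoop/etc/hadoop/* root@Slave1:/usr/local/hadoop/etc/hadoop/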

8. Basic commands

(1) Start the Hadoop distributed cluster

cd /usr/local/hadoop

hdfs namenode -format

sbin/start-all.sh

Check that it started successfully: jps
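
Roughly what jps should list once everything is up (PIDs omitted; the exact set depends on the configuration):

# on Master
NameNode
SecondaryNameNode
ResourceManager
# on each Slave
DataNode
NodeManager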

(2) Start the ZooKeeper distributed coordination service

cd /usr/local/zookeeper/bin

./zkServer.sh start

Check that it started successfully: zkServer.sh status

(3) Start the HBase distributed database

cd /usr/local/hbase/bin/

./start-hbase.sh
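
A quick sanity check from the HBase shell (assuming the daemons started cleanly):

./hbase shell
status    # at the shell prompt; should list the active master and the region servers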

(4) Start the Spark big data computing engine cluster

cd /usr/local/spark/

sbin/start-master.sh

sbin/start-slaves.sh

Cluster management UI: http://master:8080
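
A simple smoke test against the standalone cluster, using the bundled SparkPi example (the 7077 master port is the default and is an assumption here):

/usr/local/spark/bin/run-example --master spark://Master:7077 SparkPi 10
# look for "Pi is roughly 3.14..." in the output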

Cluster benchmarking: http://blog.itpub.net/8183550/viewspace-684152/

My Hadoop image: https://hub.docker.com/r/vitoyan/hadoop/

Feel free to pull it.

Done!!!!!
