Note: this article does not cover installing a Linux virtual machine or installing Docker.
1. Environment
1.1 Host
Kernel: Linux localhost 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) x86_64 GNU/Linux
OS: Debian 8
1.2 Docker
Version: Docker version 1.9.1, build a34a1d5
Base image: crxy/centos
2. Create the docker user and group on the host
2.1 Create the docker group
sudo groupadd docker
2.2 Add the current user to the docker group
sudo gpasswd -a *** docker    (note: *** is the current system username)
2.3 Restart the Docker daemon
sudo service docker restart
2.4 After the restart, check whether the docker service works
docker version
2.5 If it still does not take effect, try rebooting the system
sudo reboot
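Group membership only takes effect for new login sessions, which is why a reboot (or logging out and back in) may be needed. A generic way to check whether the current session already sees the new group:

```shell
# List the groups the current user belongs to;
# "docker" should appear here only after logging out and back in (or rebooting)
id -nG
```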
3. Build Docker images from Dockerfiles
3.1 Build an SSH-enabled image, with account root and password root
cd /usr/local/
mkdir dockerfile
cd dockerfile/
mkdir centos-ssh-root
cd centos-ssh-root
vi Dockerfile    (note: Docker expects the file to be named Dockerfile, with a capital D)
# Start from an existing OS image
FROM centos
# Image author
MAINTAINER crxy
# Install openssh-server and sudo, and set sshd's UsePAM option to no
RUN yum install -y openssh-server sudo
RUN sed -i 's/UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config
# Install openssh-clients
RUN yum install -y openssh-clients
# Add a test user root with password root, and add it to sudoers
RUN echo "root:root" | chpasswd
RUN echo "root   ALL=(ALL)       ALL" >> /etc/sudoers
# These two lines are required on CentOS 6; without them sshd in the container rejects logins
# (-N '' sets an empty passphrase so the build stays non-interactive)
RUN ssh-keygen -t dsa -N '' -f /etc/ssh/ssh_host_dsa_key
RUN ssh-keygen -t rsa -N '' -f /etc/ssh/ssh_host_rsa_key
# Start sshd and expose port 22
RUN mkdir /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
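The sed line in the Dockerfile above rewrites sshd_config in place during the build. A quick way to sanity-check the substitution before baking it into an image is to run it against a throwaway file (not the real config):

```shell
# Create a throwaway config fragment and apply the same substitution
printf 'UsePAM yes\nPermitRootLogin yes\n' > /tmp/sshd_config.test
sed -i 's/UsePAM yes/UsePAM no/g' /tmp/sshd_config.test
cat /tmp/sshd_config.test
# The first line should now read "UsePAM no"
```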
Build the image:
docker build -t crxy/centos-ssh-root .
After the build finishes, check the generated image:
docker images
3.2 Build the JDK image
Note: use JDK version 1.7 or later
cd ..
mkdir centos-ssh-root-jdk
cd centos-ssh-root-jdk
cp ../../jdk-7u80-linux-x64.tar.gz .
vi Dockerfile
# Start from the image built in the previous step
FROM crxy/centos-ssh-root
# The archive name and unpacked directory must match the tarball copied in above
ADD jdk-7u80-linux-x64.tar.gz /usr/local/
RUN mv /usr/local/jdk1.7.0_80 /usr/local/jdk1.7
ENV JAVA_HOME /usr/local/jdk1.7
ENV PATH $JAVA_HOME/bin:$PATH
Build the image:
docker build -t crxy/centos-ssh-root-jdk .
After the build finishes, check the generated image:
docker images
3.3 Build the Hadoop image on top of the JDK image
cd ..
mkdir centos-ssh-root-jdk-hadoop
cd centos-ssh-root-jdk-hadoop
cp ../../hadoop-2.2.0.tar.gz .
vi Dockerfile
# Build from crxy/centos-ssh-root-jdk
FROM crxy/centos-ssh-root-jdk
# The archive name must match the tarball copied in above
ADD hadoop-2.2.0.tar.gz /usr/local
# Install the which package
RUN yum install -y which
# Install the net-tools package
RUN yum install -y net-tools
ENV HADOOP_HOME /usr/local/hadoop-2.2.0
ENV PATH $HADOOP_HOME/bin:$PATH
Build the image:
docker build -t crxy/centos-ssh-root-jdk-hadoop .
After the build finishes, check the generated image:
docker images
4. Set up the Hadoop distributed cluster
4.1 Cluster plan
master: hadoop0  ip: 172.17.0.10
slave1: hadoop1  ip: 172.17.0.11
slave2: hadoop2  ip: 172.17.0.12
Check the Docker bridge interface docker0
4.2 Create and start the containers hadoop0, hadoop1, and hadoop2
# master node
docker run --name hadoop0 --hostname hadoop0 -d -P -p 50070:50070 -p 8088:8088 crxy/centos-ssh-root-jdk-hadoop
# slave node
docker run --name hadoop1 --hostname hadoop1 -d -P crxy/centos-ssh-root-jdk-hadoop
# slave node
docker run --name hadoop2 --hostname hadoop2 -d -P crxy/centos-ssh-root-jdk-hadoop
List the containers: docker ps -a
4.3 Assign fixed IPs to the Hadoop cluster
4.3.1 Download pipework
https://github.com/jpetazzo/pipework.git
4.3.2 Upload the downloaded zip to the host server, unzip it, and rename it
unzip pipework-master.zip
mv pipework-master pipework
cp -rp pipework/pipework /usr/local/bin/
4.3.3 Install bridge-utils
apt-get install -y bridge-utils    (the host here runs Debian; on a CentOS host use: yum -y install bridge-utils)
4.3.4 Assign a fixed IP to each container
pipework docker0 hadoop0 172.17.0.10/24
pipework docker0 hadoop1 172.17.0.11/24
pipework docker0 hadoop2 172.17.0.12/24
4.3.5 Verify that the IPs are reachable
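A minimal sketch of that check, pinging each planned address once from the host (addresses are the ones assigned above; unreachable hosts are reported rather than aborting the loop):

```shell
# Ping each planned address once and record which hosts answer
for ip in 172.17.0.10 172.17.0.11 172.17.0.12; do
  if ping -c 1 -W 1 "$ip" > /dev/null 2>&1; then
    echo "$ip reachable"
  else
    echo "$ip unreachable"
  fi
done | tee /tmp/ping_report.txt
```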
4.4 Configure hadoop0
4.4.1 Attach to hadoop0
docker exec -it hadoop0 /bin/bash
4.4.2 Add host entries on hadoop0
vi /etc/hosts
172.17.0.10    hadoop0
172.17.0.11    hadoop1
172.17.0.12    hadoop2
4.4.3 Edit the Hadoop configuration files on hadoop0
cd /usr/local/hadoop-2.2.0/etc/hadoop
Edit hadoop-env.sh plus the four main configuration files: core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml
1) hadoop-env.sh
# point Hadoop at the JDK
export JAVA_HOME=/usr/local/jdk1.7
2) core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop0:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>
</configuration>
3) hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>
4) yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <description>The hostname of the RM.</description>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop0</value>
    </property>
</configuration>
5) mapred-site.xml
cp mapred-site.xml.template mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
4.4.4 Format HDFS
bin/hdfs namenode -format
4.5 Configure hadoop1 and hadoop2
4.5.1 Repeat the configuration from 4.4
4.6 Back on hadoop0, set up passwordless SSH login
4.6.1 Configure SSH
cd ~
mkdir .ssh
cd .ssh
ssh-keygen -t rsa    (just press Enter at every prompt)
ssh-copy-id -i localhost
ssh-copy-id -i hadoop0
ssh-copy-id -i hadoop1
ssh-copy-id -i hadoop2
On hadoop1, run:
cd ~
cd .ssh
ssh-keygen -t rsa    (just press Enter at every prompt)
ssh-copy-id -i localhost
ssh-copy-id -i hadoop1
On hadoop2, run:
cd ~
cd .ssh
ssh-keygen -t rsa    (just press Enter at every prompt)
ssh-copy-id -i localhost
ssh-copy-id -i hadoop2
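The Enter-pressing above can be avoided entirely: ssh-keygen accepts an empty passphrase via -N ''. A sketch in a scratch directory (not the real ~/.ssh, so nothing on the machine is touched):

```shell
# Generate a key pair non-interactively: -N '' sets an empty passphrase,
# -f picks the output path, -q suppresses the banner
rm -rf /tmp/sshtest && mkdir -p /tmp/sshtest
ssh-keygen -t rsa -N '' -f /tmp/sshtest/id_rsa -q
ls /tmp/sshtest
```

Pointing -f at ~/.ssh/id_rsa would make the cluster scripts above fully scriptable.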
4.6.2 Configure the slaves file
vi etc/hadoop/slaves
hadoop1
hadoop2
4.6.3 Copy Hadoop to the slaves
scp -rq /usr/local/hadoop-2.2.0 hadoop1:/usr/local
scp -rq /usr/local/hadoop-2.2.0 hadoop2:/usr/local
5. Start the Hadoop cluster
5.1 Start it
hadoop namenode -format -clusterid clustername
cd /usr/local/hadoop-2.2.0
sbin/start-all.sh
5.2 Verify that the cluster started correctly
5.2.1 On hadoop0
jps
5.2.2 On hadoop1
jps
5.2.3 On hadoop2
jps
5.3 Check the HDFS file system status
bin/hdfs dfsadmin -report
6. Test that HDFS and YARN work
6.1 Create a plain file on the master host (hadoop0)
1) Check whether any files already exist in the file system
hadoop fs -ls
2) Create a directory in HDFS (the default working directory is /user/$USER)
hadoop fs -mkdir /user/data
3) Create a plain file in the user's local directory
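No command is given for this step; a minimal sketch of creating such a file (written under /tmp here to stay self-contained; the article's put command below expects it at /home/suchao/data/1.txt, so adjust the path to match your setup):

```shell
# Create a small sample file to upload to HDFS in the next step
mkdir -p /tmp/data
echo "hello hadoop" > /tmp/data/1.txt
cat /tmp/data/1.txt
# → hello hadoop
```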
4) Put the file into the HDFS file system
hadoop fs -put /home/suchao/data/1.txt /user/data
5) Display it in the terminal
hadoop fs -cat /user/data/1.txt
Source: http://blog.csdn.net/xu470438000/article/details/50512442