Week 27 Micro-Position Assignment

1. Give an overview of Docker container virtualization technology, then complete the following exercises:
(1) Build a CentOS-based httpd image. Requirements: its document root is /web/htdocs, a home page exists, it runs as the apache user, and port 80 is exposed (see the hedged sketch right after this list);
(2) Further, serve the page files from a volume on the host;
(3) Further, make httpd able to parse PHP pages;
(4) Build a CentOS-based MariaDB image, and make the containers able to communicate with each other;
(5) Deploy WordPress on the httpd container;
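A hedged sketch for exercise (1), using a Dockerfile instead of the docker commit flow shown below. The image tag web-httpd:v1 and the /tmp/web-httpd build directory are placeholders of mine, and the httpd.conf edits assume centos:latest resolves to CentOS 7 with the stock httpd 2.4 package (which already sets User/Group to apache):

mkdir -p /tmp/web-httpd && cd /tmp/web-httpd
cat > Dockerfile <<'EOF'
# CentOS-based httpd image: document root /web/htdocs, home page present,
# workers run as the apache user, port 80 exposed.
FROM centos:latest
RUN yum -y install httpd && yum clean all && \
    mkdir -p /web/htdocs && \
    echo '<h1>hello from /web/htdocs</h1>' > /web/htdocs/index.html && \
    sed -i 's@^DocumentRoot.*@DocumentRoot "/web/htdocs"@' /etc/httpd/conf/httpd.conf && \
    printf '<Directory "/web/htdocs">\n    Require all granted\n</Directory>\n' >> /etc/httpd/conf/httpd.conf
EXPOSE 80/tcp
CMD ["/usr/sbin/httpd", "-f", "/etc/httpd/conf/httpd.conf", "-DFOREGROUND"]
EOF
docker build -t web-httpd:v1 .
docker run -d -p 80:80 web-httpd:v1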
1) Build a CentOS Docker image that carries the Apache service.
Base images:
[root@localhost ~]# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
sshd-centos latest 64136bdc0cc8 22 hours ago 261.8 MB
centos latest 0f73ae75014f 5 weeks ago 172.3 MB
2) Create a container from the sshd-centos image, mapping the container's SSH port 22 to port 10022 on the host:
docker run -p 10022:22 -d sshd-centos /usr/sbin/sshd -D
3) Check the container's status:
[root@localhost ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
66b4ab8dbdeb sshd-centos "/usr/sbin/sshd -D" 22 hours ago Up 12 seconds 0.0.0.0:10022->22/tcp trusting_morse
4) SSH into the container from the host:
ssh localhost -p 10022
5) If the ssh command is not found, install openssh-clients:
yum install -y openssh-clients
6) Download the Apache source tarball and compile/install it
1. Install wget: yum install -y wget
2. Download the source tarball: cd /usr/local/src
wget http://apache.fayea.com/httpd/httpd-2.4.17.tar.gz

3. Unpack the tarball: tar -zxvf httpd-2.4.17.tar.gz
cd httpd-2.4.17
4. Install the gcc and make toolchain plus Apache's dependencies
The Docker image we pulled is a stripped-down one, so it does not even include basics like gcc and make; they have to be installed by hand, along with Apache's dependencies apr and pcre:
yum install -y gcc make apr-devel apr apr-util apr-util-devel pcre-devel
5. Configure and build: ./configure --prefix=/usr/local/apache2 --enable-mods-shared=most --enable-so
make
make install
6. Edit the Apache configuration file:
sed -i 's/#ServerName www.example.com:80/ServerName localhost:80/g' /usr/local/apache2/conf/httpd.conf
7. Start the Apache service: /usr/local/apache2/bin/httpd
8. Check that it started: ps aux
9. Write a script that starts the ssh and Apache services
cd /usr/local/sbin
vi run.sh

------------------------------------------------------
#!/bin/bash

/usr/sbin/sshd &
/usr/local/apache2/bin/httpd -D FOREGROUND

Make the script executable so it can run: chmod 755 run.sh
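A small variant of run.sh, my suggestion rather than part of the original steps: using exec makes httpd replace the shell as the container's main process, so docker stop signals httpd directly.

#!/bin/bash
# sshd goes to the background; exec replaces this shell with httpd,
# so httpd receives signals sent to the container's PID 1.
/usr/sbin/sshd &
exec /usr/local/apache2/bin/httpd -D FOREGROUND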

10. Create an image carrying both the Apache and ssh services
   1) Look up the current container's CONTAINER ID:
   [root@localhost ~]# docker ps -a
 CONTAINER ID        IMAGE               COMMAND               CREATED                   STATUS                      PORTS                   NAMES

66b4ab8dbdeb sshd-centos "/usr/sbin/sshd -D" 23 hours ago Up 45 minutes 0.0.0.0:10022->22/tcp trusting_morse
2) Commit the container, by CONTAINER ID, to a new image: docker commit 66b4ab8dbdeb apache:centos
3) Inspect the newly created image:
[root@localhost ~]# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
apache centos 31668185b8f1 About a minute ago 433.4 MB
sshd-centos latest 64136bdc0cc8 23 hours ago 261.8 MB
centos latest 0f73ae75014f 5 weeks ago 172.3 MB
11. Create a container from the new image
Map the container's ports 22 and 80 to ports 2222 and 8000 on the host:
docker run -d -p 2222:22 -p 8000:80 apache:centos /usr/local/sbin/run.sh
Check the resulting container:
[root@localhost ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7a9021c9b510 apache:centos "/usr/local/sbin/run 4 minutes ago Up 4 minutes 0.0.0.0:2222->22/tcp, 0.0.0.0:8000->80/tcp tender_payne
66b4ab8dbdeb sshd-centos "/usr/sbin/sshd -D" 23 hours ago Up 57 minutes 0.0.0.0:10022->22/tcp trusting_morse
6c40d0d2d8be centos "/bin/bash" 23 hours ago Exited (137) 23 hours ago centos-ssh
12. Test the Apache service: [root@localhost ~]# curl localhost:8000
<html><body><h1>It works!</h1></body></html>
13. Test the ssh service
[root@localhost ~]#ssh localhost -p 2222
root@localhost's password:
Last login: Sat Nov 13 14:20:41 2017 from 172.17.42.1
[root@7a9021c9b510 ~]#
Test passed!
14. Map a host directory into the container
Map the host's /www directory onto the container's /usr/local/apache2/htdocs directory.
1) Create the directory and a home page file on the host:
mkdir /www
cd /www
vi index.html
The page content is:
<html><body><h1>It's test!</h1></body></html>
To tell it apart from the default home page served by the earlier container on port 8000, I changed "It works" to "It's test".
2) Create a new container:
docker run -d -p 2223:22 -p 8001:80 -v /www:/usr/local/apache2/htdocs:ro apache:centos /usr/local/sbin/run.sh
This maps the container's ports 22 and 80 to ports 2223 and 8001 on the host, and uses the -v flag to map /www onto /usr/local/apache2/htdocs; the ro (read-only) flag is added for safety and isolation.
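A quick check that the bind mount and the ro flag behave as expected (bd8335195b44 is the container ID shown in the listing below; docker exec requires Docker 1.3 or later):

echo "<h1>Updated from host</h1>" > /www/index.html
curl localhost:8001        # serves the new content immediately
docker exec bd8335195b44 touch /usr/local/apache2/htdocs/x
# expected failure: "Read-only file system"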
3) Check the resulting containers:
[root@localhost www]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bd8335195b44 apache:centos "/usr/local/sbin/run 9 minutes ago Up 9 minutes 0.0.0.0:2223->22/tcp, 0.0.0.0:8001->80/tcp cranky_nobel
7a9021c9b510 apache:centos "/usr/local/sbin/run 21 minutes ago Up 21 minutes 0.0.0.0:2222->22/tcp, 0.0.0.0:8000->80/tcp tender_payne
66b4ab8dbdeb sshd-centos "/usr/sbin/sshd -D" 23 hours ago Up About an hour 0.0.0.0:10022->22/tcp trusting_morse
6c40d0d2d8be centos "/bin/bash" 24 hours ago Exited (137) 23 hours ago centos-ssh
4) Test:
[root@localhost www]# curl localhost:8001
<html><body><h1>It's test!</h1></body></html>

[root@localhost www]# curl localhost:8000
<html><body><h1>It works!</h1></body></html>
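Exercises (4) and (5) are not covered by the walkthrough above. As a hedged sketch, the quickest route in this Docker era is the official mariadb and wordpress images plus --link for container-to-container communication; the container names db/wp, the password, and the published port 8080 are placeholders of mine, while MYSQL_ROOT_PASSWORD and WORDPRESS_DB_PASSWORD are the variables documented by those images:

# (4) MariaDB container; linked containers reach it through the link alias.
docker run --name db -e MYSQL_ROOT_PASSWORD=secret -d mariadb
# (5) WordPress container; --link db:mysql injects db's address under the
# alias "mysql", which the wordpress image uses as its default DB host.
docker run --name wp --link db:mysql -e WORDPRESS_DB_PASSWORD=secret -p 8080:80 -d wordpress

Building the MariaDB image from a CentOS base instead, as the exercise literally asks, follows the same pattern as the Apache image above: install mariadb-server in a Dockerfile, expose 3306, and start mysqld in the foreground.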

Example 2:
Exporting and importing containers:
docker export
docker import

Saving and loading images:
docker save -o /PATH/TO/SOMEFILE.TAR NAME[:TAG]

docker load -i /PATH/FROM/SOMEFILE.TAR
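The difference between the two pairs matters: save/load keeps an image's layers and tags, while export/import flattens a container's filesystem into a single layer and drops that metadata. A sketch using the image and container from the exercise above:

# Preserve apache:centos with layers and tag, e.g. to move it to another host.
docker save -o /tmp/apache-centos.tar apache:centos
docker load -i /tmp/apache-centos.tar
# Flatten the filesystem of container 7a9021c9b510 and re-import it as an image.
docker export 7a9021c9b510 > /tmp/apache-flat.tar
docker import /tmp/apache-flat.tar apache:flat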

Review:
Dockerfile instructions:
FROM,MAINTAINER
COPY,ADD
WORKDIR, ENV
USER
VOLUME
EXPOSE
RUN
CMD,ENTRYPOINT
ONBUILD

Dockerfile (2)
Example 2: httpd

FROM centos:latest
MAINTAINER MageEdu "<mage@magedu.com>"

RUN sed -i -e 's@^mirrorlist.*repo=os.*$@baseurl=http://mirrors.163.com/centos/$releasever/@g' -e '/^mirrorlist.*repo=updates/a enabled=0' -e '/^mirrorlist.*repo=extras/a enabled=0' /etc/yum.repos.d/CentOS-Base.repo && \
yum -y install httpd php php-mysql php-mbstring && \
yum clean all && \
echo -e '<?php\n\tphpinfo();\n?>' > /var/www/html/info.php

EXPOSE 80/tcp

CMD ["/usr/sbin/httpd","-f","/etc/httpd/conf/httpd.conf","-DFOREGROUND"]

2. Set up a Hadoop cluster.
1. First, download a Hadoop tarball from the official site. The version I installed is hadoop-2.7.1.tar.gz. Because it is the latest release and differs a great deal from earlier Hadoop versions, many tutorials on the web no longer apply, which is where most of my installation problems came from. Download link: http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz

2. Once the download finishes (the tarball is fairly large, about 201 MB, so be patient), place it in a directory on the Linux box. My system is CentOS release 6.5 (Final), and I put it at /usr/local/jiang/hadoop-2.7.1.tar.gz, then ran tar zxvf hadoop-2.7.1.tar.gz to unpack it. (All of these steps are done on the cluster's master host.)

3. Configure the hosts file

Edit /etc/hosts to map hostnames to IP addresses. Every machine in the cluster needs this. Here logsrv03 is the master and the other two are slaves:

[root@logsrv03 /]# vi /etc/hosts
172.17.6.142 logsrv02
172.17.6.149 logsrv04
172.17.6.148 logsrv03

4. Install the JDK (my machines already have it, so I did not need to install it)
The JDK I use is jdk1.7.0_71. If you don't have one, download the JDK, unpack it to some directory, configure the environment variables in /etc/profile, and then run java -version to verify the installation.
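For reference, a minimal /etc/profile fragment for a JDK unpacked under /usr/local (the exact path is an assumption; adjust it to wherever you unpacked the JDK):

export JAVA_HOME=/usr/local/jdk1.7.0_71
export PATH=$PATH:$JAVA_HOME/bin

Run source /etc/profile afterwards so the current shell picks it up.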

5. Configure passwordless SSH login

Passwordless login here is from the master's point of view: the master and slaves need to communicate, and once this is configured, SSH logins between master and slaves no longer prompt for a password.

If ssh is not present on the system, install it first, then run:
[root@logsrv03 ~]# ssh-keygen -t rsa
This generates the private key id_rsa and the public key id_rsa.pub under the root user's ~/.ssh directory:

[root@logsrv03 /]# cd ~  
[root@logsrv03 ~]# cd .ssh  
[root@logsrv03 .ssh]# ll  
total 20
-rw-------  1 root root 1185 Nov 10 14:41 authorized_keys
-rw-------  1 root root 1675 Nov  2 15:57 id_rsa
-rw-r--r--  1 root root  395 Nov  2 15:57 id_rsa.pub

Then copy this public key into the .ssh directory of each slave, and append the public key (id_rsa.pub) to the authorized keys:

cat id_rsa.pub >> authorized_keys

Then fix the permissions:

[root@logsrv04 .ssh]# chmod 600 authorized_keys   
[root@logsrv04 .ssh]# chmod 700 -R .ssh

Copy the generated public key to the .ssh directory on each slave:

[root@logsrv03 .ssh]# scp -r id_rsa.pub root@logsrv02:~/.ssh/  
[root@logsrv03 .ssh]# scp -r id_rsa.pub root@logsrv04:~/.ssh/

Then restart the ssh service on all machines:

[root@logsrv03 .ssh]# service sshd restart  
[root@logsrv02 .ssh]# service sshd restart  
[root@logsrv04 .ssh]# service sshd restart

Then verify that passwordless login works; here I verify from the master:

[root@logsrv03 .ssh]# ssh logsrv02  
[root@logsrv03 .ssh]# ssh logsrv04

If logging in to the slaves no longer asks for a password, passwordless login is set up correctly.
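As an aside, ssh-copy-id collapses the copy/append/chmod steps above into a single command per slave (it ships with the openssh-clients package):

[root@logsrv03 ~]# ssh-copy-id root@logsrv02
[root@logsrv03 ~]# ssh-copy-id root@logsrv04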

6. Install Hadoop and configure the Hadoop environment variables in /etc/profile (required on every machine):

export HADOOP_HOME=/usr/local/jiang/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin
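Reload the profile and confirm that the hadoop command resolves:

source /etc/profile
hadoop version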

7. Edit the configuration files:

(1) Edit hadoop-2.7.1/etc/hadoop/hadoop-env.sh

[root@logsrv03 /]# cd usr/local/jiang/hadoop-2.7.1
[root@logsrv03 hadoop-2.7.1]# cd etc/hadoop/
[root@logsrv03 hadoop]# vi hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.7.0_71

(2) Edit hadoop-2.7.1/etc/hadoop/slaves

[root@logsrv03 hadoop]# vi slaves   
logsrv02  
logsrv04

(3) Edit hadoop-2.7.1/etc/hadoop/core-site.xml

<configuration>  
<property>  
                <name>fs.defaultFS</name>  
                <value>hdfs://logsrv03:8020</value>  
        </property>  
        <property>  
                <name>io.file.buffer.size</name>  
                <value>131072</value>  
        </property>  
        <property>  
                <name>hadoop.tmp.dir</name>  
                <value>file:/opt/hadoop/tmp</value>  
        </property>  
        <property>  
                <name>fs.hdfs.impl</name>  
                <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>  
                <description>The FileSystem for hdfs: uris.</description>  
        </property>  
        <property>  
                <name>fs.file.impl</name>  
                <value>org.apache.hadoop.fs.LocalFileSystem</value>  
                <description>The FileSystem for file: uris.</description>  
    </property>  
</configuration>

(4) Edit hadoop-2.7.1/etc/hadoop/hdfs-site.xml

<configuration>  
<property>  
                <name>dfs.namenode.name.dir</name>  
                <value>file:/opt/hadoop/dfs/name</value>  
        </property>  
        <property>  
                <name>dfs.datanode.data.dir</name>  
                <value>file:/opt/hadoop/dfs/data</value>  
        </property>  
        <property>  
                <name>dfs.replication</name>      
                <value>2</value>   
        </property>  
</configuration>

(5) Edit hadoop-2.7.1/etc/hadoop/yarn-site.xml

<configuration>  

<!-- Site specific YARN configuration properties -->  
<property>  
                <name>yarn.resourcemanager.address</name>  
                <value>logsrv03:8032</value>  
        </property>  
        <property>  
                <name>yarn.resourcemanager.scheduler.address</name>  
                <value>logsrv03:8030</value>  
        </property>  
        <property>  
                <name>yarn.resourcemanager.resource-tracker.address</name>  
                <value>logsrv03:8031</value>  
        </property>  
        <property>  
                <name>yarn.resourcemanager.admin.address</name>  
                <value>logsrv03:8033</value>  
        </property>  
        <property>  
                <name>yarn.resourcemanager.webapp.address</name>  
                <value>logsrv03:8088</value>  
        </property>  
        <property>  
                <name>yarn.nodemanager.aux-services</name>  
                <value>mapreduce_shuffle</value>  
        </property>  
        <property>  
                <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>  
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>  
        </property>  
</configuration>

(6) Edit hadoop-2.7.1/etc/hadoop/mapred-site.xml

<configuration>  
<property>  
                <name>mapreduce.framework.name</name>  
                <value>yarn</value>  
        </property>  
        <property>  
                <name>mapreduce.jobhistory.address</name>  
                <value>logsrv03:10020</value>  
        </property>  
        <property>  
                <name>mapreduce.jobhistory.webapp.address</name>  
                <value>logsrv03:19888</value>  
        </property>  
</configuration>
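Note: the stock hadoop-2.7.1 tarball ships only mapred-site.xml.template; if mapred-site.xml does not exist yet, create it from the template first:

[root@logsrv03 hadoop]# cp mapred-site.xml.template mapred-site.xml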

8. Once these files are configured, copy the whole hadoop-2.7.1 directory to each slave, preferably into the same path as on the master:

[root@logsrv03 hadoop-2.7.1]# scp -r hadoop-2.7.1 root@logsrv02:/usr/local/jiang/  
[root@logsrv03 hadoop-2.7.1]# scp -r hadoop-2.7.1 root@logsrv04:/usr/local/jiang/

9. That completes the configuration. Now start Hadoop, beginning with formatting HDFS:
[root@logsrv03 hadoop-2.7.1]# bin/hdfs namenode -format
If the output contains "successfully formatted", the format succeeded.
10. Then start HDFS:
[root@logsrv03 hadoop-2.7.1]# sbin/start-dfs.sh
At this point you can check which processes started.
Master logsrv03:
[root@logsrv03 hadoop-2.7.1]# jps
29637 NameNode
29834 SecondaryNameNode

Slaves logsrv02 and logsrv04:
[root@logsrv02 hadoop-2.7.1]# jps
10774 DataNode

[root@logsrv04 hadoop-2.7.1]# jps
20360 DataNode

11. Start YARN:
[root@logsrv03 hadoop-2.7.1]# sbin/start-yarn.sh

The processes running now are:
Master logsrv03:

[root@logsrv03 hadoop-2.7.1]# jps   
29637 NameNode  
29834 SecondaryNameNode  
30013 ResourceManager

Slaves logsrv02 and logsrv04:

[root@logsrv02 hadoop-2.7.1]# jps
10774 DataNode
10880 NodeManager

[root@logsrv04 hadoop-2.7.1]# jps
20360 DataNode
20483 NodeManager

At this point the entire cluster is configured, congratulations. You can view the Hadoop cluster overview at http://logsrv03:8088/cluster

To inspect HDFS, visit http://logsrv03:50070
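As a final smoke test from the master, confirm that both DataNodes have registered and that HDFS accepts writes (the /test path is arbitrary):

[root@logsrv03 hadoop-2.7.1]# bin/hdfs dfsadmin -report    # should show 2 live datanodes
[root@logsrv03 hadoop-2.7.1]# bin/hdfs dfs -mkdir /test
[root@logsrv03 hadoop-2.7.1]# bin/hdfs dfs -put etc/hadoop/core-site.xml /test
[root@logsrv03 hadoop-2.7.1]# bin/hdfs dfs -ls /test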
