Prerequisites:
1. Download the software
Download Hadoop, ZooKeeper, and the JDK:
hadoop-2.8.0.tar.gz
zookeeper-3.4.10.tar.gz
jdk1.8.0_121
2. Edit the /etc/hosts file
Add the node entries (IP first, then hostname):
172.22.14.107 node1
172.22.14.172 node3
172.22.14.169 node4
3. Set the hostname
Note: from the command line, set each machine's hostname to its own IP, for example:
On machine 172.22.14.107 run
hostname 172.22.14.107
On machine 172.22.14.172 run
hostname 172.22.14.172
On machine 172.22.14.169 run
hostname 172.22.14.169
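Note that the hostname command only takes effect until the next reboot. As a hedged extra (assuming a systemd-based distribution), the name can be persisted with hostnamectl:
hostnamectl set-hostname 172.22.14.107   # run on 172.22.14.107; adjust per machine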
4. Set up SSH public keys
SSH public-key configuration.
On node1, run:
ssh-keygen
ssh-copy-id root@node3
ssh-copy-id root@node4
Do the same on node3 and node4.
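Before continuing, it is worth confirming that passwordless login works (a quick check, not part of the original steps):
ssh root@node3 hostname   # should print node3's hostname without prompting for a password
ssh root@node4 hostname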
====================================================================
Setup steps:
Make the following changes in the /etc/profile file.
1. Configure JAVA_HOME
export JAVA_HOME=/opt/jdk1.8.0_121
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
2. Configure HADOOP_HOME
export HADOOP_HOME=/hadoop/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"
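After editing /etc/profile, reload it and confirm both tools resolve (hadoop version assumes the tarball has already been unpacked to /hadoop/hadoop):
source /etc/profile
java -version
hadoop version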
3. Edit the ZooKeeper configuration file
zookeeper/conf/zoo.cfg
# milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/hadoop/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=node1:2888:3888
server.3=node3:2888:3888
server.4=node4:2888:3888
Create a myid file in /hadoop/zookeeper/data on each machine:
The myid file on 172.22.14.107 (node1) contains: 1
The myid file on 172.22.14.172 (node3) contains: 3
The myid file on 172.22.14.169 (node4) contains: 4
This id must match the corresponding server.id entry above, as shown in the sketch below.
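For example, assuming the dataDir configured above, the three files can be created like this (run each line on the matching machine):
echo 1 > /hadoop/zookeeper/data/myid   # on node1
echo 3 > /hadoop/zookeeper/data/myid   # on node3
echo 4 > /hadoop/zookeeper/data/myid   # on node4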
4. Start ZooKeeper
On node1, node3, and node4, run (from the zookeeper/bin directory):
./zkServer.sh start
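Once all three have started, you can check the ensemble state; one node should report leader and the other two follower:
./zkServer.sh status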
=========================The above is the ZooKeeper configuration====================
=========================The following is the Hadoop HDFS configuration==================
5. Edit the Hadoop configuration files
(1) $HADOOP_HOME/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hdcluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/tmp</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>node1:2181,node3:2181,node4:2181</value>
  </property>
</configuration>
(2) $HADOOP_HOME/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>hdcluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.hdcluster</name>
    <value>node1,node3</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hdcluster.node1</name>
    <value>node1:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hdcluster.node3</name>
    <value>node3:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.hdcluster.node1</name>
    <value>node1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.hdcluster.node3</name>
    <value>node3:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node1:8485;node3:8485;node4:8485/hdcluster</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.hdcluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/usr/local/hadoop/tmp/journal</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled.hdcluster</name>
    <value>true</value>
  </property>
</configuration>
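After saving, a quick way to sanity-check that the file is being picked up is hdfs getconf (part of the stock CLI):
hdfs getconf -confKey dfs.nameservices              # should print hdcluster
hdfs getconf -confKey dfs.ha.namenodes.hdcluster    # should print node1,node3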
(3) $HADOOP_HOME/etc/hadoop/yarn-site.xml
<?xml version="1.0"?>
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
(4) $HADOOP_HOME/etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
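Note: a stock Hadoop 2.8.0 distribution ships only a template for this file, so it is usually created by copying the template first:
cp $HADOOP_HOME/etc/hadoop/mapred-site.xml.template $HADOOP_HOME/etc/hadoop/mapred-site.xml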
(5) Set environment variables in $HADOOP_HOME/etc/hadoop/hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/opt/jdk1.8.0_121
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/hadoop/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"
6. Start Hadoop
(1) Start the JournalNodes
On node1, node3, and node4 (the qjournal quorum configured above), run:
hadoop-daemon.sh start journalnode
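You can confirm the daemon is up on each node with jps (bundled with the JDK):
jps   # the output should include a JournalNode process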
(2) Start the ZKFC
First initialize the HA state in ZooKeeper; this is a one-time step, run on node1 only:
hdfs zkfc -formatZK
Then start the NameNode election process (ZKFC) on node1 and node3:
hadoop-daemon.sh start zkfc
(3) Start the NameNodes
Format the NameNode on node1 only (formatting both nodes independently would produce mismatched cluster IDs), then start it:
hdfs namenode -format
hadoop-daemon.sh start namenode
On node3, copy the formatted metadata from node1 instead of formatting, then start:
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode
(4) Start the DataNodes
On the nodes other than node1 and node3, start the DataNode:
hadoop-daemon.sh start datanode
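With all daemons running, a final sanity check (not in the original steps; node1 and node3 are the NameNode IDs from dfs.ha.namenodes.hdcluster):
hdfs haadmin -getServiceState node1   # prints active or standby
hdfs haadmin -getServiceState node3
hdfs dfsadmin -report                 # lists the live DataNodes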
NameNode web UI:
http://172.22.14.172:50070/dfshealth.html#tab-overview
HBase web UI: