vi hadoop-env.sh
export JAVA_HOME=/opt/inst/jdk181
vi core-site.xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://bigdata:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoopdata</value>
</property>
<property>
  <name>hadoop.proxyuser.root.users</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
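The hadoop.tmp.dir path set above must exist and be writable before the NameNode is formatted; a minimal sketch (path taken from the config above):
mkdir -p /opt/hadoopdata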
vi hdfs-site.xml
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
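With only one DataNode, a replication factor of 1 is the only value that can actually be satisfied. Once the PATH is set up (see the /etc/profile step below), the effective value can be checked; a small sketch:
hdfs getconf -confKey dfs.replication   # should print 1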
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
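Running MapReduce on YARN normally also needs the shuffle auxiliary service in yarn-site.xml; that file is not shown in the original steps, so the snippet below is an assumed addition (hostname bigdata taken from core-site.xml):
vi yarn-site.xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>bigdata</value>
</property>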
vi /etc/profile
export HADOOP_HOME=/opt/bigdata/hadoop260
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_INSTALL=$HADOOP_HOME
source /etc/profile
hdfs namenode -format
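A quick way to confirm the environment variables took effect before starting the daemons; a minimal sketch:
hadoop version       # should print the Hadoop version (2.6.x here)
echo $HADOOP_HOME    # should print /opt/bigdata/hadoop260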
start-all.sh
jps  # check the running Java processes
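On a single-node setup like this one, jps typically shows the following daemons (illustrative listing, not actual output; PIDs omitted):
NameNode
DataNode
SecondaryNameNode
ResourceManager
NodeManager
Jps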
hdfs dfs -put /opt/a.txt /cm/
hdfs dfs -ls /cm
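If /cm does not exist yet, the -put above will fail; a small sketch for creating it and verifying the upload (file name a.txt taken from the command above):
hdfs dfs -mkdir -p /cm    # create the target directory first
hdfs dfs -cat /cm/a.txt   # print the uploaded file's contents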
NameNode: the master node; it holds the file-system directory tree (metadata)
DataNode: the worker node; it stores the actual data blocks
SecondaryNameNode: backup (checkpoint) of the master node's metadata
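These roles can be observed on the running cluster; a minimal sketch (port 50070 is the Hadoop 2.x default NameNode web UI):
hdfs dfsadmin -report     # the NameNode's view of all registered DataNodes
# NameNode web UI: http://bigdata:50070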
YARN is the framework that schedules memory resources and CPU computing power.
Scheduling goes through the ResourceManager (there is only one per cluster).
Main responsibilities of the ResourceManager:
1. Handle client requests
2. Monitor the NodeManagers
3. Launch and monitor the ApplicationMaster
4. Allocate and schedule resources
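One way to watch the ResourceManager handle a client request and launch an ApplicationMaster is to submit the bundled example job; a sketch, assuming the standard examples jar shipped with the 2.6 tarball:
yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 5
yarn application -list    # the running job shows up with its ApplicationMaster
# ResourceManager web UI: http://bigdata:8088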
NodeManager (there are many)
Main responsibilities of the NodeManager:
1. Manage the resources on a single node
2. Handle commands from the ResourceManager
3. Handle commands from the ApplicationMaster
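The NodeManagers registered with the ResourceManager can be listed directly; a minimal sketch:
yarn node -list    # shows each NodeManager, its state, and running container count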
The computation layer.