1. Hadoop memory configuration tuning
Memory settings in mapred-site.xml:

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1536</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024M</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>3072</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx2560M</value>
  </property>

Memory settings in yarn-site.xml:

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
    <description>Memory available on each node, in MB</description>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
    <description>Minimum memory a single task may request; default 1024 MB</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
    <description>Maximum memory a single task may request; default 8192 MB</description>
  </property>

Hadoop daemon heap settings in hadoop-env.sh:

  export HADOOP_HEAPSIZE_MAX=2048
  export HADOOP_HEAPSIZE_MIN=2048
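After restarting YARN with the new values, the ResourceManager REST API is a quick way to confirm that the NodeManagers registered with the expected memory. This is a minimal sketch, assuming the ResourceManager web address listed in section 5 (10.0.0.99:8088). Note also that mapreduce.reduce.memory.mb (3072) is larger than yarn.scheduler.maximum-allocation-mb (2048) above, so reduce container requests cannot be satisfied unless one of the two values is adjusted.

  # Cluster-wide view: totalMB should equal the number of NodeManagers times
  # yarn.nodemanager.resource.memory-mb once the new setting is in effect
  curl -s http://10.0.0.99:8088/ws/v1/cluster/metrics

  # Per-node view, including each NodeManager's used and available memory
  curl -s http://10.0.0.99:8088/ws/v1/cluster/nodes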
2. HBase parameter tuning
HBase heap setting in hbase-env.sh:

  export HBASE_HEAPSIZE=8G
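A quick sanity check after restarting HBase is to look for the new heap flag on the running daemons. A minimal sketch, assuming the JDK's jps tool is available on the HBase nodes; HBASE_HEAPSIZE is passed to the JVM as -Xmx, so it shows up in the process arguments.

  # Show the -Xmx value of the HBase master and regionserver JVMs on this host
  jps -lvm | grep -E 'HMaster|HRegionServer' | grep -oE -- '-Xmx[0-9]+[GgMmKk]?'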
3. Data export and import
Exporting HBase data:

  hbase org.apache.hadoop.hbase.mapreduce.Export NS1.GROUPCHAT /do1/GROUPCHAT
  hdfs dfs -get /do1/GROUPCHAT /opt/GROUPCHAT

Then try removing the exported data from HDFS:

  hdfs dfs -rm -r /do1/GROUPCHAT

Importing HBase data:

  hdfs dfs -put /opt/GROUPCHAT /do1/GROUPCHAT
  hdfs dfs -ls /do1/GROUPCHAT
  hbase org.apache.hadoop.hbase.mapreduce.Import NS1.GROUPCHAT /do1/GROUPCHAT
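Two points worth keeping in mind: the Import tool does not create the table, so NS1.GROUPCHAT must already exist (with the same column families) before the import runs, and the Export tool optionally takes a version count and a start/end timestamp, which is useful for incremental exports. A minimal sketch, using a hypothetical output directory and time window (timestamps are epoch milliseconds):

  # Export only the latest version of each cell written inside the time window
  hbase org.apache.hadoop.hbase.mapreduce.Export NS1.GROUPCHAT /do1/GROUPCHAT_incr 1 1672531200000 1675209600000

  # Restore the incremental export into the pre-created table
  hbase org.apache.hadoop.hbase.mapreduce.Import NS1.GROUPCHAT /do1/GROUPCHAT_incr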
4. Two masters: add the configuration on every node
backup-masters
[root@do1cloud01 conf]# pwd
/do1cloud/hbase-2.0.5/conf
[root@do1cloud01 conf]# cat backup-masters
do1cloud02
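One way to roll the change out, assuming passwordless SSH between the nodes and the install path shown above: copy the edited backup-masters file to every node, then restart the cluster so start-hbase.sh also brings up the backup master listed in it (do1cloud02).

  # Distribute the backup-masters file to the other nodes
  for host in do1cloud02 do1cloud03 do1cloud04 do1cloud05; do
    scp /do1cloud/hbase-2.0.5/conf/backup-masters $host:/do1cloud/hbase-2.0.5/conf/
  done

  # Restart HBase; start-hbase.sh starts a backup master on each host in backup-masters
  /do1cloud/hbase-2.0.5/bin/stop-hbase.sh
  /do1cloud/hbase-2.0.5/bin/start-hbase.sh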
5. The web consoles show whether the parameter settings took effect
  10.0.0.99:16010  HBase web console
  10.0.0.99:9870   Hadoop (HDFS NameNode) web console
  10.0.0.99:8088   YARN cluster web console
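The same daemons also expose their metrics as JSON on a /jmx endpoint of each web port, so the checks can be scripted instead of done in a browser. A minimal sketch; the qry filter asks each daemon for its JVM heap, which should reflect the heap sizes configured in sections 1 and 2.

  # Current and maximum heap of the HBase master, NameNode, and ResourceManager
  curl -s 'http://10.0.0.99:16010/jmx?qry=java.lang:type=Memory'
  curl -s 'http://10.0.0.99:9870/jmx?qry=java.lang:type=Memory'
  curl -s 'http://10.0.0.99:8088/jmx?qry=java.lang:type=Memory'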
6. regionservers
[root@do1cloud01 conf]# cat regionservers
do1cloud02
do1cloud03
do1cloud04
do1cloud05
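To grow the cluster later, the same pattern applies: list the new host in this file on every node and start the daemon on that host. A minimal sketch, assuming a hypothetical new node do1cloud06 with HBase installed under the same path:

  # On each existing node, append the new host to conf/regionservers
  echo do1cloud06 >> /do1cloud/hbase-2.0.5/conf/regionservers

  # On do1cloud06 itself, start the regionserver daemon
  /do1cloud/hbase-2.0.5/bin/hbase-daemon.sh start regionserver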