YARN Multi-Tenant Resource Pool Configuration
When multiple users run jobs on the same Hadoop cluster, resources must be effectively partitioned, for example to separate test resources from production resources.
1. View the default resource pool
# Visit http://192.168.1.25:8088/cluster/scheduler (i.e. master.hadoop)
# You will see the default resource pool, default, referred to here as a queue; when a user submits a job, it consumes resources from the default pool
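# The queues can also be listed from the command line; a quick check, assuming the standard Hadoop 2.x CLI:
hadoop shell > mapred queue -list    # lists each queue with its state and capacity information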
2. Configure resource pools
hadoop shell > vim etc/hadoop/yarn-site.xml    # YARN configuration file
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master.hadoop</value>
    </property>
    <property>
        <name>yarn.acl.enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>${yarn.log.dir}/userlogs</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/tmp/logs</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
# yarn.acl.enable enables ACL-based authorization
# The CapacityScheduler (capacity-based scheduling) is chosen here
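# The active scheduler can also be confirmed through the ResourceManager REST API (assuming the default web port 8088):
hadoop shell > curl http://master.hadoop:8088/ws/v1/cluster/scheduler    # the JSON response names the scheduler type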
hadoop shell > vim etc/hadoop/capacity-scheduler.xml    # scheduler sub-config file; defines the resource pool parameters
<configuration>
    <property>
        <name>yarn.scheduler.capacity.maximum-applications</name>
        <value>10000</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
        <value>0.1</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.resource-calculator</name>
        <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.root.queues</name>
        <value>default,prod</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.root.default.capacity</name>
        <value>30</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
        <value>1</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
        <value>100</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.root.default.state</name>
        <value>RUNNING</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
        <value>*</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
        <value>*</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.root.prod.capacity</name>
        <value>70</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.root.prod.user-limit-factor</name>
        <value>1</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.root.prod.maximum-capacity</name>
        <value>100</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.root.prod.state</name>
        <value>RUNNING</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.root.prod.acl_submit_applications</name>
        <value>wang</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.root.prod.acl_administer_queue</name>
        <value>wang</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.node-locality-delay</name>
        <value>40</value>
    </property>
    <property>
        <name>yarn.scheduler.capacity.queue-mappings-override.enable</name>
        <value>false</value>
    </property>
</configuration>
# yarn.scheduler.capacity.maximum-applications maximum number of applications that can be running or pending in the cluster at the same time
# yarn.scheduler.capacity.maximum-am-resource-percent upper bound on the fraction of cluster resources that may be used to run ApplicationMasters, which effectively limits the number of concurrently running applications; default 10%
# yarn.scheduler.capacity.resource-calculator resource calculation method; the default (DefaultResourceCalculator) accounts for memory only, while DominantResourceCalculator accounts for both memory and CPU
# yarn.scheduler.capacity.root.queues defines the resource pools: default and prod
# yarn.scheduler.capacity.root.<queue>.capacity percentage of total resources assigned to each pool; the capacities of sibling pools must sum to 100%
# yarn.scheduler.capacity.root.<queue>.user-limit-factor multiple of the queue capacity a single user may consume; the default of 1 caps each user at the queue's configured capacity (e.g. 30% of the cluster for the default queue above)
# yarn.scheduler.capacity.root.default.maximum-capacity hard upper limit on each pool's resource usage; because resources are shared, a pool may elastically use more than its configured capacity, up to this limit
# yarn.scheduler.capacity.root.default.state pool state, STOPPED / RUNNING; when STOPPED, users cannot submit applications to the queue or its child queues
# yarn.scheduler.capacity.root.default.acl_submit_applications restricts which users and groups may submit applications to the queue; the default * means everyone; this property is inherited, so a child queue inherits its parent queue's ACL
# yarn.scheduler.capacity.root.default.acl_administer_queue sets which users and groups may administer the queue, e.g. kill any application in it
# yarn.scheduler.capacity.node-locality-delay number of scheduling opportunities the scheduler will miss while waiting for node-local placement before relaxing to rack-local containers; -1 disables it, default 40
# yarn.scheduler.capacity.queue-mappings-override.enable whether a queue mapping can override the queue specified by the user; default false
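# For reference: the mappings that queue-mappings-override.enable refers to are defined with
# yarn.scheduler.capacity.queue-mappings. A minimal sketch (not part of the configuration above),
# assuming you wanted jobs from user wang routed to prod automatically:
<property>
    <name>yarn.scheduler.capacity.queue-mappings</name>
    <!-- syntax: u:user:queue or g:group:queue; the first matching rule wins -->
    <value>u:wang:prod</value>
</property>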
hadoop shell > vim etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.cluster.acls.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.staging-dir</name>
        <value>/tmp/hadoop-yarn/staging</value>
    </property>
</configuration>
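# mapreduce.framework.name set to yarn so MapReduce jobs run on YARN
# mapreduce.cluster.acls.enabled enables ACL checks for MapReduce job operations (e.g. viewing or modifying a job)
# yarn.app.mapreduce.am.staging-dir staging directory where job submission files are written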
3. Apply the configuration
hadoop shell > yarn rmadmin -refreshQueues    # adding queues or changing properties only requires this command; deleting a queue requires restarting YARN
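# After the refresh, the new queue can also be checked from the command line; a quick check, assuming the Hadoop 2.x yarn CLI:
hadoop shell > yarn queue -status prod    # should report the queue state (RUNNING) and its capacities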
# Refresh the web page now and you will see an additional queue (resource pool) named prod
4. Verify the resource pools
hadoop shell > hadoop jar /usr/local/hadoop-2.8.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.0.jar grep shakespeare.txt outfile what
# A job submitted by the hadoop user goes into the default queue
hadoop shell > hdfs dfs -mkdir /user/wang
hadoop shell > hdfs dfs -chown -R wang /user/wang
hadoop shell > hdfs dfs -chmod -R 777 /tmp
wang shell > hdfs dfs -put shakespeare.txt
wang shell > hadoop jar /usr/local/hadoop-2.8.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.0.jar grep -Dmapreduce.job.queuename=prod shakespeare.txt outfile what
# As expected, when no pool is specified the default queue is used, and user wang can target the configured prod pool; http://192.168.1.25:8088 confirms the job is running normally
# Embarrassingly, other users can also target the prod pool, and their jobs succeed! Something is wrong with the ACLs, and it is still unresolved~~~ Really embarrassing!
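# A likely cause, judging from how the CapacityScheduler evaluates ACLs (not verified on this cluster):
# queue ACLs are checked hierarchically, and a user permitted on a parent queue is permitted on all of
# its children. root's acl_submit_applications defaults to * (everyone), so the * on root effectively
# bypasses the wang-only ACL on prod. A sketch of the usual fix: deny everyone at the root level first
# (the value is a single space, meaning no users or groups), then grant access per queue as above.
<property>
    <name>yarn.scheduler.capacity.root.acl_submit_applications</name>
    <value> </value>    <!-- a single space: no one may submit at the root level -->
</property>
<property>
    <name>yarn.scheduler.capacity.root.acl_administer_queue</name>
    <value> </value>    <!-- a single space: no one may administer at the root level -->
</property>
# After adding these, run yarn rmadmin -refreshQueues again and retest with a user other than wang.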