(Reposted.)
Related: analyzing how Hive's LIMIT executes by reading the execution plan: http://yaoyinjie.blog.51cto.com/3189782/923378
Hive's distribute by
ORDER BY produces a totally ordered result, but it does so through a single reducer, so it is very inefficient on large data sets. In many cases a global sort is not needed, and you can switch to Hive's non-standard extension SORT BY instead. SORT BY produces one sorted file per reducer. Sometimes you need to control which reducer a particular row goes to, usually so that a subsequent aggregation can happen there; Hive's DISTRIBUTE BY clause does exactly that.
DISTRIBUTE BY is therefore often used together with SORT BY.
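As a sketch of that combination (using the `logs` table's columns from the example below; the query itself is not from the original post): all rows for a given `uid` are routed to the same reducer, and each reducer's output is sorted internally.

```sql
-- Route every row for a uid to one reducer, then sort within each reducer.
-- Unlike ORDER BY, this yields one sorted run per reducer, not a global order.
SELECT uid, item, count
FROM logs
DISTRIBUTE BY uid
SORT BY uid, count;
```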
The statement:
SELECT COUNT, COUNT(DISTINCT uid) FROM logs GROUP BY COUNT;
hive> SELECT * FROM logs;
OK
a       苹果    3
a       橙子    3
a       烧鸡    1
b       烧鸡    3
hive> SELECT COUNT, COUNT(DISTINCT uid) FROM logs GROUP BY COUNT;
Group by count and compute the number of distinct users in each group.
1. The first stage computes partial values in the mapper, keyed on (count, uid); if the aggregate is DISTINCT and the value has already been seen, that row is skipped. So stage one keys on the (count, uid) combination, while stage two keys on count alone.
2. ReduceSink only fires at mapper.close(): when GroupByOperator.close() runs, the partial results are emitted. Note that although the key is (count, uid), partitioning for the reduce phase is by count alone!
3. The DISTINCT value computed in stage one is not itself usable; the accurate count has to wait for the reduce phase. Stage one merely collapses rows whose key combination is identical. (For a plain COUNT, by contrast, the partial values would later be summed together.)
4. DISTINCT decides whether to add 1 by comparing against lastInvoke: since rows arrive at the reducer already sorted, it checks whether the distinct field has changed since the previous row, and does not increment if it has not.
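The two-step keying described above (first group by (count, uid), then by count) can also be written out by hand. This rewrite, a sketch not taken from the original post, is a common way to spread a skewed COUNT(DISTINCT) across two MapReduce jobs:

```sql
-- Stage 1 deduplicates (count, uid) pairs; stage 2 counts uids per count.
SELECT count, COUNT(uid)
FROM (SELECT count, uid FROM logs GROUP BY count, uid) t
GROUP BY count;
```

It produces the same result as COUNT(DISTINCT uid), but the deduplication work is shared by all reducers of the first job instead of being concentrated on the reducers that receive skewed keys.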
hive> explain select count, count(distinct uid) from logs group by count;
OK
ABSTRACT SYNTAX TREE:
  (TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME logs))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (TOK_TABLE_OR_COL count)) (TOK_SELEXPR (TOK_FUNCTIONDI count (TOK_TABLE_OR_COL uid)))) (TOK_GROUPBY (TOK_TABLE_OR_COL count))))

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Alias -> Map Operator Tree:
        logs
          TableScan                      // table scan
            alias: logs
            Select Operator              // column pruning: only uid and count are needed
              expressions:
                    expr: count
                    type: int
                    expr: uid
                    type: string
              outputColumnNames: count, uid
              Group By Operator          // map-side aggregation first
                aggregations:
                      expr: count(DISTINCT uid)   // aggregation expression
                bucketGroup: false
                keys:
                      expr: count
                      type: int
                      expr: uid
                      type: string
                mode: hash               // hash mode
                outputColumnNames: _col0, _col1, _col2
                Reduce Output Operator
                  key expressions:       // output keys
                        expr: _col0      // count
                        type: int
                        expr: _col1      // uid
                        type: string
                  sort order: ++
                  Map-reduce partition columns:   // partitioned by the GROUP BY column
                        expr: _col0      // i.e. count
                        type: int
                  tag: -1
                  value expressions:
                        expr: _col2
                        type: bigint
      Reduce Operator Tree:
        Group By Operator                // second aggregation
          aggregations:
            expr: