[Original] Experience Sharing (15): How Spark SQL Implements limit

We previously discussed how Hive implements limit; see http://www.javashuo.com/article/p-dochikfw-hc.html
Now let's look at how Spark SQL implements limit, starting with the execution plan:

spark-sql> explain select * from test1 limit 10;
== Physical Plan ==
CollectLimit 10
+- HiveTableScan [id#35], MetastoreRelation temp, test1
Time taken: 0.201 seconds, Fetched 1 row(s)
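
The same plan can also be produced from the DataFrame API. A minimal sketch, assuming a SparkSession named spark with Hive support enabled and access to the temp.test1 table:

// Print the physical plan of a limit query via the DataFrame API;
// `spark` and the Hive table `temp.test1` are assumptions here.
spark.table("temp.test1").limit(10).explain()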

The limit corresponds to the CollectLimit node in the plan, whose implementing class is:

org.apache.spark.sql.execution.CollectLimitExec

case class CollectLimitExec(limit: Int, child: SparkPlan) extends UnaryExecNode {
...
  protected override def doExecute(): RDD[InternalRow] = {
    // Phase 1: take at most `limit` rows from each partition of the child's output.
    val locallyLimited = child.execute().mapPartitionsInternal(_.take(limit))
    // Phase 2: shuffle the surviving rows into a single partition.
    val shuffled = new ShuffledRowRDD(
      ShuffleExchange.prepareShuffleDependency(
        locallyLimited, child.output, SinglePartition, serializer))
    // Apply the limit once more on the single partition for the final result.
    shuffled.mapPartitionsInternal(_.take(limit))
  }
}

As you can see, the implementation is very simple: it first calls SparkPlan.execute on the child to obtain the result RDD, then takes the first limit rows from each partition to produce a new RDD, then shuffles this new RDD into a single partition, and finally takes the first limit rows again to obtain the final result.
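
To make this two-phase strategy concrete, here is a minimal stand-alone sketch (my own illustration, not Spark source) that replays the same steps with the plain RDD API: a local take per partition, a shuffle down to one partition, then a final take:

import org.apache.spark.sql.SparkSession

object TwoPhaseLimitSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("two-phase-limit")
      .getOrCreate()

    val limit = 10
    val data = spark.sparkContext.parallelize(1 to 1000, numSlices = 8)

    // Phase 1: local limit -- each partition keeps at most `limit` rows,
    // so at most numPartitions * limit rows survive.
    val locallyLimited = data.mapPartitions(_.take(limit))

    // Phase 2: move the surviving rows into a single partition
    // (coalesce with shuffle = true plays the role of SinglePartition here)
    // and apply the limit once more for the final result.
    val result = locallyLimited
      .coalesce(1, shuffle = true)
      .mapPartitions(_.take(limit))

    result.collect().foreach(println)
    spark.stop()
  }
}

The local take in phase 1 is what keeps the shuffle cheap: instead of moving every row of the table to one task, at most numPartitions * limit rows cross the network.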
