First, let's look at the source code of reduceByKey and groupByKey in PairRDDFunctions.scala:
```scala
/**
 * Merge the values for each key using an associative reduce function. This will also perform
 * the merging locally on each mapper before sending results to a reducer, similarly to a
 * "combiner" in MapReduce. Output will be hash-partitioned with the existing partitioner/
 * parallelism level.
 */
def reduceByKey(func: (V, V) => V): RDD[(K, V)] = {
  reduceByKey(defaultPartitioner(self), func)
}

/**
 * Group the values for each key in the RDD into a single sequence. Allows controlling the
 * partitioning of the resulting key-value pair RDD by passing a Partitioner.
 * The ordering of elements within each group is not guaranteed, and may even differ
 * each time the resulting RDD is evaluated.
 *
 * Note: This operation may be very expensive. If you are grouping in order to perform an
 * aggregation (such as a sum or average) over each key, using [[PairRDDFunctions.aggregateByKey]]
 * or [[PairRDDFunctions.reduceByKey]] will provide much better performance.
 *
 * Note: As currently implemented, groupByKey must be able to hold all the key-value pairs for any
 * key in memory. If a key has too many values, it can result in an [[OutOfMemoryError]].
 */
def groupByKey(partitioner: Partitioner): RDD[(K, Iterable[V])] = {
  // groupByKey shouldn't use map side combine because map side combine does not
  // reduce the amount of data shuffled and requires all map side data be inserted
  // into a hash table, leading to more objects in the old gen.
  val createCombiner = (v: V) => CompactBuffer(v)
  val mergeValue = (buf: CompactBuffer[V], v: V) => buf += v
  val mergeCombiners = (c1: CompactBuffer[V], c2: CompactBuffer[V]) => c1 ++= c2
  val bufs = combineByKey[CompactBuffer[V]](
    createCombiner, mergeValue, mergeCombiners, partitioner, mapSideCombine = false)
  bufs.asInstanceOf[RDD[(K, Iterable[V])]]
}
```
reduceByKey: before the results are sent to the reducers, reduceByKey merges the values locally on each mapper, much like a combiner in MapReduce. The benefit is that after this map-side reduce, the amount of data drops sharply, which cuts network transfer and lets the reduce side compute the final result faster.
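As a minimal sketch of this behavior, consider the classic word count (assuming a local SparkSession just for illustration): each partition first combines its own `("a", 1)` pairs, so at most one record per key per partition crosses the shuffle.

```scala
import org.apache.spark.sql.SparkSession

object ReduceByKeyExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("reduceByKey demo")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    val words = sc.parallelize(Seq("a", "b", "a", "c", "b", "a"), numSlices = 2)

    // Map-side combine: within each partition, values for the same key are
    // summed locally before anything is written to the shuffle.
    val counts = words.map(w => (w, 1)).reduceByKey(_ + _)

    counts.collect().sorted.foreach(println)
    spark.stop()
  }
}
```

With six input records spread over two partitions, the shuffle here moves at most one `(word, partialCount)` pair per key per partition instead of six raw `(word, 1)` pairs.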
groupByKey: groupByKey collects all the values for each key into a single sequence (an Iterable). This grouping happens on the reduce side, so every record must be shuffled across the network, which is wasteful. If the data volume is very large, it may even cause an OutOfMemoryError, because all the values for a key must be held in memory at once.
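For contrast, here is the same word count written with groupByKey (again a hedged sketch assuming a local SparkSession): every `(word, 1)` pair is shuffled unchanged, and the sum only happens after the full Iterable has been materialized.

```scala
import org.apache.spark.sql.SparkSession

object GroupByKeyExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("groupByKey demo")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    val words = sc.parallelize(Seq("a", "b", "a", "c", "b", "a"))

    val counts = words.map(w => (w, 1))
      .groupByKey()     // RDD[(String, Iterable[Int])] -- all pairs are shuffled
      .mapValues(_.sum) // the aggregation runs only after the shuffle completes

    counts.collect().sorted.foreach(println)
    spark.stop()
  }
}
```

Both versions produce the same counts; the difference is entirely in how much data crosses the shuffle and how much must be buffered per key.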
经过以上对比能够发如今进行大量数据的reduce操做时候建议使用reduceByKey。不只能够提升速度,仍是能够防止使用groupByKey形成的内存溢出问题。ui