This is day 2 of my participation in the 更文挑战 (article challenge); see the challenge page for event details.
The Three Major Data Structures of Spark Core Programming: RDD Basic Programming (Part 1)
RDDs support only coarse-grained transformations, i.e. single operations applied to large batches of records. The sequence of Lineage records used to build an RDD is kept so that lost partitions can be recovered. An RDD's Lineage records its metadata and the transformations applied to it; when part of the RDD's partition data is lost, this information is used to recompute and recover the lost partitions.
import org.apache.spark.{SparkConf, SparkContext}

def main(args: Array[String]): Unit = {
  val sc = new SparkContext(
    new SparkConf().setMaster("local[*]").setAppName("MapPartitions")
  )
  val rdd = sc.makeRDD(List(
    ("a", 1), ("a", 2), ("b", 3), ("b", 3), ("b", 3),
    ("b", 4), ("b", 5), ("a", 6)
  ), 2)
  // toDebugString prints the lineage of this RDD
  println(rdd.toDebugString)
  println("--------------------------------------")
  val rdd1 = rdd.map(t => (t._1, t._2 * 2))
  println(rdd1.toDebugString)
  println("--------------------------------------")
  val rdd2 = rdd1.mapValues(_ + 100)
  println(rdd2.toDebugString)
  println("--------------------------------------")
  val rdd3 = rdd2.reduceByKey(_ + _)
  println(rdd3.toDebugString)
  println("--------------------------------------")
  val res = rdd2.collect()
  println(res.mkString("\n"))
}
(2) ParallelCollectionRDD[0] at makeRDD at _1.scala:19 []
--------------------------------------
(2) MapPartitionsRDD[1] at map at _1.scala:27 []
| ParallelCollectionRDD[0] at makeRDD at _1.scala:19 []
--------------------------------------
(2) MapPartitionsRDD[2] at mapValues at _1.scala:32 []
| MapPartitionsRDD[1] at map at _1.scala:27 []
| ParallelCollectionRDD[0] at makeRDD at _1.scala:19 []
--------------------------------------
(2) ShuffledRDD[3] at reduceByKey at _1.scala:37 []
+-(2) MapPartitionsRDD[2] at mapValues at _1.scala:32 []
| MapPartitionsRDD[1] at map at _1.scala:27 []
| ParallelCollectionRDD[0] at makeRDD at _1.scala:19 []
--------------------------------------
The dependency relationship referred to here is simply the relationship between two adjacent RDDs.
def main(args: Array[String]): Unit = {
  val sc = new SparkContext(
    new SparkConf().setMaster("local[*]").setAppName("MapPartitions")
  )
  val rdd = sc.makeRDD(List(
    ("a", 1), ("a", 2), ("b", 3), ("b", 3), ("b", 3),
    ("b", 4), ("b", 5), ("a", 6)
  ), 2)
  // dependencies returns this RDD's dependencies on its parent RDD(s)
  println(rdd.dependencies)
  println("--------------------------------------")
  val rdd1 = rdd.map(t => (t._1, t._2 * 2))
  println(rdd1.dependencies)
  println("--------------------------------------")
  val rdd2 = rdd1.mapValues(_ + 100)
  println(rdd2.dependencies)
  println("--------------------------------------")
  val rdd3 = rdd2.reduceByKey(_ + _)
  println(rdd3.dependencies)
  println("--------------------------------------")
  val res = rdd2.collect()
  println(res.mkString("\n"))
}
List()
--------------------------------------
List(org.apache.spark.OneToOneDependency@38704ff0)
--------------------------------------
List(org.apache.spark.OneToOneDependency@44de94c3)
--------------------------------------
List(org.apache.spark.ShuffleDependency@2c58dcb1)
--------------------------------------
A narrow dependency means that each partition of the parent (upstream) RDD is used by at most one partition of the child (downstream) RDD.
@DeveloperApi
class OneToOneDependency[T](rdd: RDD[T]) extends NarrowDependency[T](rdd) {
  override def getParents(partitionId: Int): List[Int] = List(partitionId)
}
A wide dependency means that a single partition of the parent (upstream) RDD is depended on by multiple partitions of the child (downstream) RDD, which triggers a shuffle.
@DeveloperApi
class ShuffleDependency[K: ClassTag, V: ClassTag, C: ClassTag](
    @transient private val _rdd: RDD[_ <: Product2[K, V]],
    val partitioner: Partitioner,
    val serializer: Serializer = SparkEnv.get.serializer,
    val keyOrdering: Option[Ordering[K]] = None,
    val aggregator: Option[Aggregator[K, V, C]] = None,
    val mapSideCombine: Boolean = false,
    val shuffleWriterProcessor: ShuffleWriteProcessor = new ShuffleWriteProcessor)
  extends Dependency[Product2[K, V]] {

  if (mapSideCombine) {
    require(aggregator.isDefined, "Map-side combine without Aggregator specified!")
  }

  override def rdd: RDD[Product2[K, V]] = _rdd.asInstanceOf[RDD[Product2[K, V]]]

  private[spark] val keyClassName: String = reflect.classTag[K].runtimeClass.getName
  private[spark] val valueClassName: String = reflect.classTag[V].runtimeClass.getName
  private[spark] val combinerClassName: Option[String] =
    Option(reflect.classTag[C]).map(_.runtimeClass.getName)

  val shuffleId: Int = _rdd.context.newShuffleId()

  val shuffleHandle: ShuffleHandle = _rdd.context.env.shuffleManager.registerShuffle(
    shuffleId, this)

  _rdd.sparkContext.cleaner.foreach(_.registerShuffleForCleanup(this))
  _rdd.sparkContext.shuffleDriverComponents.registerShuffle(shuffleId)
}
A DAG (Directed Acyclic Graph) is a topological graph made up of vertices and edges; it has direction and never forms a cycle. In Spark, the DAG records the RDD transformation process and the stages of the job.
private[scheduler] def handleJobSubmitted(jobId: Int,
    finalRDD: RDD[_],
    func: (TaskContext, Iterator[_]) => _,
    partitions: Array[Int],
    callSite: CallSite,
    listener: JobListener,
    properties: Properties): Unit = {
  var finalStage: ResultStage = null
  try {
    // New stage creation may throw an exception if, for example,
    // jobs are run on a HadoopRDD whose underlying HDFS files have been deleted.
    finalStage = createResultStage(finalRDD, func, partitions, jobId, callSite)
  } catch {
    .......
  }

/** Create a ResultStage associated with the provided jobId. */
private def createResultStage(
    rdd: RDD[_],
    func: (TaskContext, Iterator[_]) => _,
    partitions: Array[Int],
    jobId: Int,
    callSite: CallSite): ResultStage = {
  checkBarrierStageWithDynamicAllocation(rdd)
  checkBarrierStageWithNumSlots(rdd)
  checkBarrierStageWithRDDChainPattern(rdd, partitions.toSet.size)
  val parents = getOrCreateParentStages(rdd, jobId)
  val id = nextStageId.getAndIncrement()
  val stage = new ResultStage(id, rdd, func, partitions, parents, jobId, callSite)
  stageIdToStage(id) = stage
  updateJobIdStageIdMaps(jobId, stage)
  stage
}

/**
 * Get or create the list of parent stages for a given RDD.
 * The new stages will be created with the provided firstJobId.
 */
private def getOrCreateParentStages(rdd: RDD[_], firstJobId: Int): List[Stage] = {
  getShuffleDependencies(rdd).map { shuffleDep =>
    getOrCreateShuffleMapStage(shuffleDep, firstJobId)
  }.toList
}

/**
 * Gets a shuffle map stage if one exists in shuffleIdToMapStage. Otherwise, if the
 * shuffle map stage doesn't already exist, this method will create the shuffle map
 * stage as well as any missing ancestor shuffle map stages.
 */
private def getOrCreateShuffleMapStage(
    shuffleDep: ShuffleDependency[_, _, _],
    firstJobId: Int): ShuffleMapStage = {
  shuffleIdToMapStage.get(shuffleDep.shuffleId) match {
    case Some(stage) =>
      stage
    case None =>
      // Create stages for all missing ancestor shuffle dependencies.
      getMissingAncestorShuffleDependencies(shuffleDep.rdd).foreach { dep =>
        // Even though getMissingAncestorShuffleDependencies only returns shuffle
        // dependencies that were not already in shuffleIdToMapStage, it's possible that
        // by the time we get to a particular dependency in the foreach loop, it's been
        // added to shuffleIdToMapStage by the stage creation process for an earlier
        // dependency. See SPARK-13902 for more information.
        if (!shuffleIdToMapStage.contains(dep.shuffleId)) {
          createShuffleMapStage(dep, firstJobId)
        }
      }
      // Finally, create a stage for the given shuffle dependency.
      createShuffleMapStage(shuffleDep, firstJobId)
  }
}
RDD task breakdown involves the following levels: Application, Job, Stage and Task.
Note: each level in Application -> Job -> Stage -> Task is a one-to-many relationship.
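Before walking through the source, here is a minimal sketch (the object name and the concrete numbers are illustrative assumptions) of how the counts line up: each action produces one Job; a Job contains one Stage per ShuffleDependency plus the final ResultStage; and each Stage launches one Task per partition of its last RDD.

import org.apache.spark.{SparkConf, SparkContext}

object StageTaskCount {
  def main(args: Array[String]): Unit = {
    // One running SparkContext = one Application.
    val sc = new SparkContext(
      new SparkConf().setMaster("local[*]").setAppName("StageTaskCount")
    )
    val rdd = sc.makeRDD(List(("a", 1), ("a", 2), ("b", 3), ("b", 4)), 2)

    // collect() is the only action here -> exactly 1 Job.
    // reduceByKey adds one ShuffleDependency -> 1 ShuffleMapStage + 1 ResultStage = 2 Stages.
    // Both stages work on 2 partitions -> 2 + 2 = 4 Tasks in total
    // (visible in the web UI at http://localhost:4040 while the application is running).
    rdd.map { case (k, v) => (k, v * 2) }
      .reduceByKey(_ + _)
      .collect()
      .foreach(println)

    sc.stop()
  }
}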
// Task-creation source (excerpt from DAGScheduler.submitMissingTasks)
val tasks: Seq[Task[_]] = try {
  val serializedTaskMetrics = closureSerializer.serialize(stage.latestInfo.taskMetrics).array()
  stage match {
    case stage: ShuffleMapStage =>
      stage.pendingPartitions.clear()
      partitionsToCompute.map { id =>
        val locs = taskIdToLocations(id)
        val part = partitions(id)
        stage.pendingPartitions += id
        new ShuffleMapTask(stage.id, stage.latestInfo.attemptNumber,
          taskBinary, part, locs, properties, serializedTaskMetrics, Option(jobId),
          Option(sc.applicationId), sc.applicationAttemptId, stage.rdd.isBarrier())
      }
    case stage: ResultStage =>
      partitionsToCompute.map { id =>
        val p: Int = stage.partitions(id)
        val part = partitions(p)
        val locs = taskIdToLocations(id)
        new ResultTask(stage.id, stage.latestInfo.attemptNumber,
          taskBinary, part, locs, id, properties, serializedTaskMetrics,
          Option(jobId), Option(sc.applicationId), sc.applicationAttemptId,
          stage.rdd.isBarrier())
      }
  }
} catch {
  case NonFatal(e) =>
    abortStage(stage, s"Task creation failed: $e\n${Utils.exceptionString(e)}", Some(e))
    runningStages -= stage
    return
}

// Figure out the indexes of partition ids to compute.
val partitionsToCompute: Seq[Int] = stage.findMissingPartitions()

// findMissingPartitions has two implementations:

// ShuffleMapStage implementation
override def findMissingPartitions(): Seq[Int] = {
  mapOutputTrackerMaster
    .findMissingPartitions(shuffleDep.shuffleId)
    .getOrElse(0 until numPartitions)
}

// ResultStage implementation
override def findMissingPartitions(): Seq[Int] = {
  val job = activeJob.get
  (0 until job.numPartitions).filter(id => !job.finished(id))
}
def main(args: Array[String]): Unit = {
  val sc = new SparkContext(
    new SparkConf().setMaster("local[*]").setAppName("MapPartitions")
  )
  val rdd = sc.makeRDD(List(
    1, 2, 5, 9
  ), 2)
  val mapRdd = rdd.map(i => {
    println("map--------")
    ("a", i)
  })
  mapRdd.cache() // internally calls persist() with the default MEMORY_ONLY level
  // to specify a storage level explicitly:
  // mapRdd.persist(StorageLevel.DISK_ONLY)
  mapRdd.reduceByKey(_ + _).collect().foreach(println)
  mapRdd.groupByKey().collect().foreach(println)
  sc.stop()
}
All storage levels defined by StorageLevel (the constructor flags are, in order: useDisk, useMemory, useOffHeap, deserialized, replication):
val NONE = new StorageLevel(false, false, false, false)
val DISK_ONLY = new StorageLevel(true, false, false, false)
val DISK_ONLY_2 = new StorageLevel(true, false, false, false, 2)
val MEMORY_ONLY = new StorageLevel(false, true, false, true)
val MEMORY_ONLY_2 = new StorageLevel(false, true, false, true, 2)
val MEMORY_ONLY_SER = new StorageLevel(false, true, false, false)
val MEMORY_ONLY_SER_2 = new StorageLevel(false, true, false, false, 2)
val MEMORY_AND_DISK = new StorageLevel(true, true, false, true)
val MEMORY_AND_DISK_2 = new StorageLevel(true, true, false, true, 2)
val MEMORY_AND_DISK_SER = new StorageLevel(true, true, false, false)
val MEMORY_AND_DISK_SER_2 = new StorageLevel(true, true, false, false, 2)
val OFF_HEAP = new StorageLevel(true, true, true, false, 1)
def main(args: Array[String]): Unit = {
  val sc = new SparkContext(
    new SparkConf().setMaster("local[*]").setAppName("MapPartitions")
  )
  // set the directory where checkpoint data is stored
  sc.setCheckpointDir("./check")
  val rdd = sc.makeRDD(List(
    1, 2, 5, 9
  ), 2)
  val mapRdd = rdd.map(i => {
    println("map--------")
    ("a", i)
  })
  mapRdd.cache()
  // checkpoint alone re-runs the upstream RDDs a second time;
  // combined with cache() they are executed only once
  mapRdd.checkpoint()
  mapRdd.reduceByKey(_ + _).collect().foreach(println)
  mapRdd.groupByKey().collect().foreach(println)
  sc.stop()
}
Spark currently supports Hash partitioning, Range partitioning, and user-defined partitioners; Hash partitioning is the default. The partitioner determines the number of partitions in an RDD and which partition each record goes to after a shuffle, and therefore also the number of reduce tasks.
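As a rough illustration of how hash partitioning decides the target partition (a simplified sketch of the idea, not the actual HashPartitioner source; the helper name is made up), the partition index is the key's hashCode modulo the number of partitions, adjusted to be non-negative:

// Simplified sketch: the same key always yields the same index, so all records with that key
// land in the same partition after the shuffle. The real HashPartitioner also maps null keys to 0.
def hashPartition(key: Any, numPartitions: Int): Int = {
  val rawMod = key.hashCode % numPartitions
  if (rawMod < 0) rawMod + numPartitions else rawMod
}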
HashPartitioner: you can check its source yourself.
RangePartitioner: you can check its source yourself.
import org.apache.spark.{Partitioner, SparkConf, SparkContext}

def main(args: Array[String]): Unit = {
  val sc = new SparkContext(
    new SparkConf().setMaster("local[*]").setAppName("MapPartitions")
  )
  val rdd = sc.makeRDD(List(
    ("a", 1), ("a", 2), ("b", 3), ("b", 3), ("b", 3),
    ("b", 4), ("b", 5), ("a", 6), ("c", 6), ("d", 6)
  ), 5)
  val parRDD = rdd.partitionBy(new MyPartitioner)
  val reduceRdd1 = parRDD.reduceByKey((i, j) => {
    println("reduceRdd1")
    i + j
  })
  val reduceRdd2 = reduceRdd1.reduceByKey((i, j) => {
    println("reduceRdd2")
    i + j
  })
  reduceRdd2.saveAsTextFile("out1")
  sc.stop()
}

/**
 * Send key "a" to partition 0, key "b" to partition 1, and everything else to partition 2.
 */
class MyPartitioner extends Partitioner {
  /** numPartitions and getPartition must be overridden. */
  override def numPartitions: Int = 3

  override def getPartition(key: Any): Int = key match {
    case "a" => 0
    case "b" => 1
    case _ => 2
  }

  /** equals and hashCode are optional. */
  override def equals(other: Any): Boolean = other match {
    case h: MyPartitioner =>
      h.numPartitions == numPartitions
    case _ =>
      false
  }

  override def hashCode: Int = numPartitions
}
Spark's data reading and saving can be distinguished along two dimensions: file format and file system. File formats include text files, csv files, sequence files and object files; file systems include the local file system, HDFS, HBase and databases.
def main(args: Array[String]): Unit = {
  val sc = new SparkContext(
    new SparkConf().setMaster("local[*]").setAppName("MapPartitions")
  )
  val rdd = sc.makeRDD(List(
    ("a", 1), ("a", 2), ("b", 3), ("b", 3), ("b", 3)
  ), 1)
  /** Text file */
  rdd.saveAsTextFile("text")
  sc.textFile("text").collect().foreach(println)
  sc.stop()
}
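The overview above also mentions csv files. Spark core has no dedicated csv reader for RDDs, so a common approach, shown here as a minimal sketch (the object name, paths and field layout are assumptions), is to write and read the data as text and split each line on the delimiter:

import org.apache.spark.{SparkConf, SparkContext}

object CsvFileExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[*]").setAppName("CsvFileExample")
    )
    val rdd = sc.makeRDD(List(("a", 1), ("a", 2), ("b", 3)), 1)

    // Write each pair as a comma-separated line (assumes keys/values contain no commas or quotes).
    rdd.map { case (k, v) => s"$k,$v" }.saveAsTextFile("csv")

    // Read it back as text and split each line on the comma.
    val csvRdd = sc.textFile("csv").map { line =>
      val Array(k, v) = line.split(",", 2)
      (k, v.toInt)
    }
    csvRdd.collect().foreach(println)
    sc.stop()
  }
}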
A SequenceFile is a flat file designed by Hadoop for storing key-value pairs in binary form. On a SparkContext you can call sequenceFile[keyClass, valueClass](path) to read one.
def main(args: Array[String]): Unit = {
  val sc = new SparkContext(
    new SparkConf().setMaster("local[*]").setAppName("MapPartitions")
  )
  val rdd = sc.makeRDD(List(
    ("a", 1), ("a", 2), ("b", 3), ("b", 3), ("b", 3)
  ), 1)
  /** Sequence file */
  rdd.saveAsSequenceFile("sequence")
  sc.sequenceFile[String, Int]("sequence").collect().foreach(println)
  sc.stop()
}
An object file stores objects serialized with Java's serialization mechanism. objectFile[T: ClassTag](path) takes a path, reads the object file and returns the corresponding RDD; saveAsObjectFile() writes one out. Because serialization is involved, the element type must be specified.
def main(args: Array[String]): Unit = {
  val sc = new SparkContext(
    new SparkConf().setMaster("local[*]").setAppName("MapPartitions")
  )
  val rdd = sc.makeRDD(List(
    ("a", 1), ("a", 2), ("b", 3), ("b", 3), ("b", 3)
  ), 1)
  /** Object file */
  rdd.saveAsObjectFile("object")
  sc.objectFile[(String, Int)]("object").collect().foreach(println)
  sc.stop()
}