Spark Source Code Analysis, Part 14 -- How Is Broadcast Implemented?

This article dissects the implementation mechanism of broadcast.

 

BroadcastManager Initialization

The initialization of BroadcastManager works as follows.
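Below is a sketch abridged from the Spark 2.x source (details may vary slightly between versions). SparkEnv constructs the BroadcastManager, whose constructor eagerly calls initialize(), and initialize() instantiates the one and only factory implementation, TorrentBroadcastFactory:

private[spark] class BroadcastManager(
    val isDriver: Boolean,
    conf: SparkConf,
    securityManager: SecurityManager) extends Logging {

  private var initialized = false
  private var broadcastFactory: BroadcastFactory = null

  initialize()

  // Called by SparkContext or Executor before using broadcast.
  private def initialize(): Unit = {
    synchronized {
      if (!initialized) {
        broadcastFactory = new TorrentBroadcastFactory
        broadcastFactory.initialize(isDriver, conf, securityManager)
        initialized = true
      }
    }
  }

  private val nextBroadcastId = new AtomicLong(0)

  // Map from broadcastId to the broadcast value: keys are strong references,
  // values are weak references, so cached values can be garbage-collected.
  private[broadcast] val cachedValues = {
    new ReferenceMap(AbstractReferenceMap.ReferenceStrength.HARD,
      AbstractReferenceMap.ReferenceStrength.WEAK)
  }

  def newBroadcast[T: ClassTag](value_ : T, isLocal: Boolean): Broadcast[T] = {
    broadcastFactory.newBroadcast[T](value_, isLocal, nextBroadcastId.getAndIncrement())
  }
}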

 

The inheritance hierarchy of TorrentBroadcastFactory is as follows:

BroadcastFactory

An interface for all the broadcast implementations in Spark (to allow multiple broadcast implementations). SparkContext uses a BroadcastFactory implementation to instantiate a particular broadcast for the entire Spark job.

That is, it is the interface for all broadcast implementations in Spark; SparkContext uses a BroadcastFactory implementation to instantiate the broadcasts for an entire Spark job. It has a single subclass -- TorrentBroadcastFactory.

It has two fairly important methods:

The newBroadcast method is responsible for creating a broadcast variable, and unbroadcast removes one (see the trait sketch below).
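The trait itself is small; a sketch abridged from the Spark source:

private[spark] trait BroadcastFactory {

  def initialize(isDriver: Boolean, conf: SparkConf, securityMgr: SecurityManager): Unit

  // Creates a new broadcast variable with the given id that wraps the given value.
  def newBroadcast[T: ClassTag](value: T, isLocal: Boolean, id: Long): Broadcast[T]

  // Removes the broadcast with the given id, optionally removing the driver's copy as well.
  def unbroadcast(id: Long, removeFromDriver: Boolean, blocking: Boolean): Unit

  def stop(): Unit
}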

 

TorrentBroadcastFactory

Its main methods are as follows:

newBroadcast instantiates the TorrentBroadcast class.

The unbroadcast method calls the unpersist method of TorrentBroadcast.
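A sketch abridged from the Spark source:

private[spark] class TorrentBroadcastFactory extends BroadcastFactory {

  override def initialize(isDriver: Boolean, conf: SparkConf,
      securityMgr: SecurityManager): Unit = {}

  // newBroadcast simply instantiates TorrentBroadcast.
  override def newBroadcast[T: ClassTag](value_ : T, isLocal: Boolean, id: Long): Broadcast[T] = {
    new TorrentBroadcast[T](value_, id)
  }

  // unbroadcast delegates to TorrentBroadcast.unpersist, which removes all persisted
  // blocks of the given broadcast from the executors (and optionally from the driver).
  override def unbroadcast(id: Long, removeFromDriver: Boolean, blocking: Boolean): Unit = {
    TorrentBroadcast.unpersist(id, removeFromDriver, blocking)
  }

  override def stop(): Unit = {}
}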

 

Broadcast, the Parent Class of TorrentBroadcast

The official documentation says:

A broadcast variable. Broadcast variables allow the programmer to keep a read-only variable cached on each machine rather than shipping a copy of it with tasks. 
They can be used, for example, to give every node a copy of a large input dataset in an efficient manner. Spark also attempts to distribute broadcast variables using efficient broadcast algorithms to reduce communication cost. Broadcast variables are created from a variable v by calling org.apache.spark.SparkContext.broadcast. The broadcast variable is a wrapper around v, and its value can be accessed by calling the value method.
The interpreter session below shows this:

scala> val broadcastVar = sc.broadcast(Array(1, 2, 3))
broadcastVar: org.apache.spark.broadcast.Broadcast[Array[Int]] = Broadcast(0)

scala> broadcastVar.value
res0: Array[Int] = Array(1, 2, 3)

After the broadcast variable is created, it should be used instead of the value v in any functions run on the cluster so that v is not shipped to the nodes more than once. In addition, the object v should not be modified after it is broadcast in order to ensure that all nodes get the same value of the broadcast variable (e.g. if the variable is shipped to a new node later).

 

That is, broadcast variables allow the programmer to cache a read-only variable on each machine instead of shipping a copy of it with tasks. They can be used, for example, to give every node a copy of a large input dataset in an efficient way. Spark also tries to use efficient broadcast algorithms to reduce communication cost. A broadcast variable is created by calling SparkContext's broadcast method; it is a wrapper around the real variable, and the real object is returned by the value method of the broadcast object. Once the real object has been broadcast, it should not be modified, to ensure that the data stays consistent across all nodes.

The inheritance hierarchy of TorrentBroadcast is as follows:

TorrentBroadcast is the only subclass of Broadcast.
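A sketch of the Broadcast skeleton, abridged from the Spark source; getValue and doUnpersist are the template methods a subclass such as TorrentBroadcast must supply:

abstract class Broadcast[T: ClassTag](val id: Long) extends Serializable with Logging {

  @volatile private var _isValid = true

  // Get the broadcast value; delegates to the subclass's getValue.
  def value: T = {
    assertValid()
    getValue()
  }

  // Delete cached copies of this broadcast on the executors.
  def unpersist(blocking: Boolean): Unit = {
    assertValid()
    doUnpersist(blocking)
  }

  // Concrete implementations define how the value is actually obtained.
  protected def getValue(): T

  // Concrete implementations define how their data is actually unpersisted.
  protected def doUnpersist(blocking: Boolean): Unit

  // Guards against using the broadcast after destroy() has been called.
  protected def assertValid(): Unit = {
    if (!_isValid) {
      throw new SparkException("Attempted to use " + toString + " after it was destroyed")
    }
  }
}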

 

TorrentBroadcast

Its documentation says:

A BitTorrent-like implementation of org.apache.spark.broadcast.Broadcast. 
The mechanism is as follows:
The driver divides the serialized object into small chunks and stores those chunks in the BlockManager of the driver.
On each executor, the executor first attempts to fetch the object from its BlockManager.
If it does not exist, it then uses remote fetches to fetch the small chunks from the driver and/or other executors if available.
Once it gets the chunks, it puts the chunks in its own BlockManager, ready for other executors to fetch from.
This prevents the driver from being the bottleneck in sending out multiple copies of the broadcast data (one per executor).
When initialized, TorrentBroadcast objects read SparkEnv.get.conf.

 

The implementation mechanism:

The driver splits the data into multiple small chunks and saves those chunks in the driver's BlockManager. On each executor node, the executor first tries to fetch the data from its own BlockManager; if it is not there, it uses remote fetches to pull the data from the driver and/or other executors. Once it obtains the chunks, it puts them into its own BlockManager, ready to be fetched by other nodes. This removes the bottleneck of the driver having to send a full copy to every executor node.

 

Writing Data on the Driver Side

The broadcast data is stored in two forms:

1. The full value is stored once in the memstore, where it is kept in deserialized form, and once on disk; the disk copy is first serialized into a byte array by the SerializerManager and then written out.

2. The object is written to an output stream according to blockSize (4m by default, configurable via the spark.broadcast.blockSize parameter) and compressionCodec (compression is on by default and can be disabled with spark.broadcast.compress; the default codec is lz4, selectable via spark.io.compression.codec), then split into several small chunks, and finally persisted into the BlockManager -- again one copy in the memstore, this time without deserialization, and one copy on disk (see the writeBlocks sketch below).
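Both steps are performed by TorrentBroadcast's writeBlocks method; a sketch abridged from the Spark 2.x source:

private def writeBlocks(value: T): Int = {
  import StorageLevel._
  // Store a copy of the broadcast variable in the driver so that tasks run on the driver
  // do not create a duplicate copy of the broadcast variable's value.
  val blockManager = SparkEnv.get.blockManager
  if (!blockManager.putSingle(broadcastId, value, MEMORY_AND_DISK, tellMaster = false)) {
    throw new SparkException(s"Failed to store $broadcastId in BlockManager")
  }
  // Serialize, optionally compress, and split the object into blockSize-sized chunks.
  val blocks =
    TorrentBroadcast.blockifyObject(value, blockSize, SparkEnv.get.serializer, compressionCodec)
  if (checksumEnabled) {
    checksums = new Array[Int](blocks.length)
  }
  blocks.zipWithIndex.foreach { case (block, i) =>
    if (checksumEnabled) {
      checksums(i) = calcChecksum(block)
    }
    val pieceId = BroadcastBlockId(id, "piece" + i)
    val bytes = new ChunkedByteBuffer(block.duplicate())
    // Each chunk is stored serialized (MEMORY_AND_DISK_SER) and reported to the master,
    // so that other executors can discover and fetch the pieces from this node.
    if (!blockManager.putBytes(pieceId, bytes, MEMORY_AND_DISK_SER, tellMaster = true)) {
      throw new SparkException(s"Failed to store $pieceId of $broadcastId in local BlockManager")
    }
  }
  blocks.length
}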

 

Among these, the blockifyObject method of TorrentBroadcast is as follows.
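A sketch abridged from the Spark source:

def blockifyObject[T: ClassTag](
    obj: T,
    blockSize: Int,
    serializer: Serializer,
    compressionCodec: Option[CompressionCodec]): Array[ByteBuffer] = {
  // Collects the serialized bytes into chunks of blockSize bytes each.
  val cbbos = new ChunkedByteBufferOutputStream(blockSize, ByteBuffer.allocate)
  // If compression is enabled, the compressed stream wraps the chunked stream.
  val out = compressionCodec.map(c => c.compressedOutputStream(cbbos)).getOrElse(cbbos)
  val ser = serializer.newInstance()
  val serOut = ser.serializeStream(out)
  Utils.tryWithSafeFinally {
    serOut.writeObject[T](obj)
  } {
    serOut.close()
  }
  cbbos.toChunkedByteBuffer.getChunks()
}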

The compression OutputStream decorates the ChunkedByteBufferOutputStream.

 

Reading Data on the Driver or Executor

When the value method is called on a broadcast variable, it invokes TorrentBroadcast's getValue method, shown below.
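In the Spark 2.x source this is a one-liner that returns the lazily initialized field:

override protected def getValue() = {
  _value
}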

The _value field is declared as follows:

private lazy val _value: T = readBroadcastBlock()

 

Next, let's look at the readBroadcastBlock method:

 1 private def readBroadcastBlock(): T = Utils.tryOrIOException {
 2   TorrentBroadcast.synchronized {
 3     val broadcastCache = SparkEnv.get.broadcastManager.cachedValues
 4 
 5     Option(broadcastCache.get(broadcastId)).map(_.asInstanceOf[T]).getOrElse {
 6       setConf(SparkEnv.get.conf)
 7       val blockManager = SparkEnv.get.blockManager
 8       blockManager.getLocalValues(broadcastId) match {
 9         case Some(blockResult) =>
10           if (blockResult.data.hasNext) {
11             val x = blockResult.data.next().asInstanceOf[T]
12             releaseLock(broadcastId)
13 
14             if (x != null) {
15               broadcastCache.put(broadcastId, x)
16             }
17 
18             x
19           } else {
20             throw new SparkException(s"Failed to get locally stored broadcast data: $broadcastId")
21           }
22         case None =>
23           logInfo("Started reading broadcast variable " + id)
24           val startTimeMs = System.currentTimeMillis()
25           val blocks = readBlocks()
26           logInfo("Reading broadcast variable " + id + " took" + Utils.getUsedTimeMs(startTimeMs))
27 
28           try {
29             val obj = TorrentBroadcast.unBlockifyObject[T](
30               blocks.map(_.toInputStream()), SparkEnv.get.serializer, compressionCodec)
31             // Store the merged copy in BlockManager so other tasks on this executor don't
32             // need to re-fetch it.
33             val storageLevel = StorageLevel.MEMORY_AND_DISK
34             if (!blockManager.putSingle(broadcastId, obj, storageLevel, tellMaster = false)) {
35               throw new SparkException(s"Failed to store $broadcastId in BlockManager")
36             }
37 
38             if (obj != null) {
39               broadcastCache.put(broadcastId, obj)
40             }
41 
42             obj
43           } finally {
44             blocks.foreach(_.dispose())
45           }
46       }
47     }
48   }
49 }

 

 

An explanation of the source:

Line 3: broadcastManager.cachedValues holds the values of all broadcasts. It is a Map whose keys are strong references and whose values are weak references (so they can be cleaned up during garbage collection).

Line 5: look up the value in cachedValues by broadcastId; if it is absent, run the default branch inside getOrElse.

Line 8: fetch the broadcast value from the local BlockManager (from the memstore or the diskstore; what is retrieved is the complete value, not the split chunks). If it is there, release the BlockManager lock and put the value into cachedValues. If not, call readBlocks to fetch the chunks, reassemble them into the broadcast value object, and put that object into cachedValues.

 

The readBlocks method is as follows:

 1 /** Fetch torrent blocks from the driver and/or other executors. */
 2 private def readBlocks(): Array[BlockData] = {
 3   // Fetch chunks of data. Note that all these chunks are stored in the BlockManager and reported
 4   // to the driver, so other executors can pull these chunks from this executor as well.
 5   val blocks = new Array[BlockData](numBlocks)
 6   val bm = SparkEnv.get.blockManager
 7 
 8   for (pid <- Random.shuffle(Seq.range(0, numBlocks))) {
 9     val pieceId = BroadcastBlockId(id, "piece" + pid)
10     logDebug(s"Reading piece $pieceId of $broadcastId")
11     // First try getLocalBytes because there is a chance that previous attempts to fetch the
12     // broadcast blocks have already fetched some of the blocks. In that case, some blocks
13     // would be available locally (on this executor).
14     bm.getLocalBytes(pieceId) match {
15       case Some(block) =>
16         blocks(pid) = block
17         releaseLock(pieceId)
18       case None =>
19         bm.getRemoteBytes(pieceId) match {
20           case Some(b) =>
21             if (checksumEnabled) {
22               val sum = calcChecksum(b.chunks(0))
23               if (sum != checksums(pid)) {
24                 throw new SparkException(s"corrupt remote block $pieceId of $broadcastId:" +
25                   s" $sum != ${checksums(pid)}")
26               }
27             }
28             // We found the block from remote executors/driver's BlockManager, so put the block
29             // in this executor's BlockManager.
30             if (!bm.putBytes(pieceId, b, StorageLevel.MEMORY_AND_DISK_SER, tellMaster = true)) {
31               throw new SparkException(
32                 s"Failed to store $pieceId of $broadcastId in local BlockManager")
33             }
34             blocks(pid) = new ByteBufferBlockData(b, true)
35           case None =>
36             throw new SparkException(s"Failed to get $pieceId of $broadcastId")
37         }
38     }
39   }
40   blocks
41 }

 

An explanation of the source:

Line 14: fetch the chunk from the local BlockManager by pieceId.

Line 15: if the chunk is found, record it and release the lock.

Line 18: if the chunk is not found locally, fetch it from a remote node by pieceId; after the remote fetch succeeds, verify the chunk's checksum, then store the chunk into the local BlockManager.
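For reference, the checksum is an Adler-32 over the chunk's bytes; a sketch of calcChecksum abridged from the Spark source:

private def calcChecksum(block: ByteBuffer): Int = {
  val adler = new Adler32()
  if (block.hasArray) {
    // If the byte buffer is backed by an array, feed it to Adler-32 directly.
    adler.update(block.array, block.arrayOffset + block.position(),
      block.limit() - block.position())
  } else {
    // Otherwise copy the remaining bytes out first.
    val bytes = new Array[Byte](block.remaining())
    block.duplicate.get(bytes)
    adler.update(bytes)
  }
  adler.getValue.toInt
}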

 

Note: this article does not go into further detail about the BlockManager operations behind BroadcastManager; the next article will dissect Spark's storage subsystem.
