Lesson 9: Spark Streaming Source Code Analysis: The Full Lifecycle of the Receiver's Elegant Driver-Side Implementation

Topics covered in this lesson:

1. How Receiver startup could be designed

2. A complete source code analysis of Receiver startup

 

Why do we need Receivers?

A Receiver continuously receives data from an external data source and reports it to the Driver; the Driver then turns the reported data into a new Job every BatchDuration and executes the RDD operations on it.

 

Receivers are started when the application starts.

Receivers and InputDStreams correspond one to one.

An RDD[Receiver] has only one Partition and contains a single Receiver instance.
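For example (a minimal sketch; host names and ports are placeholders), creating two socket input streams gives you two ReceiverInputDStreams and therefore two independent Receivers, each of which is later wrapped in its own single-partition RDD[Receiver]:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object TwoReceiversDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("TwoReceiversDemo")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Two ReceiverInputDStreams => two Receiver instances.
    val lines1 = ssc.socketTextStream("host1", 9999)   // placeholder host/port
    val lines2 = ssc.socketTextStream("host2", 9999)   // placeholder host/port

    lines1.union(lines2).count().print()

    ssc.start()            // the Receivers are launched here, together with the application
    ssc.awaitTermination()
  }
}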

 

Spark Core does not know that RDD[Receiver] is special; it schedules the corresponding Job exactly like the Job of any ordinary RDD. It may therefore start several Receivers on the same Executor, which causes load imbalance and can make Receiver startup fail.

 

Possible schemes for starting Receivers on Executors:

1. Start the different Receivers through different Partitions of an RDD: each Partition represents one Receiver, which at the execution level means one Task per Receiver, and the Receiver is started when its Task starts.

This approach is simple and clever, but it has drawbacks: startup may fail, and if a Receiver fails while running, its Task is retried; once the retry limit is reached (spark.task.maxFailures, 4 attempts by default) the Job fails, and with it the whole Spark application. A Receiver fault therefore brings down the Job; there is no fault tolerance. A conceptual sketch of this rejected approach is given below.
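The sketch below is purely conceptual (it is not Spark code; startAndBlock is a made-up placeholder for "run this Receiver until it dies"): it shows what approach 1 would look like if the Receivers were simply distributed as partitions of an ordinary RDD.

// Conceptual sketch of approach 1 -- not actual Spark Streaming code.
// receivers: one Receiver per input stream, e.g. obtained via getReceiver().
val receiverRDD = ssc.sparkContext.makeRDD(receivers, numSlices = receivers.size)

receiverRDD.foreachPartition { iter =>
  // startAndBlock is a hypothetical helper: start the Receiver and block until it stops.
  iter.foreach(receiver => startAndBlock(receiver))
}
// Weakness: if a task keeps failing, the retry limit (spark.task.maxFailures,
// 4 attempts by default) is hit, the job fails, and the whole application fails with it.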

 

2. The second approach is the one Spark Streaming actually uses.

In ReceiverTracker's start method, the RPC endpoint ReceiverTrackerEndpoint is instantiated first, and then the launchReceivers method is called.

/** Start the endpoint and receiver execution thread. */
def start(): Unit = synchronized {
  if (isTrackerStarted) {
    throw new SparkException("ReceiverTracker already started")
  }

  if (!receiverInputStreams.isEmpty) {
    endpoint = ssc.env.rpcEnv.setupEndpoint(
      "ReceiverTracker", new ReceiverTrackerEndpoint(ssc.env.rpcEnv))
    if (!skipReceiverLaunch) launchReceivers()
    logInfo("ReceiverTracker started")
    trackerState = Started
  }
}

In the launchReceivers method, one Receiver is first obtained for each ReceiverInputStream, and then a StartAllReceivers message is sent. Each Receiver corresponds to one data source.

/**
 * Get the receivers from the ReceiverInputDStreams, distributes them to the
 * worker nodes as a parallel collection, and runs them.
 */
private def launchReceivers(): Unit = {
  val receivers = receiverInputStreams.map(nis => {
    val rcvr = nis.getReceiver()
    rcvr.setReceiverId(nis.id)
    rcvr
  })

  runDummySparkJob()

  logInfo("Starting " + receivers.length + " receivers")
  endpoint.send(StartAllReceivers(receivers))
}
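runDummySparkJob is worth a note: it runs a small throwaway shuffle job so that, in non-local mode, all Executors have registered before the Receivers are scheduled, which is what allows them to be spread evenly. In the Spark source of this period it looks roughly like this (quoted from memory, so treat it as approximate):

/**
 * Run a small dummy Spark job so that all Executors have registered before
 * scheduling Receivers; this avoids all Receivers landing on the same node.
 */
private def runDummySparkJob(): Unit = {
  if (!ssc.sparkContext.isLocal) {
    ssc.sparkContext.makeRDD(1 to 50, 50).map(x => (x, 1)).reduceByKey(_ + _, 20).collect()
  }
  assert(getExecutors.nonEmpty)
}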

When ReceiverTrackerEndpoint receives the StartAllReceivers message, it first works out which Executors each Receiver should run on and then calls the startReceiver method.

override def receive: PartialFunction[Any, Unit] = {
  // Local messages
  case StartAllReceivers(receivers) =>
    val scheduledLocations = schedulingPolicy.scheduleReceivers(receivers, getExecutors)
    for (receiver <- receivers) {
      val executors = scheduledLocations(receiver.streamId)
      updateReceiverScheduledExecutors(receiver.streamId, executors)
      receiverPreferredLocations(receiver.streamId) = receiver.preferredLocation
      startReceiver(receiver, executors)
    }
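schedulingPolicy.scheduleReceivers is the ReceiverSchedulingPolicy, which balances the Receivers across the live Executors while honoring each receiver's preferredLocation. As a simplified, self-contained illustration of the balancing idea only (roundRobinSchedule is a made-up helper, not the real policy), you can think of it as round-robin over the Executors:

// Simplified illustration of balanced receiver placement.
// The real ReceiverSchedulingPolicy also takes preferredLocation and current load
// into account; this toy version only shows the round-robin distribution idea.
object RoundRobinScheduleDemo {
  def roundRobinSchedule(receiverIds: Seq[Int], executors: Seq[String]): Map[Int, String] =
    receiverIds.zipWithIndex.map { case (id, i) =>
      id -> executors(i % executors.size)
    }.toMap

  def main(args: Array[String]): Unit = {
    val assignment = roundRobinSchedule(
      receiverIds = Seq(0, 1, 2),
      executors = Seq("executor-1 (host1)", "executor-2 (host2)"))
    assignment.toSeq.sortBy(_._1).foreach { case (id, exec) =>
      println(s"receiver $id -> $exec")
    }
    // receiver 0 -> executor-1 (host1)
    // receiver 1 -> executor-2 (host2)
    // receiver 2 -> executor-1 (host1)
  }
}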

The startReceiver method specifies the TaskLocation on the Driver side itself, instead of letting Spark Core choose it for us. It has the following characteristics. Terminating a Receiver does not require restarting a Spark Job. Only the first task attempt (attemptNumber == 0) actually starts the Receiver; a retried attempt does not start it a second time. A Spark job is launched purely to start the Receiver, and one such job starts exactly one Receiver: every Receiver start triggers its own Spark job, rather than all Receivers being started as Tasks of a single Spark job. When the job that was submitted to start a Receiver terminates, whether successfully or with a failure, and the tracker is still running, a RestartReceiver message is sent to restart that Receiver.

/**
 * Start a receiver along with its scheduled executors
 */
private def startReceiver(
    receiver: Receiver[_],
    scheduledLocations: Seq[TaskLocation]): Unit = {
  def shouldStartReceiver: Boolean = {
    // It's okay to start when trackerState is Initialized or Started
    !(isTrackerStopping || isTrackerStopped)
  }

  val receiverId = receiver.streamId
  if (!shouldStartReceiver) {
    onReceiverJobFinish(receiverId)
    return
  }

  val checkpointDirOption = Option(ssc.checkpointDir)
  val serializableHadoopConf =
    new SerializableConfiguration(ssc.sparkContext.hadoopConfiguration)

  // Function to start the receiver on the worker node
  val startReceiverFunc: Iterator[Receiver[_]] => Unit =
    (iterator: Iterator[Receiver[_]]) => {
      if (!iterator.hasNext) {
        throw new SparkException(
          "Could not start receiver as object not found.")
      }
      if (TaskContext.get().attemptNumber() == 0) {
        val receiver = iterator.next()
        assert(iterator.hasNext == false)
        val supervisor = new ReceiverSupervisorImpl(
          receiver, SparkEnv.get, serializableHadoopConf.value, checkpointDirOption)
        supervisor.start()
        supervisor.awaitTermination()
      } else {
        // It's restarted by TaskScheduler, but we want to reschedule it again. So exit it.
      }
    }

  // Create the RDD using the scheduledLocations to run the receiver in a Spark job
  val receiverRDD: RDD[Receiver[_]] =
    if (scheduledLocations.isEmpty) {
      ssc.sc.makeRDD(Seq(receiver), 1)
    } else {
      val preferredLocations = scheduledLocations.map(_.toString).distinct
      ssc.sc.makeRDD(Seq(receiver -> preferredLocations))
    }
  receiverRDD.setName(s"Receiver $receiverId")
  ssc.sparkContext.setJobDescription(s"Streaming job running receiver $receiverId")
  ssc.sparkContext.setCallSite(Option(ssc.getStartSite()).getOrElse(Utils.getCallSite()))

  val future = ssc.sparkContext.submitJob[Receiver[_], Unit, Unit](
    receiverRDD, startReceiverFunc, Seq(0), (_, _) => Unit, ())
  // We will keep restarting the receiver job until ReceiverTracker is stopped
  future.onComplete {
    case Success(_) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
    case Failure(e) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logError("Receiver has been stopped. Try to restart it.", e)
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
  }(submitJobThreadPool)
  logInfo(s"Receiver ${receiver.streamId} started") }
