Lesson 8: Spark Streaming Source Code Analysis: A Thorough Study of the Full Lifecycle of RDD Generation

Topics in this lesson:

1. A thorough study of the relationship between DStream and RDD

2. A thorough study of how Spark Streaming generates RDDs

spark-1.6.1\examples\src\main\scala\org\apache\spark\examples\streaming\NetworkWordCount.scala

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

object NetworkWordCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: NetworkWordCount <hostname> <port>")
      System.exit(1)
    }

    //StreamingExamples.setStreamingLogLevels()

    // Create the context with a 1 second batch size
    val sparkConf = new SparkConf().setAppName("NetworkWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(1))

    // Create a socket stream on target ip:port and count the
    // words in input stream of \n delimited text (eg. generated by 'nc')
    // Note that no duplication in storage level only for running locally.
    // Replication necessary in distributed scenario for fault tolerance.
    val lines = ssc.socketTextStream(args(0), args(1).toInt, StorageLevel.MEMORY_AND_DISK_SER)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
    wordCounts.print()  // print() is an output operation, not a transformation; internally it calls foreachRDD, which registers a ForEachDStream as an output stream
    ssc.start()
    ssc.awaitTermination()
  }
}


1. A DStream has the following three characteristics:

(1) A DStream depends on other DStreams, except for the first DStream, which is generated directly from a data source.

(2) A DStream generates RDDs at a fixed time interval (the Batch Duration).

(3) A DStream has a function that is used to generate an RDD after each Batch Duration.

/**
 * DStreams internally is characterized by a few basic properties:
 *  - A list of other DStreams that the DStream depends on
 *  - A time interval at which the DStream generates an RDD
 *  - A function that is used to generate an RDD after each time interval
 */

abstract class DStream[T: ClassTag] (
    @transient private[streaming] var ssc: StreamingContext
  ) extends Serializable with Logging {
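These three properties correspond directly to members declared inside DStream. Abridged from the same source file (doc comments paraphrased):

  /** Time interval after which the DStream generates an RDD */
  def slideDuration: Duration

  /** List of parent DStreams on which this DStream depends */
  def dependencies: List[DStream[_]]

  /** Method that generates an RDD for the given batch time */
  def compute(validTime: Time): Option[RDD[T]]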


Every transformation in the NetworkWordCount code is defined on a DStream. Tracing the call chains:

ssc.socketTextStream -> socketStream -> SocketInputDStream -> ReceiverInputDStream -> InputDStream -> DStream
lines.flatMap -> FlatMappedDStream -> DStream
reduceByKey -> reduceByKey (the overload that takes a Partitioner) -> combineByKey -> ShuffledDStream -> DStream
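
As an example of how one of these chains unfolds, the reduceByKey path in PairDStreamFunctions looks roughly like this in Spark 1.6 (abridged; the real code also wraps the user functions with sparkContext.clean):

def reduceByKey(reduceFunc: (V, V) => V): DStream[(K, V)] = ssc.withScope {
  reduceByKey(reduceFunc, defaultPartitioner())
}

def reduceByKey(
    reduceFunc: (V, V) => V,
    partitioner: Partitioner): DStream[(K, V)] = ssc.withScope {
  combineByKey((v: V) => v, reduceFunc, reduceFunc, partitioner)
}

def combineByKey[C: ClassTag](
    createCombiner: V => C,
    mergeValue: (C, V) => C,
    mergeCombiner: (C, C) => C,
    partitioner: Partitioner,
    mapSideCombine: Boolean = true): DStream[(K, C)] = ssc.withScope {
  // combineByKey only wraps the parent DStream (self) in a new ShuffledDStream;
  // nothing is computed until an RDD is generated for a concrete batch time
  new ShuffledDStream[K, V, C](
    self, createCombiner, mergeValue, mergeCombiner, partitioner, mapSideCombine)
}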

In summary, a DStream is a template for RDDs: it describes how to produce one RDD per batch. When Spark Streaming executes, it starts from the last DStream in the chain (the output DStream) and works backwards through its dependencies.


Next, let us look at how generatedRDDs in DStream is populated.

Once it is clear where the generatedRDDs HashMap is instantiated and filled, it is clear how RDDs are produced.
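
The field itself is declared in DStream roughly as follows (excerpt):

// RDDs generated by this DStream, keyed by batch time; marked private[streaming]
// so that test suites can access it
@transient
private[streaming] var generatedRDDs = new HashMap[Time, RDD[T]]()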

Around line 335 of DStream.scala, getOrCompute shows that for a given batch time the RDD is either retrieved from this cache or computed and then cached:

/**
 * Get the RDD corresponding to the given time; either retrieve it from cache
 * or compute-and-cache it.
 */
private[streaming] final def getOrCompute(time: Time): Option[RDD[T]] = {
  // If RDD was already generated, then retrieve it from HashMap,
  // or else compute the RDD
  generatedRDDs.get(time).orElse {
    // Compute the RDD if time is valid (e.g. correct time in a sliding window)
    // of RDD generation, else generate nothing.
    if (isTimeValid(time)) {  // check that the batch time is valid

      val rddOption = createRDDWithLocalProperties(time, displayInnerRDDOps = false) {
        // Disable checks for existing output directories in jobs launched by the streaming
        // scheduler, since we may need to write output to an existing directory during checkpoint
        // recovery; see SPARK-4835 for more details. We need to have this call here because
        // compute() might cause Spark jobs to be launched.
        PairRDDFunctions.disableOutputSpecValidation.withValue(true) {
          compute(time)
        }
      }

      rddOption.foreach { case newRDD =>
        // Register the generated RDD for caching and checkpointing
        if (storageLevel != StorageLevel.NONE) {
          newRDD.persist(storageLevel)
          logDebug(s"Persisting RDD ${newRDD.id} for time $time to $storageLevel")
        }
        if (checkpointDuration != null && (time - zeroTime).isMultipleOf(checkpointDuration)) {
          newRDD.checkpoint()
          logInfo(s"Marking RDD ${newRDD.id} for time $time for checkpointing")
        }
        generatedRDDs.put(time, newRDD)
      }
      rddOption
    } else {
      None
    }
  }
}
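
Who calls getOrCompute? At every Batch Duration the JobGenerator asks the DStreamGraph to generate jobs, which ends up calling generateJob on each output DStream. The default implementation in DStream (abridged below) is what ties getOrCompute to an actual Spark job:

private[streaming] def generateJob(time: Time): Option[Job] = {
  getOrCompute(time) match {
    case Some(rdd) =>
      // The job simply materializes the RDD for this batch time when it is run
      val jobFunc = () => {
        val emptyFunc = { (iterator: Iterator[T]) => {} }
        context.sparkContext.runJob(rdd, emptyFunc)
      }
      Some(new Job(time, jobFunc))
    case None => None
  }
}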

Now turn to SocketInputDStream:

class SocketInputDStream[T: ClassTag](
    ssc_ : StreamingContext,
    host: String,
    port: Int,
    bytesToObjects: InputStream => Iterator[T],
    storageLevel: StorageLevel
  ) extends ReceiverInputDStream[T](ssc_) {

The compute method of ReceiverInputDStream:

/**
 * Generates RDDs with blocks received by the receiver of this stream.
 */
override def compute(validTime: Time): Option[RDD[T]] = {
  val blockRDD = {

    if (validTime < graph.startTime) {
      // If this is called for any time before the start time of the context,
      // then this returns an empty RDD. This may happen when recovering from a
      // driver failure without any write ahead log to recover pre-failure data.
      new BlockRDD[T](ssc.sc, Array.empty)
    } else {
      // Otherwise, ask the tracker for all the blocks that have been allocated to this stream
      // for this batch. (This shows that Spark never really processes a continuous stream; under the hood there is only batch processing.)
      val receiverTracker = ssc.scheduler.receiverTracker
      val blockInfos = receiverTracker.getBlocksOfBatch(validTime).getOrElse(id, Seq.empty)

      // Register the input blocks information into InputInfoTracker
      val inputInfo = StreamInputInfo(id, blockInfos.flatMap(_.numRecords).sum)
      ssc.scheduler.inputInfoTracker.reportInfo(validTime, inputInfo)  // the JobScheduler's InputInfoTracker records every input source for this batch

      // Create the BlockRDD
      createBlockRDD(validTime, blockInfos)  // create the BlockRDD for this batch
    }
  }
  Some(blockRDD)
}

Creating the BlockRDD:

private[streaming] def createBlockRDD(time: Time, blockInfos: Seq[ReceivedBlockInfo]): RDD[T] = {

  if (blockInfos.nonEmpty) {
    val blockIds = blockInfos.map { _.blockId.asInstanceOf[BlockId] }.toArray

    // Are WAL record handles present with all the blocks
    val areWALRecordHandlesPresent = blockInfos.forall { _.walRecordHandleOption.nonEmpty }

    if (areWALRecordHandlesPresent) {
      // If all the blocks have WAL record handle, then create a WALBackedBlockRDD
      val isBlockIdValid = blockInfos.map { _.isBlockIdValid() }.toArray
      val walRecordHandles = blockInfos.map { _.walRecordHandleOption.get }.toArray
      new WriteAheadLogBackedBlockRDD[T](
        ssc.sparkContext, blockIds, walRecordHandles, isBlockIdValid)
    } else {
      // Else, create a BlockRDD. However, if there are some blocks with WAL info but not
      // others then that is unexpected and log a warning accordingly.
      if (blockInfos.find(_.walRecordHandleOption.nonEmpty).nonEmpty) {
        if (WriteAheadLogUtils.enableReceiverLog(ssc.conf)) {
          logError("Some blocks do not have Write Ahead Log information; " +
            "this is unexpected and data may not be recoverable after driver failures")
        } else {
          logWarning("Some blocks have Write Ahead Log information; this is unexpected")
        }
      }
      val validBlockIds = blockIds.filter { id =>
        // keep only blocks that still exist; master here is the BlockManager master on the driver
        ssc.sparkContext.env.blockManager.master.contains(id)
      }
      if (validBlockIds.size != blockIds.size) {
        logWarning("Some blocks could not be recovered as they were not found in memory. " +
          "To prevent such data loss, enabled Write Ahead Log (see programming guide " +
          "for more details.")
      }
      new BlockRDD[T](ssc.sc, validBlockIds)  // build a BlockRDD from the valid block ids
    }
  } else {
    // If no block is ready now, creating WriteAheadLogBackedBlockRDD or BlockRDD
    // according to the configuration
    if (WriteAheadLogUtils.enableReceiverLog(ssc.conf)) {
      new WriteAheadLogBackedBlockRDD[T](
        ssc.sparkContext, Array.empty, Array.empty, Array.empty)
    } else {
      new BlockRDD[T](ssc.sc, Array.empty)  // generate an empty BlockRDD
    }
  }
}

Now look at MappedDStream, whose compute method obtains the RDD from its parent.

This also makes it clear that a transformation on a DStream is really a transformation on the underlying RDDs; the mapping from DStream operations to RDD operations simply adds a time dimension, one RDD per Batch Duration.

private[streaming]
class MappedDStream[T: ClassTag, U: ClassTag] (
    parent: DStream[T],
    mapFunc: T => U
  ) extends DStream[U](parent.ssc) {

  override def dependencies: List[DStream[_]] = List(parent)

  override def slideDuration: Duration = parent.slideDuration

  override def compute(validTime: Time): Option[RDD[U]] = {
    parent.getOrCompute(validTime).map(_.map[U](mapFunc))   
  }
}

Here mapFunc is the function from the earlier word-count code, namely the x => (x, 1) in:

 val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
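
Finally, back to the corrected comment on wordCounts.print(): print() is an output operation rather than a transformation. In Spark 1.6 it is implemented roughly as follows (abridged); foreachRDD registers a ForEachDStream as an output stream, and it is that output stream whose generateJob ends up calling getOrCompute for every batch:

def print(num: Int): Unit = ssc.withScope {
  def foreachFunc: (RDD[T], Time) => Unit = {
    (rdd: RDD[T], time: Time) => {
      val firstNum = rdd.take(num + 1)
      println("-------------------------------------------")
      println("Time: " + time)
      println("-------------------------------------------")
      firstNum.take(num).foreach(println)
      if (firstNum.length > num) println("...")
      println()
    }
  }
  // foreachRDD wraps this DStream in a ForEachDStream and registers it as an output stream
  foreachRDD(context.sparkContext.clean(foreachFunc), displayInnerRDDOps = false)
}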





