[Spark] Integrating Spark Streaming with Flume and Kafka

1. Integrating Spark Streaming with Flume

As a framework for real-time log collection, Flume can be connected to the Spark Streaming real-time processing framework: Flume produces data in real time, and Spark Streaming processes it in real time.

Spark Streaming can be connected to Flume NG in two ways:
one is for Flume NG to push messages to Spark Streaming,
the other is for Spark Streaming to poll (pull) data from Flume.

Poll mode

(1) Install Flume 1.6 or later

(2) Download the dependency jar

Put spark-streaming-flume-sink_2.11-2.0.2.jar into Flume's lib directory.

(3) Replace the Scala dependency under flume/lib

From the jars folder of the Spark installation directory, take scala-library-2.11.8.jar and use it to replace the scala-library-2.10.1.jar that ships with Flume in its lib directory.

(4) Write the Flume agent. Note that since this is the pull mode, Flume only needs to produce data on the machine it runs on.

(5) Write the flume-poll-spark.conf configuration file

a1.sources = r1
a1.sinks = k1
a1.channels = c1
#source
a1.sources.r1.channels = c1
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /root/data
a1.sources.r1.fileHeader = true
#channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 20000
a1.channels.c1.transactionCapacity = 5000
#sinks
a1.sinks.k1.channel = c1
a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.k1.hostname = node1
a1.sinks.k1.port = 8888
a1.sinks.k1.batchSize = 2000

Start Flume:

bin/flume-ng agent -n a1 -c conf -f conf/flume-poll-spark.conf -Dflume.root.logger=info,console

Code implementation:

Add the dependency:

<!-- Spark Streaming + Flume integration -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume_2.11</artifactId>
    <version>2.0.2</version>
</dependency>

Code:

import java.net.InetSocketAddress
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.flume.{FlumeUtils, SparkFlumeEvent}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}
//todo: Requirement: integrate Spark Streaming with Flume (poll/pull mode)
//In poll mode the startup order is: start Flume first, then the Spark Streaming program
object SparkStreamingFlumePoll {
  def main(args: Array[String]): Unit = {
    //1. Create SparkConf
    val sparkConf: SparkConf = new SparkConf().setAppName("SparkStreamingFlumePoll").setMaster("local[2]")
    //2. Create SparkContext
    val sc = new SparkContext(sparkConf)
    sc.setLogLevel("WARN")
    //3. Create StreamingContext
    val ssc = new StreamingContext(sc, Seconds(5))
    //4. Read the data from Flume
    val pollingStream: ReceiverInputDStream[SparkFlumeEvent] = FlumeUtils.createPollingStream(ssc, "192.168.200.100", 8888)
    //A List can hold the addresses of several Flume agents to pull data from
    val address = List(new InetSocketAddress("node1", 8888), new InetSocketAddress("node2", 8888), new InetSocketAddress("node3", 8888))
    //val pollingStream: ReceiverInputDStream[SparkFlumeEvent] = FlumeUtils.createPollingStream(ssc, address, StorageLevel.MEMORY_AND_DISK_SER_2)
    //5. An event is the smallest unit of data transferred by Flume; its structure is {"headers":"xxxxx","body":"xxxxxxx"}
    val flume_data: DStream[String] = pollingStream.map(x => new String(x.event.getBody.array()))
    //6. Split each line into words
    val words: DStream[String] = flume_data.flatMap(_.split(" "))
    //7. Count each word as 1
    val wordAndOne: DStream[(String, Int)] = words.map((_, 1))
    //8. Sum the occurrences of each word
    val result: DStream[(String, Int)] = wordAndOne.reduceByKey(_ + _)
    //9. Print
    result.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
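
Besides the body, each SparkFlumeEvent also carries the Flume headers (for example the fileHeader written by the spooldir source). The following is a minimal sketch, not part of the original example, that could be placed after pollingStream is created to extract both headers and body:

import scala.collection.JavaConverters._

// Map each Flume event to (headers, body); assumes the pollingStream defined above.
val headersAndBody: DStream[(Map[String, String], String)] = pollingStream.map { sfe =>
  val headers = sfe.event.getHeaders.asScala.map { case (k, v) => (k.toString, v.toString) }.toMap
  val body = new String(sfe.event.getBody.array())
  (headers, body)
}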

Push mode

Write the flume-push-spark.conf configuration file

#push mode
a1.sources = r1
a1.sinks = k1
a1.channels = c1
#source
a1.sources.r1.channels = c1
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /root/data
a1.sources.r1.fileHeader = true
#channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 20000
a1.channels.c1.transactionCapacity = 5000
#sinks
a1.sinks.k1.channel = c1
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 192.168.75.57
a1.sinks.k1.port = 8888
a1.sinks.k1.batchSize = 2000

Note that the hostname and port specified in the configuration file are the IP address and port of the server where the Spark application runs.

Start Flume:

bin/flume-ng agent -n a1 -c conf -f conf/flume-push-spark.conf -Dflume.root.logger=info,console

Code:

import java.net.InetSocketAddress
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.flume.{FlumeUtils, SparkFlumeEvent}
//todo: Requirement: integrate Spark Streaming with Flume (push mode)
//In push mode the startup order is: start the Spark Streaming program first, then start Flume
object SparkStreamingFlumePush {
  def main(args: Array[String]): Unit = {
    //1. Create SparkConf
    val sparkConf: SparkConf = new SparkConf().setAppName("SparkStreamingFlumePush").setMaster("local[2]")
    //2. Create SparkContext
    val sc = new SparkContext(sparkConf)
    sc.setLogLevel("WARN")
    //3. Create StreamingContext
    val ssc = new StreamingContext(sc, Seconds(5))
    //4. Read the data pushed by Flume (the receiver listens on the Spark application's host and port)
    val flumeStream: ReceiverInputDStream[SparkFlumeEvent] = FlumeUtils.createStream(ssc, "192.168.75.57", 8888)
    //5. An event is the smallest unit of data transferred by Flume; its structure is {"headers":"xxxxx","body":"xxxxxxx"}
    val flume_data: DStream[String] = flumeStream.map(x => new String(x.event.getBody.array()))
    //6. Split each line into words
    val words: DStream[String] = flume_data.flatMap(_.split(" "))
    //7. Count each word as 1
    val wordAndOne: DStream[(String, Int)] = words.map((_, 1))
    //8. Sum the occurrences of each word
    val result: DStream[(String, Int)] = wordAndOne.reduceByKey(_ + _)
    //9. Print
    result.print()
    ssc.start()
    ssc.awaitTermination()
  }
}

2. Integrating Spark Streaming with Kafka

KafkaUtils.createStream (receiver-based)

  • This approach uses Kafka's high-level consumer API (offsets are maintained by ZooKeeper).
  • By default data may be lost. You can enable the WAL (write-ahead log) so that the received data is also written to HDFS; this protects the data on the source side, and if the data of an RDD partition in the DStream is lost, it can be recomputed and recovered from the original data through lineage.
  • However, it cannot guarantee that data is processed exactly once.

Add the dependency:

<!-- Spark Streaming + Kafka integration -->
<dependency>
       <groupId>org.apache.spark</groupId>
       <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
       <version>2.0.2</version>
</dependency>

Code:

import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import scala.collection.immutable
//todo: Requirement: integrate Spark Streaming with Kafka using the high-level consumer API (offsets maintained by ZooKeeper)
object SparkStreamingKafkaReceiver {
  def main(args: Array[String]): Unit = {
    //1. Create SparkConf
    val sparkConf: SparkConf = new SparkConf()
      .setAppName("SparkStreamingKafkaReceiver")
      .setMaster("local[4]")
      .set("spark.streaming.receiver.writeAheadLog.enable", "true") //enable the WAL to protect the data on the source side
    //2. Create SparkContext
    val sc = new SparkContext(sparkConf)
    sc.setLogLevel("WARN")
    //3. Create StreamingContext
    val ssc = new StreamingContext(sc, Seconds(5))
    ssc.checkpoint("./kafka-receiver")
    //4. Receive data from the topic
    //ZooKeeper quorum
    val zkQuorum = "node1:2181,node2:2181,node3:2181"
    //consumer group id
    val groupId = "sparkStreaming_group"
    //topic info: the key is the topic name, the value is how many threads each receiver uses to consume it
    val topics = Map("itcast" -> 1)
    //(String, String): the first String is the message key, the second String is the message body
    //val kafkaDstream: ReceiverInputDStream[(String, String)] = KafkaUtils.createStream(ssc, zkQuorum, groupId, topics)
    //Build several receivers here so that data is received in parallel
    val receiverListDstream: immutable.IndexedSeq[ReceiverInputDStream[(String, String)]] = (1 to 3).map(x => {
      val kafkaDstream: ReceiverInputDStream[(String, String)] = KafkaUtils.createStream(ssc, zkQuorum, groupId, topics)
      kafkaDstream
    })
    //Use the StreamingContext's union to merge the data from all receivers
    val kafkaDstream: DStream[(String, String)] = ssc.union(receiverListDstream)
    //kafkaDstream.foreachRDD(rdd => {
    //  rdd.foreach(x => println("key:" + x._1 + " value:" + x._2))
    //})
    //5. Take the message body from the topic data
    val kafkaData: DStream[String] = kafkaDstream.map(_._2)
    //6. Split each line into words
    val words: DStream[String] = kafkaData.flatMap(_.split(" "))
    //7. Count each word as 1
    val wordAndOne: DStream[(String, Int)] = words.map((_, 1))
    //8. Sum the occurrences of each word
    val result: DStream[(String, Int)] = wordAndOne.reduceByKey(_ + _)
    //9. Print
    result.print()
    ssc.start()
    ssc.awaitTermination()
  }
}

(1) Start the ZooKeeper cluster

zkServer.sh start

(2) Start the Kafka cluster

kafka-server-start.sh  /export/servers/kafka/config/server.properties

(3) Create the topic

kafka-topics.sh --create --zookeeper hdp-node-01:2181 --replication-factor 1 --partitions 3 --topic itcast

(4) Produce data into the topic
Send messages to the topic from the shell:

kafka-console-producer.sh --broker-list hdp-node-01:9092 --topic  itcast

(5) Run the code and check the result data in the console.

Summary:

With this approach the system runs normally at first and no problem shows up, but if it fails and the Spark Streaming program is restarted, it will reprocess data that has already been processed. This receiver-based approach uses Kafka's high-level API and keeps the topic offsets in ZooKeeper; it is the traditional way of consuming Kafka data. Combined with the WAL mechanism it can guarantee zero data loss, but it cannot guarantee that data is processed only once: a record may be processed twice, because Spark and ZooKeeper may get out of sync. This integration method is no longer recommended officially, so we use the second approach recommended on the official site, KafkaUtils.createDirectStream().

KafkaUtils.createDirectStream

Unlike the receiver-based approach, this one periodically queries Kafka for the latest offsets of each partition of the topic, and then processes each batch according to the resulting offset ranges. Spark reads that range of data by calling Kafka's simple consumer API (the low-level API).

Compared with the receiver-based approach, it has several advantages:

A. Simplified parallelism
There is no need to create multiple Kafka input streams and union them. Spark Streaming creates as many RDD partitions as there are Kafka topic partitions and reads from Kafka in parallel, so the RDD partitions in Spark correspond one-to-one to the partitions of the Kafka topic.

B. Efficiency
To achieve zero data loss, the first approach must first save the data in a WAL, which duplicates it: the data is copied once when it is received from the Kafka topic and again when it is written to the WAL. The receiver-less approach eliminates this problem.

C. Exactly-once semantics
The receiver approach reads Kafka data through the high-level API and writes offsets to ZooKeeper. Although keeping the data in a WAL guarantees no data loss, data may still be consumed more than once because the offsets stored by Spark Streaming and by ZooKeeper can become inconsistent. The direct approach uses Kafka's low-level API, and the offsets are tracked by Spark Streaming alone and saved in the checkpoint, which eliminates the inconsistency between ZooKeeper and Spark Streaming. The drawback is that ZooKeeper-based Kafka monitoring tools can no longer be used.

Code:

import kafka.serializer.StringDecoder
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.{DStream, InputDStream}
import org.apache.spark.streaming.kafka.KafkaUtils
//todo: Requirement: integrate Spark Streaming with Kafka using the low-level API (offsets are no longer maintained by ZooKeeper)
object SparkStreamingKafkaDirect {
  def main(args: Array[String]): Unit = {
    //1. Create SparkConf
    val sparkConf: SparkConf = new SparkConf()
      .setAppName("SparkStreamingKafkaDirect")
      .setMaster("local[4]")
    //2. Create SparkContext
    val sc = new SparkContext(sparkConf)
    sc.setLogLevel("WARN")
    //3. Create StreamingContext
    val ssc = new StreamingContext(sc, Seconds(5))
    //The offsets are now maintained by the client itself and saved in the checkpoint
    ssc.checkpoint("./kafka-direct")
    //4. Receive data from the topic
    //Kafka parameters
    val kafkaParams = Map("bootstrap.servers" -> "node1:9092,node2:9092,node3:9092", "group.id" -> "spark_direct", "auto.offset.reset" -> "smallest")
    //topic names
    val topics = Set("itcast")
    //The partitions of the RDDs in this DStream correspond one-to-one to the partitions of the Kafka topic
    val kafkaDstream: InputDStream[(String, String)] = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)
    //5. Take the message body from the topic data
    val kafkaData: DStream[String] = kafkaDstream.map(_._2)
    //6. Split each line into words
    val words: DStream[String] = kafkaData.flatMap(_.split(" "))
    //7. Count each word as 1
    val wordAndOne: DStream[(String, Int)] = words.map((_, 1))
    //8. Sum the occurrences of each word
    val result: DStream[(String, Int)] = wordAndOne.reduceByKey(_ + _)
    //9. Print
    result.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
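
Because the offsets are kept only in the checkpoint, it can be useful to inspect (or store elsewhere) the exact offset ranges processed in each batch. The following is a minimal sketch using the HasOffsetRanges interface of the 0-8 integration, applied to the kafkaDstream defined above; it is an addition, not part of the original example:

import org.apache.spark.streaming.kafka.{HasOffsetRanges, OffsetRange}

// Must be called on the direct stream itself (before any shuffle) so that the RDD is still a KafkaRDD.
kafkaDstream.foreachRDD { rdd =>
  val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  offsetRanges.foreach { range =>
    println(s"topic=${range.topic} partition=${range.partition} from=${range.fromOffset} until=${range.untilOffset}")
  }
}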

StreamingContext.getOrCreate

The StreamingContext object can be recovered from the checkpoint directory.

Code:

import cn.itcast.streaming.socket.SparkStreamingSocketTotal.updateFunc
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
//todo: Use Spark Streaming to receive socket data and keep the program running around the clock
//If the program fails and is restarted, its state can be recovered
object SparkStreamingSocketTotalCheckpoint {
  def createStreamingContext(checkpointPath: String): StreamingContext = {
    //1. Create SparkConf
    val sparkConf: SparkConf = new SparkConf().setAppName("SparkStreamingSocketTotalCheckpoint").setMaster("local[2]")
    //2. Create SparkContext
    val sc = new SparkContext(sparkConf)
    sc.setLogLevel("WARN")
    //3. Create the StreamingContext object
    val ssc = new StreamingContext(sc, Seconds(5))
    //A checkpoint is required: it stores the intermediate results of every batch,
    //as well as the Driver code logic and the resources of the running job (the whole application's information)
    ssc.checkpoint(checkpointPath)
    //4. Receive the socket data
    val socketTextStream: ReceiverInputDStream[String] = ssc.socketTextStream("192.168.200.100", 9999)
    //5. Split each line into words
    val words: DStream[String] = socketTextStream.flatMap(_.split(" "))
    //6. Count each word as 1
    val wordAndOne: DStream[(String, Int)] = words.map((_, 1))
    //7. Accumulate the counts of each word across all batches
    val result: DStream[(String, Int)] = wordAndOne.updateStateByKey(updateFunc)
    //8. Print the results
    result.print()
    ssc
  }

  def main(args: Array[String]): Unit = {
    val checkpointPath = "./ck2018"
    //1. Create the StreamingContext
    //StreamingContext.getOrCreate can recover a previously failed StreamingContext from the checkpoint directory.
    //On the first run the checkpointPath directory contains no data, so the function below is used to build a new
    //StreamingContext and save all the application information to the checkpoint.
    //After a failure, the second run reads the data in checkpointPath and recovers the previous StreamingContext from it.
    //If the data in the checkpoint directory is corrupted, recovering the StreamingContext from it will fail with an exception.
    val ssc: StreamingContext = StreamingContext.getOrCreate(checkpointPath, () => {
      val newSSC: StreamingContext = createStreamingContext(checkpointPath)
      newSSC
    })
    //Start the streaming computation
    ssc.start()
    ssc.awaitTermination()
  }
}
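
The updateFunc used by updateStateByKey above is imported from SparkStreamingSocketTotal and is not shown in this post. A typical implementation for a running word count looks like the following (an assumed stand-in, not the original code):

// Hypothetical stand-in for SparkStreamingSocketTotal.updateFunc:
// add this batch's counts for a word to the previously accumulated total.
def updateFunc(currentValues: Seq[Int], historyValue: Option[Int]): Option[Int] = {
  Some(currentValues.sum + historyValue.getOrElse(0))
}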

