Flink: Three Ways to Create an Environment and Four Ways to Read from a Source (from a Collection, from Kafka, from a File, and Custom)

Environment

getExecutionEnvironment: creates an execution environment representing the context of the current program. If the program is invoked standalone, this method returns a local execution environment; if the program is invoked from a command-line client to be submitted to a cluster, it returns the execution environment of that cluster. In other words, getExecutionEnvironment decides what kind of environment to return based on how the program is run, and it is the most commonly used way to create an execution environment. If no parallelism is set, the configuration in flink-conf.yaml takes effect; the default there is 1.

// Batch processing
val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment
// Stream processing
val env = StreamExecutionEnvironment.getExecutionEnvironment
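If a parallelism other than the flink-conf.yaml default is needed, it can also be set explicitly on the environment. A minimal illustration (the value 4 is arbitrary):

// Override the default parallelism for this job
env.setParallelism(4)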

createLocalEnvironment: returns a local execution environment; the default parallelism has to be specified when calling it.

val env = StreamExecutionEnvironment.createLocalEnvironment(1)

createRemoteEnvironment: returns a cluster execution environment and submits the Jar to a remote server. The JobManager's hostname/IP and port must be specified when calling it, along with the Jar package to run on the cluster.

val env = ExecutionEnvironment.createRemoteEnvironment("jobmanage-hostname", 6123, "YOURPATH//wordcount.jar")

Source: Reading Data from a Collection

SensorReading.scala

// Case class for a sensor reading: sensor id, timestamp, temperature
case class SensorReading(id: String, timestamp: Long, temperature: Double)

SourceForCollection.scala

// Importing the implicit conversions is important
import com.atguigu.bean.SensorReading
import org.apache.flink.streaming.api.scala._

/**
 * Read data from a collection
 */
object SourceForCollection {
  def main(args: Array[String]): Unit = {

// Create the execution environment
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Read data from the collection
    val listDstream: DataStream[SensorReading] = env.fromCollection(List(
      SensorReading("sensor_1", 1547718199, 35.8),
      SensorReading("sensor_6", 1547718201, 15.4),
      SensorReading("sensor_7", 1547718202, 6.7),
      SensorReading("sensor_10", 1547718205, 38.1)
    ))

    listDstream.print("stream for list").setParallelism(1)

// Execute the job
    env.execute("source test job")
  }
}
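For quick tests, the same data can also be passed inline with fromElements, which skips building a List first. A small sketch (the variable name is illustrative):

// Pass elements directly instead of wrapping them in a collection
val elementsDstream: DataStream[SensorReading] = env.fromElements(
  SensorReading("sensor_1", 1547718199, 35.8),
  SensorReading("sensor_6", 1547718201, 15.4)
)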


Source: Reading Data from a File

SensorReading.txt

sensor_1,1547718199,35.8
sensor_6,1547718201,15.4
sensor_7,1547718202,6.7
sensor_10,1547718205,38.1

SourceForFile.scala

import org.apache.flink.streaming.api.scala._

/**
 * Read data from a file as a Source
 */
object SourceForFile {
  def main(args: Array[String]): Unit = {

// Create the execution environment
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val fileDstream: DataStream[String] =
      env.readTextFile("D:\\MyWork\\WorkSpaceIDEA\\flink-tutorial\\src\\main\\resources\\SensorReading.txt")

    fileDstream.print("source for file")

// Execute the job
    env.execute("source test job")
  }
}
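readTextFile produces plain strings, so in practice each line is usually parsed into the SensorReading case class right away. A minimal sketch, assuming the comma-separated layout of SensorReading.txt above (variable names are illustrative):

// Split each CSV line and convert the fields to the case class types
val sensorDstream: DataStream[SensorReading] = fileDstream.map(line => {
  val fields = line.split(",")
  SensorReading(fields(0).trim, fields(1).trim.toLong, fields(2).trim.toDouble)
})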


Source: Using a Kafka Message Queue as the Data Source

pom.xml

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.11_2.11</artifactId>
    <version>1.10.0</version>
</dependency>

SourceForKafka.scala

import java.util.Properties


import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011


/**
 * Use a Kafka message queue as the data source
 */
object SourceForKafka {
  def main(args: Array[String]): Unit = {

    val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment

// First set up the Kafka-related configuration
    val properties: Properties = new Properties()
    properties.setProperty("bootstrap.servers", "hadoop102:9092")
    properties.setProperty("group.id", "consumer-group")
    properties.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    properties.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    properties.setProperty("auto.offset.reset", "latest")

    val kafkaDstream: DataStream[String] = env.addSource(new FlinkKafkaConsumer011[String]("sensor", new SimpleStringSchema(), properties))

    kafkaDstream.print("source for kafka")

    env.execute("source test job")
  }
}

Start a Kafka console producer

// Kafka data producer
./bin/kafka-console-producer.sh --broker-list hadoop102:9092 --topic sensor
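Each line typed into the console producer arrives in the job as one string. To match the format used in SensorReading.txt, you could enter lines such as:

sensor_1,1547718199,35.8
sensor_6,1547718201,15.4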


Custom Source

SourceForCustom.scala

import com.atguigu.bean.SensorReading
import org.apache.flink.streaming.api.functions.source.SourceFunction
import org.apache.flink.streaming.api.scala._

import scala.collection.immutable
import scala.util.Random

/**
 * A custom Source
 */
object SourceForCustom {

  def main(args: Array[String]): Unit = {

// Create the execution environment
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val customDstream: DataStream[SensorReading] = env.addSource(MySensorSource())

    customDstream.print("source for custom").setParallelism(4)

    env.execute("source test job")
  }
}


// A custom SourceFunction that generates test data
case class MySensorSource() extends SourceFunction[SensorReading]{

// Flag indicating whether the source is still running
  var running: Boolean = true

  override def cancel(): Unit = {
    running = false
  }

// Randomly generate temperature readings for 10 sensors
  override def run(sourceContext: SourceFunction.SourceContext[SensorReading]): Unit = {

// Initialize a random number generator
    val random = new Random()

// Initialize the temperatures of the 10 sensors with random values, packed as tuples (id, temperature)
    var createTemperature: immutable.IndexedSeq[(String, Double)] = 1.to(10).map(
      i => ("sensor_" + i, 60 + random.nextGaussian() * 20)
    )

// Generate data in an infinite loop; stop once cancel() is called
    while (running) {
// Update the current temperatures by adding a small Gaussian perturbation to the previous values
      createTemperature = createTemperature.map(
        data => (data._1, data._2 + random.nextGaussian())
      )

// Get the current timestamp and wrap the data in the case class
      val timestamp: Long = System.currentTimeMillis()
      createTemperature.foreach(
        data => sourceContext.collect( SensorReading(data._1, timestamp, data._2))
      )

// Wait 200 ms between batches
      Thread.sleep(200)
    }
  }
}
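Note that a plain SourceFunction always runs with parallelism 1; the setParallelism(4) above applies only to the print sink. If the generator itself should run in parallel, ParallelSourceFunction can be implemented instead. A sketch (the class name and generation logic are illustrative; the loop can be the same as in MySensorSource):

import org.apache.flink.streaming.api.functions.source.ParallelSourceFunction

// Each parallel subtask executes its own copy of run()
case class MyParallelSensorSource() extends ParallelSourceFunction[SensorReading] {

  var running: Boolean = true

  override def cancel(): Unit = running = false

  override def run(ctx: SourceFunction.SourceContext[SensorReading]): Unit = {
    val random = new Random()
    while (running) {
      // Emit one random reading per iteration, then pause briefly
      ctx.collect(SensorReading("sensor_" + (random.nextInt(10) + 1), System.currentTimeMillis(), 60 + random.nextGaussian() * 20))
      Thread.sleep(200)
    }
  }
}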

