Flink: Create a Maven Project, Write Streaming and Batch WordCount Programs, and Test Them

Create a Maven project and add the POM dependencies

<dependencies>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-scala_2.11</artifactId>
            <version>1.10.0</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-streaming-scala -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_2.11</artifactId>
            <version>1.10.0</version>
        </dependency>
    </dependencies>

<build>
    <plugins>
    <!-- This plugin compiles Scala code into class files -->
    <plugin>
        <groupId>net.alchim31.maven</groupId>
        <artifactId>scala-maven-plugin</artifactId>
        <version>3.4.6</version>
        <executions>
            <execution>
                <!-- Bind to Maven's compile phase -->
                <goals>
                    <goal>compile</goal>
                </goals>
            </execution>
        </executions>
    </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-assembly-plugin</artifactId>
            <version>3.0.0</version>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
            </configuration>
            <executions>
                <execution>
                    <id>make-assembly</id>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>


Add Scala framework support and a scala source folder


Add data.txt

flink\src\main\resources\data.txt

hello world
hello spark
hello scala you
hello flink yes hao are you

Write the batch WordCount

scala\com\atguigu\wordcount\Wordcount.scala

// Implicit conversions required by the Scala DataSet API
import org.apache.flink.api.scala._

/**
 * Batch WordCount
 */
object Wordcount {
  def main(args: Array[String]): Unit = {

    // Create the batch execution environment
    val env = ExecutionEnvironment.getExecutionEnvironment
    // Read the data from a file
    val inputPath = "D:\\MyWork\\WorkSpaceIDEA\\flink\\src\\main\\resources\\data.txt"
    val inputDS: DataSet[String] = env.readTextFile(inputPath)
    // Split into words, group by word with groupBy, then aggregate with sum
    val wordCountDS: AggregateDataSet[(String, Int)] = inputDS
      .flatMap(_.split(" "))
      .map((_, 1))
      .groupBy(0)
      .sum(1)

    // Print the result
    wordCountDS.print()
  }
}
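The Flink pipeline above reads much like the ordinary Scala collection API. As a quick sanity check of the logic, without starting a Flink runtime, the same flatMap/map/group/sum steps can be sketched on a plain `Seq`; `wordCount` here is a hypothetical helper for illustration, not part of Flink:

```scala
// Plain-Scala sketch of the batch pipeline above; wordCount is a
// hypothetical helper used only to illustrate the transformation steps.
def wordCount(lines: Seq[String]): Map[String, Int] =
  lines
    .flatMap(_.split(" "))   // split each line into words
    .map((_, 1))             // pair each word with a count of 1
    .groupBy(_._1)           // group the pairs by word
    .map { case (word, pairs) => (word, pairs.map(_._2).sum) } // sum the counts

println(wordCount(Seq("hello world", "hello spark")))
```

Flink's `groupBy(0).sum(1)` performs the same grouping and summation, but lazily and in parallel across the cluster.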


Streaming WordCount

import org.apache.flink.api.java.utils.ParameterTool
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}

object StreamWordCount {
  def main(args: Array[String]): Unit = {

    // Get host and port from the command-line arguments
    val params: ParameterTool =  ParameterTool.fromArgs(args)
    val host: String = params.get("host")
    val port: Int = params.getInt("port")

    // Create the stream execution environment
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Receive the socket text stream
    val textDstream: DataStream[String] = env.socketTextStream(host, port)

    // Implicit conversions required by flatMap and map
    import org.apache.flink.api.scala._
    val dataStream: DataStream[(String, Int)] = textDstream
      .flatMap(_.split(" "))
      .map((_, 1))
      .keyBy(0)
      .sum(1)

    dataStream.print().setParallelism(1)

    // Start the executor and run the job
    env.execute("Socket stream word count")
  }
}
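`ParameterTool.fromArgs` expects arguments in `--key value` form. The parsing it performs for this simple case can be sketched in plain Scala (`parseArgs` is a hypothetical stand-in, shown only to clarify the expected argument format):

```scala
// Sketch of "--key value" parsing in the style of ParameterTool.fromArgs;
// parseArgs is a hypothetical helper, not a Flink API.
def parseArgs(args: Array[String]): Map[String, String] =
  args.grouped(2).collect {
    case Array(key, value) if key.startsWith("--") =>
      key.stripPrefix("--") -> value
  }.toMap

val params = parseArgs(Array("--host", "localhost", "--port", "7777"))
println(params("host"))        // the host as a String, like params.get("host")
println(params("port").toInt)  // the port parsed to Int, like params.getInt("port")
```

So the program above should be launched with arguments such as `--host localhost --port 7777`.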

Set the program arguments


Test

Start a local socket server before running the program (for example, `nc -lk 7777` on Linux/macOS), run the job with the arguments set above, then type words into the socket; the job prints the updated counts for each input line.

