Flink environment setup
Java environment
I am using JDK 1.8 here; download the JDK and set up the environment variables yourself.
$: java -version
java version "1.8.0_221"
Java(TM) SE Runtime Environment (build 1.8.0_221-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.221-b11, mixed mode)
Flink environment
At the time of writing, Flink has reached 1.10.0. We will use the latest version, downloading the release archive from the official Flink downloads page.
Unpack it wherever you usually keep software, then update the environment variables:
FLINK_HOME=/Users/lidongmeng/software/flink-1.10.0
PATH=$PATH:$FLINK_HOME/bin
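For instance, the two variables above can be made permanent by exporting them from your shell profile (a sketch; the path below is just this article's example location):

```shell
# Append to ~/.bash_profile (or ~/.zshrc); adjust the path to wherever
# you actually unpacked the Flink archive.
export FLINK_HOME="$HOME/software/flink-1.10.0"
export PATH="$PATH:$FLINK_HOME/bin"
```

After reloading the profile (e.g. `source ~/.bash_profile`), the `flink` and `start-cluster.sh` scripts are on the PATH.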
Check the Flink version in a terminal:
$: flink --version
Version: 1.10.0, Commit ID: aa4eb8f
Starting Flink
Let's start Flink first and take a look at the web UI it ships with.
$: ~/software/flink-1.10.0/bin/start-cluster.sh
Starting cluster.
Starting standalonesession daemon on host localhost.
Starting taskexecutor daemon on host localhost.
Once it is up, we can open the web UI in a browser (by default at http://localhost:8081):
Flink WordCount implementation
For the full code, see: https://github.com/ldm0213/flink-repos
Create a Maven project
I use IDEA as the editor for development; create a Maven project:
Specify the groupId and artifactId:
Add Maven dependencies
Next, add the dependencies Flink needs:
<properties>
<flink.version>1.10.0</flink.version>
<scala.compiler.version>2.11</scala.compiler.version>
<!-- Java version used by the Maven compiler -->
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_${scala.compiler.version}</artifactId>
<version>${flink.version}</version>
</dependency>
<!-- The two dependencies below are required for Flink's logs to appear -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.25</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.7.25</version>
</dependency>
</dependencies>
Writing the program
We will read data from a socket stream, split each line into words on spaces, count how many times each word occurs, and print the results.
import com.flink.transformation.LineSplitMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
public class FlinkWordCountMain {
    public static String HOST = "127.0.0.1";
    public static Integer PORT = 8823;

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStreamSource<String> stream = env.socketTextStream(HOST, PORT);
        SingleOutputStreamOperator<Tuple2<String, Integer>> sum = stream
                .flatMap(new LineSplitMapFunction())
                .keyBy(0)
                .sum(1);
        sum.print();
        env.execute("Flink word-count example");
    }
}
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;
public class LineSplitMapFunction implements FlatMapFunction<String, Tuple2<String, Integer>> {
    @Override
    public void flatMap(String s, Collector<Tuple2<String, Integer>> collector) {
        // Split each line on spaces and emit (word, 1) for every word.
        String[] items = s.split(" ");
        for (String item : items) {
            collector.collect(new Tuple2<>(item, 1));
        }
    }
}
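To make the semantics of flatMap followed by keyBy(0).sum(1) concrete, here is a minimal plain-Java sketch of the same split-and-running-count logic (no Flink involved; the class name is illustrative only):

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch: split each line on spaces, then keep a running
// count per word -- the same result keyBy(0).sum(1) maintains per key.
public class WordCountSketch {
    public static Map<String, Integer> count(String[] lines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {
            for (String word : line.split(" ")) {
                counts.merge(word, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] lines = {"hello world", "hello world", "what ever"};
        System.out.println(count(lines).get("hello")); // prints 2
    }
}
```

Each incoming line contributes 1 per word, and the per-word totals grow as lines arrive, mirroring the running sums the streaming job prints.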
Submitting & running the program
Since our data source is a socket, open an nc process listening on port 8823 in a terminal:
$: nc -l 8823
hello world
hello world
what ever
Local run
Run the main method of FlinkWordCountMain directly:
[Keyed Aggregation -> Sink: Print to Std. Out (7/8)] INFO org.apache.flink.runtime.state.heap.HeapKeyedStateBackend - Initializing heap keyed state backend with stream factory.
[Keyed Aggregation -> Sink: Print to Std. Out (5/8)] INFO org.apache.flink.runtime.state.heap.HeapKeyedStateBackend - Initializing heap keyed state backend with stream factory.
3> (hello,1)
5> (world,1)
6> (ever,1)
4> (what,1)
3> (hello,2)
5> (world,2)
The number before each > is the index of the parallel subtask that emitted the record.
Submitting the job
- To run the job from the web UI, build the package first:
mvn clean package
- Upload the jar:
- Specify the run parameters & submit the job:
- The running-job view:
- Check the output: