This post walks through the source code of a Spark word-count program.
The Java source code is as follows:
package sparkTest;

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public class WordCount {
    public static void main(String[] args) {
        String logFile = "file:///home/hadoop/workspace/sparkTest/input/README.md"; // Should be some file on your system
        SparkConf conf = new SparkConf().setAppName("Simple Application").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Build an RDD of String, one element per line of the input file.
        JavaRDD<String> textFile = sc.textFile(logFile);

        // flatMap adds a flattening step on top of map: the sequences returned
        // for all lines are merged into a single RDD of words.
        JavaRDD<String> words = textFile.flatMap(new FlatMapFunction<String, String>() {
            public Iterable<String> call(String s) {
                return Arrays.asList(s.split(" "));
            }
        });

        // Run the PairFunction to turn every word into a (word, 1) key-value pair.
        JavaPairRDD<String, Integer> pairs = words.mapToPair(new PairFunction<String, String, Integer>() {
            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        // Merge the values of identical keys, i.e. sum the counts per word.
        JavaPairRDD<String, Integer> counts = pairs.reduceByKey(new Function2<Integer, Integer, Integer>() {
            public Integer call(Integer a, Integer b) {
                return a + b;
            }
        });

        counts.saveAsTextFile("file:///home/hadoop/workspace/sparkTest/output");
        sc.stop();
    }
}
Source-code walkthrough:
1. sc.textFile() reads the input file and builds a JavaRDD of type String, one element per line.
2. flatMap() applies a function to every element of the RDD and merges the results. Unlike map(), the function passed to flatMap() must return a sequence such as a list (see also the lambda sketch after this list). The official documentation describes it as follows:
flatMap(func): Similar to map, but each input item can be mapped to 0 or more output items (so func should return a Seq rather than a single item).
3. words.mapToPair(new PairFunction<String, String, Integer>()) turns the flatMap result into (word, 1) key-value pairs.
4. pairs.reduceByKey(new Function2<Integer, Integer, Integer>()) merges the mapToPair result by key into the final counts.
5. saveAsTextFile(path) writes the final result to the location given by path, which can be on the local file system, HDFS, or any other file system supported by Hadoop.
Note: of these operations, only saveAsTextFile() is an action; all the others are transformations, so nothing is actually computed until saveAsTextFile() is called.
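For comparison, here is a minimal sketch of the same pipeline written with Java 8 lambdas. It assumes Spark 2.x or later, where FlatMapFunction.call returns an Iterator rather than an Iterable; the class name WordCountLambda is illustrative only, and the paths are the same as above.

package sparkTest;

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class WordCountLambda {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("Word Count (lambda)").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<String> textFile = sc.textFile("file:///home/hadoop/workspace/sparkTest/input/README.md");

        // flatMap flattens the per-line word lists into one RDD of words;
        // using map() here instead would produce a JavaRDD<String[]>, one array per line.
        JavaRDD<String> words = textFile.flatMap(s -> Arrays.asList(s.split(" ")).iterator());

        // Each word becomes a (word, 1) pair.
        JavaPairRDD<String, Integer> pairs = words.mapToPair(s -> new Tuple2<>(s, 1));

        // Sum the counts of identical words.
        JavaPairRDD<String, Integer> counts = pairs.reduceByKey((a, b) -> a + b);

        // The action that triggers the whole (lazy) pipeline.
        counts.saveAsTextFile("file:///home/hadoop/workspace/sparkTest/output");
        sc.stop();
    }
}

The logic is identical to the anonymous-class version above; only the syntax differs.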
The contents of the input file are as follows:
# Apache Spark
Spark is a fast and general cluster computing system for Big Data. It provides
<http://spark.apache.org/>
The output written to the output directory is as follows:
(Spark,2)
(provides,1)
(is,1)
(general,1)
(a,1)
(Big,1)
(fast,1)
(Apache,1)
(#,1)
(,2)
(cluster,1)
(Data.,1)
(It,1)
(for,1)
(computing,1)
(and,1)
(<http://spark.apache.org/>,1)
(system,1)
From this output we can see that the program correctly counts word frequencies.
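If you only want to verify the result without opening the part files under the output directory, one option (a sketch that reuses the counts RDD and the scala.Tuple2 import from the program above) is to collect the counts back to the driver and print them; this is appropriate only when the result is small enough to fit in driver memory:

// Replace the saveAsTextFile(...) call with the following to print the counts directly:
for (Tuple2<String, Integer> t : counts.collect()) {
    System.out.println("(" + t._1() + "," + t._2() + ")");
}

Note that collect() is itself an action, so it triggers the computation just as saveAsTextFile() does.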