1. Packaging and running a MapReduce program:
1> Select the project to package, then right-click and choose Export to export the project.
2> Click Next, select the classes to package, and enter the name and path for the exported jar. (The lib folder can be excluded, since the cluster already provides the MapReduce dependencies.)
3> Click Next again, select the main class of the executable in the dialog, and click Finish to complete the export.
4> Upload the exported jar to the cluster server with an FTP tool such as WinSCP.
5> Run the wordcount.jar package on the cluster with the hadoop command.
Command: hadoop jar wordcount.jar
6> The execution result is as follows:
2. Running an MR program on YARN:
1> Example: counting word frequencies (run from the local environment)
File E:\\word.txt, contents:
鄭州,開封,洛陽,南陽,信陽,駐馬店,安陽,
鄭州,開封,洛陽,南陽,信陽,駐馬店,安陽,
鄭州,開封,洛陽,南陽,信陽,駐馬店,安陽,
鄭州,開封,洛陽,南陽,信陽,駐馬店,安陽,
鄭州,開封,洛陽,南陽,信陽,駐馬店,安陽,
鄭州,開封,洛陽,南陽,信陽,駐馬店,安陽,
周口,周口,鄭州,開封,洛陽,周口,開封
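One detail worth noting about this input: each data line ends with a trailing comma, and the mapper below splits on commas. Java's String.split drops trailing empty strings, so the trailing comma does not produce an empty extra token. A small standalone check (the class name is illustrative only):

```java
public class SplitDemo {
    public static void main(String[] args) {
        // One data line from E:\word.txt, ending with a comma.
        String line = "鄭州,開封,洛陽,南陽,信陽,駐馬店,安陽,";
        String[] words = line.split(",");
        // String.split discards trailing empty strings, so the trailing
        // comma does not yield an empty eighth token.
        System.out.println(words.length); // prints 7
    }
}
```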
2> Word-frequency counting code:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountLocalToYarn {

    /**
     * Mapper class: splits each input line on commas and emits (word, 1).
     */
    public static class WordCountMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        @Override
        protected void map(LongWritable key, Text line, Context context)
                throws IOException, InterruptedException {
            String[] words = line.toString().split(",");
            for (String w : words) {
                context.write(new Text(w), new LongWritable(1));
            }
        }
    }

    /**
     * Reducer class: sums the counts for each word.
     */
    public static class WordCountReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text text, Iterable<LongWritable> counts, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable i : counts) {
                // Sum the actual values rather than hard-coding 1, so the
                // reducer stays correct if a combiner is added later.
                sum += i.get();
            }
            context.write(text, new LongWritable(sum));
        }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        // Why set these explicitly? By default the local project's configuration
        // files are read, and they specify the local runner.
        // Run on YARN
        conf.set("mapreduce.framework.name", "yarn");
        // Tell the program which server runs the ResourceManager
        conf.set("yarn.resourcemanager.hostname", "192.168.248.100");
        conf.set("fs.defaultFS", "hdfs://192.168.248.100:9000");
        // Enable cross-platform submission so jobs can be submitted from
        // Windows directly to the Linux cluster
        conf.set("mapreduce.app-submission.cross-platform", "true");
        // Export the jar manually before running (as in the steps above). With
        // this set, running the program directly from MyEclipse automatically
        // submits the jar to the remote YARN cluster; with the JobHistoryServer
        // enabled, the submitted job can be seen in the web UI.
        conf.set("mapred.jar", "E://wordcount.jar");

        Job job = Job.getInstance(conf, "word");
        job.setJarByClass(WordCountLocalToYarn.class);

        job.setMapperClass(WordCountMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        FileInputFormat.setInputPaths(job, new Path("/input"));
        FileOutputFormat.setOutputPath(job, new Path("/output"));

        boolean res = job.waitForCompletion(true);
        if (res) {
            System.out.println("success");
        } else {
            System.out.println("failed");
        }
    }
}
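The map/reduce logic above can be exercised without a cluster by replaying the split-and-count steps in plain Java against the sample file's lines. This is only a sketch of the job's logic; WordCountSketch and its count helper are illustrative and not part of the job code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WordCountSketch {
    // Mirrors the job: the mapper splits each line on commas and emits
    // (word, 1); the reducer sums the counts per word.
    public static Map<String, Long> count(String[] lines) {
        Map<String, Long> freq = new LinkedHashMap<>();
        for (String line : lines) {
            for (String w : line.split(",")) {
                freq.merge(w, 1L, Long::sum);
            }
        }
        return freq;
    }

    public static void main(String[] args) {
        // The sample word.txt: six identical lines plus one final line.
        String[] lines = new String[7];
        for (int i = 0; i < 6; i++) {
            lines[i] = "鄭州,開封,洛陽,南陽,信陽,駐馬店,安陽,";
        }
        lines[6] = "周口,周口,鄭州,開封,洛陽,周口,開封";
        System.out.println(count(lines));
    }
}
```

On the sample data this yields, for example, 8 for 開封 (six lines plus two in the final line) and 3 for 周口.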
3> Following the packaging steps above, generate wordcount.jar at the specified location.
4> Run the main method; the jar is submitted directly to the server and the word-frequency job is executed.
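For reference, the values hard-coded via conf.set in the code correspond to settings normally kept in the cluster's configuration files. A sketch of the equivalent XML, assuming a standard Hadoop layout (the hostname is this example's; substitute your own):

```xml
<!-- mapred-site.xml -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<!-- yarn-site.xml -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>192.168.248.100</value>
</property>

<!-- core-site.xml -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.248.100:9000</value>
</property>
```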