Hadoop Study Notes (10) --- Custom Partitioners

A custom partitioner decides which reduce task each record emitted by the map phase is sent to; together with the number of reduce tasks, it controls how many output files the job produces and which records land in each one.
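
By default Hadoop routes records with its built-in HashPartitioner (org.apache.hadoop.mapreduce.lib.partition.HashPartitioner), which spreads keys across the reducers by hash code, in essence:

import org.apache.hadoop.mapreduce.Partitioner;

public class HashPartitioner<K, V> extends Partitioner<K, V> {

    // Hadoop's default partitioner: distribute keys evenly by hash code.
    // Masking with Integer.MAX_VALUE keeps the index non-negative.
    public int getPartition(K key, V value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

}

To control the routing ourselves we subclass Partitioner directly. For example, take the following data: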

1 2
1 1
3 2
2 2
5 1

Suppose the two numbers on each line are the length and width of a rectangle; you will notice the data contains both squares and ordinary rectangles. We want the records sorted by area (the DataSortable key from the previous note handles the ordering; judging from the final output below, it sorts from smallest to largest), with one output file holding the rectangles' data and the other the squares'. This is where we define a custom partitioner:

package cn.edu.bjut.model;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Partitioner;

public class MyPatitioner extends Partitioner<DataSortable, NullWritable> {

    @Override
    public int getPartition(DataSortable key, NullWritable value, int numPartitions) {

        if (key.getFirst() == key.getSecond()) {
            return 0;   // squares go to the first partition (reduce task 0)
        } else {
            return 1;   // ordinary rectangles go to the second partition (reduce task 1)
        }

    }

}
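
The key class DataSortable was defined in the previous note on custom sorting, so the original post does not repeat it. For reference, here is a minimal sketch of what it might look like, reconstructed from how it is used here (a two-String constructor, getFirst()/getSecond() accessors, and, judging from the output at the end, ordering by ascending area); treat it as an assumption, not the author's original code:

package cn.edu.bjut.model;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

public class DataSortable implements WritableComparable<DataSortable> {

    private long first;   // length
    private long second;  // width

    public DataSortable() {
        // Hadoop needs the no-arg constructor for deserialization
    }

    public DataSortable(String first, String second) {
        this.first = Long.parseLong(first);
        this.second = Long.parseLong(second);
    }

    public long getFirst() {
        return first;
    }

    public long getSecond() {
        return second;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(first);
        out.writeLong(second);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        first = in.readLong();
        second = in.readLong();
    }

    @Override
    public int compareTo(DataSortable other) {
        // order by area, smallest first, to match the output shown below
        long diff = first * second - other.first * other.second;
        if (diff != 0) {
            return diff < 0 ? -1 : 1;
        }
        // break ties on the individual sides so distinct keys are not merged
        if (first != other.first) {
            return first < other.first ? -1 : 1;
        }
        return second < other.second ? -1 : (second > other.second ? 1 : 0);
    }

}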

The driver program then looks like this; the new pieces are the two lines that register our custom partitioner and set the number of reduce tasks:

package cn.edu.bjut.model;

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;


public class NumSort {

    static final String INPUT_DIR = "hdfs://172.21.15.189:9000/input";
    static final String OUTPUT_DIR = "hdfs://172.21.15.189:9000/output";

    public static void main(String[] args) throws Exception {

        Configuration conf = new Configuration();

        Path path = new Path(OUTPUT_DIR);

        FileSystem fileSystem = FileSystem.get(new URI(OUTPUT_DIR), conf);

        if(fileSystem.exists(path)) {

            fileSystem.delete(path, true);

        }

        Job job = new Job(conf, "NumSort");

        FileInputFormat.setInputPaths(job, INPUT_DIR); // set the input path
        FileOutputFormat.setOutputPath(job, path);  // set the output path

        job.setJarByClass(DataSortable.class);

        job.setMapperClass(MyMapper.class); // set the custom mapper class
        job.setMapOutputKeyClass(DataSortable.class);
        job.setMapOutputValueClass(NullWritable.class);

        job.setReducerClass(MyReducer.class);  // set the custom reducer class
        job.setOutputKeyClass(LongWritable.class);  // set the output key type
        job.setOutputValueClass(LongWritable.class);  // set the output value type

        job.setPartitionerClass(MyPatitioner.class); // register the custom partitioner
        job.setNumReduceTasks(2); // two reduce tasks, one per partition

        job.waitForCompletion(true);  // submit the job and wait for completion

    }


    /**
     * The custom mapper class: parses each input line into a DataSortable key.
     * @author Gary
     *
     */
    static class MyMapper extends Mapper<LongWritable, Text, DataSortable, NullWritable> {

        @Override
        protected void map(
                LongWritable key,
                Text value,
                Mapper<LongWritable, Text, DataSortable,  NullWritable>.Context context)
                throws IOException, InterruptedException {

            // each input line holds two space-separated numbers: length and width
            String[] nums = value.toString().split(" ");

            DataSortable dataSortable = new DataSortable(nums[0], nums[1]);

            context.write(dataSortable, NullWritable.get());

        }

    }

    /**
     * The custom reducer class: writes each key back out as two columns.
     * @author Gary
     *
     */
    static class MyReducer extends Reducer<DataSortable, NullWritable, LongWritable, LongWritable> {

        @Override
        protected void reduce(
                DataSortable key,
                Iterable<NullWritable> value,
                Reducer<DataSortable, NullWritable, LongWritable, LongWritable>.Context context)
                throws IOException, InterruptedException {

            // keys arrive sorted within each partition; unpack each one into two columns
            context.write(new LongWritable(key.getFirst()), new LongWritable(key.getSecond()));

        }


    }

}
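
One detail worth noting: the third argument Hadoop passes to getPartition is exactly the value given to setNumReduceTasks. The partitioner above hard-codes a return value of 1, so it is only legal when at least two reduce tasks are configured. A slightly more defensive variant (a sketch, not part of the original post) clamps the result so the job would still run with a single reducer:

    @Override
    public int getPartition(DataSortable key, NullWritable value, int numPartitions) {
        int partition = (key.getFirst() == key.getSecond()) ? 0 : 1;
        // wrap around so the index stays valid even when numPartitions is 1
        return partition % numPartitions;
    }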

Note, however, that running this program directly will throw an error; it has to be packaged as a jar and run with the hadoop command. The export steps in Eclipse are as follows:

(screenshot omitted)

Then choose the JAR file option:

(screenshot omitted)

Click Next and choose the output path:

(screenshot omitted)

Click Next, then Next again, until you reach the screen below, and select your main class:

(screenshot omitted)

Upload the jar to the Linux machine over ftp, switch to the directory containing the file, and run the command hadoop jar data.jar:

[root@localhost Public]# hadoop jar data.jar
Warning: $HADOOP_HOME is deprecated.

15/06/02 08:44:07 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
15/06/02 08:44:07 INFO input.FileInputFormat: Total input paths to process : 1
15/06/02 08:44:07 INFO util.NativeCodeLoader: Loaded the native-hadoop library
15/06/02 08:44:07 WARN snappy.LoadSnappy: Snappy native library not loaded
15/06/02 08:44:07 INFO mapred.JobClient: Running job: job_201506011333_0001
15/06/02 08:44:08 INFO mapred.JobClient:  map 0% reduce 0%
15/06/02 08:44:15 INFO mapred.JobClient:  map 100% reduce 0%
15/06/02 08:44:23 INFO mapred.JobClient:  map 100% reduce 16%
15/06/02 08:44:24 INFO mapred.JobClient:  map 100% reduce 33%
15/06/02 08:44:25 INFO mapred.JobClient:  map 100% reduce 100%
15/06/02 08:44:26 INFO mapred.JobClient: Job complete: job_201506011333_0001
15/06/02 08:44:26 INFO mapred.JobClient: Counters: 29
15/06/02 08:44:26 INFO mapred.JobClient:   Job Counters 
15/06/02 08:44:26 INFO mapred.JobClient:     Launched reduce tasks=2
15/06/02 08:44:26 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=6376
15/06/02 08:44:26 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
15/06/02 08:44:26 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
15/06/02 08:44:26 INFO mapred.JobClient:     Launched map tasks=1
15/06/02 08:44:26 INFO mapred.JobClient:     Data-local map tasks=1
15/06/02 08:44:26 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=19748
15/06/02 08:44:26 INFO mapred.JobClient:   File Output Format Counters 
15/06/02 08:44:26 INFO mapred.JobClient:     Bytes Written=20
15/06/02 08:44:26 INFO mapred.JobClient:   FileSystemCounters
15/06/02 08:44:26 INFO mapred.JobClient:     FILE_BYTES_READ=102
15/06/02 08:44:26 INFO mapred.JobClient:     HDFS_BYTES_READ=121
15/06/02 08:44:26 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=168973
15/06/02 08:44:26 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=20
15/06/02 08:44:26 INFO mapred.JobClient:   File Input Format Counters 
15/06/02 08:44:26 INFO mapred.JobClient:     Bytes Read=20
15/06/02 08:44:26 INFO mapred.JobClient:   Map-Reduce Framework
15/06/02 08:44:26 INFO mapred.JobClient:     Map output materialized bytes=102
15/06/02 08:44:26 INFO mapred.JobClient:     Map input records=5
15/06/02 08:44:26 INFO mapred.JobClient:     Reduce shuffle bytes=102
15/06/02 08:44:26 INFO mapred.JobClient:     Spilled Records=10
15/06/02 08:44:26 INFO mapred.JobClient:     Map output bytes=80
15/06/02 08:44:26 INFO mapred.JobClient:     Total committed heap usage (bytes)=191762432
15/06/02 08:44:26 INFO mapred.JobClient:     CPU time spent (ms)=3190
15/06/02 08:44:26 INFO mapred.JobClient:     Combine input records=0
15/06/02 08:44:26 INFO mapred.JobClient:     SPLIT_RAW_BYTES=101
15/06/02 08:44:26 INFO mapred.JobClient:     Reduce input records=5
15/06/02 08:44:26 INFO mapred.JobClient:     Reduce input groups=5
15/06/02 08:44:26 INFO mapred.JobClient:     Combine output records=0
15/06/02 08:44:26 INFO mapred.JobClient:     Physical memory (bytes) snapshot=336629760
15/06/02 08:44:26 INFO mapred.JobClient:     Reduce output records=5
15/06/02 08:44:26 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2209480704
15/06/02 08:44:26 INFO mapred.JobClient:     Map output records=5
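
As an aside: if no main class was recorded in the jar's manifest during export, the entry point can be passed explicitly on the command line instead (the fully qualified name below assumes the package used throughout this note):

hadoop jar data.jar cn.edu.bjut.model.NumSort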

Once the job succeeds, list the output folder; you will find that there are now two output files, the bottom two:

[root@localhost Public]# hadoop fs -ls /output
Warning: $HADOOP_HOME is deprecated.

Found 4 items
-rw-r--r--   1 root supergroup          0 2015-06-02 08:44 /output/_SUCCESS
drwxr-xr-x   - root supergroup          0 2015-06-02 08:44 /output/_logs
-rw-r--r--   1 root supergroup          8 2015-06-02 08:44 /output/part-r-00000
-rw-r--r--   1 root supergroup         12 2015-06-02 08:44 /output/part-r-00001

Check the file contents; part-r-00000 holds the squares from partition 0 and part-r-00001 the rectangles from partition 1, each ordered by ascending area:

[root@localhost Public]# hadoop fs -cat /output/p*0
Warning: $HADOOP_HOME is deprecated.

1   1
2   2
[root@localhost Public]# hadoop fs -cat /output/p*1
Warning: $HADOOP_HOME is deprecated.

1   2
5   1
3   2