9. Hadoop Serialization (Custom Transfer Objects)


Serialization converts in-memory objects into byte sequences so they can be transferred over the network or persisted to disk, protecting data against loss on power failure.
The most commonly used basic types in Hadoop already implement the org.apache.hadoop.io.Writable interface, for example BooleanWritable, ByteWritable, IntWritable, FloatWritable, LongWritable, DoubleWritable, Text, MapWritable, and ArrayWritable. All of these can be serialized for transfer between Mapper and Reducer or persisted to disk. By defining our own class that implements Writable, we get the same capability for custom objects.
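The core idea behind Writable — fields out to a byte sequence, then back in the same order — can be sketched with plain JDK streams. `SerializationIdea` below is only an illustration and has no Hadoop dependency:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class SerializationIdea {

    /* Serialization: an in-memory int becomes a 4-byte big-endian sequence,
     * which is what IntWritable.write(DataOutput) does under the hood. */
    static byte[] toBytes(int value) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new DataOutputStream(buf).writeInt(value);
        return buf.toByteArray();
    }

    /* Deserialization: rebuild the int from the byte sequence. */
    static int fromBytes(byte[] bytes) throws IOException {
        return new DataInputStream(new ByteArrayInputStream(bytes)).readInt();
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = toBytes(12001);
        System.out.println(bytes.length);     // 4
        System.out.println(fromBytes(bytes)); // 12001
    }
}
```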

Example: a text file user.txt where each record contains a worker id, sex, hourly labor price, and hours worked. Some workers hold multiple jobs and therefore have multiple records. Below we compute, for each worker id, the sex and the total pay. The content of user.txt is as follows:

12001	male	10	5
12002	female	8	7
12003	male	15	5
12004	male	12	10
12005	female	7	12
12003	male	16	5

First, create a Maven project; refer to the earlier article for the pom configuration.

1. Create the bean corresponding to the input data

Create a User bean that implements the Writable interface. Two methods must be overridden: write (serialization) and readFields (deserialization). The fields must be read back in readFields in exactly the same order they were written in write. For example:

package com.lzj.hadoop.serialize;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

/* Implement the Writable interface */
public class User implements Writable {

	private String sex;
	private int amount;
	
	/* No-arg constructor, invoked during deserialization */
	public User() {
		super();
	}
	
	/* Serialization */
	@Override
	public void write(DataOutput out) throws IOException {
		out.writeUTF(sex);
		out.writeInt(amount);
	}

	/* Deserialization: fields must be read in the same order they were written */
	@Override
	public void readFields(DataInput in) throws IOException {
		this.sex = in.readUTF();
		this.amount = in.readInt();
	}

	@Override
	public String toString() {
		return sex + "\t\t" + amount;
	}

	public String getSex() {
		return sex;
	}

	public void setSex(String sex) {
		this.sex = sex;
	}

	public int getAmount() {
		return amount;
	}

	public void setAmount(int amount) {
		this.amount = amount;
	}
	
}
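A quick round-trip sanity check of this write/readFields pairing can be done with plain JDK streams. `UserSketch` below is a hypothetical stand-in that inlines the same two fields in the same order, so the example runs without the Hadoop jars:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

/* Same fields and same write/read order as the User bean above. */
class UserSketch {
    String sex;
    int amount;

    void write(DataOutput out) throws IOException {
        out.writeUTF(sex);
        out.writeInt(amount);
    }

    void readFields(DataInput in) throws IOException {
        this.sex = in.readUTF();
        this.amount = in.readInt();
    }
}

public class UserRoundTrip {
    /* Serialize one instance, then rebuild a fresh instance from the bytes. */
    static String roundTrip(String sex, int amount) throws IOException {
        UserSketch u = new UserSketch();
        u.sex = sex;
        u.amount = amount;
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        u.write(new DataOutputStream(buf));

        UserSketch copy = new UserSketch(); // no-arg construction, then readFields
        copy.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        return copy.sex + "\t" + copy.amount;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("male", 50)); // male	50
    }
}
```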

2. Create the Mapper to split and process the data

package com.lzj.hadoop.serialize;

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class UserMapper extends Mapper<LongWritable, Text, Text, User>{

	Text k = new Text();
	User v = new User();
	
	@Override
	protected void map(LongWritable key, Text value, Context context)
			throws IOException, InterruptedException {
		/* 1. Get one line */
		String line = value.toString();
		
		/* 2. Split the fields */
		String[] fields = line.split("\t");
		
		/* 3. Use the worker id as the key */
		String userId = fields[0];
		
		/* 4. Multiply the hourly price by the hours to get the total amount */
		int price = Integer.valueOf(fields[2]);
		int hours = Integer.valueOf(fields[3]);
		int amount = price * hours;
		
		/* 5. Set the output key/value pair */
		k.set(userId); 			// set the key
		v.setSex(fields[1]);
		v.setAmount(amount);
		context.write(k, v);
	}
}
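The parsing arithmetic in steps 2–4 can be checked in isolation. `mapLine` below is a hypothetical helper that mirrors the body of map() with plain JDK code:

```java
public class MapLineSketch {

    /* Parse one tab-separated record and return "userId -> sex, price*hours". */
    static String mapLine(String line) {
        String[] fields = line.split("\t");
        int amount = Integer.parseInt(fields[2]) * Integer.parseInt(fields[3]);
        return fields[0] + " -> " + fields[1] + ", " + amount;
    }

    public static void main(String[] args) {
        System.out.println(mapLine("12001\tmale\t10\t5")); // 12001 -> male, 50
    }
}
```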

3. Create the Reducer to aggregate the data

package com.lzj.hadoop.serialize;

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class UserReducer extends Reducer<Text, User, Text, User>{
	@Override
	protected void reduce(Text key, Iterable<User> values, Context context)
			throws IOException, InterruptedException {
		int amount = 0;
		
		/* Iterate over the values to accumulate the total amount */
		String sex = null;
		for(User u : values) {
			amount = amount + u.getAmount();
			sex = u.getSex();
		}
		
		/* Build the Reducer's output object */
		User user = new User();
		user.setSex(sex);
		user.setAmount(amount);
		context.write(key, user);
	}
}
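The accumulation loop can be sketched without Hadoop. `totalAmount` below is a hypothetical stand-in for the summation inside reduce(); for key 12003, the two records in user.txt contribute 15*5 = 75 and 16*5 = 80:

```java
import java.util.Arrays;
import java.util.List;

public class ReduceSketch {

    /* Sum the per-record amounts grouped under one key. */
    static int totalAmount(List<Integer> amounts) {
        int total = 0;
        for (int a : amounts) {
            total += a;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(totalAmount(Arrays.asList(75, 80))); // 155
    }
}
```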

4. Create the job driver class

package com.lzj.hadoop.serialize;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class UserDriver {
	public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
		/* Get the job configuration */
		Configuration config = new Configuration();
		Job job = Job.getInstance(config);
		
		/* Specify the jar's entry class */
		job.setJarByClass(UserDriver.class);
		
		/* Attach the Mapper/Reducer classes */
		job.setMapperClass(UserMapper.class);
		job.setReducerClass(UserReducer.class);
		
		/* Specify the Mapper output KV types */
		job.setMapOutputKeyClass(Text.class);
		job.setMapOutputValueClass(User.class);
		
		/* Specify the final output KV types */
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(User.class);
		
		/* Set the job input and output paths (the output directory must not already exist) */
		FileInputFormat.setInputPaths(job, new Path("D:/tmp/user.txt"));
		FileOutputFormat.setOutputPath(job, new Path("D:/tmp/userOut"));
		
		/* Submit the job */
		boolean flag = job.waitForCompletion(true);
		System.out.println(flag);
	}
}

5. Testing

Run the driver class UserDriver; the output is as follows:

12001	male		50
12002	female		56
12003	male		155
12004	male		120
12005	female		84
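The table above can be double-checked with a small in-memory simulation of the same map/reduce logic. `LocalSimulation` is a JDK-only sketch, not part of the job:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LocalSimulation {

    /* Group by worker id and sum price*hours, mirroring the Mapper and Reducer. */
    static Map<String, String> run(String[] lines) {
        Map<String, String> sex = new LinkedHashMap<>();
        Map<String, Integer> total = new LinkedHashMap<>();
        for (String line : lines) {
            String[] f = line.split("\t");
            sex.put(f[0], f[1]);
            total.merge(f[0], Integer.parseInt(f[2]) * Integer.parseInt(f[3]), Integer::sum);
        }
        Map<String, String> out = new LinkedHashMap<>();
        for (String id : total.keySet()) {
            out.put(id, sex.get(id) + "\t" + total.get(id));
        }
        return out;
    }

    public static void main(String[] args) {
        String[] userTxt = {
            "12001\tmale\t10\t5", "12002\tfemale\t8\t7", "12003\tmale\t15\t5",
            "12004\tmale\t12\t10", "12005\tfemale\t7\t12", "12003\tmale\t16\t5"
        };
        run(userTxt).forEach((id, v) -> System.out.println(id + "\t" + v));
    }
}
```

For 12003 the two records sum to 75 + 80 = 155, matching the job output above.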