Table of Contents
0x00 Article Contents
- Row storage vs. column storage
- Reading and writing the Parquet format in code
- Bonus
0x01 Row Storage vs. Column Storage
1. Avro and Parquet
a. See section "0x01 Row Storage vs. Column Storage" of the article Hadoop-Supported File Formats: Avro.
0x02 Reading and Writing the Parquet Format in Code
1. Reading and writing Parquet files in code
a. Add the Parquet jar dependencies
<!-- Parquet dependencies -->
<dependency>
    <groupId>org.apache.parquet</groupId>
    <artifactId>parquet-column</artifactId>
    <version>1.8.1</version>
</dependency>
<dependency>
    <groupId>org.apache.parquet</groupId>
    <artifactId>parquet-hadoop</artifactId>
    <version>1.8.1</version>
</dependency>
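The examples below also use the Hadoop client classes (Configuration, Path, the mapreduce API). If your project does not already declare them, a dependency along these lines is needed as well (the version here is an assumption; match your own cluster):
<!-- Assumed: Hadoop client libraries; pick the version of your cluster -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.5</version>
</dependency>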
b. Complete code for writing a Parquet file (to HDFS)
package com.shaonaiyi.hadoop.filetype.parquet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.column.ParquetProperties;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.GroupFactory;
import org.apache.parquet.example.data.simple.SimpleGroupFactory;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.example.GroupWriteSupport;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

import java.io.IOException;

/**
 * @Author [email protected]
 * @Date 2019/12/18 10:14
 * @Description Writing a Parquet file in code
 */
public class ParquetFileWriter {

    public static void main(String[] args) throws IOException {
        // Define the schema with Parquet's message-type DSL
        MessageType schema = MessageTypeParser.parseMessageType("message Person {\n" +
                " required binary name;\n" +
                " required int32 age;\n" +
                " required int32 favorite_number;\n" +
                " required binary favorite_color;\n" +
                "}");
        Configuration configuration = new Configuration();
        Path path = new Path("hdfs://master:9999/user/hadoop-sny/mr/filetype/parquet/data.parquet");
        // GroupWriteSupport picks the schema up from the Configuration
        GroupWriteSupport writeSupport = new GroupWriteSupport();
        GroupWriteSupport.setSchema(schema, configuration);
        ParquetWriter<Group> writer = new ParquetWriter<Group>(path, writeSupport,
                CompressionCodecName.SNAPPY,
                ParquetWriter.DEFAULT_BLOCK_SIZE,       // row-group size
                ParquetWriter.DEFAULT_PAGE_SIZE,        // page size
                ParquetWriter.DEFAULT_PAGE_SIZE,        // dictionary page size
                ParquetWriter.DEFAULT_IS_DICTIONARY_ENABLED,
                ParquetWriter.DEFAULT_IS_VALIDATING_ENABLED,
                ParquetProperties.WriterVersion.PARQUET_1_0, configuration);
        // Build one record and write it out
        GroupFactory groupFactory = new SimpleGroupFactory(schema);
        Group group = groupFactory.newGroup()
                .append("name", "shaonaiyi")
                .append("age", 18)
                .append("favorite_number", 7)
                .append("favorite_color", "red");
        writer.write(group);
        writer.close();
    }
}
c. Complete code for reading a Parquet file (from HDFS)
package com.shaonaiyi.hadoop.filetype.parquet;

import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.example.GroupReadSupport;

import java.io.IOException;

/**
 * @Author [email protected]
 * @Date 2019/12/18 10:18
 * @Description Reading a Parquet file in code
 */
public class ParquetFileReader {

    public static void main(String[] args) throws IOException {
        // Read back the file written by ParquetFileWriter
        Path path = new Path("hdfs://master:9999/user/hadoop-sny/mr/filetype/parquet/data.parquet");
        GroupReadSupport readSupport = new GroupReadSupport();
        ParquetReader<Group> reader = new ParquetReader<>(path, readSupport);
        Group result = reader.read();
        System.out.println("name:" + result.getString("name", 0));
        System.out.println("age:" + result.getInteger("age", 0));
        System.out.println("favorite_number:" + result.getInteger("favorite_number", 0));
        System.out.println("favorite_color:" + result.getString("favorite_color", 0));
        reader.close();
    }
}
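The reader above fetches only the first record. ParquetReader.read() returns null at end of file, so scanning a whole file is just a loop; a minimal sketch under the same path and schema assumptions (the class name is made up for illustration):

package com.shaonaiyi.hadoop.filetype.parquet;

import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.example.GroupReadSupport;

import java.io.IOException;

public class ParquetFileLoopReader {

    public static void main(String[] args) throws IOException {
        Path path = new Path("hdfs://master:9999/user/hadoop-sny/mr/filetype/parquet/data.parquet");
        ParquetReader<Group> reader = new ParquetReader<>(path, new GroupReadSupport());
        Group group;
        // read() returns null once the file is exhausted
        while ((group = reader.read()) != null) {
            System.out.println(group.getString("name", 0) + "," + group.getInteger("age", 0));
        }
        reader.close();
    }
}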
2. Checking the read/write results
a. Writing the Parquet file
b. Reading the Parquet file
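Given the single record written above, the reader's console output should look like:

name:shaonaiyi
age:18
favorite_number:7
favorite_color:red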
3. Reading and writing Parquet files in code (HDFS)
a. Add the jar that bridges Parquet and Avro
<dependency>
    <groupId>org.apache.parquet</groupId>
    <artifactId>parquet-avro</artifactId>
    <version>1.8.1</version>
</dependency>
As the code above shows, defining the schema this way is quite unfriendly:
MessageType schema = MessageTypeParser.parseMessageType("message Person {\n" +
        " required binary name;\n" +
        " required int32 age;\n" +
        " required int32 favorite_number;\n" +
        " required binary favorite_color;\n" +
        "}");
So instead we can wire Parquet up to Avro and use the Avro schema directly; a sketch of the assumed Person schema follows below.
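The Person class imported below is assumed to be an Avro-generated specific record (see the Avro article in this series), built from a schema along these lines (field names match the code; the exact .avsc is an assumption):

{
  "namespace": "com.shaonaiyi.hadoop.filetype.avro",
  "type": "record",
  "name": "Person",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "age", "type": "int"},
    {"name": "favorite_number", "type": "int"},
    {"name": "favorite_color", "type": "string"}
  ]
}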
b. Complete code for writing a Parquet file (HDFS)
package com.shaonaiyi.hadoop.filetype.parquet;

import com.shaonaiyi.hadoop.filetype.avro.Person;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.task.JobContextImpl;
import org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl;
import org.apache.parquet.avro.AvroParquetOutputFormat;

import java.io.IOException;

/**
 * @Author [email protected]
 * @Date 2019/12/18 10:47
 * @Description Writing a Parquet file (HDFS) in code
 */
public class MRAvroParquetFileWriter {

    public static void main(String[] args) throws IOException, IllegalAccessException, InstantiationException, ClassNotFoundException, InterruptedException {
        // 1. Build a Job instance
        Configuration hadoopConf = new Configuration();
        Job job = Job.getInstance(hadoopConf);
        // 2. Set the job's output types and format; the schema comes from Avro
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Person.class);
        job.setOutputFormatClass(AvroParquetOutputFormat.class);
        //AvroJob.setOutputKeySchema(job, Schema.create(Schema.Type.INT));
        AvroParquetOutputFormat.setSchema(job, Person.SCHEMA$);
        // 3. Set the output path
        FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9999/user/hadoop-sny/mr/filetype/avro-parquet"));
        // 4. Build a JobContext (we drive the OutputFormat by hand, outside a real MR job, so the IDs are synthetic)
        JobID jobID = new JobID("jobId", 123);
        JobContext jobContext = new JobContextImpl(job.getConfiguration(), jobID);
        // 5. Build a TaskAttemptContext
        TaskAttemptID attemptId = new TaskAttemptID("attemptId", 123, TaskType.REDUCE, 0, 0);
        TaskAttemptContext hadoopAttemptContext = new TaskAttemptContextImpl(job.getConfiguration(), attemptId);
        // 6. Instantiate the OutputFormat
        OutputFormat format = job.getOutputFormatClass().newInstance();
        // 7. Set up the OutputCommitter
        OutputCommitter committer = format.getOutputCommitter(hadoopAttemptContext);
        committer.setupJob(jobContext);
        committer.setupTask(hadoopAttemptContext);
        // 8. Get a writer, write the data, then close the writer
        RecordWriter<Void, Person> writer = format.getRecordWriter(hadoopAttemptContext);
        Person person = new Person();
        person.setName("shaonaiyi");
        person.setAge(18);
        person.setFavoriteNumber(7);
        person.setFavoriteColor("red");
        writer.write(null, person);
        writer.close(hadoopAttemptContext);
        // 9. Commit the task and the job
        committer.commitTask(hadoopAttemptContext);
        committer.commitJob(jobContext);
    }
}
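If the run succeeds, the output directory should contain a part file plus Parquet summary files (such as _metadata); a quick check from the shell, using the same path as above:

hdfs dfs -ls hdfs://master:9999/user/hadoop-sny/mr/filetype/avro-parquet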
c. Complete code for reading a Parquet file (HDFS)
package com.shaonaiyi.hadoop.filetype.parquet;

import com.shaonaiyi.hadoop.filetype.avro.Person;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.task.JobContextImpl;
import org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl;
import org.apache.parquet.avro.AvroParquetInputFormat;

import java.io.IOException;
import java.util.List;
import java.util.function.Consumer;

/**
 * @Author [email protected]
 * @Date 2019/12/18 10:52
 * @Description Reading a Parquet file (HDFS) in code
 */
public class MRAvroParquetFileReader {

    public static void main(String[] args) throws IOException, IllegalAccessException, InstantiationException {
        // 1. Build a Job instance
        Configuration hadoopConf = new Configuration();
        Job job = Job.getInstance(hadoopConf);
        // 2. Set the full path of the file to read
        FileInputFormat.setInputPaths(job, "hdfs://master:9999/user/hadoop-sny/mr/filetype/avro-parquet");
        // 3. Instantiate the input format and hand it the Avro read schema
        AvroParquetInputFormat inputFormat = AvroParquetInputFormat.class.newInstance();
        AvroParquetInputFormat.setAvroReadSchema(job, Person.SCHEMA$);
        //AvroJob.setInputKeySchema(job, Person.SCHEMA$);
        // 4. Get the split information for the file's data blocks
        // 4.1 Find out how many splits the file was divided into
        JobID jobID = new JobID("jobId", 123);
        JobContext jobContext = new JobContextImpl(job.getConfiguration(), jobID);
        List<InputSplit> inputSplits = inputFormat.getSplits(jobContext);
        // Read the records of each split
        inputSplits.forEach(new Consumer<InputSplit>() {
            @Override
            public void accept(InputSplit inputSplit) {
                TaskAttemptID attemptId = new TaskAttemptID("jobTrackerId", 123, TaskType.MAP, 0, 0);
                TaskAttemptContext hadoopAttemptContext = new TaskAttemptContextImpl(job.getConfiguration(), attemptId);
                RecordReader<NullWritable, Person> reader = null;
                try {
                    reader = inputFormat.createRecordReader(inputSplit, hadoopAttemptContext);
                    reader.initialize(inputSplit, hadoopAttemptContext);
                    while (reader.nextKeyValue()) {
                        System.out.println(reader.getCurrentKey()); // the key is always null for Parquet
                        Person person = reader.getCurrentValue();
                        System.out.println(person);
                    }
                    reader.close();
                } catch (IOException | InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
    }
}
4. Checking the read/write (HDFS) results
a. Writing the Parquet file (HDFS)
b. Reading the Parquet file (HDFS); the key was never set, so the key line prints null, followed by the Person record rendered by Avro's JSON-like toString
0x03 Bonus
- A demo that reads and writes a Parquet file in one program
package com.shaonaiyi.hadoop.filetype.parquet;

import com.shaonaiyi.hadoop.filetype.avro.Person;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;

import java.io.IOException;

/**
 * @Author [email protected]
 * @Date 2019/12/18 11:11
 * @Description A demo that reads and writes a Parquet file
 */
public class AvroParquetDemo {

    public static void main(String[] args) throws IOException {
        Person person = new Person();
        person.setName("shaonaiyi");
        person.setAge(18);
        person.setFavoriteNumber(7);
        person.setFavoriteColor("red");
        Path path = new Path("hdfs://master:9999/user/hadoop-sny/mr/filetype/avro-parquet2");
        // Write with the Avro schema directly; no message-type DSL needed
        ParquetWriter<Object> writer = AvroParquetWriter.builder(path)
                .withSchema(Person.SCHEMA$)
                .withCompressionCodec(CompressionCodecName.SNAPPY)
                .build();
        writer.write(person);
        writer.close();
        // Read it back as a specific record
        ParquetReader<Object> avroParquetReader = AvroParquetReader.builder(path).build();
        Person record = (Person) avroParquetReader.read();
        System.out.println("name:" + record.getName());
        System.out.println("age:" + record.get("age").toString());
        System.out.println("favorite_number:" + record.get("favorite_number").toString());
        System.out.println("favorite_color:" + record.get("favorite_color"));
        avroParquetReader.close();
    }
}
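A note on the design: the builder-style API used here (AvroParquetWriter.builder / AvroParquetReader.builder) is the direction the Parquet project has moved in; the multi-argument ParquetWriter constructor used earlier still works in 1.8.1 but is deprecated in favor of builders in newer releases.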
- The console prints the record back out
- The data is on HDFS as well
0xFF Summary
- How to use these formats inside a MapReduce job (a minimal job skeleton is sketched after these four lines):
job.setInputFormatClass(AvroParquetInputFormat.class);
AvroParquetInputFormat.setAvroReadSchema(job, Person.SCHEMA$);
job.setOutputFormatClass(ParquetOutputFormat.class);
AvroParquetOutputFormat.setSchema(job, Person.SCHEMA$);
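As a rough sketch of how those four lines fit into a real job, here is a minimal map-only driver with an identity mapper; the class names and the args-based paths are assumptions for illustration:

package com.shaonaiyi.hadoop.filetype.parquet;

import com.shaonaiyi.hadoop.filetype.avro.Person;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.parquet.avro.AvroParquetInputFormat;
import org.apache.parquet.avro.AvroParquetOutputFormat;

import java.io.IOException;

public class AvroParquetMRJob {

    // Identity mapper: AvroParquetInputFormat delivers (null, Person) pairs
    // and AvroParquetOutputFormat expects (Void, Person) pairs back
    public static class IdentityPersonMapper extends Mapper<Void, Person, Void, Person> {
        @Override
        protected void map(Void key, Person value, Context context)
                throws IOException, InterruptedException {
            context.write(null, value);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "avro-parquet-demo");
        job.setJarByClass(AvroParquetMRJob.class);
        job.setMapperClass(IdentityPersonMapper.class);
        job.setNumReduceTasks(0); // map-only job
        job.setOutputKeyClass(Void.class);
        job.setOutputValueClass(Person.class);

        job.setInputFormatClass(AvroParquetInputFormat.class);
        AvroParquetInputFormat.setAvroReadSchema(job, Person.SCHEMA$);

        job.setOutputFormatClass(AvroParquetOutputFormat.class);
        AvroParquetOutputFormat.setSchema(job, Person.SCHEMA$);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}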
- In the article Website User Behavior Analysis Project: Session Cutting (Part 2), step "9. Saving the statistics" stores its results in exactly this Parquet format.
- The Hadoop-supported file formats series:
Hadoop-Supported File Formats: Text
Hadoop-Supported File Formats: Avro
Hadoop-Supported File Formats: Parquet
Hadoop-Supported File Formats: SequenceFile
About the author: 邵奈一, full-stack engineer, market observer, and column editor.
Original work by 邵奈一; please credit the source when reposting.