Netty codec with protobuf

The protobuf serialization framework is in use at my company. In the module I am responsible for, protobuf generates a number of model classes, and protostuff serializes and deserializes the data cached in Redis. It is very fast and solved a problem we had at the time with serialization and deserialization being too slow. This article covers how to use protobuf for serialization in Netty.

You can download the protobuf tools and generate the Java class files from the proto definitions with the protoc command, or you can generate them with the corresponding Maven plugin. The plugin and dependency coordinates are listed below, so there is no need to search for them online and debug them yourself:

<dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-netty</artifactId>
    <version>1.2.0</version>
</dependency>
<dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-protobuf</artifactId>
    <version>1.2.0</version>
</dependency>
<dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-stub</artifactId>
    <version>1.2.0</version>
</dependency>

<plugin>
    <groupId>org.xolstice.maven.plugins</groupId>
    <artifactId>protobuf-maven-plugin</artifactId>
    <version>0.5.0</version>
    <configuration>
        <!--
          The version of protoc must match protobuf-java. If you don't depend on
          protobuf-java directly, you will be transitively depending on the
          protobuf-java version that grpc depends on.
        -->
        <protocArtifact>com.google.protobuf:protoc:3.0.0-beta-2:exe:${os.detected.classifier}</protocArtifact>
        <pluginId>grpc-java</pluginId>
        <pluginArtifact>io.grpc:protoc-gen-grpc-java:0.13.2:exe:${os.detected.classifier}</pluginArtifact>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
                <goal>compile-custom</goal>
            </goals>
        </execution>
    </executions>
</plugin>

These are basically all you need (note that the ${os.detected.classifier} property is supplied by the kr.motd.maven:os-maven-plugin build extension).
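The plugin comment warns that protoc's version must match protobuf-java. If the project does not already pull protobuf-java in transitively via grpc, it can be pinned explicitly; the version below is my suggestion, chosen to match the protoc artifact above:

```xml
<dependency>
    <groupId>com.google.protobuf</groupId>
    <artifactId>protobuf-java</artifactId>
    <version>3.0.0-beta-2</version>
</dependency>
```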

The proto files


Using protobuf requires writing separate proto definition files. My request and response messages are defined as follows:
SubscribeReq.proto

syntax = "proto2";

package netty;

option java_package = "cn.com.netty.codec.protobuf";
option java_outer_classname = "SubscribeReqProto";

message SubscribeReq {
    required int32 subReqID = 1;
    required string userName = 2;
    required string productName = 3;
    required string address = 4;
}

SubscribeResp.proto

syntax = "proto2";

package netty;

option java_package = "cn.com.netty.codec.protobuf";
option java_outer_classname = "SubscribeRespProto";

message SubscribeResp {
    required int32 subReqID = 1;
    required int32 respCode = 2;
    required string desc = 3;
}

Put these two files in a proto folder at the same level as src. After running mvn install from the console, the generated sources can be found in the target folder, as shown in the figure below:

The server program


The server handler class


package cn.com.protobuf;

import cn.com.netty.codec.protobuf.SubscribeReqProto;
import cn.com.netty.codec.protobuf.SubscribeRespProto;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

/**
 * Created by xiaxuan on 17/11/27.
 */
@ChannelHandler.Sharable
public class SubReqServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        SubscribeReqProto.SubscribeReq req = (SubscribeReqProto.SubscribeReq) msg;
        if ("xiaxuan".equalsIgnoreCase(req.getUserName())) {
            System.out.println("Service accept client subscribe req : [" + req.toString() + "]");
            ctx.writeAndFlush(resp(req.getSubReqID()));
        }
    }

    private SubscribeRespProto.SubscribeResp resp(int subReqID) {
        SubscribeRespProto.SubscribeResp.Builder builder = SubscribeRespProto.SubscribeResp.newBuilder();
        builder.setSubReqID(subReqID);
        builder.setRespCode(0);
        builder.setDesc("Netty book order succeed, 3 days later, sent to the designated address");
        return builder.build();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}

The business logic here is not much different from before; the only change is that the response is built with the protobuf-generated classes.

The server class


package cn.com.protobuf;

import cn.com.netty.codec.protobuf.SubscribeReqProto;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.protobuf.ProtobufDecoder;
import io.netty.handler.codec.protobuf.ProtobufEncoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32FrameDecoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32LengthFieldPrepender;
import io.netty.handler.logging.LogLevel;
import io.netty.handler.logging.LoggingHandler;

/**
 * Created by xiaxuan on 17/11/27.
 */
public class SubReqServer {

    public void bind(int port) {
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class)
                    .option(ChannelOption.SO_BACKLOG, 100)
                    .handler(new LoggingHandler(LogLevel.INFO))
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {
                            ch.pipeline().addLast(
                                    new ProtobufVarint32FrameDecoder()
                            );
                            ch.pipeline().addLast(
                                    new ProtobufDecoder(
                                            SubscribeReqProto.SubscribeReq.getDefaultInstance()
                                    )
                            );
                            ch.pipeline().addLast(
                                    new ProtobufVarint32LengthFieldPrepender()
                            );
                            ch.pipeline().addLast(new ProtobufEncoder());
                            ch.pipeline().addLast(new SubReqServerHandler());
                        }
                    });

            //bind to the port and wait synchronously for the bind to complete
            ChannelFuture f = b.bind(port).sync();

            //wait until the server's listening channel is closed
            f.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) {
        int port = 8080;
        new SubReqServer().bind(port);
    }
}

The server is also not noticeably different from the earlier programs; only the codec handlers added to the pipeline have changed. Here we use:

ch.pipeline().addLast(
        new ProtobufDecoder(
                SubscribeReqProto.SubscribeReq.getDefaultInstance()
        )
);
ch.pipeline().addLast(new ProtobufEncoder());

Here Netty's ProtobufDecoder and ProtobufEncoder handle the protobuf encoding and decoding, so the response we build never needs to be serialized by hand; the pipeline processes it automatically. Note that ProtobufDecoder is told which message type to expect by being given that message's default instance.
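One detail about the pipeline order is worth spelling out: outbound data in Netty flows from the tail of the pipeline toward the head, so even though ProtobufVarint32LengthFieldPrepender is added before ProtobufEncoder, the encoder serializes the message first and the prepender then adds the length field. A toy model of that reverse traversal (plain Java of my own, not Netty's actual classes):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// A toy model (not Netty code) of how a pipeline applies outbound handlers:
// outbound events flow from the tail of the pipeline toward the head, so the
// handler added *last* sees the outgoing message *first*.
public class PipelineOrderDemo {

    public static String writeOutbound(List<UnaryOperator<String>> outboundHandlers, String msg) {
        // Traverse in reverse: the last-added outbound handler runs first.
        for (int i = outboundHandlers.size() - 1; i >= 0; i--) {
            msg = outboundHandlers.get(i).apply(msg);
        }
        return msg;
    }

    public static void main(String[] args) {
        List<UnaryOperator<String>> outbound = new ArrayList<>();
        outbound.add(s -> "len(" + s.length() + ")" + s);   // stands in for the length-field prepender
        outbound.add(s -> "proto[" + s + "]");              // stands in for ProtobufEncoder
        System.out.println(writeOutbound(outbound, "req")); // prints "len(10)proto[req]"
    }
}
```

This is why the prepender can compute the length at all: by the time it runs, the encoder has already turned the message into bytes.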

The client program


The client handler class


package cn.com.protobuf;

import cn.com.netty.codec.protobuf.SubscribeReqProto;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

/**
 * Created by xiaxuan on 17/11/27.
 */
public class SubReqClientHandler extends ChannelInboundHandlerAdapter {

    public SubReqClientHandler() {}

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        for (int i = 0; i < 10; i++) {
            ctx.write(subReq(i));
        }
        ctx.flush();
    }

    private SubscribeReqProto.SubscribeReq subReq(int i) {
        SubscribeReqProto.SubscribeReq.Builder builder = SubscribeReqProto.SubscribeReq.newBuilder();
        builder.setSubReqID(i);
        builder.setUserName("xiaxuan");
        builder.setProductName("netty book for protobuf");
        builder.setAddress("beijing");
        return builder.build();
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println("Receive server response : [" + msg + "]");
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        ctx.flush();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}

The client's handler class is also fairly simple: it builds ten requests, sends them out in one batch, and finally prints the server's responses.

The client class


package cn.com.protobuf;

import cn.com.netty.codec.protobuf.SubscribeRespProto;
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.protobuf.ProtobufDecoder;
import io.netty.handler.codec.protobuf.ProtobufEncoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32FrameDecoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32LengthFieldPrepender;

/**
 * Created by xiaxuan on 17/11/28.
 */
public class SubReqClient {

    public void connect(int port, String host) {
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap();
            b.group(group).channel(NioSocketChannel.class)
                    .option(ChannelOption.TCP_NODELAY, true)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) throws Exception {
                            ch.pipeline().addLast(new ProtobufVarint32FrameDecoder());
                            ch.pipeline().addLast(
                                    new ProtobufDecoder(SubscribeRespProto.SubscribeResp.getDefaultInstance())
                            );
                            ch.pipeline().addLast(new ProtobufVarint32LengthFieldPrepender());
                            ch.pipeline().addLast(new ProtobufEncoder());
                            ch.pipeline().addLast(new SubReqClientHandler());
                        }
                    });

            //initiate the asynchronous connect operation
            ChannelFuture f = b.connect(host, port).sync();

            //wait until the client channel is closed
            f.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            group.shutdownGracefully();
        }
    }

    public static void main(String[] args) {
        new SubReqClient().connect(8080, "127.0.0.1");
    }
}

The handlers added to the pipeline are basically the same as on the server side, but there is one that has not been discussed yet: ProtobufVarint32LengthFieldPrepender. Opening its source, you find this very clear javadoc:

/**
 * An encoder that prepends the Google Protocol Buffers
 * <a href="http://code.google.com/apis/protocolbuffers/docs/encoding.html#varints">Base
 * 128 Varints</a> integer length field. For example:
 * <pre>
 * BEFORE ENCODE (300 bytes)       AFTER ENCODE (302 bytes)
 * +---------------+               +--------+---------------+
 * | Protobuf Data |-------------->| Length | Protobuf Data |
 * |  (300 bytes)  |               | 0xAC02 |  (300 bytes)  |
 * +---------------+               +--------+---------------+
 * </pre>
 *
 * @see CodedOutputStream
 */

Its job is to prepend a length header to the outgoing data, and that header is what makes half-packet handling possible. If you comment this handler out, the program may fail at runtime: without the length prefix, ProtobufVarint32FrameDecoder on the receiving side cannot cope when it receives a half-packet message. This handler should therefore normally always be added.
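The 0xAC02 in the javadoc above is the Base 128 varint encoding of the length 300. A minimal sketch of how such a prefix is produced (my own illustration, not Netty's implementation):

```java
import java.io.ByteArrayOutputStream;

// A minimal sketch (not Netty's code) of the Base 128 varint length prefix
// that ProtobufVarint32LengthFieldPrepender writes before each message.
public class VarintPrefixDemo {

    // Encode an int as a Base 128 varint: 7 bits per byte, least significant
    // group first, high bit set on every byte except the last.
    static byte[] writeVarint32(int value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7F) != 0) {
            out.write((value & 0x7F) | 0x80);
            value >>>= 7;
        }
        out.write(value);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // The javadoc example: a 300-byte payload gets the prefix 0xAC 0x02.
        byte[] prefix = writeVarint32(300);
        System.out.printf("%02X %02X%n", prefix[0] & 0xFF, prefix[1] & 0xFF); // prints "AC 02"
    }
}
```

On the receiving side, ProtobufVarint32FrameDecoder reads this prefix back and waits until that many payload bytes have arrived before passing a frame on, which is exactly how the half-packet problem is solved.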

Results


Start the server and then the client; the results are as follows:

server:

client:

Both run successfully; protobuf turns out to be quite simple to use.
