Source Code Analysis of NameServer Startup in RocketMQ

In RocketMQ, NamesrvStartup is used as the NameServer's startup class.

Its main method is the startup entry point:

public static void main(String[] args) {
	main0(args);
}

The main0 method:

public static NamesrvController main0(String[] args) {
    try {
        NamesrvController controller = createNamesrvController(args);
        start(controller);
        String tip = "The Name Server boot success. serializeType=" + RemotingCommand.getSerializeTypeConfigInThisServer();
        log.info(tip);
        System.out.printf("%s%n", tip);
        return controller;
    } catch (Throwable e) {
        e.printStackTrace();
        System.exit(-1);
    }

    return null;
}

First, the createNamesrvController method builds the NameServer's controller, a NamesrvController.

The createNamesrvController method:

public static NamesrvController createNamesrvController(String[] args) throws IOException, JoranException {
    System.setProperty(RemotingCommand.REMOTING_VERSION_KEY, Integer.toString(MQVersion.CURRENT_VERSION));
    //PackageConflictDetect.detectFastjson();

    Options options = ServerUtil.buildCommandlineOptions(new Options());
    commandLine = ServerUtil.parseCmdLine("mqnamesrv", args, buildCommandlineOptions(options), new PosixParser());
    if (null == commandLine) {
        System.exit(-1);
        return null;
    }

    final NamesrvConfig namesrvConfig = new NamesrvConfig();
    final NettyServerConfig nettyServerConfig = new NettyServerConfig();
    nettyServerConfig.setListenPort(9876);
    if (commandLine.hasOption('c')) {
        String file = commandLine.getOptionValue('c');
        if (file != null) {
            InputStream in = new BufferedInputStream(new FileInputStream(file));
            properties = new Properties();
            properties.load(in);
            MixAll.properties2Object(properties, namesrvConfig);
            MixAll.properties2Object(properties, nettyServerConfig);

            namesrvConfig.setConfigStorePath(file);

            System.out.printf("load config properties file OK, %s%n", file);
            in.close();
        }
    }

    if (commandLine.hasOption('p')) {
        InternalLogger console = InternalLoggerFactory.getLogger(LoggerName.NAMESRV_CONSOLE_NAME);
        MixAll.printObjectProperties(console, namesrvConfig);
        MixAll.printObjectProperties(console, nettyServerConfig);
        System.exit(0);
    }

    MixAll.properties2Object(ServerUtil.commandLine2Properties(commandLine), namesrvConfig);

    if (null == namesrvConfig.getRocketmqHome()) {
        System.out.printf("Please set the %s variable in your environment to match the location of the RocketMQ installation%n", MixAll.ROCKETMQ_HOME_ENV);
        System.exit(-2);
    }

    LoggerContext lc = (LoggerContext) LoggerFactory.getILoggerFactory();
    JoranConfigurator configurator = new JoranConfigurator();
    configurator.setContext(lc);
    lc.reset();
    configurator.doConfigure(namesrvConfig.getRocketmqHome() + "/conf/logback_namesrv.xml");

    log = InternalLoggerFactory.getLogger(LoggerName.NAMESRV_LOGGER_NAME);

    MixAll.printObjectProperties(log, namesrvConfig);
    MixAll.printObjectProperties(log, nettyServerConfig);

    final NamesrvController controller = new NamesrvController(namesrvConfig, nettyServerConfig);

    // remember all configs to prevent discard
    controller.getConfiguration().registerConfig(properties);

    return controller;
}

Two config entity classes are instantiated here, NamesrvConfig and NettyServerConfig. Their fields correspond to the entries of the configuration file, and MixAll.properties2Object copies the loaded properties onto them.
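The copying itself is done by MixAll.properties2Object. As a rough illustration of the idea only (this is a hypothetical binder, not the actual MixAll source, which covers more types), a reflection-based property-to-setter mapping could look like this:

import java.lang.reflect.Method;
import java.util.Properties;

public class PropertiesBinder {
    // Hypothetical sketch: copy each property onto the matching setXxx setter.
    public static void bind(Properties props, Object target) throws Exception {
        for (Method m : target.getClass().getMethods()) {
            String name = m.getName();
            if (!name.startsWith("set") || name.length() <= 3 || m.getParameterTypes().length != 1) {
                continue;
            }
            // derive the property key from the setter name: "setListenPort" -> "listenPort"
            String key = Character.toLowerCase(name.charAt(3)) + name.substring(4);
            String value = props.getProperty(key);
            if (value == null) {
                continue;
            }
            Class<?> type = m.getParameterTypes()[0];
            if (type == int.class) {
                m.invoke(target, Integer.parseInt(value));
            } else if (type == boolean.class) {
                m.invoke(target, Boolean.parseBoolean(value));
            } else if (type == String.class) {
                m.invoke(target, value);
            }
        }
    }
}

This is why a line such as listenPort = 9876 in the properties file ends up on NettyServerConfig.setListenPort.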

NamesrvConfig:

private String rocketmqHome = System.getProperty(MixAll.ROCKETMQ_HOME_PROPERTY, System.getenv(MixAll.ROCKETMQ_HOME_ENV));
private String kvConfigPath = System.getProperty("user.home") + File.separator + "namesrv" + File.separator + "kvConfig.json";
private String configStorePath = System.getProperty("user.home") + File.separator + "namesrv" + File.separator + "namesrv.properties";
private String productEnvName = "center";
private boolean clusterTest = false;
private boolean orderMessageEnable = false;

NettyServerConfig:

private int listenPort = 8888;
private int serverWorkerThreads = 8;
private int serverCallbackExecutorThreads = 0;
private int serverSelectorThreads = 3;
private int serverOnewaySemaphoreValue = 256;
private int serverAsyncSemaphoreValue = 64;
private int serverChannelMaxIdleTimeSeconds = 120;

private int serverSocketSndBufSize = NettySystemConfig.socketSndbufSize; // 65535
private int serverSocketRcvBufSize = NettySystemConfig.socketRcvbufSize; // 65535
private boolean serverPooledByteBufAllocatorEnable = true;

These fields map to the following configuration file:

##
# Name: NamesrvConfig.rocketmqHome <String>
# Default: (the directory read from the ROCKETMQ_HOME environment variable,
#          which the mqnamesrv shell script sets)
# Description: RocketMQ home directory
# Recommendation: do not set manually
##
rocketmqHome = /usr/rocketmq

##
# Name: NamesrvConfig.kvConfigPath <String>
# Default: $user.home/namesrv/kvConfig.json <derived from the user.home system property at startup>
# Description: path of the KV config file, which holds the configuration of ordered-message topics
# Recommendation: set when ordered messages are enabled
##
kvConfigPath = /root/namesrv/kvConfig.json

##
# Name: NamesrvConfig.configStorePath <String>
# Default: $user.home/namesrv/namesrv.properties <derived from the user.home system property at startup>
# Description: path of the NameServer configuration file
# Recommendation: specify via -c at startup
##
configStorePath = /root/namesrv/namesrv.properties

##
# Name: NamesrvConfig.clusterTest <boolean>
# Default: false <field initializer in the source>
# Description: whether cluster testing is enabled
# Recommendation: do not set manually
##
clusterTest = false

##
# Name: NamesrvConfig.orderMessageEnable <boolean>
# Default: false <field initializer in the source>
# Description: whether ordered messages are supported
# Recommendation: set when ordered messages are enabled
##
orderMessageEnable = false

##
# Name: NettyServerConfig.listenPort <int>
# Default: 9876 <set explicitly after initialization in the source>
# Description: server listen port
# Recommendation: do not set manually
##
listenPort = 9876

##
# Name: NettyServerConfig.serverWorkerThreads <int>
# Default: 8 <field initializer in the source>
# Description: number of threads in the Netty business thread pool
# Recommendation: do not set manually
##
serverWorkerThreads = 8

##
# Name: NettyServerConfig.serverCallbackExecutorThreads <int>
# Default: 0 <field initializer in the source>
# Description: number of threads in the Netty public thread pool. The Netty network layer creates different thread pools for different business types, such as message sending, message consumption, and heartbeat detection; if a business type (RequestCode) has no registered thread pool, its requests run on the public pool
# Recommendation:
##
serverCallbackExecutorThreads = 0

##
# Name: NettyServerConfig.serverSelectorThreads <int>
# Default: 3 <field initializer in the source>
# Description: number of IO threads, i.e. the threads on the NameServer/Broker side that parse requests and return responses; they decode request packets, dispatch them to the business thread pools for the actual work, and write the results back to the caller
# Recommendation: do not set manually
##
serverSelectorThreads = 3

##
# Name: NettyServerConfig.serverOnewaySemaphoreValue <int>
# Default: 256 <field initializer in the source>
# Description: maximum concurrency of oneway send requests
# Recommendation: do not set manually
##
serverOnewaySemaphoreValue = 256

##
# Name: NettyServerConfig.serverAsyncSemaphoreValue <int>
# Default: 64 <field initializer in the source>
# Description: maximum concurrency of asynchronous sends
# Recommendation: do not set manually
##
serverAsyncSemaphoreValue = 64

##
# Name: NettyServerConfig.serverChannelMaxIdleTimeSeconds <int>
# Default: 120 <field initializer in the source>
# Description: maximum idle time of a network connection, in seconds; a connection idle for longer than this is closed
# Recommendation: do not set manually
##
serverChannelMaxIdleTimeSeconds = 120

##
# Name: NettyServerConfig.serverSocketSndBufSize <int>
# Default: 65535 <field initializer in the source>
# Description: socket send buffer size in bytes, i.e. 64KB by default
# Recommendation: do not set manually
##
serverSocketSndBufSize = 65535

##
# Name: NettyServerConfig.serverSocketRcvBufSize <int>
# Default: 65535 <field initializer in the source>
# Description: socket receive buffer size in bytes, i.e. 64KB by default
# Recommendation: do not set manually
##
serverSocketRcvBufSize = 65535

##
# Name: NettyServerConfig.serverPooledByteBufAllocatorEnable <boolean>
# Default: true <field initializer in the source>
# Description: whether pooled ByteBuf allocation is enabled; recommended on
# Recommendation: do not set manually
##
serverPooledByteBufAllocatorEnable = true

##
# Name: NettyServerConfig.useEpollNativeSelector <boolean>
# Default: false <field initializer in the source>
# Description: whether to use the epoll IO model
# Recommendation: enable on Linux
##
useEpollNativeSelector = true

The next part of createNamesrvController, shown above, handles the '-c' option, which loads a configuration file like the one above, and the '-p' option, which prints the namesrvConfig and nettyServerConfig properties and exits. After that comes the logback logging configuration.
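For example (paths here are illustrative): sh mqnamesrv -c /usr/rocketmq/conf/namesrv.properties starts the NameServer with an explicit configuration file, while sh mqnamesrv -p just prints the effective configuration and exits.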

Once all that is done, a NamesrvController instance is created from namesrvConfig and nettyServerConfig.

NamesrvController:

public NamesrvController(NamesrvConfig namesrvConfig, NettyServerConfig nettyServerConfig) {
    this.namesrvConfig = namesrvConfig;
    this.nettyServerConfig = nettyServerConfig;
    this.kvConfigManager = new KVConfigManager(this);
    this.routeInfoManager = new RouteInfoManager();
    this.brokerHousekeepingService = new BrokerHousekeepingService(this);
    this.configuration = new Configuration(
        log,
        this.namesrvConfig, this.nettyServerConfig
    );
    this.configuration.setStorePathFromConfig(this.namesrvConfig, "configStorePath");
}

Notice that the constructor creates a KVConfigManager and a RouteInfoManager.

KVConfigManager:

public class KVConfigManager {
    private final NamesrvController namesrvController;
    private final HashMap<String/* Namespace */, HashMap<String/* Key */, String/* Value */>> configTable =
        new HashMap<String, HashMap<String, String>>();

    public KVConfigManager(NamesrvController namesrvController) {
        this.namesrvController = namesrvController;
    }
    ......
}

KVConfigManager manages KV entries through its configTable, a two-level map keyed first by namespace and then by key.
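A simplified, self-contained sketch of that access pattern (assuming the same namespace -> key -> value layout; the real class additionally persists the table to kvConfigPath and, as printAllPeriodically shows later, guards it with a ReadWriteLock):

import java.util.HashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class KvTableSketch {
    private final HashMap<String, HashMap<String, String>> configTable = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // write path: create the namespace map on demand, then store the pair
    public void put(String namespace, String key, String value) {
        lock.writeLock().lock();
        try {
            configTable.computeIfAbsent(namespace, ns -> new HashMap<>()).put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }

    // read path: two lookups, namespace first, then key
    public String get(String namespace, String key) {
        lock.readLock().lock();
        try {
            HashMap<String, String> kvTable = configTable.get(namespace);
            return kvTable == null ? null : kvTable.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }
}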

RouteInfoManager:

public class RouteInfoManager {
    private final HashMap<String/* topic */, List<QueueData>> topicQueueTable;
    private final HashMap<String/* brokerName */, BrokerData> brokerAddrTable;
    private final HashMap<String/* clusterName */, Set<String/* brokerName */>> clusterAddrTable;
    private final HashMap<String/* brokerAddr */, BrokerLiveInfo> brokerLiveTable;
    private final HashMap<String/* brokerAddr */, List<String>/* Filter Server */> filterServerTable;
    private final static long BROKER_CHANNEL_EXPIRED_TIME = 1000 * 60 * 2;

    public RouteInfoManager() {
        this.topicQueueTable = new HashMap<String, List<QueueData>>(1024);
        this.brokerAddrTable = new HashMap<String, BrokerData>(128);
        this.clusterAddrTable = new HashMap<String, Set<String>>(32);
        this.brokerLiveTable = new HashMap<String, BrokerLiveInfo>(256);
        this.filterServerTable = new HashMap<String, List<String>>(256);
    }
    ......
}

RouteInfoManager holds the routing metadata in these five tables; BROKER_CHANNEL_EXPIRED_TIME (two minutes) is how long an inactive Broker is allowed to linger before being removed.
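To make the relationships between these tables concrete, here is a hypothetical lookup that joins two of them, with simplified stand-in types (QueueInfo and BrokerInfo are invented for illustration; the real GET_ROUTEINTO_BY_TOPIC handler works with QueueData and BrokerData):

import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class RouteLookupSketch {
    // simplified stand-ins for RocketMQ's QueueData and BrokerData
    static class QueueInfo { String brokerName; int writeQueueNums; }
    static class BrokerInfo { Map<Long/* brokerId */, String/* address */> brokerAddrs = new HashMap<>(); }

    private final Map<String, List<QueueInfo>> topicQueueTable = new HashMap<>();
    private final Map<String, BrokerInfo> brokerAddrTable = new HashMap<>();

    // topic -> queues (topicQueueTable), then brokerName -> addresses (brokerAddrTable)
    public Set<String> lookupBrokerAddresses(String topic) {
        Set<String> addresses = new HashSet<>();
        List<QueueInfo> queues = topicQueueTable.get(topic);
        if (queues == null) {
            return addresses;
        }
        for (QueueInfo queue : queues) {
            BrokerInfo broker = brokerAddrTable.get(queue.brokerName);
            if (broker != null) {
                addresses.addAll(broker.brokerAddrs.values());
            }
        }
        return addresses;
    }
}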

The NamesrvController constructor also creates a BrokerHousekeepingService:

public class BrokerHousekeepingService implements ChannelEventListener {
    private static final InternalLogger log = InternalLoggerFactory.getLogger(LoggerName.NAMESRV_LOGGER_NAME);
    private final NamesrvController namesrvController;

    public BrokerHousekeepingService(NamesrvController namesrvController) {
        this.namesrvController = namesrvController;
    }

    @Override
    public void onChannelConnect(String remoteAddr, Channel channel) {
    }

    @Override
    public void onChannelClose(String remoteAddr, Channel channel) {
        this.namesrvController.getRouteInfoManager().onChannelDestroy(remoteAddr, channel);
    }

    @Override
    public void onChannelException(String remoteAddr, Channel channel) {
        this.namesrvController.getRouteInfoManager().onChannelDestroy(remoteAddr, channel);
    }

    @Override
    public void onChannelIdle(String remoteAddr, Channel channel) {
        this.namesrvController.getRouteInfoManager().onChannelDestroy(remoteAddr, channel);
    }
}

As you can see, this is a ChannelEventListener that reacts to Netty channel events: when a Broker's channel is closed, throws an exception, or goes idle, the corresponding routing entries are destroyed.

With the NamesrvController created, control returns to main0, which calls start to actually launch the NameServer service.

The start method:

public static NamesrvController start(final NamesrvController controller) throws Exception {
    if (null == controller) {
        throw new IllegalArgumentException("NamesrvController is null");
    }

    boolean initResult = controller.initialize();
    if (!initResult) {
        controller.shutdown();
        System.exit(-3);
    }

    Runtime.getRuntime().addShutdownHook(new ShutdownHookThread(log, new Callable<Void>() {
        @Override
        public Void call() throws Exception {
            controller.shutdown();
            return null;
        }
    }));

    controller.start();

    return controller;
}

It first calls NamesrvController's initialize method:

public boolean initialize() {
    this.kvConfigManager.load();

    this.remotingServer = new NettyRemotingServer(this.nettyServerConfig, this.brokerHousekeepingService);

    this.remotingExecutor =
        Executors.newFixedThreadPool(nettyServerConfig.getServerWorkerThreads(), new ThreadFactoryImpl("RemotingExecutorThread_"));

    this.registerProcessor();

    this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {

        @Override
        public void run() {
            NamesrvController.this.routeInfoManager.scanNotActiveBroker();
        }
    }, 5, 10, TimeUnit.SECONDS);

    this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {

        @Override
        public void run() {
            NamesrvController.this.kvConfigManager.printAllPeriodically();
        }
    }, 1, 10, TimeUnit.MINUTES);

    if (TlsSystemConfig.tlsMode != TlsMode.DISABLED) {
        // Register a listener to reload SslContext
        try {
            fileWatchService = new FileWatchService(
                new String[] {
                    TlsSystemConfig.tlsServerCertPath,
                    TlsSystemConfig.tlsServerKeyPath,
                    TlsSystemConfig.tlsServerTrustCertPath
                },
                new FileWatchService.Listener() {
                    boolean certChanged, keyChanged = false;
                    @Override
                    public void onChanged(String path) {
                        if (path.equals(TlsSystemConfig.tlsServerTrustCertPath)) {
                            log.info("The trust certificate changed, reload the ssl context");
                            reloadServerSslContext();
                        }
                        if (path.equals(TlsSystemConfig.tlsServerCertPath)) {
                            certChanged = true;
                        }
                        if (path.equals(TlsSystemConfig.tlsServerKeyPath)) {
                            keyChanged = true;
                        }
                        if (certChanged && keyChanged) {
                            log.info("The certificate and private key changed, reload the ssl context");
                            certChanged = keyChanged = false;
                            reloadServerSslContext();
                        }
                    }
                    private void reloadServerSslContext() {
                        ((NettyRemotingServer) remotingServer).loadSslContext();
                    }
                });
        } catch (Exception e) {
            log.warn("FileWatchService created error, can't load the certificate dynamically");
        }
    }

    return true;
}

initialize first calls kvConfigManager's load method, which loads the key-value pairs from the previously configured KV file path into the KVConfigManager map:

public void load() {
    String content = null;
    try {
        content = MixAll.file2String(this.namesrvController.getNamesrvConfig().getKvConfigPath());
    } catch (IOException e) {
        log.warn("Load KV config table exception", e);
    }
    if (content != null) {
        KVConfigSerializeWrapper kvConfigSerializeWrapper =
            KVConfigSerializeWrapper.fromJson(content, KVConfigSerializeWrapper.class);
        if (null != kvConfigSerializeWrapper) {
            this.configTable.putAll(kvConfigSerializeWrapper.getConfigTable());
            log.info("load KV config table OK");
        }
    }
}

The method is straightforward: the JSON KV file is deserialized into a KVConfigSerializeWrapper, whose getConfigTable result is merged into configTable.
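For reference, kvConfig.json is simply the serialized form of that nested map; a hypothetical file (namespace and entries invented for illustration) could look like:

{"configTable":{"ORDER_TOPIC_CONFIG":{"TopicTest":"broker-a:4"}}}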

After the KV config is loaded, a NettyRemotingServer, i.e. the Netty server, is constructed:

public NettyRemotingServer(final NettyServerConfig nettyServerConfig,
        final ChannelEventListener channelEventListener) {
    super(nettyServerConfig.getServerOnewaySemaphoreValue(), nettyServerConfig.getServerAsyncSemaphoreValue());
    this.serverBootstrap = new ServerBootstrap();
    this.nettyServerConfig = nettyServerConfig;
    this.channelEventListener = channelEventListener;

    int publicThreadNums = nettyServerConfig.getServerCallbackExecutorThreads();
    if (publicThreadNums <= 0) {
        publicThreadNums = 4;
    }

    this.publicExecutor = Executors.newFixedThreadPool(publicThreadNums, new ThreadFactory() {
        private AtomicInteger threadIndex = new AtomicInteger(0);

        @Override
        public Thread newThread(Runnable r) {
            return new Thread(r, "NettyServerPublicExecutor_" + this.threadIndex.incrementAndGet());
        }
    });

    if (useEpoll()) {
        this.eventLoopGroupBoss = new EpollEventLoopGroup(1, new ThreadFactory() {
            private AtomicInteger threadIndex = new AtomicInteger(0);

            @Override
            public Thread newThread(Runnable r) {
                return new Thread(r, String.format("NettyEPOLLBoss_%d", this.threadIndex.incrementAndGet()));
            }
        });

        this.eventLoopGroupSelector = new EpollEventLoopGroup(nettyServerConfig.getServerSelectorThreads(), new ThreadFactory() {
            private AtomicInteger threadIndex = new AtomicInteger(0);
            private int threadTotal = nettyServerConfig.getServerSelectorThreads();

            @Override
            public Thread newThread(Runnable r) {
                return new Thread(r, String.format("NettyServerEPOLLSelector_%d_%d", threadTotal, this.threadIndex.incrementAndGet()));
            }
        });
    } else {
        this.eventLoopGroupBoss = new NioEventLoopGroup(1, new ThreadFactory() {
            private AtomicInteger threadIndex = new AtomicInteger(0);

            @Override
            public Thread newThread(Runnable r) {
                return new Thread(r, String.format("NettyNIOBoss_%d", this.threadIndex.incrementAndGet()));
            }
        });

        this.eventLoopGroupSelector = new NioEventLoopGroup(nettyServerConfig.getServerSelectorThreads(), new ThreadFactory() {
            private AtomicInteger threadIndex = new AtomicInteger(0);
            private int threadTotal = nettyServerConfig.getServerSelectorThreads();

            @Override
            public Thread newThread(Runnable r) {
                return new Thread(r, String.format("NettyServerNIOSelector_%d_%d", threadTotal, this.threadIndex.incrementAndGet()));
            }
        });
    }

    loadSslContext();
}

Here a ServerBootstrap is created, and channelEventListener is the BrokerHousekeepingService created earlier. Note that publicExecutor falls back to four threads when serverCallbackExecutorThreads is 0.

Then, depending on whether epoll can be used, two appropriate EventLoopGroups are created: a single-thread boss group that accepts connections, and a selector group with serverSelectorThreads (default 3) threads that handles IO.
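For reference, the useEpoll() check that drives this branch boils down to three conditions (paraphrased from NettyRemotingServer; treat this as a sketch rather than a verbatim quote):

private boolean useEpoll() {
    return RemotingUtil.isLinuxPlatform()               // epoll exists only on Linux
        && nettyServerConfig.isUseEpollNativeSelector() // the config switch described above
        && Epoll.isAvailable();                         // Netty's native transport actually loaded
}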

Once the groups are created, loadSslContext sets up SSL/TLS.

Back in initialize: after the Netty server is created, the registerProcessor method is called:

private void registerProcessor() {
    if (namesrvConfig.isClusterTest()) {

        this.remotingServer.registerDefaultProcessor(new ClusterTestRequestProcessor(this, namesrvConfig.getProductEnvName()),
            this.remotingExecutor);
    } else {

        this.remotingServer.registerDefaultProcessor(new DefaultRequestProcessor(this), this.remotingExecutor);
    }
}

Which branch runs depends on whether clusterTest (cluster testing) is enabled; it is off by default.

In the default case a DefaultRequestProcessor is created. This class is important and is examined in detail below; it is registered with the Netty server through remotingServer's registerDefaultProcessor method:

public void registerDefaultProcessor(NettyRequestProcessor processor, ExecutorService executor) {
    this.defaultRequestProcessor = new Pair<NettyRequestProcessor, ExecutorService>(processor, executor);
}
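Note that the NameServer registers only this default processor: the per-RequestCode processorTable of the remoting server stays empty here, so every incoming request falls through to defaultRequestProcessor; the lookup at the top of processRequestCommand below makes this explicit.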

With that done, two scheduled tasks are submitted:
① periodically remove inactive Brokers (every 10 seconds, after an initial 5-second delay).
RouteInfoManager's scanNotActiveBroker method:

public void scanNotActiveBroker() {
    Iterator<Entry<String, BrokerLiveInfo>> it = this.brokerLiveTable.entrySet().iterator();
    while (it.hasNext()) {
        Entry<String, BrokerLiveInfo> next = it.next();
        long last = next.getValue().getLastUpdateTimestamp();
        if ((last + BROKER_CHANNEL_EXPIRED_TIME) < System.currentTimeMillis()) {
            RemotingUtil.closeChannel(next.getValue().getChannel());
            it.remove();
            log.warn("The broker channel expired, {} {}ms", next.getKey(), BROKER_CHANNEL_EXPIRED_TIME);
            this.onChannelDestroy(next.getKey(), next.getValue().getChannel());
        }
    }
}

This is straightforward: it iterates over the brokerLiveTable built earlier in RouteInfoManager, removes every BrokerLiveInfo whose last update is older than BROKER_CHANNEL_EXPIRED_TIME, and closes the corresponding Channel. (Brokers keep their entries alive by periodically re-registering, which refreshes lastUpdateTimestamp.)
The onChannelDestroy method then performs the matching clean-up in the other tables; the code is repetitive, so it is not shown here.

BrokerLiveInfo records a Broker's liveness information:

private long lastUpdateTimestamp;
private DataVersion dataVersion;
private Channel channel;
private String haServerAddr;

lastUpdateTimestamp records the time of the most recent update and is the key to the liveness check.
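As a quick worked example: BROKER_CHANNEL_EXPIRED_TIME is 1000 * 60 * 2 = 120000 ms, so a Broker whose lastUpdateTimestamp is more than two minutes old satisfies (last + 120000) < System.currentTimeMillis() and is evicted by the next run of the 10-second scan.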

② periodically log the contents of configTable (every 10 minutes).
KVConfigManager's printAllPeriodically method:

public void printAllPeriodically() {
    try {
        this.lock.readLock().lockInterruptibly();
        try {
            log.info("--------------------------------------------------------");

            {
                log.info("configTable SIZE: {}", this.configTable.size());
                Iterator<Entry<String, HashMap<String, String>>> it =
                    this.configTable.entrySet().iterator();
                while (it.hasNext()) {
                    Entry<String, HashMap<String, String>> next = it.next();
                    Iterator<Entry<String, String>> itSub = next.getValue().entrySet().iterator();
                    while (itSub.hasNext()) {
                        Entry<String, String> nextSub = itSub.next();
                        log.info("configTable NS: {} Key: {} Value: {}", next.getKey(), nextSub.getKey(),
                            nextSub.getValue());
                    }
                }
            }
        } finally {
            this.lock.readLock().unlock();
        }
    } catch (InterruptedException e) {
        log.error("printAllPeriodically InterruptedException", e);
    }
}

Simple enough: it writes the contents of configTable to the log, entry by entry, under a read lock.

After these two scheduled tasks are set up, a file-watch listener is registered (when TLS is not disabled) so that the SslContext can be reloaded when the certificate or key files change.

That concludes initialize; back in start, a JVM shutdown hook is registered so that the controller shuts down cleanly.

Finally NamesrvController's start is called; only now does the service physically start.
NamesrvController's start method:

public void start() throws Exception {
    this.remotingServer.start();

    if (this.fileWatchService != null) {
        this.fileWatchService.start();
    }
}

This essentially just starts the Netty server (plus the file-watch service when TLS is enabled).

NettyRemotingServer's start method:

public void start() {
    this.defaultEventExecutorGroup = new DefaultEventExecutorGroup(
        nettyServerConfig.getServerWorkerThreads(),
        new ThreadFactory() {

            private AtomicInteger threadIndex = new AtomicInteger(0);

            @Override
            public Thread newThread(Runnable r) {
                return new Thread(r, "NettyServerCodecThread_" + this.threadIndex.incrementAndGet());
            }
        });

    ServerBootstrap childHandler =
        this.serverBootstrap.group(this.eventLoopGroupBoss, this.eventLoopGroupSelector)
            .channel(useEpoll() ? EpollServerSocketChannel.class : NioServerSocketChannel.class)
            .option(ChannelOption.SO_BACKLOG, 1024)
            .option(ChannelOption.SO_REUSEADDR, true)
            .option(ChannelOption.SO_KEEPALIVE, false)
            .childOption(ChannelOption.TCP_NODELAY, true)
            .childOption(ChannelOption.SO_SNDBUF, nettyServerConfig.getServerSocketSndBufSize())
            .childOption(ChannelOption.SO_RCVBUF, nettyServerConfig.getServerSocketRcvBufSize())
            .localAddress(new InetSocketAddress(this.nettyServerConfig.getListenPort()))
            .childHandler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline()
                        .addLast(defaultEventExecutorGroup, HANDSHAKE_HANDLER_NAME,
                            new HandshakeHandler(TlsSystemConfig.tlsMode))
                        .addLast(defaultEventExecutorGroup,
                            new NettyEncoder(),
                            new NettyDecoder(),
                            new IdleStateHandler(0, 0, nettyServerConfig.getServerChannelMaxIdleTimeSeconds()),
                            new NettyConnectManageHandler(),
                            new NettyServerHandler()
                        );
                }
            });

    if (nettyServerConfig.isServerPooledByteBufAllocatorEnable()) {
        childHandler.childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT);
    }

    try {
        ChannelFuture sync = this.serverBootstrap.bind().sync();
        InetSocketAddress addr = (InetSocketAddress) sync.channel().localAddress();
        this.port = addr.getPort();
    } catch (InterruptedException e1) {
        throw new RuntimeException("this.serverBootstrap.bind().sync() InterruptedException", e1);
    }

    if (this.channelEventListener != null) {
        this.nettyEventExecutor.start();
    }

    this.timer.scheduleAtFixedRate(new TimerTask() {

        @Override
        public void run() {
            try {
                NettyRemotingServer.this.scanResponseTable();
            } catch (Throwable e) {
                log.error("scanResponseTable exception", e);
            }
        }
    }, 1000 * 3, 1000);
}

As you can see, this is the standard Netty server startup sequence: configure the bootstrap, bind the listen port, start the event executor for channel events, and schedule a periodic scanResponseTable to expire pending responses.

In the childHandler setup, note that a NettyServerHandler is added to the pipeline:

class NettyServerHandler extends SimpleChannelInboundHandler<RemotingCommand> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, RemotingCommand msg) throws Exception {
        processMessageReceived(ctx, msg);
    }
}

So once a client establishes a connection with the NameServer, the messages exchanged between them are handled by the processMessageReceived method.

The processMessageReceived method:

public void processMessageReceived(ChannelHandlerContext ctx, RemotingCommand msg) throws Exception {
    final RemotingCommand cmd = msg;
    if (cmd != null) {
        switch (cmd.getType()) {
            case REQUEST_COMMAND:
                processRequestCommand(ctx, cmd);
                break;
            case RESPONSE_COMMAND:
                processResponseCommand(ctx, cmd);
                break;
            default:
                break;
        }
    }
}

Depending on the command type (request or response), a different handler is used. (The opaque field, copied into each response below, is what lets the caller match a response to its original request.)

The processRequestCommand method:

public void processRequestCommand(final ChannelHandlerContext ctx, final RemotingCommand cmd) {
    final Pair<NettyRequestProcessor, ExecutorService> matched = this.processorTable.get(cmd.getCode());
    final Pair<NettyRequestProcessor, ExecutorService> pair = null == matched ? this.defaultRequestProcessor : matched;
    final int opaque = cmd.getOpaque();

    if (pair != null) {
        Runnable run = new Runnable() {
            @Override
            public void run() {
                try {
                    doBeforeRpcHooks(RemotingHelper.parseChannelRemoteAddr(ctx.channel()), cmd);
                    final RemotingCommand response = pair.getObject1().processRequest(ctx, cmd);
                    doAfterRpcHooks(RemotingHelper.parseChannelRemoteAddr(ctx.channel()), cmd, response);

                    if (!cmd.isOnewayRPC()) {
                        if (response != null) {
                            response.setOpaque(opaque);
                            response.markResponseType();
                            try {
                                ctx.writeAndFlush(response);
                            } catch (Throwable e) {
                                log.error("process request over, but response failed", e);
                                log.error(cmd.toString());
                                log.error(response.toString());
                            }
                        } else {

                        }
                    }
                } catch (Throwable e) {
                    log.error("process request exception", e);
                    log.error(cmd.toString());

                    if (!cmd.isOnewayRPC()) {
                        final RemotingCommand response = RemotingCommand.createResponseCommand(RemotingSysResponseCode.SYSTEM_ERROR,
                            RemotingHelper.exceptionSimpleDesc(e));
                        response.setOpaque(opaque);
                        ctx.writeAndFlush(response);
                    }
                }
            }
        };

        if (pair.getObject1().rejectRequest()) {
            final RemotingCommand response = RemotingCommand.createResponseCommand(RemotingSysResponseCode.SYSTEM_BUSY,
                "[REJECTREQUEST]system busy, start flow control for a while");
            response.setOpaque(opaque);
            ctx.writeAndFlush(response);
            return;
        }

        try {
            final RequestTask requestTask = new RequestTask(run, ctx.channel(), cmd);
            pair.getObject2().submit(requestTask);
        } catch (RejectedExecutionException e) {
            if ((System.currentTimeMillis() % 10000) == 0) {
                log.warn(RemotingHelper.parseChannelRemoteAddr(ctx.channel())
                    + ", too many requests and system thread pool busy, RejectedExecutionException "
                    + pair.getObject2().toString()
                    + " request code: " + cmd.getCode());
            }

            if (!cmd.isOnewayRPC()) {
                final RemotingCommand response = RemotingCommand.createResponseCommand(RemotingSysResponseCode.SYSTEM_BUSY,
                    "[OVERLOAD]system busy, start flow control for a while");
                response.setOpaque(opaque);
                ctx.writeAndFlush(response);
            }
        }
    } else {
        String error = " request type " + cmd.getCode() + " not supported";
        final RemotingCommand response =
            RemotingCommand.createResponseCommand(RemotingSysResponseCode.REQUEST_CODE_NOT_SUPPORTED, error);
        response.setOpaque(opaque);
        ctx.writeAndFlush(response);
        log.error(RemotingHelper.parseChannelRemoteAddr(ctx.channel()) + error);
    }
}

Here a Runnable is built and submitted to the executor bound to the processor pair; the core of this Runnable is:

final RemotingCommand response = pair.getObject1().processRequest(ctx, cmd);

which is exactly the processRequest method of the DefaultRequestProcessor mentioned earlier:

public RemotingCommand processRequest(ChannelHandlerContext ctx,
   RemotingCommand request) throws RemotingCommandException {

    if (ctx != null) {
        log.debug("receive request, {} {} {}",
            request.getCode(),
            RemotingHelper.parseChannelRemoteAddr(ctx.channel()),
            request);
    }


    switch (request.getCode()) {
        case RequestCode.PUT_KV_CONFIG:
            return this.putKVConfig(ctx, request);
        case RequestCode.GET_KV_CONFIG:
            return this.getKVConfig(ctx, request);
        case RequestCode.DELETE_KV_CONFIG:
            return this.deleteKVConfig(ctx, request);
        case RequestCode.QUERY_DATA_VERSION:
            return queryBrokerTopicConfig(ctx, request);
        case RequestCode.REGISTER_BROKER:
            Version brokerVersion = MQVersion.value2Version(request.getVersion());
            if (brokerVersion.ordinal() >= MQVersion.Version.V3_0_11.ordinal()) {
                return this.registerBrokerWithFilterServer(ctx, request);
            } else {
                return this.registerBroker(ctx, request);
            }
        case RequestCode.UNREGISTER_BROKER:
            return this.unregisterBroker(ctx, request);
        case RequestCode.GET_ROUTEINTO_BY_TOPIC:
            return this.getRouteInfoByTopic(ctx, request);
        case RequestCode.GET_BROKER_CLUSTER_INFO:
            return this.getBrokerClusterInfo(ctx, request);
        case RequestCode.WIPE_WRITE_PERM_OF_BROKER:
            return this.wipeWritePermOfBroker(ctx, request);
        case RequestCode.GET_ALL_TOPIC_LIST_FROM_NAMESERVER:
            return getAllTopicListFromNameserver(ctx, request);
        case RequestCode.DELETE_TOPIC_IN_NAMESRV:
            return deleteTopicInNamesrv(ctx, request);
        case RequestCode.GET_KVLIST_BY_NAMESPACE:
            return this.getKVListByNamespace(ctx, request);
        case RequestCode.GET_TOPICS_BY_CLUSTER:
            return this.getTopicsByCluster(ctx, request);
        case RequestCode.GET_SYSTEM_TOPIC_LIST_FROM_NS:
            return this.getSystemTopicListFromNs(ctx, request);
        case RequestCode.GET_UNIT_TOPIC_LIST:
            return this.getUnitTopicList(ctx, request);
        case RequestCode.GET_HAS_UNIT_SUB_TOPIC_LIST:
            return this.getHasUnitSubTopicList(ctx, request);
        case RequestCode.GET_HAS_UNIT_SUB_UNUNIT_TOPIC_LIST:
            return this.getHasUnitSubUnUnitTopicList(ctx, request);
        case RequestCode.UPDATE_NAMESRV_CONFIG:
            return this.updateConfig(ctx, request);
        case RequestCode.GET_NAMESRV_CONFIG:
            return this.getConfig(ctx, request);
        default:
            break;
    }
    return null;
}

This method is very direct: each RequestCode dispatches to its own handler. Familiar ones include:
REGISTER_BROKER, which registers a Broker
GET_ROUTEINTO_BY_TOPIC, which fetches the route information of a Topic
Each handler does its work by consulting or modifying the tables created earlier, then wraps the resulting data into a response, which the Runnable sends back through Netty's writeAndFlush.

At this point, the NameServer startup is complete.
