Netty Source Code Analysis (12) - New Connection Accept

When we studied the NioEventLoop execution process, we saw that it consists of three phases: detecting IO events (which include new connections), processing IO events, and running all tasks. In the detection phase, the event loop polls events through the selector it holds, and that is where new connections are detected. The same code path is reused here.

The new connection accept process we examine today is roughly as follows:

  1. Detect the new connection.
  2. Once detected, create a NioSocketChannel, i.e. the client-side channel.
  3. Assign a NioEventLoop to that channel and register the channel on the NioEventLoop's selector. From then on, all reads and writes on this channel are managed by that NioEventLoop.
  4. Finally, register the read event (OP_READ) with the selector; this registration reuses the same logic as the accept-event registration during server startup.

Netty's connection multiplexing means that multiple connections share the single thread held by one NioEventLoop.
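
For orientation, here is a minimal sketch of the usual two-group server setup using the standard Netty bootstrap API (port 8080 and the empty ChannelInitializer are placeholders):

    EventLoopGroup bossGroup = new NioEventLoopGroup(1);   //accepts new connections
    EventLoopGroup workerGroup = new NioEventLoopGroup();  //serves accepted channels

    ServerBootstrap bootstrap = new ServerBootstrap();
    bootstrap.group(bossGroup, workerGroup)
            .channel(NioServerSocketChannel.class)
            .childHandler(new ChannelInitializer<SocketChannel>() {
                @Override
                protected void initChannel(SocketChannel ch) {
                    //handlers added here run on the worker NioEventLoop of the accepted channel
                }
            });
    //bind() registers the server channel with a boss NioEventLoop and,
    //once the channel is active, registers OP_ACCEPT on its selector
    ChannelFuture future = bootstrap.bind(8080);
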
When a Netty server starts, it binds a bossGroup (a NioEventLoopGroup); during bind(), the accept event (new connection accept) is registered on the server channel. Once that event is detected during a scan, it gets processed. So our entry point is NioEventLoop#processSelectedKeys():

    private void processSelectedKeys() {
        if (selectedKeys != null) {
            //the optimized path: selectedKeys is Netty's array-backed
            //SelectedSelectionKeySet, swapped into the selector via reflection
            processSelectedKeysOptimized();
        } else {
            processSelectedKeysPlain(selector.selectedKeys());
        }
    }

    private void processSelectedKeysOptimized() {
        for (int i = 0; i < selectedKeys.size; ++i) {
            final SelectionKey k = selectedKeys.keys[i];
            // null out entry in the array to allow to have it GC'ed once the Channel close
            // See https://github.com/netty/netty/issues/2363
            selectedKeys.keys[i] = null;

            final Object a = k.attachment();

            if (a instanceof AbstractNioChannel) {
                //the actual processing happens here
                processSelectedKey(k, (AbstractNioChannel) a);
            } else {
                @SuppressWarnings("unchecked")
                NioTask<SelectableChannel> task = (NioTask<SelectableChannel>) a;
                processSelectedKey(k, task);
            }

            if (needsToSelectAgain) {
                // null out entries in the array to allow to have it GC'ed once the Channel close
                // See https://github.com/netty/netty/issues/2363
                selectedKeys.reset(i + 1);

                selectAgain();
                i = -1;
            }
        }
    }
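
The selectedKeys field iterated above is Netty's SelectedSelectionKeySet: an array-backed Set that Netty swaps into the JDK selector via reflection so that iteration avoids HashSet overhead. Its add() boils down to the following (condensed from io.netty.channel.nio.SelectedSelectionKeySet; details vary slightly across Netty versions):

    @Override
    public boolean add(SelectionKey o) {
        if (o == null) {
            return false;
        }
        keys[size++] = o;          //append to the flat array
        if (size == keys.length) {
            increaseCapacity();    //double the backing array when full
        }
        return true;
    }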

The real entry point is NioEventLoop#processSelectedKey(java.nio.channels.SelectionKey, io.netty.channel.nio.AbstractNioChannel). We analyzed this method's overall logic earlier, in the chapter "NioEventLoop execution: processSelectedKeys()", so here we look directly at the new connection handling logic:

    private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
        final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
        //... code omitted ...
        final int readyOps = k.readyOps();
        // Also check for readOps of 0 to workaround possible JDK bug which may otherwise lead
        // to a spin loop
        //if the current NioEventLoop is in the workerGroup this is typically OP_READ;
        //in the bossGroup it is OP_ACCEPT
        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            //entry point for new connection accept as well as read events
            unsafe.read();
        }
    }
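
To make the bitmask test concrete, here is a tiny standalone illustration (not Netty source; the class name is ours) of which readyOps values cause unsafe.read() to be invoked:

    import java.nio.channels.SelectionKey;

    public class ReadyOpsDemo {
        public static void main(String[] args) {
            int[] samples = { SelectionKey.OP_ACCEPT, SelectionKey.OP_READ, SelectionKey.OP_WRITE, 0 };
            for (int readyOps : samples) {
                //same condition as in processSelectedKey; readyOps == 0 covers the JDK spin-loop bug
                boolean callsRead = (readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0
                        || readyOps == 0;
                System.out.println(readyOps + " -> unsafe.read() invoked: " + callsRead);
            }
        }
    }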

The unsafe here was initialized together with the pipeline back in the "Channel creation process" chapter, when the parent class constructor AbstractChannel#AbstractChannel() was called:

    protected AbstractChannel(Channel parent) {
        this.parent = parent;
        id = newId();
        unsafe = newUnsafe();            //the subclass decides which unsafe to create
        pipeline = newChannelPipeline();
    }

The unsafe of NioServerSocketChannel is created by its parent class AbstractNioMessageChannel#newUnsafe(); as you can see, it is the inner class AbstractNioMessageChannel.NioMessageUnsafe:

    @Override
    protected AbstractNioUnsafe newUnsafe() {
        return new NioMessageUnsafe();
    }
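
For contrast, the client-side NioSocketChannel inherits its unsafe from AbstractNioByteChannel, whose read() reads bytes rather than accepting connections:

    //AbstractNioByteChannel#newUnsafe() - the byte-oriented counterpart
    @Override
    protected AbstractNioUnsafe newUnsafe() {
        return new NioByteUnsafe();
    }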

Looking at this class's read() method, its rough flow is:

  1. Loop calling the underlying JDK code to accept a channel and wrap it in Netty's NioSocketChannel, each one representing a newly accepted connection.
  2. Store every obtained channel in a container, tracking how many connections have been accepted; by default at most 16 connections are accepted per batch.
  3. Iterate over the channels in the container and call fireChannelRead, fireChannelReadComplete, and fireExceptionCaught in turn to trigger the corresponding pipeline events.

    private final class NioMessageUnsafe extends AbstractNioUnsafe {
        //temporary storage for the accepted connections
        private final List<Object> readBuf = new ArrayList<Object>();

        @Override
        public void read() {
            assert eventLoop().inEventLoop();
            final ChannelConfig config = config();
            final ChannelPipeline pipeline = pipeline();

            //the server-side accept rate handler
            final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
            allocHandle.reset(config);

            boolean closed = false;
            Throwable exception = null;
            try {
                try {
                    //while loop calling doReadMessages() to create new connection objects
                    do {
                        //accept the underlying JDK channel and add it to the readBuf container
                        int localRead = doReadMessages(readBuf);
                        if (localRead == 0) {
                            break;
                        }
                        if (localRead < 0) {
                            closed = true;
                            break;
                        }
                        //accumulate the accepted count in totalMessages; once 16 (the default
                        //maximum) is reached, continueReading() ends the loop
                        allocHandle.incMessagesRead(localRead);
                        
                    } while (allocHandle.continueReading());
                } catch (Throwable t) {
                    exception = t;
                }
                
                //fire a ChannelRead event for every connection in the readBuf container
                int size = readBuf.size();
                for (int i = 0; i < size; i ++) {
                    readPending = false;
                    pipeline.fireChannelRead(readBuf.get(i));
                }
                //clear the container
                readBuf.clear();
                allocHandle.readComplete();
                //fire ChannelReadComplete: all reads in this batch are done
                pipeline.fireChannelReadComplete();

                if (exception != null) {
                    closed = closeOnReadError(exception);
                    //fire exceptionCaught to propagate the exception
                    pipeline.fireExceptionCaught(exception);
                }

                if (closed) {
                    inputShutdown = true;
                    if (isOpen()) {
                        close(voidPromise());
                    }
                }
            } finally {
                // Check if there is a readPending which was not processed yet.
                // This could be for two reasons:
                // * The user called Channel.read() or ChannelHandlerContext.read() in channelRead(...) method
                // * The user called Channel.read() or ChannelHandlerContext.read() in channelReadComplete(...) method
                //
                // See https://github.com/netty/netty/issues/2254
                if (!readPending && !config.isAutoRead()) {
                    removeReadOp();
                }
            }
        }
    }
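
A note on step 3 of the summary at the top: the pipeline.fireChannelRead(...) call above delivers each NioSocketChannel to ServerBootstrap's internal ServerBootstrapAcceptor, a handler sitting in the server channel's pipeline, which registers the child channel with a worker NioEventLoop. A condensed sketch (not the verbatim source):

    //ServerBootstrap.ServerBootstrapAcceptor#channelRead, condensed
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        final Channel child = (Channel) msg;      //the accepted NioSocketChannel
        child.pipeline().addLast(childHandler);   //attach the user's childHandler
        try {
            //pick a worker NioEventLoop and register the child on its selector
            childGroup.register(child).addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) {
                    if (!future.isSuccess()) {
                        forceClose(child, future.cause());
                    }
                }
            });
        } catch (Throwable t) {
            forceClose(child, t);
        }
    }
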
  • To get the underlying JDK channel, NioServerSocketChannel#doReadMessages() is invoked: it accepts the underlying JDK channel, wraps it in a NioSocketChannel, stores it in the container that was passed in, and returns a count.

    @Override
    protected int doReadMessages(List<Object> buf) throws Exception {
        //accept the underlying JDK channel
        SocketChannel ch = SocketUtils.accept(javaChannel());

        try {
            if (ch != null) {
                //wrap the JDK channel in a Netty channel and store it in the container passed in
                buf.add(new NioSocketChannel(this, ch));
                //one client channel successfully accepted; return the count
                return 1;
            }
        } catch (Throwable t) {
            logger.warn("Failed to create a new channel from an accepted socket.", t);

            try {
                ch.close();
            } catch (Throwable t2) {
                logger.warn("Failed to close a socket.", t2);
            }
        }

        return 0;
    }
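
SocketUtils.accept is just a thin wrapper over the plain JDK accept call; it is essentially the following (condensed from io.netty.util.internal.SocketUtils, which runs the call inside a privileged block):

    public static SocketChannel accept(final ServerSocketChannel serverSocketChannel) throws IOException {
        try {
            return AccessController.doPrivileged(new PrivilegedExceptionAction<SocketChannel>() {
                @Override
                public SocketChannel run() throws IOException {
                    //the plain JDK NIO accept
                    return serverSocketChannel.accept();
                }
            });
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getCause();
        }
    }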

The allocHandle below is the server-side accept rate handler; concretely it is DefaultMaxMessagesRecvByteBufAllocator.MaxMessageHandle. Through incMessagesRead() it maintains the member variable totalMessages, which works together with continueReading() to cap the number of connections accepted in one while loop. After a batch of connections has been collected in the loop, they are processed together.

        //the server-side accept rate handler
        final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();

        @Override
        public final void incMessagesRead(int amt) {
            totalMessages += amt;
        }

        @Override
        public boolean continueReading(UncheckedBooleanSupplier maybeMoreDataSupplier) {
            return config.isAutoRead() && (!respectMaybeMoreData || maybeMoreDataSupplier.get()) &&
                   //check whether the total number of accepted connections has reached the cap;
                   //maxMessagePerRead defaults to 16
                   totalMessages < maxMessagePerRead &&
                   totalBytesRead > 0;
        }
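
The cap is configurable: if a busy acceptor should drain more connections per selector wake-up, the per-read message limit can be raised on the server channel. The option below is real Netty API (deprecated in recent 4.1 releases in favor of configuring the RecvByteBufAllocator directly); the value 64 is just an arbitrary example:

    ServerBootstrap bootstrap = new ServerBootstrap();
    //raise the accept batch cap from the default 16 to 64
    bootstrap.option(ChannelOption.MAX_MESSAGES_PER_READ, 64);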
