ZeroMQ Source Code Analysis: the Context

Using global variables in a library is never a good idea. A library may be loaded by a program many times, yet even then there is only a single set of global variables.


Figure 24.1: ØMQ being used by different libraries


In Figure 24.1, two different, independent libraries both use ZeroMQ, and an application uses both of those libraries.

When that happens, both instances of ZeroMQ access the same variables, which leads to race conditions, strange failures and undefined behaviour.

To prevent this problem, the ZeroMQ library has no global variables. Instead, the user of the library is responsible for creating the global state explicitly. The object containing this global state is called a context. From the user's perspective the context looks like a pool of worker threads; from ZeroMQ's perspective it is just an object that stores whatever global state we need. In the figure above, libA would have its own context and libB would have its own as well, and neither of them can corrupt or overwrite the other's.
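
As a minimal sketch of that situation, the following toy program creates two contexts in one process (standing in for libA's and libB's); each behaves as a fully independent ZeroMQ instance:

#include <zmq.h>

int main (void)
{
    //  Each library would create and own its own context.
    void *ctx_a = zmq_ctx_new ();   //  stands in for libA's context
    void *ctx_b = zmq_ctx_new ();   //  stands in for libB's context

    //  Sockets belong to exactly one context; the two sets of
    //  global state never touch each other.
    void *sa = zmq_socket (ctx_a, ZMQ_REP);
    void *sb = zmq_socket (ctx_b, ZMQ_REP);

    zmq_close (sa);
    zmq_close (sb);
    zmq_ctx_term (ctx_a);
    zmq_ctx_term (ctx_b);
    return 0;
}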


To use ZeroMQ in a program, you first create a context:

void *context = zmq_ctx_new ();


zmq_ctx_new() is implemented as follows:

zmq::ctx_t *ctx = new (std::nothrow) zmq::ctx_t;
return ctx;
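
Since new (std::nothrow) returns NULL on allocation failure instead of throwing, zmq_ctx_new() can hand NULL back to the caller, so application code should check for it:

void *context = zmq_ctx_new ();
if (!context) {
    //  Allocation failed; no context was created.
}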


Next, let's go to ctx.cpp and see what the ctx_t constructor does:

zmq::ctx_t::ctx_t () :
    tag (ZMQ_CTX_TAG_VALUE_GOOD),
    starting (true),
    terminating (false),
    reaper (NULL),
    slot_count (0),
    slots (NULL),
    max_sockets (clipped_maxsocket (ZMQ_MAX_SOCKETS_DFLT)),
    io_thread_count (ZMQ_IO_THREADS_DFLT),
    ipv6 (false)
{
#ifdef HAVE_FORK
    pid = getpid();
#endif
}


Just as stated above, the context is merely an object that stores the global state we need, and the constructor initialises that state:

//  Used to check whether the object is a context.
uint32_t tag;

//  If true, zmq_init has been called but no socket has been created
//  yet. Launching of I/O threads is delayed.
bool starting;

//  If true, zmq_term was already called.
bool terminating;

//  The reaper thread.
zmq::reaper_t *reaper;

//  Array of pointers to mailboxes for both application and I/O threads.
uint32_t slot_count;
mailbox_t **slots;

//  Maximum number of sockets that can be opened at the same time.
int max_sockets;

//  Number of I/O threads to launch.
int io_thread_count;

//  Is IPv6 enabled on this context?
bool ipv6;

//  The process that created this context. Used to detect forking.
pid_t pid;
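
Several of these fields are exactly what the public zmq_ctx_set()/zmq_ctx_get() options manipulate. Here is a small sketch using the standard API; since I/O threads are launched lazily (see create_socket() below), ZMQ_IO_THREADS only takes effect if it is set before the first socket is created:

#include <zmq.h>
#include <assert.h>

int main (void)
{
    void *ctx = zmq_ctx_new ();

    //  These write straight into the context's global state
    //  (io_thread_count, max_sockets and ipv6 above). Set them
    //  before the first zmq_socket() call.
    zmq_ctx_set (ctx, ZMQ_IO_THREADS, 4);
    zmq_ctx_set (ctx, ZMQ_MAX_SOCKETS, 512);
    zmq_ctx_set (ctx, ZMQ_IPV6, 1);

    assert (zmq_ctx_get (ctx, ZMQ_IO_THREADS) == 4);

    zmq_ctx_term (ctx);
    return 0;
}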


Next, let's see how the context's global state drives the rest of the ZeroMQ call flow.

We start by creating the first socket. In the application code:

void *responder = zmq_socket (context, ZMQ_REP);


The corresponding implementation in zmq.cpp is:

void *zmq_socket (void *ctx_, int type_)
{
    if (!ctx_ || !((zmq::ctx_t*) ctx_)->check_tag ()) {
        errno = EFAULT;
        return NULL;
    }
    zmq::ctx_t *ctx = (zmq::ctx_t*) ctx_;
    zmq::socket_base_t *s = ctx->create_socket (type_);
    return (void *) s;
}
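
Seen from the caller's side, the error semantics of this path are: EFAULT for an invalid context and, as create_socket() below shows, ETERM once termination has started or EMFILE when the socket limit is reached:

void *responder = zmq_socket (context, ZMQ_REP);
if (!responder) {
    //  errno is EFAULT (bad context), ETERM (context terminating)
    //  or EMFILE (max_sockets reached).
}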


So a socket is created by the context's own factory function. Let's look at the corresponding create_socket() implementation in ctx.cpp (the annotations below have been translated into English):


zmq::socket_base_t *zmq::ctx_t::create_socket (int type_)
{
    slot_sync.lock ();
    if (unlikely (starting)) {

        starting = false;
        //  Initialise the array of mailboxes. The additional two slots
        //  are for the zmq_ctx_term thread and the reaper thread.
        opt_sync.lock ();
        int mazmq = max_sockets;
        int ios = io_thread_count;
        opt_sync.unlock ();
        slot_count = mazmq + ios + 2;
        slots = (mailbox_t**) malloc (sizeof (mailbox_t*) * slot_count);
        alloc_assert (slots);

        //  Initialise the infrastructure for zmq_ctx_term thread.
        slots [term_tid] = &term_mailbox;

        //  Create the reaper thread.
        reaper = new (std::nothrow) reaper_t (this, reaper_tid);
        alloc_assert (reaper);
        slots [reaper_tid] = reaper->get_mailbox ();
        reaper->start ();

        //  Create the I/O thread objects and launch them.
        for (int i = 2; i != ios + 2; i++) {
            io_thread_t *io_thread = new (std::nothrow) io_thread_t (this, i);
            alloc_assert (io_thread);
            io_threads.push_back (io_thread);
            slots [i] = io_thread->get_mailbox ();
            io_thread->start ();
        }
 
        //  In the unused part of the slot array, create a list of empty slots.
        for (int32_t i = (int32_t) slot_count - 1;
              i >= (int32_t) ios + 2; i--) {
            empty_slots.push_back (i);
            slots [i] = NULL;
        }
    }

    
    //  Once zmq_ctx_term() was called, we can't create new sockets.
    if (terminating) {
        slot_sync.unlock ();
        errno = ETERM;
        return NULL;
    }

    //  If the max_sockets limit was reached, return error.
    if (empty_slots.empty ()) {
        slot_sync.unlock ();
        errno = EMFILE;
        return NULL;
    }

    //  Choose a slot for the socket from the list of empty slots.
    uint32_t slot = empty_slots.back ();
    empty_slots.pop_back ();
 
    //  Generate a new unique socket ID.
    int sid = ((int) max_socket_id.add (1)) + 1;

    //  Create the socket and register its mailbox.
    socket_base_t *s = socket_base_t::create (type_, this, slot, sid);
    if (!s) {
        empty_slots.push_back (slot);
        slot_sync.unlock ();
        return NULL;
    }
    sockets.push_back (s);
    slots [slot] = s->get_mailbox ();

    slot_sync.unlock ();
    return s;
}
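
The slot layout that this function sets up is worth spelling out: slot 0 holds the zmq_ctx_term thread's mailbox (term_tid), slot 1 the reaper's (reaper_tid), slots 2 through io_thread_count + 1 the I/O threads', and all remaining slots are handed out to sockets. When empty_slots runs dry, zmq_socket() fails with EMFILE. A quick sketch that triggers that branch (the limit of 8 is arbitrary):

#include <zmq.h>
#include <errno.h>
#include <stdio.h>

int main (void)
{
    void *ctx = zmq_ctx_new ();
    zmq_ctx_set (ctx, ZMQ_MAX_SOCKETS, 8);

    void *socks [9];
    int i;
    for (i = 0; i != 9; i++) {
        socks [i] = zmq_socket (ctx, ZMQ_PAIR);
        if (!socks [i]) {
            //  Expected on the 9th socket: empty_slots is empty.
            printf ("socket %d failed: %s\n", i, zmq_strerror (errno));
            break;
        }
    }
    while (i-- > 0)
        zmq_close (socks [i]);
    zmq_ctx_term (ctx);
    return 0;
}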

Next, let's analyse the following two lines, which appear in any program that uses ZeroMQ:

zmq_close (responder);
zmq_ctx_destroy (context);


zmq.cpp:

int zmq_close (void *s_)
{
    if (!s_ || !((zmq::socket_base_t*) s_)->check_tag ()) {
        errno = ENOTSOCK;
        return -1;
    }
    ((zmq::socket_base_t*) s_)->close ();
    return 0;
}

As we can see, closing a socket is done by the socket itself; the context does not need to get involved. (As terminate() below shows, the actual destruction is then finished asynchronously by the reaper thread.)


Next, still in zmq.cpp:

int zmq_ctx_destroy (void *ctx_)
{
    return zmq_ctx_term (ctx_);
}


int zmq_ctx_term (void *ctx_)
{
    if (!ctx_ || !((zmq::ctx_t*) ctx_)->check_tag ()) {
        errno = EFAULT;
        return -1;
    }

    int rc = ((zmq::ctx_t*) ctx_)->terminate ();
    int en = errno;

    //  Shut down only if termination was not interrupted by a signal.
    if (!rc || en != EINTR) {
#ifdef ZMQ_HAVE_WINDOWS
        //  On Windows, uninitialise socket layer.
        rc = WSACleanup ();
        wsa_assert (rc != SOCKET_ERROR);
#endif

#if defined ZMQ_HAVE_OPENPGM
        //  Shut down the OpenPGM library.
        if (pgm_shutdown () != TRUE)
            zmq_assert (false);
#endif
    }

    errno = en;
    return rc;
}
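
The EINTR branch matters to callers: if a signal interrupts termination, zmq_ctx_term() returns -1 with errno set to EINTR and the context stays alive, so the call can simply be retried; the restarted flag in terminate() below exists precisely for that case. A typical retry loop:

int rc;
do {
    //  Retry until termination completes or fails for a real reason.
    rc = zmq_ctx_term (context);
} while (rc == -1 && errno == EINTR);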


We continue into ctx.cpp to look at terminate():

int zmq::ctx_t::terminate ()
{
    // Connect up any pending inproc connections, otherwise we will hang
    pending_connections_t copy = pending_connections;
    for (pending_connections_t::iterator p = copy.begin (); p != copy.end (); ++p) {
        zmq::socket_base_t *s = create_socket (ZMQ_PAIR);
        s->bind (p->first.c_str ());
        s->close ();
    }

    slot_sync.lock ();
    if (!starting) {

#ifdef HAVE_FORK
        if (pid != getpid())
        {
            // we are a forked child process. Close all file descriptors
            // inherited from the parent.
            for (sockets_t::size_type i = 0; i != sockets.size (); i++)
            {
                sockets[i]->get_mailbox()->forked();
            }

            term_mailbox.forked();
        }
#endif
        //  Check whether termination was already underway, but interrupted and now
        //  restarted.
        bool restarted = terminating;
        terminating = true;

        //  First attempt to terminate the context.
        if (!restarted) {

            //  First send stop command to sockets so that any blocking calls
            //  can be interrupted. If there are no sockets we can ask reaper
            //  thread to stop.
            for (sockets_t::size_type i = 0; i != sockets.size (); i++)
                sockets [i]->stop ();
            if (sockets.empty ())
                reaper->stop ();
        }
        slot_sync.unlock();

        //  Wait till reaper thread closes all the sockets.
        command_t cmd;
        int rc = term_mailbox.recv (&cmd, -1);
        if (rc == -1 && errno == EINTR)
            return -1;
        errno_assert (rc == 0);
        zmq_assert (cmd.type == command_t::done);
        slot_sync.lock ();
        zmq_assert (sockets.empty ());
    }
    slot_sync.unlock ();

    //  Deallocate the resources.
    delete this;

    return 0;
}
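
To summarise the shutdown handshake: terminate() sends a stop command to every socket, which makes blocking calls on those sockets fail with ETERM; each owning thread then closes its socket, the reaper reaps them, and once the socket list is empty the reaper posts a done command to term_mailbox, unblocking terminate(). A sketch of the usual multi-threaded shutdown pattern built on that behaviour (assuming POSIX threads; the inproc address is arbitrary):

#include <zmq.h>
#include <pthread.h>

static void *worker (void *ctx)
{
    void *s = zmq_socket (ctx, ZMQ_PULL);
    if (!s)
        return NULL;              //  context already terminating
    zmq_bind (s, "inproc://demo");

    char buf [256];
    //  Blocks until zmq_ctx_term() interrupts it with ETERM.
    while (zmq_recv (s, buf, sizeof buf, 0) >= 0) {
        //  ... process the message ...
    }

    //  Closing the socket hands it to the reaper thread, which in
    //  turn lets zmq_ctx_term() in the main thread return.
    zmq_close (s);
    return NULL;
}

int main (void)
{
    void *ctx = zmq_ctx_new ();
    pthread_t t;
    pthread_create (&t, NULL, worker, ctx);

    //  ... the application does its work ...

    zmq_ctx_term (ctx);           //  returns only after worker closes s
    pthread_join (t, NULL);
    return 0;
}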
