Multithreading Internals -- GCD Source Code Analysis

Reading source code is tedious, and it may not help our day-to-day work right away, but for a developer with some experience it is a step that has to be taken. Perhaps we start because colleagues and peers are doing it; perhaps we have hit a bottleneck; or perhaps our work is no longer limited to business features and now involves architecture, design patterns, data structures and algorithms, all of which call for reading source code. At first it is genuinely hard to follow and can be overwhelming, but once we finally work through it, the payoff is real. We no longer merely know how to write something; we know why it works. We learn what certain functions do and how they are implemented underneath. For example, when a feature can be implemented in several ways, understanding the underlying principles tells us the essential differences between those ways. Reading source code also teaches us better design ideas and better solutions to problems, and it trains our patience and persistence. It truly pays off.

If you are not yet familiar with GCD, start with the overview article "Multithreading Internals -- Getting to Know GCD". Below we analyze the latest source release at the time of writing, libdispatch-1008.200.78.tar.gz; it is best to grab the newest GCD source yourself, either from Apple's opensource site or from the GitHub mirror.

Creating a queue: dispatch_queue_create

Let's explore the difference between serial and concurrent queues.
Following the queue-creation function dispatch_queue_create into the source, we see that besides the label and attr we pass in, the system also passes tq = DISPATCH_TARGET_QUEUE_DEFAULT and legacy = true to _dispatch_lane_create_with_target. Inside, it first builds a dispatch_queue_attr_info_t from our attr via _dispatch_queue_attr_to_info:

dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
    return _dispatch_lane_create_with_target(label, attr,
            DISPATCH_TARGET_QUEUE_DEFAULT, true);
}

---------------------

static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
        dispatch_queue_t tq, bool legacy)
{
    dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);
  // ... many lines omitted -- partial listing
    const void *vtable;
    dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;
    if (dqai.dqai_concurrent) {
        vtable = DISPATCH_VTABLE(queue_concurrent);
    } else {
        vtable = DISPATCH_VTABLE(queue_serial);
    }

    dispatch_lane_t dq = _dispatch_object_alloc(vtable,
            sizeof(struct dispatch_lane_s));
    _dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
            DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
            (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));
    dq->dq_label = label;
    dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
            dqai.dqai_relpri);
    // a custom queue's target queue is one of the root queues
    dq->do_targetq = tq;
    _dispatch_object_debug(dq, "%s", __func__);
    return _dispatch_trace_queue_create(dq)._dq;
}

Now search the project for _dispatch_queue_attr_to_info to see how it is implemented.

dispatch_queue_attr_info_t
_dispatch_queue_attr_to_info(dispatch_queue_attr_t dqa)
{
    dispatch_queue_attr_info_t dqai = { };

    if (!dqa) return dqai;

#if DISPATCH_VARIANT_STATIC
    if (dqa == &_dispatch_queue_attr_concurrent) {
        dqai.dqai_concurrent = true;
        return dqai;
    }
#endif

    if (dqa < _dispatch_queue_attrs ||
            dqa >= &_dispatch_queue_attrs[DISPATCH_QUEUE_ATTR_COUNT]) {
        DISPATCH_CLIENT_CRASH(dqa->do_vtable, "Invalid queue attribute");
    }

    size_t idx = (size_t)(dqa - _dispatch_queue_attrs);

    dqai.dqai_inactive = (idx % DISPATCH_QUEUE_ATTR_INACTIVE_COUNT);
    idx /= DISPATCH_QUEUE_ATTR_INACTIVE_COUNT;

    dqai.dqai_concurrent = !(idx % DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT);
    idx /= DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT;

    dqai.dqai_relpri = -(idx % DISPATCH_QUEUE_ATTR_PRIO_COUNT);
    idx /= DISPATCH_QUEUE_ATTR_PRIO_COUNT;

    dqai.dqai_qos = idx % DISPATCH_QUEUE_ATTR_QOS_COUNT;
    idx /= DISPATCH_QUEUE_ATTR_QOS_COUNT;

    dqai.dqai_autorelease_frequency =
            idx % DISPATCH_QUEUE_ATTR_AUTORELEASE_FREQUENCY_COUNT;
    idx /= DISPATCH_QUEUE_ATTR_AUTORELEASE_FREQUENCY_COUNT;

    dqai.dqai_overcommit = idx % DISPATCH_QUEUE_ATTR_OVERCOMMIT_COUNT;
    idx /= DISPATCH_QUEUE_ATTR_OVERCOMMIT_COUNT;

    return dqai;
}

Inside _dispatch_queue_attr_to_info, the dqa we passed in is first checked for NULL; if it is NULL, the empty struct is returned directly, which is the serial-queue case (for a serial queue we normally pass DISPATCH_QUEUE_SERIAL or NULL, and the macro DISPATCH_QUEUE_SERIAL is in fact defined as NULL). Otherwise Apple's decoding logic runs: the attribute's position in the precomputed table is peeled apart field by field to fill the bit-field struct dqai, and the dispatch_queue_attr_info_t is returned. The struct:

typedef struct dispatch_queue_attr_info_s {
    dispatch_qos_t dqai_qos : 8;
    int      dqai_relpri : 8;
    uint16_t dqai_overcommit:2;
    uint16_t dqai_autorelease_frequency:2;
    uint16_t dqai_concurrent:1;
    uint16_t dqai_inactive:1;
} dispatch_queue_attr_info_t;
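The mod/div chain above treats the attribute's index in the flat table as a mixed-radix number, one radix per field. Here is a minimal sketch of that decoding together with the matching encoder, using placeholder radix constants (the real DISPATCH_QUEUE_ATTR_*_COUNT values depend on the libdispatch version, so treat them as assumptions):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Placeholder radices standing in for DISPATCH_QUEUE_ATTR_*_COUNT. */
#define ATTR_INACTIVE_COUNT    2
#define ATTR_CONCURRENCY_COUNT 2
#define ATTR_PRIO_COUNT        16
#define ATTR_QOS_COUNT         7

typedef struct {
    bool     inactive;
    bool     concurrent;
    int      relpri;   /* stored negated, like dqai_relpri */
    unsigned qos;
} attr_info;

/* Decode an attribute index with the same mod/div chain used in
 * _dispatch_queue_attr_to_info: each step peels off one field. */
static attr_info attr_decode(size_t idx) {
    attr_info i = { 0 };
    i.inactive   = idx % ATTR_INACTIVE_COUNT;       idx /= ATTR_INACTIVE_COUNT;
    i.concurrent = !(idx % ATTR_CONCURRENCY_COUNT); idx /= ATTR_CONCURRENCY_COUNT;
    i.relpri     = -(int)(idx % ATTR_PRIO_COUNT);   idx /= ATTR_PRIO_COUNT;
    i.qos        = idx % ATTR_QOS_COUNT;
    return i;
}

/* The matching encoder: the flat attribute table is laid out so that
 * these fields form a mixed-radix number. */
static size_t attr_encode(bool inactive, bool concurrent, int relpri,
        unsigned qos) {
    size_t idx = qos;
    idx = idx * ATTR_PRIO_COUNT        + (size_t)(-relpri);
    idx = idx * ATTR_CONCURRENCY_COUNT + (concurrent ? 0 : 1);
    idx = idx * ATTR_INACTIVE_COUNT    + (inactive ? 1 : 0);
    return idx;
}
```

A round trip through encode/decode recovers every field, which is exactly why libdispatch can store one flat array of attributes and recover the settings from a pointer offset.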

Back in _dispatch_lane_create_with_target, the next steps are: resolve overcommit (enabled by default for serial queues, disabled for concurrent ones) and fetch the root queue that will manage this queue via _dispatch_get_root_queue; there is one root queue per priority, and each priority comes in an overcommit and a non-overcommit flavor. Then dqai.dqai_concurrent selects between concurrent and serial: DISPATCH_VTABLE uses OS_dispatch_##name##_class to produce the corresponding class, which is stored in the queue's do_vtable field. Next come the allocation (_dispatch_object_alloc) and the initializer (_dispatch_queue_init), whose third parameter is the queue width: a serial queue gets width 1 (at most one task at a time), while a concurrent queue gets DISPATCH_QUEUE_WIDTH_MAX, i.e. DISPATCH_QUEUE_WIDTH_FULL(0x1000) - 2 = 0xffe, which is 4094 in decimal. After that, dq's properties such as dq_label and dq_priority are set, and finally _dispatch_trace_queue_create(dq)._dq is returned. Following that call leads to _dispatch_introspection_queue_create(dqu), and eventually to dispatch_introspection_queue_get_info inside _dispatch_introspection_queue_create_hook, which fills a dispatch_introspection_queue_s struct describing the queue (serial or concurrent):

dispatch_introspection_queue_s diq = {
        .queue = dq->_as_dq,
        .target_queue = dq->do_targetq,
        .label = dq->dq_label,
        .serialnum = dq->dq_serialnum,
        .width = dq->dq_width,
        .suspend_count = _dq_state_suspend_cnt(dq_state) + dq->dq_side_suspend_cnt,
        .enqueued = _dq_state_is_enqueued(dq_state) && !global,
        .barrier = _dq_state_is_in_barrier(dq_state) && !global,
        .draining = (dq->dq_items_head == (void*)~0ul) ||
                (!dq->dq_items_head && dq->dq_items_tail),
        .global = global,
        .main = dx_type(dq) == DISPATCH_QUEUE_MAIN_TYPE,
    };

---------

dispatch_introspection_queue_s diq = {
        .queue = dwl->_as_dq,
        .target_queue = dwl->do_targetq,
        .label = dwl->dq_label,
        .serialnum = dwl->dq_serialnum,
        .width = 1,
        .suspend_count = 0,
        .enqueued = _dq_state_is_enqueued(dq_state),
        .barrier = _dq_state_is_in_barrier(dq_state),
        .draining = 0,
        .global = 0,
        .main = 0,
    };

As for dispatch_get_global_queue, it fetches a suitable queue from _dispatch_get_root_queue underneath; its width is DISPATCH_QUEUE_WIDTH_FULL(0x1000) - 1 = 0xfff, and the various global queues are stored in the global array struct dispatch_queue_s _dispatch_root_queues[]. dispatch_get_main_queue, on the other hand, is returned via DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q). A global search turns up:

DISPATCH_DECL_SUBCLASS(dispatch_queue_main, dispatch_queue_serial);
#define OS_OBJECT_DECL_SUBCLASS(name, super) \
        OS_OBJECT_DECL_IMPL(name, <OS_OBJECT_CLASS(super)>)

So dispatch_queue_main is simply a subclass of the serial class dispatch_queue_serial, and its width is likewise 0x1: one task at a time.
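Summarizing the widths seen so far; the macro names and the 0x1000 base match the libdispatch source quoted above:

```c
#include <assert.h>

/* Queue width constants from libdispatch: a private concurrent queue
 * gets FULL - 2 concurrency slots, a global root queue FULL - 1, and a
 * serial queue (including the main queue) exactly 1. */
#define DISPATCH_QUEUE_WIDTH_FULL 0x1000u
#define DISPATCH_QUEUE_WIDTH_POOL (DISPATCH_QUEUE_WIDTH_FULL - 1) /* global queues */
#define DISPATCH_QUEUE_WIDTH_MAX  (DISPATCH_QUEUE_WIDTH_FULL - 2) /* concurrent queues */
```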

Synchronous dispatch: dispatch_sync

Next, the synchronous function dispatch_sync. Its source leads into _dispatch_sync_f and from there into _dispatch_sync_f_inline:

DISPATCH_NOINLINE
static void
_dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
        uintptr_t dc_flags)
{
    _dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}

---------

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    if (likely(dq->dq_width == 1)) {
        return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
    }

    if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
        DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
    }

    dispatch_lane_t dl = upcast(dq)._dl;
    // Global concurrent queues and queues bound to non-dispatch threads
    // always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
    if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
        return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
    }

    if (unlikely(dq->do_targetq->do_targetq)) {
        return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
    }
    _dispatch_introspection_sync_begin(dl);
    _dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
            _dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}

First it checks whether dq_width == 1, i.e. whether the current queue is serial. If so it runs _dispatch_barrier_sync_f, which after a chain of calls ends up in _dispatch_barrier_sync_f_inline. There, _dispatch_tid_self() obtains the current thread id, then _dispatch_queue_try_acquire_barrier_sync checks the thread/queue state; inside it, _dispatch_queue_try_acquire_barrier_sync_and_suspend uses os_atomic_rmw_loop2o to read the state of the thread the queue depends on. If the queue is a global concurrent queue, or is bound to a non-dispatch thread, the code falls straight into _dispatch_sync_f_slow, which performs the synchronous wait __DISPATCH_WAIT_FOR_QUEUE__. This is where deadlock detection happens: the queue being waited on (_dispatch_wait_prepare) is compared against the queue currently draining via _dq_state_drain_locked_by(dq_state, dsc->dsc_waiter); if they match, it crashes with "dispatch_sync called on queue already owned by current thread". If no deadlock occurs, _dispatch_sync_invoke_and_complete_recurse runs last: it pushes the task with _dispatch_thread_frame_push, executes func (the block), lets the Mach layer observe completion through a hook, then pops with _dispatch_thread_frame_pop. This push-execute-pop sequence is why tasks submitted with sync, even to a concurrent queue, execute in order.
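The deadlock check can be modeled in a few lines: a queue's state records which thread is currently draining it, and a sync waiter that already owns that drain lock could never make progress. This is a toy model with our own names, not libdispatch's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t tid_t;

/* Toy model of a serial queue's state word: the tid of the thread
 * currently draining it (0 = nobody). */
typedef struct { tid_t drain_owner; } toy_queue_state;

/* Mirrors the idea behind _dq_state_drain_locked_by(): if the waiter
 * is the very thread already draining the queue, waiting would
 * deadlock, so libdispatch crashes instead of hanging. */
static bool would_deadlock(const toy_queue_state *st, tid_t waiter) {
    return st->drain_owner != 0 && st->drain_owner == waiter;
}
```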

_dispatch_barrier_sync_f_inline

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    dispatch_tid tid = _dispatch_tid_self();

    if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
        DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
    }

    dispatch_lane_t dl = upcast(dq)._dl;
    // The more correct thing to do would be to merge the qos of the thread
    // that just acquired the barrier lock into the queue state.
    //
    // However this is too expensive for the fast path, so skip doing it.
    // The chosen tradeoff is that if an enqueue on a lower priority thread
    // contends with this fast path, this thread may receive a useless override.
    //
    // Global concurrent queues and queues bound to non-dispatch threads
    // always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
    
    if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
        return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
                DC_FLAG_BARRIER | dc_flags);
    }

    if (unlikely(dl->do_targetq->do_targetq)) {
        return _dispatch_sync_recurse(dl, ctxt, func,
                DC_FLAG_BARRIER | dc_flags);
    }
    _dispatch_introspection_sync_begin(dl);
    _dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
            DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
                    dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
}

DISPATCH_NOINLINE
static void
_dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    _dispatch_barrier_sync_f_inline(dq, ctxt, func, dc_flags);
}

A global concurrent queue, or a queue bound to a non-dispatch thread, again goes straight into _dispatch_sync_f_slow, with the same logic as above. In the barrier case the code then checks whether a target queue exists: _dispatch_sync_recurse recursively walks targets via _dispatch_sync_wait until the final target is found, and then _dispatch_sync_invoke_and_complete_recurse completes the callout.

Asynchronous dispatch: dispatch_async

Entering the dispatch_async source, it starts with initialization:

void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
    dispatch_continuation_t dc = _dispatch_continuation_alloc();
    uintptr_t dc_flags = DC_FLAG_CONSUME;
    dispatch_qos_t qos;

    qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
    _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}

Inside _dispatch_continuation_init, the block passed to dispatch_async is wrapped as func and saved into the continuation's dc_func field. Then _dispatch_continuation_async runs, ending in dx_push, defined by the macro #define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z). Following the global concurrent queue's implementation, _dispatch_root_queue_push, we eventually reach _dispatch_root_queue_poke_slow.
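dx_push is an instance of C-style polymorphism: each queue class carries a table of function pointers (do_vtable), and a push goes through whichever dq_push that class installed. A stripped-down model of the pattern, with illustrative names:

```c
#include <assert.h>

struct toy_queue;

/* A class's method table, like libdispatch's vtable structs. */
typedef struct {
    void (*dq_push)(struct toy_queue *q, int item);
} toy_vtable;

typedef struct toy_queue {
    const toy_vtable *do_vtable;
    int last_pushed;
    int push_count;
} toy_queue;

/* The "root queue" implementation of dq_push. */
static void root_queue_push(toy_queue *q, int item) {
    q->last_pushed = item;
    q->push_count++;
}

static const toy_vtable root_vtable = { .dq_push = root_queue_push };

/* Same shape as: #define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z) */
#define dx_vtable(x) ((x)->do_vtable)
#define dx_push(x, y) dx_vtable(x)->dq_push((x), (y))
```

This is why following dx_push in the source forks into many implementations: which dq_push runs depends entirely on the vtable installed when the queue was created.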

static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
    int remaining = n;
    int r = ENOSYS;

    _dispatch_root_queues_init();
    _dispatch_debug_root_queue(dq, __func__);
    _dispatch_trace_runtime_event(worker_request, dq, (uint64_t)n);

#if !DISPATCH_USE_INTERNAL_WORKQUEUE
#if DISPATCH_USE_PTHREAD_ROOT_QUEUES
    if (dx_type(dq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE)
#endif
    {
        _dispatch_root_queue_debug("requesting new worker thread for global "
                "queue: %p", dq);
        r = _pthread_workqueue_addthreads(remaining,
                _dispatch_priority_to_pp_prefer_fallback(dq->dq_priority));
        (void)dispatch_assume_zero(r);
        return;
    }
#endif // !DISPATCH_USE_INTERNAL_WORKQUEUE
#if DISPATCH_USE_PTHREAD_POOL
    dispatch_pthread_root_queue_context_t pqc = dq->do_ctxt;
    if (likely(pqc->dpq_thread_mediator.do_vtable)) {
        while (dispatch_semaphore_signal(&pqc->dpq_thread_mediator)) {
            _dispatch_root_queue_debug("signaled sleeping worker for "
                    "global queue: %p", dq);
            if (!--remaining) {
                return;
            }
        }
    }

    bool overcommit = dq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
    if (overcommit) {
        os_atomic_add2o(dq, dgq_pending, remaining, relaxed);
    } else {
        if (!os_atomic_cmpxchg2o(dq, dgq_pending, 0, remaining, relaxed)) {
            _dispatch_root_queue_debug("worker thread request still pending for "
                    "global queue: %p", dq);
            return;
        }
    }

    int can_request, t_count;
    // seq_cst with atomic store to tail <rdar://problem/16932833>
    t_count = os_atomic_load2o(dq, dgq_thread_pool_size, ordered);
    do {
        can_request = t_count < floor ? 0 : t_count - floor;
        if (remaining > can_request) {
            _dispatch_root_queue_debug("pthread pool reducing request from %d to %d",
                    remaining, can_request);
            os_atomic_sub2o(dq, dgq_pending, remaining - can_request, relaxed);
            remaining = can_request;
        }
        if (remaining == 0) {
            _dispatch_root_queue_debug("pthread pool is full for root queue: "
                    "%p", dq);
            return;
        }
    } while (!os_atomic_cmpxchgvw2o(dq, dgq_thread_pool_size, t_count,
            t_count - remaining, &t_count, acquire));

    pthread_attr_t *attr = &pqc->dpq_thread_attr;
    pthread_t tid, *pthr = &tid;
#if DISPATCH_USE_MGR_THREAD && DISPATCH_USE_PTHREAD_ROOT_QUEUES
    if (unlikely(dq == &_dispatch_mgr_root_queue)) {
        pthr = _dispatch_mgr_root_queue_init();
    }
#endif
    do {
        _dispatch_retain(dq); // released in _dispatch_worker_thread
        while ((r = pthread_create(pthr, attr, _dispatch_worker_thread, dq))) {
            if (r != EAGAIN) {
                (void)dispatch_assume_zero(r);
            }
            _dispatch_temporary_resource_shortage();
        }
    } while (--remaining);
#else
    (void)floor;
#endif // DISPATCH_USE_PTHREAD_POOL
}

_dispatch_root_queue_poke_slow first initializes and validates the root queues, then on the workqueue path calls _pthread_workqueue_addthreads to ask the kernel-managed workqueue for threads directly. On the pthread-pool path, the first do-while loop checks whether the pool can still grant threads: if remaining exceeds what can be requested, the request is reduced and "pthread pool reducing request from %d to %d" is logged; if nothing can be granted it logs "pthread pool is full for root queue: %p" and returns. If threads can be created, the second do-while loop runs, and here we can see the global concurrent queue creating threads with pthread_create until the remaining count drops to zero.
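The clamping logic of that first do-while can be isolated into a helper of our own, mirroring the can_request computation in the listing above: floor threads are held back, and a request is reduced to whatever the pool can still grant.

```c
#include <assert.h>

/* Mirrors: can_request = t_count < floor ? 0 : t_count - floor;
 * plus the reduction of remaining when it exceeds can_request. */
static int clamp_request(int remaining, int t_count, int floor) {
    int can_request = t_count < floor ? 0 : t_count - floor;
    return remaining > can_request ? can_request : remaining;
}
```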

Singletons: dispatch_once

dispatch_once leads into dispatch_once_f. It first inspects the dispatch_once_t marker: if the state is DLOCK_ONCE_DONE, the block has already run and is never run again. If it has never run, _dispatch_once_gate_tryenter is attempted; if the state is DLOCK_ONCE_UNLOCKED the caller wins the gate, _dispatch_once_callout runs the block through _dispatch_client_callout, and _dispatch_once_gate_broadcast then marks the gate DLOCK_ONCE_DONE.

void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
    dispatch_once_gate_t l = (dispatch_once_gate_t)val;
    //DLOCK_ONCE_DONE
#if !DISPATCH_ONCE_INLINE_FASTPATH || DISPATCH_ONCE_USE_QUIESCENT_COUNTER
    uintptr_t v = os_atomic_load(&l->dgo_once, acquire);
    if (likely(v == DLOCK_ONCE_DONE)) {
        return;
    }
#if DISPATCH_ONCE_USE_QUIESCENT_COUNTER
    if (likely(DISPATCH_ONCE_IS_GEN(v))) {
        return _dispatch_once_mark_done_if_quiesced(l, v);
    }
#endif
#endif
    if (_dispatch_once_gate_tryenter(l)) {
        // singleton callout -- v -> DLOCK_ONCE_DONE
        return _dispatch_once_callout(l, ctxt, func);
    }
    return _dispatch_once_wait(l);
}
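The gate's state machine can be sketched single-threaded with C11 atomics. This is a toy model: the real gate stores the locking thread's id and parks other callers in _dispatch_once_wait, and the DONE sentinel here mirrors DLOCK_ONCE_DONE (all bits set).

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define ONCE_UNLOCKED ((uintptr_t)0)
#define ONCE_LOCKED   ((uintptr_t)1)
#define ONCE_DONE     (~(uintptr_t)0)

typedef struct { _Atomic uintptr_t dgo_once; } toy_once_gate;

static void toy_once(toy_once_gate *l, int *ctxt, void (*func)(int *)) {
    uintptr_t v = atomic_load_explicit(&l->dgo_once, memory_order_acquire);
    if (v == ONCE_DONE) return;            /* fast path: already ran */
    uintptr_t expected = ONCE_UNLOCKED;
    /* _dispatch_once_gate_tryenter: only one caller wins the gate */
    if (atomic_compare_exchange_strong(&l->dgo_once, &expected, ONCE_LOCKED)) {
        func(ctxt);                        /* _dispatch_once_callout */
        /* _dispatch_once_gate_broadcast: publish DONE for later callers */
        atomic_store_explicit(&l->dgo_once, ONCE_DONE, memory_order_release);
    }
    /* a losing caller would block in _dispatch_once_wait until DONE */
}

static void incr(int *c) { (*c)++; }
```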

Semaphores: dispatch_semaphore

dispatch_semaphore_create mainly initializes the semaphore and stores its value in dsema_value. dispatch_semaphore_wait first decrements the value with os_atomic_dec2o; if the result is still >= 0 it returns immediately, otherwise it enters the waiting logic in _dispatch_semaphore_wait_slow, which blocks until dispatch_semaphore_signal wakes it or the timeout expires.

long
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
    long value = os_atomic_dec2o(dsema, dsema_value, acquire);
    if (likely(value >= 0)) {
        return 0;
    }
    return _dispatch_semaphore_wait_slow(dsema, timeout);
}

dispatch_semaphore_signal is the mirror image: os_atomic_inc2o increments the value. If the new value is > 0 nobody was waiting and it returns immediately; a value of LONG_MIN means the counter overflowed from unbalanced calls and it crashes; otherwise _dispatch_semaphore_signal_slow wakes a waiter.

long
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
    long value = os_atomic_inc2o(dsema, dsema_value, release);
    if (likely(value > 0)) {
        return 0;
    }
    if (unlikely(value == LONG_MIN)) {
        DISPATCH_CLIENT_CRASH(value,
                "Unbalanced call to dispatch_semaphore_signal()");
    }
    return _dispatch_semaphore_signal_slow(dsema);
}

long
_dispatch_semaphore_signal_slow(dispatch_semaphore_t dsema)
{
    _dispatch_sema4_create(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
    _dispatch_sema4_signal(&dsema->dsema_sema, 1);
    return 1;
}
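The two fast paths can be modeled with plain C11 atomics. This is a sketch only: the slow paths that actually block and wake are stubbed out as boolean return values.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

typedef struct { _Atomic long dsema_value; } toy_sema;

/* dispatch_semaphore_wait fast path: decrement, and proceed only if the
 * result is still non-negative; otherwise the real code blocks in
 * _dispatch_semaphore_wait_slow. Returns true when no blocking is needed. */
static bool toy_wait_fast(toy_sema *s) {
    long v = atomic_fetch_sub_explicit(&s->dsema_value, 1,
            memory_order_acquire) - 1;
    return v >= 0;
}

/* dispatch_semaphore_signal fast path: increment, and only if the new
 * value is still <= 0 does the real code wake a waiter in
 * _dispatch_semaphore_signal_slow. Returns true when no wake is needed. */
static bool toy_signal_fast(toy_sema *s) {
    long v = atomic_fetch_add_explicit(&s->dsema_value, 1,
            memory_order_release) + 1;
    return v > 0;
}
```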

Dispatch groups: dispatch_group

dispatch_group_create allocates a dispatch_group_t with _dispatch_object_alloc, initializes it, and returns it. dispatch_group_enter then subtracts from the group's state with os_atomic_sub_orig2o, and dispatch_group_leave adds it back. The two must be paired: dispatch_group_enter crashes with DISPATCH_CLIENT_CRASH when too many nested enters overflow the counter, and dispatch_group_leave crashes when called on an already-balanced group (the old value was 0). When the final leave brings the count back to balance, dispatch_group_leave calls _dispatch_group_wake:

DISPATCH_ALWAYS_INLINE
static inline dispatch_group_t
_dispatch_group_create_with_count(uint32_t n)
{
    dispatch_group_t dg = _dispatch_object_alloc(DISPATCH_VTABLE(group),
            sizeof(struct dispatch_group_s));
    dg->do_next = DISPATCH_OBJECT_LISTLESS;
    dg->do_targetq = _dispatch_get_default_queue(false);
    if (n) {
        os_atomic_store2o(dg, dg_bits,
                -n * DISPATCH_GROUP_VALUE_INTERVAL, relaxed);
        os_atomic_store2o(dg, do_ref_cnt, 1, relaxed); // <rdar://22318411>
    }
    return dg;
}

void
dispatch_group_leave(dispatch_group_t dg)
{
    // The value is incremented on a 64bits wide atomic so that the carry for
    // the -1 -> 0 transition increments the generation atomically.
    uint64_t new_state, old_state = os_atomic_add_orig2o(dg, dg_state,
            DISPATCH_GROUP_VALUE_INTERVAL, release);
    uint32_t old_value = (uint32_t)(old_state & DISPATCH_GROUP_VALUE_MASK);

    if (unlikely(old_value == DISPATCH_GROUP_VALUE_1)) {
        old_state += DISPATCH_GROUP_VALUE_INTERVAL;
        do {
            new_state = old_state;
            if ((old_state & DISPATCH_GROUP_VALUE_MASK) == 0) {
                new_state &= ~DISPATCH_GROUP_HAS_WAITERS;
                new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
            } else {
                // If the group was entered again since the atomic_add above,
                // we can't clear the waiters bit anymore as we don't know for
                // which generation the waiters are for
                new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
            }
            if (old_state == new_state) break;
        } while (unlikely(!os_atomic_cmpxchgv2o(dg, dg_state,
                old_state, new_state, &old_state, relaxed)));
        return _dispatch_group_wake(dg, old_state, true);
    }

    if (unlikely(old_value == 0)) {
        DISPATCH_CLIENT_CRASH((uintptr_t)old_value,
                "Unbalanced call to dispatch_group_leave()");
    }
}

void
dispatch_group_enter(dispatch_group_t dg)
{
    // The value is decremented on a 32bits wide atomic so that the carry
    // for the 0 -> -1 transition is not propagated to the upper 32bits.
    uint32_t old_bits = os_atomic_sub_orig2o(dg, dg_bits,
            DISPATCH_GROUP_VALUE_INTERVAL, acquire);
    uint32_t old_value = old_bits & DISPATCH_GROUP_VALUE_MASK;
    if (unlikely(old_value == 0)) {
        _dispatch_retain(dg); // <rdar://problem/22318411>
    }
    if (unlikely(old_value == DISPATCH_GROUP_VALUE_MAX)) {
        DISPATCH_CLIENT_CRASH(old_bits,
                "Too many nested calls to dispatch_group_enter()");
    }
}
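Stripping away the bit packing, the enter/leave balance reduces to a signed counter. This toy model shows which transitions crash and which one wakes the group:

```c
#include <assert.h>

typedef struct { long value; } toy_group;  /* 0 == balanced */

typedef enum { LEAVE_OK, LEAVE_WAKE, LEAVE_UNBALANCED } leave_result;

/* dispatch_group_enter: one step away from balance. */
static void toy_enter(toy_group *g) { g->value -= 1; }

/* dispatch_group_leave: the -1 -> 0 transition triggers
 * _dispatch_group_wake; leaving a balanced group is the
 * "Unbalanced call to dispatch_group_leave()" client crash. */
static leave_result toy_leave(toy_group *g) {
    long old = g->value;
    g->value += 1;
    if (old == 0)  return LEAVE_UNBALANCED;
    if (old == -1) return LEAVE_WAKE;
    return LEAVE_OK;
}
```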

_dispatch_group_wake walks the tasks queued on the group's notify list in a do-while loop, dispatching each with _dispatch_continuation_async. Note that both the final dispatch_group_leave and the tail of _dispatch_group_notify (when the group is already balanced) end up calling _dispatch_group_wake to run the notify blocks.

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq,
        dispatch_continuation_t dsn)
{
    uint64_t old_state, new_state;
    dispatch_continuation_t prev;

    dsn->dc_data = dq;
    _dispatch_retain(dq);

    prev = os_mpsc_push_update_tail(os_mpsc(dg, dg_notify), dsn, do_next);
    if (os_mpsc_push_was_empty(prev)) _dispatch_retain(dg);
    os_mpsc_push_update_prev(os_mpsc(dg, dg_notify), prev, dsn, do_next);
    if (os_mpsc_push_was_empty(prev)) {
        os_atomic_rmw_loop2o(dg, dg_state, old_state, new_state, release, {
            new_state = old_state | DISPATCH_GROUP_HAS_NOTIFS;
            if ((uint32_t)old_state == 0) {
                os_atomic_rmw_loop_give_up({
                    return _dispatch_group_wake(dg, new_state, false);
                });
            }
        });
    }
}

DISPATCH_NOINLINE
static void
_dispatch_group_wake(dispatch_group_t dg, uint64_t dg_state, bool needs_release)
{
    uint16_t refs = needs_release ? 1 : 0; // <rdar://problem/22318411>

    if (dg_state & DISPATCH_GROUP_HAS_NOTIFS) {
        dispatch_continuation_t dc, next_dc, tail;

        // Snapshot before anything is notified/woken <rdar://problem/8554546>
        dc = os_mpsc_capture_snapshot(os_mpsc(dg, dg_notify), &tail);
        do {
            dispatch_queue_t dsn_queue = (dispatch_queue_t)dc->dc_data;
            next_dc = os_mpsc_pop_snapshot_head(dc, tail, do_next);
            _dispatch_continuation_async(dsn_queue, dc,
                    _dispatch_qos_from_pp(dc->dc_priority), dc->dc_flags);
            _dispatch_release(dsn_queue);
        } while ((dc = next_dc));

        refs++;
    }

    if (dg_state & DISPATCH_GROUP_HAS_WAITERS) {
        _dispatch_wake_by_address(&dg->dg_gen);
    }

    if (refs) _dispatch_release_n(dg, refs);
}

No discussion of groups is complete without dispatch_group_async, which is really a wrapper around dispatch_group_enter and dispatch_group_leave. In the dispatch_group_async source, after _dispatch_continuation_init saves the task, _dispatch_continuation_group_async runs: it first calls dispatch_group_enter, then goes through _dispatch_continuation_async -> dx_push -> _dispatch_root_queue_poke and finally _dispatch_client_callout to execute the task; once the task completes (observed via the Mach layer's completion hook), dispatch_group_leave runs.

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_group_async(dispatch_group_t dg, dispatch_queue_t dq,
        dispatch_continuation_t dc, dispatch_qos_t qos)
{
    dispatch_group_enter(dg);
    dc->dc_data = dg;
    _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}

static inline void
_dispatch_continuation_with_group_invoke(dispatch_continuation_t dc)
{
    struct dispatch_object_s *dou = dc->dc_data;
    unsigned long type = dx_type(dou);
    if (type == DISPATCH_GROUP_TYPE) {
        _dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
        _dispatch_trace_item_complete(dc);
        dispatch_group_leave((dispatch_group_t)dou);
    } else {
        DISPATCH_INTERNAL_CRASH(dx_type(dou), "Unexpected object type");
    }
}
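The ordering that makes dispatch_group_async work -- enter before the task is queued, leave only after the block has run -- can be seen in a synchronous sketch. The queue machinery is stubbed out here; only the order of operations is modeled:

```c
#include <assert.h>
#include <string.h>

static char trace[64];
static void record(const char *s) { strcat(trace, s); }

static void client_block(void) { record("block;"); }

/* Models _dispatch_continuation_group_async plus
 * _dispatch_continuation_with_group_invoke: enter happens at submit
 * time, the block runs later on the queue, and leave follows it. */
static void toy_group_async(void (*block)(void)) {
    record("enter;");  /* dispatch_group_enter */
    /* ... continuation pushed, later invoked on the target queue ... */
    block();           /* _dispatch_client_callout */
    record("leave;");  /* dispatch_group_leave */
}
```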

This article is a record of my own learning path; I hope it helps others too. Share knowledge, grow together! Original: https://www.jianshu.com/p/07a62a14e258
