A Study of the Binder Mechanism

Binder, Android's inter-process communication mechanism, is the foundation on which every service is able to serve its clients. Starting from mediaserver, this article attempts to analyze how the Binder mechanism is implemented.
1. Overview
The Binder mechanism serves two purposes:
1. Managing the various services on the phone
2. Letting applications use those services through Binder
To make this possible, the various services are registered with the ServiceManager while the phone boots. Afterwards, an application can query for a service and choose to communicate with it.
2. Registering a Service
Take mediaserver as an example. Its entry point is in main_mediaserver.cpp, and the Android.mk in the same directory shows that it is built into the mediaserver binary. The code is as follows:
int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    LOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();
    MediaPlayerService::instantiate();
    CameraService::instantiate();
    AudioPolicyService::instantiate();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}
Here ProcessState::self() returns the unique ProcessState instance. It is process-wide: each process has exactly one. It is defined in ProcessState.cpp, and its constructor looks like this:
ProcessState::ProcessState()
    : mDriverFD(open_driver())//open_driver opens the /dev/binder device for reading and writing
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // XXX Ideally, there should be a specific define for whether we
        // have mmap (or whether we could possibly have the kernel module
        // available).
#if !defined(HAVE_WIN32_IPC)
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        //mmap creates a memory mapping for the fd: reads of the mapped region correspond to reads at an offset into the fd. BINDER_VM_SIZE here is 1MB-8KB, and PROT_READ makes the mapping read-only, i.e. we can only read transactions out of the binder device
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            LOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
#else
        mDriverFD = -1;
#endif
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened.  Terminating.");
}
In other words, every process that calls this method opens the binder device. /dev/binder is the foundation of the Binder mechanism; as we will see later, Binder uses this memory mapping to make inter-process communication more efficient.

Now look at sp<IServiceManager> sm = defaultServiceManager(); it is defined in IServiceManager.cpp:
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
    
    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }
    
    return gDefaultServiceManager;
}
The key is the interface_cast<IServiceManager>(...) expression. First consider its argument, ProcessState::self()->getContextObject(NULL), which lives in ProcessState.cpp and actually involves three functions:
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
    return getStrongProxyForHandle(0);
}
//mHandleToObject is a Vector. This function looks up the entry for the given handle; if none exists, it inserts a new entry whose binder is NULL
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
    const size_t N=mHandleToObject.size();
    if (N <= (size_t)handle) {
        handle_entry e;
        e.binder = NULL;
        e.refs = NULL;
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return NULL;
    }
    return &mHandleToObject.editItemAt(handle);
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}
The first time getStrongProxyForHandle(0) is called, the new BpBinder(handle) path is necessarily taken, i.e. an sp<BpBinder> is returned. BpBinder derives from IBinder (see the class diagram in Reference 2), so ProcessState::getContextObject returns an sp<BpBinder>.

Back to gDefaultServiceManager = interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL));
interface_cast is defined in frameworks/base/include/binder/IInterface.h:
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
It is a template function, so interface_cast<IServiceManager>() returns IServiceManager::asInterface(obj).
IInterface.h defines the following macros:
//used for the declaration inside the class
#define DECLARE_META_INTERFACE(INTERFACE)                               \
    static const android::String16 descriptor;                          \
    static android::sp<I##INTERFACE> asInterface(                       \
            const android::sp<android::IBinder>& obj);                  \
    virtual const android::String16& getInterfaceDescriptor() const;    \
    I##INTERFACE();                                                     \
    virtual ~I##INTERFACE();       
//used for the implementation
#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
    const android::String16 I##INTERFACE::descriptor(NAME);             \
    const android::String16&                                            \
            I##INTERFACE::getInterfaceDescriptor() const {              \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    android::sp<I##INTERFACE> I##INTERFACE::asInterface(                \
            const android::sp<android::IBinder>& obj)                   \
    {                                                                   \
        android::sp<I##INTERFACE> intr;                                 \
        if (obj != NULL) {                                              \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == NULL) {                                         \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }                                                                   \
    I##INTERFACE::I##INTERFACE() { }                                    \
    I##INTERFACE::~I##INTERFACE() { }                                   \
The IServiceManager class in IServiceManager.h uses the declaration macro, and IServiceManager.cpp contains
IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");
In other words, IServiceManager::descriptor = "android.os.IServiceManager", and two member functions are generated: getInterfaceDescriptor and asInterface.
Focus on asInterface. It calls queryLocalInterface on the obj parameter, an IBinder instance. From the analysis above we know obj is actually a BpBinder, which does not override queryLocalInterface; it inherits the base IBinder implementation, which simply returns NULL. So, following the flow, asInterface returns new BpServiceManager(obj). We will meet asInterface again later and see why it works this way.

From the class diagram in Reference 2, BpServiceManager derives from BpRefBase, whose member variable mRemote stores the obj parameter here; this variable is used later.
class BpServiceManager is defined in IServiceManager.cpp. It derives from BpInterface<IServiceManager>, and BpInterface is defined in IInterface.h:
template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase
{
public:
                                BpInterface(const sp<IBinder>& remote);

protected:
    virtual IBinder*            onAsBinder();
};
From the definition of BpInterface we can see that BpInterface<IServiceManager> inherits from IServiceManager, so the sp<BpServiceManager> object returned by defaultServiceManager is compatible with sp<IServiceManager>.

Back in main, the next few lines all register services. Take MediaPlayerService as an example:
MediaPlayerService::instantiate();
This function is implemented in MediaPlayerService.cpp:
void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}
We know defaultServiceManager() returns an sp<BpServiceManager>; its addService function is implemented in IServiceManager.cpp as follows:
virtual status_t addService(const String16& name, const sp<IBinder>& service)
    {
        Parcel data, reply;
        //write "android.os.IServiceManager"
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        //write service name
        data.writeString16(name);
        //write service; later we will see exactly what gets written here
        data.writeStrongBinder(service);
        LOGI("Remote_transact add service");
        status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
        return err == NO_ERROR ? reply.readExceptionCode() : err;
    }
The key is the transact call. First, remote(), i.e. BpServiceManager::remote(): as analyzed above, it returns the BpBinder, so this calls BpBinder::transact, which is defined in BpBinder.cpp as follows:
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
         //mHandle is the 0 passed in when the BpBinder was created
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
IPCThreadState::self() returns the unique IPCThreadState instance for the current thread. It is thread-local, implemented via TLS: every thread gets its own instance.
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    LOGI("ipc_client pid=%d,handle=%d,code=%d,flags=%d\n",getpid(),handle,code,flags);

    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }
    
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        LOGI("ipc_client pid=%d send cmd BC_TRANSACTION",getpid());
        //writes the arguments into the mOut variable, queueing the data to be sent
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
    
    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            LOGI(">>>>>> CALLING transaction 4");
        } else {
            LOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            //the data is sent and received here
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            LOGI("<<<<<< RETURNING transaction 4");
        } else {
            LOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif
        
        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }
    
    return err;
}

Two functions inside IPCThreadState::transact deserve attention:
1.err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
This function writes the data into the mOut member, a Parcel object.
2.err = waitForResponse(reply);
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        //talkWithDriver is the function that sends and receives the data
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        
        cmd = mIn.readInt32();
        
        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        case BR_REPLY:
            ...
            goto finish;
        ...
        default:
            //execute the received command
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }
    
    return err;
}

talkWithDriver is defined as follows:
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    LOG_ASSERT(mProcess->mDriverFD >= 0, "Binder driver is not opened");
   
    ...

    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        //use ioctl to have the kernel send and receive the data; note that the fd here is mProcess->mDriverFD, which belongs to the process and is shared by its threads
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    ...
    
    return err;
}
talkWithDriver itself is simple: it invokes the ioctl system call to send and receive in one step. The BC_ prefix marks commands the process sends to the driver, while the BR_ prefix marks returns the process receives from the driver.
A series of such commands is exchanged. The following log shows one exchange; handle=0 means the target is servicemanager, and code=3 means addService:
I/IPCThreadState(   75): ipc_client pid=75,handle=0,code=3,flags=0
I/IPCThreadState(   75): ipc_client pid=75 send cmd BC_TRANSACTION
I/Binder  (   27): svc_server pid=27 recv BR_NOOP
I/Binder  (   27): svc_server pid=27 recv BR_TRANSACTION
I/ServiceManager(   27): svc_server pid=27 ipc_client pid=75 target=0x0 code=3 uid=1000
I/ServiceManager(   27): svc_server add_service('batteryinfo',0xa) uid=1000
I/Binder  (   27): svc_server pid=27 send cmd BC_ACQUIRE
I/Binder  (   27): svc_server pid=27 send cmd  BC_REQUEST_DEATH_NOTIFICATION
I/Binder  (   27): svc_server pid=27 send cmd BC_FREE_BUFFER
I/Binder  (   27): svc_server pid=27 send cmd BC_REPLY,status=0
I/Binder  (   27): svc_server pid=27 recv BR_NOOP
I/Binder  (   27): svc_server pid=27 recv BR_TRANSACTION_COMPLETE
I/IPCThreadState(   75): ipc_client pid=75 recv cmd=BR_NOOP
I/IPCThreadState(   75): ipc_client pid=75 recv cmd=BR_INCREFS
I/IPCThreadState(   75): ipc_client pid=75 recv cmd=BR_ACQUIRE
I/IPCThreadState(   75): ipc_client pid=75 recv cmd=BR_TRANSACTION_COMPLETE
I/IPCThreadState(   75): ipc_client pid=75 recv cmd=BR_NOOP
I/IPCThreadState(   75): ipc_client pid=75 recv cmd=BR_REPLY
Note in particular that during this exchange waitForResponse, apart from handling the commands it receives, stays asleep until the BR_REPLY indicating that addService succeeded arrives; only then does it leave the loop and return. addService is therefore a synchronous call.

This completes the analysis of MediaPlayerService::instantiate(). The remaining question is: who have we been communicating with?

3. servicemanager
service_manager is the process we have been communicating with. It is built from service_manager.c, and its main function is:
int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024);

    //this ioctl tells the kernel that servicemanager's handle is 0, i.e. the handle used by the BpBinder above
    if (binder_become_context_manager(bs)) {
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);
    return 0;
}
binder_open is defined as follows:
struct binder_state *binder_open(unsigned mapsize)
{
    struct binder_state *bs;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return 0;
    }
    //open the binder device
    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0) {
        fprintf(stderr,"binder: cannot open device (%s)\n",
                strerror(errno));
        goto fail_open;
    }

    bs->mapsize = mapsize;
    //create the memory mapping for the fd
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr,"binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }

        /* TODO: check version */

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return 0;
}
Now look at binder_loop:
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;
    
    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;
        //likewise use ioctl to send and receive data
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }
        //parse the received data with binder_parse
        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        if (res == 0) {
            LOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            LOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
The notable parts of binder_parse are the call to the func callback and binder_send_reply. binder_send_reply sends the reply; the func callback is the svcmgr_handler installed in main, defined as follows:
int svcmgr_handler(struct binder_state *bs,
                   struct binder_txn *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    unsigned len;
    void *ptr;
    uint32_t strict_policy;

   LOGI("svc_server pid=%d ipc_client pid=%d target=%p code=%d uid=%d\n",getpid(),
        txn->sender_pid,txn->target, txn->code,  txn->sender_euid);

    if (txn->target != svcmgr_handle)
        return -1;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s));
        return -1;
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        ptr = do_find_service(bs, s, len);
        if (!ptr)
            break;
        bio_put_ref(reply, ptr);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        ptr = bio_get_ref(msg);
        if (do_add_service(bs, s, len, ptr, txn->sender_euid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        unsigned n = bio_get_uint32(msg);

        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        LOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
Here we can see clearly that add_service is handed to do_add_service, which in essence just appends a new node to a list.

4. IServiceManager revisited
Let us now recap how a service registers itself with servicemanager through Binder:
1. Every process has one ProcessState instance, which holds an fd for the /dev/binder device; through it, the process's threads can talk to servicemanager. The ProcessState instance also keeps a table mapping handles to BpBinders, so the BpBinder for a given handle can be looked up.
2. Through defaultServiceManager, every process obtains a BpServiceManager instance, which wraps a BpBinder: the one with handle 0 in the ProcessState table.
3. The current IPCThreadState uses the process's binder fd and the ioctl system call to talk to the server identified by the BpBinder's handle.
4. Keep the two layers apart: BC_TRANSACTION and BR_REPLY are commands of the Binder mechanism itself, whereas ADD_SERVICE and CHECK_SERVICE are codes agreed between client and server and have nothing to do with Binder.
Looking again at BpServiceManager in IServiceManager.cpp: it is the proxy class for servicemanager, and it runs in the client process. A process that wants to offer a service first obtains a BpServiceManager instance via defaultServiceManager. BpServiceManager provides getService, checkService, addService, and listServices, all implemented through the BpBinder it contains. In other words, BpBinder encapsulates the Binder transport, and the client uses it to communicate with the server side.

5. Service
We now understand how a service talks to servicemanager. The other side of the picture remains: how does an application, acting as a client, talk to a service acting as the server?
Look again at void MediaPlayerService::instantiate():
void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}
The prototype of addService is:
    virtual status_t addService(const String16& name, const sp<IBinder>& service)
As we understand it, this function registers a service with servicemanager: name is the service's name, and service is the object clients will talk to. So what exactly is this service instance? MediaPlayerService is defined in MediaPlayerService.h. From the class diagram in Reference 2, MediaPlayerService derives from BnMediaPlayerService, which derives from BnInterface<IMediaPlayerService> (defined in IMediaPlayerService.h). BnInterface, like BpInterface, is defined in IInterface.h and inherits from both IMediaPlayerService and BBinder, while IMediaPlayerService itself derives from IInterface. BBinder is the server-side counterpart: defined in Binder.h, it is responsible for receiving client requests, as its transact implementation shows:
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}
onTransact is a virtual function that derived classes can override to handle their own codes. The remaining question: what drives BBinder::transact, i.e. where is it called? The answer lies at the end of main in main_mediaserver.cpp:
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
Besides the main thread, this starts an additional thread. Each thread calls talkWithDriver and then executeCommand; when executeCommand handles BR_TRANSACTION, it runs the following fragment:
            if (tr.target.ptr) {
                sp<BBinder> b((BBinder*)tr.cookie);
                const status_t error = b->transact(tr.code, buffer, &reply, tr.flags);
                if (error < NO_ERROR) reply.setError(error);
            } else {
                const status_t error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
                if (error < NO_ERROR) reply.setError(error);
            }
            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }
That is, talkWithDriver sends the current thread's commands to the driver and receives the responses, which executeCommand then processes. While handling BR_TRANSACTION, it recovers the BBinder from the arguments and calls its transact, so the server side handles the user's request; sendReply then returns the result to the client. From its implementation we can see that it actually sends a BC_REPLY command:
status_t IPCThreadState::sendReply(const Parcel& reply, uint32_t flags)
{
    status_t err;
    status_t statusBuffer;
    err = writeTransactionData(BC_REPLY, flags, -1, 0, reply, &statusBuffer);
    if (err < NO_ERROR) return err;
    
    return waitForResponse(NULL, NULL);
}
This is the same mediaserver process: during registration, BpServiceManager acted as the client, using a BpBinder to talk to servicemanager; now the service acts as the server, using a BBinder to talk to its clients.


(Draft notes: reply; local vs. remote; the handle pins down the sender and the destination of a communication.)

6. Binder's Layered Structure

7. Binder in Practice

References
1. File list
frameworks/base/media/mediaserver/main_mediaserver.cpp
frameworks/base/include/binder/IServiceManager.h
frameworks/base/libs/binder/IServiceManager.cpp
frameworks/base/libs/binder/ProcessState.cpp
frameworks/base/include/binder/ProcessState.h
frameworks/base/include/binder/IInterface.h
frameworks/base/media/libmediaplayerservice/MediaPlayerService.cpp
frameworks/base/libs/binder/BpBinder.cpp
frameworks/base/include/binder/BpBinder.h
frameworks/base/libs/binder/IPCThreadState.cpp
frameworks/base/include/binder/IPCThreadState.h
frameworks/base/cmds/servicemanager/service_manager.c
frameworks/base/cmds/servicemanager/binder.c
frameworks/base/media/libmediaplayerservice/MediaPlayerService.h
frameworks/base/include/binder/Binder.h

2. Binder class diagram