Source Code Analysis of the Server Startup Process in Android's Binder Inter-Process Communication (IPC) Mechanism

The previous article, "How Server and Client Obtain the Service Manager Interface in Android's Binder IPC Mechanism", described how a Server in Android's Binder IPC mechanism obtains the Service Manager remote interface, i.e. the implementation of the defaultServiceManager function. Once a Server has obtained that remote interface, it adds its own Service to the Service Manager and then starts itself up to wait for Client requests. This article walks through the source code to see how a Server starts.

We use a concrete example to illustrate how a Server starts in the Binder mechanism. Android provides media playback as a system service, so we analyze the implementation of MediaPlayerService to understand how the Media Server starts.

First, let's look at the class diagram of MediaPlayerService, which will make the discussion below easier to follow.

Our protagonist, MediaPlayerService, inherits from BnMediaPlayerService. Readers familiar with the Binder mechanism will recognize that BnMediaPlayerService is a Binder Native class responsible for handling Client requests. BnMediaPlayerService inherits from BnInterface, a class template defined in frameworks/base/include/binder/IInterface.h:

template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
{
public:
    virtual sp<IInterface>      queryLocalInterface(const String16& _descriptor);
    virtual const String16&     getInterfaceDescriptor() const;
 
protected:
    virtual IBinder*            onAsBinder();
};

From this we can see that BnMediaPlayerService effectively inherits from both IMediaPlayerService and BBinder. IMediaPlayerService and BBinder in turn inherit from IInterface and IBinder respectively, and both IInterface and IBinder inherit from RefBase.

In fact, BnMediaPlayerService does not receive requests from the Client directly. Instead, IPCThreadState receives the Client's requests, and IPCThreadState in turn relies on the ProcessState class to interact with the Binder driver. The relationship between IPCThreadState and ProcessState was covered in the previous article, "How Server and Client Obtain the Service Manager Interface in Android's Binder IPC Mechanism", and will also come up again below. After IPCThreadState receives a request from the Client, it calls BBinder::transact with the relevant parameters, and BBinder::transact eventually calls BnMediaPlayerService::onTransact, which is where the Client's request is actually handled.
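As a quick illustration of that last hop, the dispatch from BBinder::transact to onTransact is very thin. The following is a simplified sketch of BBinder::transact in frameworks/base/libs/binder/Binder.cpp (details such as the built-in PING_TRANSACTION handling are elided, so treat it as an outline rather than the verbatim source):

    status_t BBinder::transact(
        uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
    {
        data.setDataPosition(0);

        status_t err = NO_ERROR;
        switch (code) {
            // A few built-in codes (e.g. PING_TRANSACTION) are handled here.
            default:
                // Everything else is forwarded to the subclass,
                // e.g. BnMediaPlayerService::onTransact.
                err = onTransact(code, data, reply, flags);
                break;
        }

        if (reply != NULL) {
            reply->setDataPosition(0);
        }

        return err;
    }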

With the MediaPlayerService class structure in mind, we can now get to the main topic of this article.

First, let's see how MediaPlayerService is started. The startup code lives in frameworks/base/media/mediaserver/main_mediaserver.cpp:

int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    LOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();
    MediaPlayerService::instantiate();
    CameraService::instantiate();
    AudioPolicyService::instantiate();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}

We will not concern ourselves with the AudioFlinger and CameraService related code here.

Look at this line of code first:

sp<ProcessState> proc(ProcessState::self());

This statement creates a ProcessState instance via the ProcessState::self() call. ProcessState::self() is a static member function of the ProcessState class, defined in frameworks/base/libs/binder/ProcessState.cpp:

    sp<ProcessState> ProcessState::self()
    {
        if (gProcess != NULL) return gProcess;  
 
        AutoMutex _l(gProcessMutex);
        if (gProcess == NULL) gProcess = new ProcessState;
        return gProcess;
    }

As we can see, this function returns a globally unique ProcessState instance, gProcess. The global instance gProcess is defined in frameworks/base/libs/binder/Static.cpp:

    Mutex gProcessMutex;
    sp<ProcessState> gProcess;

Now look at the ProcessState constructor:

    ProcessState::ProcessState()
        : mDriverFD(open_driver())
        , mVMStart(MAP_FAILED)
        , mManagesContexts(false)
        , mBinderContextCheckFunc(NULL)
        , mBinderContextUserData(NULL)
        , mThreadPoolStarted(false)
        , mThreadPoolSeq(1)
    {
        if (mDriverFD >= 0) {
            // XXX Ideally, there should be a specific define for whether we
            // have mmap (or whether we could possibly have the kernel module
            // availabla).
    #if !defined(HAVE_WIN32_IPC)
            // mmap the binder, providing a chunk of virtual address space to receive transactions.
            mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
            if (mVMStart == MAP_FAILED) {
                // *sigh*
                LOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
                close(mDriverFD);
                mDriverFD = -1;
            }
    #else
            mDriverFD = -1;
    #endif
        }
        if (mDriverFD < 0) {
            // Need to run without the driver, starting our own thread pool.
        }
    }

There are two key points in this constructor: first, it opens the Binder device file /dev/binder via the open_driver function and saves the resulting file descriptor in the member variable mDriverFD; second, it maps the device file /dev/binder into memory via mmap.

Let's look at the implementation of open_driver first; it is also in frameworks/base/libs/binder/ProcessState.cpp:

    static int open_driver()
    {
        if (gSingleProcess) {
            return -1;
        }  
 
        int fd = open("/dev/binder", O_RDWR);
        if (fd >= 0) {
            fcntl(fd, F_SETFD, FD_CLOEXEC);
            int vers;
    #if defined(HAVE_ANDROID_OS)
            status_t result = ioctl(fd, BINDER_VERSION, &vers);
    #else
            status_t result = -1;
            errno = EPERM;
    #endif
            if (result == -1) {
                LOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
                close(fd);
                fd = -1;
            }
            if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
                LOGE("Binder driver protocol does not match user space protocol!");
                close(fd);
                fd = -1;
            }
    #if defined(HAVE_ANDROID_OS)
            size_t maxThreads = 15;
            result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
            if (result == -1) {
                LOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
            }
    #endif  
 
        } else {
            LOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
        }
        return fd;
    }

This function opens the /dev/binder device file with the open system call, then uses ioctl to issue two commands, BINDER_VERSION and BINDER_SET_MAX_THREADS, to interact with the Binder driver. The former obtains the current Binder driver protocol version; the latter tells the Binder driver that MediaPlayerService may have at most 15 threads handling Client requests at the same time.
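As a side note, the same two ioctl commands can be issued from a tiny standalone program. The sketch below is hypothetical and for illustration only; it assumes the binder UAPI header is available as <linux/android/binder.h>, whose exact location and the argument types of these commands vary across kernel versions:

    /* Hypothetical sketch: query the Binder protocol version and set the
     * maximum thread count, mirroring what open_driver() does above. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/android/binder.h>   /* assumption: header location varies by kernel */

    int main(void)
    {
        int fd = open("/dev/binder", O_RDWR);
        if (fd < 0) {
            perror("open /dev/binder");
            return 1;
        }

        struct binder_version vers;
        if (ioctl(fd, BINDER_VERSION, &vers) < 0)
            perror("BINDER_VERSION");
        else
            printf("binder protocol version: %d\n", (int)vers.protocol_version);

        size_t max_threads = 15;        /* same value ProcessState uses */
        if (ioctl(fd, BINDER_SET_MAX_THREADS, &max_threads) < 0)
            perror("BINDER_SET_MAX_THREADS");

        close(fd);
        return 0;
    }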

For the implementation of open inside the Binder driver, see the earlier article "How Service Manager Becomes the Daemon of Android's Binder IPC Mechanism"; it is not repeated here. Once /dev/binder is opened, the Binder driver creates a struct binder_proc instance for the MediaPlayerService process to maintain its process context.

Let's look at how the ioctl call executes the BINDER_VERSION command:

status_t result = ioctl(fd, BINDER_VERSION, &vers);

This call eventually reaches the binder_ioctl function in the Binder driver; we only look at the logic related to BINDER_VERSION:

    static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
    {
        int ret;
        struct binder_proc *proc = filp->private_data;
        struct binder_thread *thread;
        unsigned int size = _IOC_SIZE(cmd);
        void __user *ubuf = (void __user *)arg;  
 
        /*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/  
 
        ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
        if (ret)
            return ret;  
 
        mutex_lock(&binder_lock);
        thread = binder_get_thread(proc);
        if (thread == NULL) {
            ret = -ENOMEM;
            goto err;
        }  
 
        switch (cmd) {
        ......
        case BINDER_VERSION:
            if (size != sizeof(struct binder_version)) {
                ret = -EINVAL;
                goto err;
            }
            if (put_user(BINDER_CURRENT_PROTOCOL_VERSION, &((struct binder_version *)ubuf)->protocol_version)) {
                ret = -EINVAL;
                goto err;
            }
            break;
        ......
        }
        ret = 0;
    err:
            ......
        return ret;
    }

Very simple: it just writes BINDER_CURRENT_PROTOCOL_VERSION into the user buffer pointed to by the arg parameter and returns. BINDER_CURRENT_PROTOCOL_VERSION is a macro defined in kernel/common/drivers/staging/android/binder.h:

    /* This is the current protocol version. */
    #define BINDER_CURRENT_PROTOCOL_VERSION 7

Why cast ubuf to struct binder_version and then write through its protocol_version member, when after all that detour the result still ends up in ubuf? A look at the definition of struct binder_version, also in kernel/common/drivers/staging/android/binder.h, makes this clear:

    /* Use with BINDER_VERSION, driver fills in fields. */
    struct binder_version {
        /* driver protocol version -- increment with incompatible change */
        signed long protocol_version;
    };

As the comment suggests, this indirection is for compatibility: in the future the version number may well not be represented as a signed long.

One important point to note: since this is the first time binder_ioctl is entered after opening /dev/binder, the call to binder_get_thread here creates a struct binder_thread instance for the current thread to maintain its thread context. See "How Service Manager Becomes the Daemon of Android's Binder IPC Mechanism" for details.

Next, let's look at how the ioctl call executes the BINDER_SET_MAX_THREADS command:

result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);

This call also ends up in the Binder driver's binder_ioctl function; we only look at the logic related to BINDER_SET_MAX_THREADS:

    static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
    {
        int ret;
        struct binder_proc *proc = filp->private_data;
        struct binder_thread *thread;
        unsigned int size = _IOC_SIZE(cmd);
        void __user *ubuf = (void __user *)arg;  
 
        /*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/  
 
        ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
        if (ret)
            return ret;  
 
        mutex_lock(&binder_lock);
        thread = binder_get_thread(proc);
        if (thread == NULL) {
            ret = -ENOMEM;
            goto err;
        }  
 
        switch (cmd) {
        ......
        case BINDER_SET_MAX_THREADS:
            if (copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))) {
                ret = -EINVAL;
                goto err;
            }
            break;
        ......
        }
        ret = 0;
    err:
        ......
        return ret;
    }

The implementation here is also very simple: it just saves the value passed in from user space into proc->max_threads. Note that when binder_get_thread is called this time, the struct binder_thread corresponding to the current thread is found in proc->threads, because it was created earlier and stored in the proc->threads red-black tree.

Back in the ProcessState constructor, the device file /dev/binder is also mapped into memory via mmap. That function was described in detail in "How Service Manager Becomes the Daemon of Android's Binder IPC Mechanism", so it is not repeated here. The macro BINDER_VM_SIZE is defined in ProcessState.cpp itself:

    #define BINDER_VM_SIZE ((1*1024*1024) - (4096 *2))

Once the mmap call completes, the Binder driver has reserved BINDER_VM_SIZE bytes of address space for the current process.
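Concretely, BINDER_VM_SIZE works out to 1*1024*1024 - 4096*2 = 1048576 - 8192 = 1040384 bytes, i.e. 1 MB minus two 4 KB pages.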

With that, the globally unique ProcessState instance gProcess has been created. Back in main in frameworks/base/media/mediaserver/main_mediaserver.cpp, the next step is to call defaultServiceManager to obtain the Service Manager remote interface; this was described in detail in the previous article, "How Server and Client Obtain the Service Manager Interface in Android's Binder IPC Mechanism", so readers may refer back to it.

Next comes MediaPlayerService::instantiate, which adds MediaPlayerService to the Service Manager. The function is defined in frameworks/base/media/libmediaplayerservice/MediaPlayerService.cpp:

void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}
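For contrast, the Client side of this registration is the lookup path: once media.player has been added, any process can ask Service Manager for it by name. A minimal sketch of that client code, assuming the IMediaPlayerService headers are on the include path, looks like this:

    sp<IServiceManager> sm = defaultServiceManager();
    // Returns a BpBinder wrapping the handle that Service Manager assigned to "media.player".
    sp<IBinder> binder = sm->getService(String16("media.player"));
    // interface_cast<> wraps that BpBinder in a BpMediaPlayerService proxy.
    sp<IMediaPlayerService> service = interface_cast<IMediaPlayerService>(binder);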

We will focus on the IServiceManager::addService path, since it helps deepen our understanding of the Binder mechanism.

As explained in the previous article, "How Server and Client Obtain the Service Manager Interface in Android's Binder IPC Mechanism", what defaultServiceManager actually returns is a BpServiceManager instance, so let's look at BpServiceManager::addService, implemented in frameworks/base/libs/binder/IServiceManager.cpp:

class BpServiceManager : public BpInterface<IServiceManager>
{
public:
	BpServiceManager(const sp<IBinder>& impl)
		: BpInterface<IServiceManager>(impl)
	{
	}
 
	......
 
	virtual status_t addService(const String16& name, const sp<IBinder>& service)
	{
		Parcel data, reply;
		data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
		data.writeString16(name);
		data.writeStrongBinder(service);
		status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
		return err == NO_ERROR ? reply.readExceptionCode() : err;
	}
 
	......
 
};

The Parcel class here is used to serialize the data for inter-process communication.

Look at this call first:

data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());

IServiceManager::getInterfaceDescriptor() returns a string, namely "android.os.IServiceManager"; see the IServiceManager implementation for details. Now look at Parcel::writeInterfaceToken, in frameworks/base/libs/binder/Parcel.cpp:

    // Write RPC headers.  (previously just the interface token)
    status_t Parcel::writeInterfaceToken(const String16& interface)
    {
        writeInt32(IPCThreadState::self()->getStrictModePolicy() |
                   STRICT_MODE_PENALTY_GATHER);
        // currently the interface identification token is just its name as a string
        return writeString16(interface);
    }

Its job is to write an integer and a string into the Parcel.

Now look at the next call:

data.writeString16(name);

This writes another string into the Parcel; name here is the "media.player" string passed in above.

Continuing:

data.writeStrongBinder(service);

This writes a Binder object into the Parcel. We will examine this function carefully, because it involves passing a Binder entity across processes, which is fairly involved and deserves close attention; it is also one of the keys to understanding the Binder mechanism. Note that the service parameter here is a MediaPlayerService object.

    status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
    {
        return flatten_binder(ProcessState::self(), val, this);
    }

Does the flatten_binder function look familiar? In the earlier article "How Service Manager Becomes the Daemon of Android's Binder IPC Mechanism", we mentioned that the Binder driver uses struct flat_binder_object to represent a Binder object in transit. Its definition is as follows:

/*
 * This is the flattened representation of a Binder object for transfer
 * between processes.  The 'offsets' supplied as part of a binder transaction
 * contains offsets into the data where these structures occur.  The Binder
 * driver takes care of re-writing the structure type and data as it moves
 * between processes.
 */
struct flat_binder_object {
	/* 8 bytes for large_flat_header. */
	unsigned long		type;
	unsigned long		flags;
 
	/* 8 bytes of data. */
	union {
		void		*binder;	/* local object */
		signed long	handle;		/* remote object */
	};
 
	/* extra data associated with local object */
	void			*cookie;
};

For the meaning of each member, see the reference article "Android Binder Design and Implementation" (Android Binder設計與實現).

Let's step into the flatten_binder function:

    status_t flatten_binder(const sp<ProcessState>& proc,
        const sp<IBinder>& binder, Parcel* out)
    {
        flat_binder_object obj;  
 
        obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
        if (binder != NULL) {
            IBinder *local = binder->localBinder();
            if (!local) {
                BpBinder *proxy = binder->remoteBinder();
                if (proxy == NULL) {
                    LOGE("null proxy");
                }
                const int32_t handle = proxy ? proxy->handle() : 0;
                obj.type = BINDER_TYPE_HANDLE;
                obj.handle = handle;
                obj.cookie = NULL;
            } else {
                obj.type = BINDER_TYPE_BINDER;
                obj.binder = local->getWeakRefs();
                obj.cookie = local;
            }
        } else {
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = NULL;
            obj.cookie = NULL;
        }  
 
        return finish_flatten_binder(binder, obj, out);
    }

It first initializes the flags field of the flat_binder_object:

obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;

Here 0x7f is the lowest priority at which a thread handling request packets destined for this Binder entity may run, and FLAT_BINDER_FLAG_ACCEPTS_FDS indicates that this Binder entity accepts file descriptors; when it receives one, the corresponding file is opened in this process.
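For reference, these two pieces of information occupy separate bit fields of flags. In the binder kernel header they are defined roughly as follows (values quoted from memory, so verify them against your kernel version):

    enum {
        FLAT_BINDER_FLAG_PRIORITY_MASK = 0xff,  /* low 8 bits: minimum handler thread priority */
        FLAT_BINDER_FLAG_ACCEPTS_FDS   = 0x100, /* this node accepts file descriptors */
    };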

The binder passed in is the MediaPlayerService instance new-ed in MediaPlayerService::instantiate, so it is not NULL. Moreover, since MediaPlayerService inherits from BBinder, it is a local Binder entity, so binder->localBinder returns a non-NULL BBinder pointer, and the following statements are executed:

    obj.type = BINDER_TYPE_BINDER;
    obj.binder = local->getWeakRefs();
    obj.cookie = local;

This sets the remaining members of the flat_binder_object. Note that local, the pointer to this Binder entity, is stored in the cookie member of the flat_binder_object.

The function then calls finish_flatten_binder to write this flat_binder_object into the Parcel:

    inline static status_t finish_flatten_binder(
        const sp<IBinder>& binder, const flat_binder_object& flat, Parcel* out)
    {
        return out->writeObject(flat, false);
    }

Parcel::writeObject is implemented as follows:

    status_t Parcel::writeObject(const flat_binder_object& val, bool nullMetaData)
    {
        const bool enoughData = (mDataPos+sizeof(val)) <= mDataCapacity;
        const bool enoughObjects = mObjectsSize < mObjectsCapacity;
        if (enoughData && enoughObjects) {
    restart_write:
            *reinterpret_cast<flat_binder_object*>(mData+mDataPos) = val;  
 
            // Need to write meta-data?
            if (nullMetaData || val.binder != NULL) {
                mObjects[mObjectsSize] = mDataPos;
                acquire_object(ProcessState::self(), val, this);
                mObjectsSize++;
            }  
 
            // remember if it's a file descriptor
            if (val.type == BINDER_TYPE_FD) {
                mHasFds = mFdsKnown = true;
            }  
 
            return finishWrite(sizeof(flat_binder_object));
        }  
 
        if (!enoughData) {
            const status_t err = growData(sizeof(val));
            if (err != NO_ERROR) return err;
        }
        if (!enoughObjects) {
            size_t newSize = ((mObjectsSize+2)*3)/2;
            size_t* objects = (size_t*)realloc(mObjects, newSize*sizeof(size_t));
            if (objects == NULL) return NO_MEMORY;
            mObjects = objects;
            mObjectsCapacity = newSize;
        }  
 
        goto restart_write;
    }

Besides writing the flat_binder_object into the Parcel, this also records the offset of the flat_binder_object within the Parcel:

mObjects[mObjectsSize] = mDataPos;

This is because, when the data passed between processes carries Binder objects, the Binder driver needs to do further processing to keep the Binder entities consistent across processes. We will see below how the driver handles these Binder objects.

Back in BpServiceManager::addService, the following statement is executed:

status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);

Referring back to the class diagram in "How Server and Client Obtain the Service Manager Interface in Android's Binder IPC Mechanism", the remote member function here comes from the BpRefBase class and returns a BpBinder pointer. So we continue into BpBinder::transact:

    status_t BpBinder::transact(
        uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
    {
        // Once a binder has died, it will never come back to life.
        if (mAlive) {
            status_t status = IPCThreadState::self()->transact(
                mHandle, code, data, reply, flags);
            if (status == DEAD_OBJECT) mAlive = 0;
            return status;
        }  
 
        return DEAD_OBJECT;
    }

This in turn calls IPCThreadState::transact to do the actual work. Note that mHandle here is 0 and code is ADD_SERVICE_TRANSACTION. ADD_SERVICE_TRANSACTION was passed in as a parameter above, but why is mHandle 0? Because this object represents the Service Manager remote interface, whose handle value is always 0; see "How Server and Client Obtain the Service Manager Interface in Android's Binder IPC Mechanism" for details.
Now step into IPCThreadState::transact and see what it does:

    status_t IPCThreadState::transact(int32_t handle,
                                      uint32_t code, const Parcel& data,
                                      Parcel* reply, uint32_t flags)
    {
        status_t err = data.errorCheck();  
 
        flags |= TF_ACCEPT_FDS;  
 
        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
                << handle << " / code " << TypeCode(code) << ": "
                << indent << data << dedent << endl;
        }  
 
        if (err == NO_ERROR) {
            LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
                (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
            err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
        }  
 
        if (err != NO_ERROR) {
            if (reply) reply->setError(err);
            return (mLastError = err);
        }  
 
        if ((flags & TF_ONE_WAY) == 0) {
            #if 0
            if (code == 4) { // relayout
                LOGI(">>>>>> CALLING transaction 4");
            } else {
                LOGI(">>>>>> CALLING transaction %d", code);
            }
            #endif
            if (reply) {
                err = waitForResponse(reply);
            } else {
                Parcel fakeReply;
                err = waitForResponse(&fakeReply);
            }
            #if 0
            if (code == 4) { // relayout
                LOGI("<<<<<< RETURNING transaction 4");
            } else {
                LOGI("<<<<<< RETURNING transaction %d", code);
            }
            #endif  
 
            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                    << handle << ": ";
                if (reply) alog << indent << *reply << dedent << endl;
                else alog << "(none requested)" << endl;
            }
        } else {
            err = waitForResponse(NULL, NULL);
        }  
 
        return err;
    }

The flags parameter of IPCThreadState::transact defaults to 0, and no explicit argument was passed above, so it is 0 here.

The function first calls writeTransactionData to prepare a struct binder_transaction_data, which will shortly be handed to the Binder driver. struct binder_transaction_data was described in detail in "How Service Manager Becomes the Daemon of Android's Binder IPC Mechanism"; readers may want to reread that. For convenience, its definition is listed again here:

    struct binder_transaction_data {
        /* The first two are only used for bcTRANSACTION and brTRANSACTION,
         * identifying the target and contents of the transaction.
         */
        union {
            size_t  handle; /* target descriptor of command transaction */
            void    *ptr;   /* target descriptor of return transaction */
        } target;
        void        *cookie;    /* target object cookie */
        unsigned int    code;       /* transaction command */  
 
        /* General information about the transaction. */
        unsigned int    flags;
        pid_t       sender_pid;
        uid_t       sender_euid;
        size_t      data_size;  /* number of bytes of data */
        size_t      offsets_size;   /* number of bytes of offsets */  
 
        /* If this transaction is inline, the data immediately
         * follows here; otherwise, it ends with a pointer to
         * the data buffer.
         */
        union {
            struct {
                /* transaction data */
                const void  *buffer;
                /* offsets from buffer to flat_binder_object structs */
                const void  *offsets;
            } ptr;
            uint8_t buf[8];
        } data;
    };

writeTransactionData is implemented as follows:

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;
 
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
 
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = statusBuffer;
        tr.offsets_size = 0;
        tr.data.ptr.offsets = NULL;
    } else {
        return (mLastError = err);
    }
 
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));
 
    return NO_ERROR;
}

Note that cmd here is BC_TRANSACTION. The function is simple; in this scenario it initializes the local variable tr with the following statements:

    tr.data_size = data.ipcDataSize();
    tr.data.ptr.buffer = data.ipcData();
    tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
    tr.data.ptr.offsets = data.ipcObjects();

Recalling the steps above, the content written into tr.data.ptr.buffer is equivalent to the following:

writeInt32(IPCThreadState::self()->getStrictModePolicy() |
               STRICT_MODE_PENALTY_GATHER);
writeString16("android.os.IServiceManager");
writeString16("media.player");
writeStrongBinder(new MediaPlayerService());

This data contains one Binder entity, MediaPlayerService, so tr.offsets_size is set to one entry (1 * sizeof(size_t) bytes), and tr.data.ptr.offsets points to the offset of this MediaPlayerService object within tr.data.ptr.buffer. Finally, the contents of tr are stored in the IPCThreadState member variable mOut.
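To visualize the result, the two buffers handed to the driver look roughly like this (a schematic sketch, not to scale):

    tr.data.ptr.buffer:  | strict mode policy (int32) | "android.os.IServiceManager" | "media.player" | flat_binder_object |
    tr.data.ptr.offsets: | offset of the flat_binder_object within the buffer (one size_t entry) |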
Back in IPCThreadState::transact, (flags & TF_ONE_WAY) == 0 is true and reply is not NULL, so execution eventually takes the waitForResponse(reply) path. Let's look at the implementation of waitForResponse:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;
 
    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
 
        cmd = mIn.readInt32();
 
        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }
 
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
 
        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;
 
        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;
 
        case BR_ACQUIRE_RESULT:
            {
                LOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;
 
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                LOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;
 
                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t),
                            freeBuffer, this);
                    } else {
                        err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(size_t), this);
                    continue;
                }
            }
            goto finish;
 
        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
 
finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }
 
    return err;
}

This function is long, but its main job is to call talkWithDriver to interact with the Binder driver:

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    LOG_ASSERT(mProcess->mDriverFD >= 0, "Binder driver is not opened");
 
    binder_write_read bwr;
 
    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
 
    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
 
    bwr.write_size = outAvail;
    bwr.write_buffer = (long unsigned int)mOut.data();
 
    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (long unsigned int)mIn.data();
    } else {
        bwr.read_size = 0;
    }
 
    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }
 
    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
 
    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);
 
    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
			<< "), read consumed: " << bwr.read_consumed << endl;
    }
 
    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < (ssize_t)mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }
 
    return err;
}

Here doReceive and needRead are both 1 (interested readers can verify this for themselves), so we are telling the Binder driver to perform the write operation first and then the read operation, as we will see below.
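For reference, the struct binder_write_read that talkWithDriver fills in and hands to ioctl is defined in the binder header roughly as follows (field types vary slightly between kernel versions, so treat this as a reference sketch):

    struct binder_write_read {
        signed long     write_size;     /* bytes available in write_buffer */
        signed long     write_consumed; /* bytes consumed by the driver */
        unsigned long   write_buffer;
        signed long     read_size;      /* bytes available in read_buffer */
        signed long     read_consumed;  /* bytes filled in by the driver */
        unsigned long   read_buffer;
    };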

Finally, ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) enters the Binder driver's binder_ioctl function; we only look at the logic for cmd == BINDER_WRITE_READ:

    static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
    {
        int ret;
        struct binder_proc *proc = filp->private_data;
        struct binder_thread *thread;
        unsigned int size = _IOC_SIZE(cmd);
        void __user *ubuf = (void __user *)arg;  
 
        /*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/  
 
        ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
        if (ret)
            return ret;  
 
        mutex_lock(&binder_lock);
        thread = binder_get_thread(proc);
        if (thread == NULL) {
            ret = -ENOMEM;
            goto err;
        }  
 
        switch (cmd) {
        case BINDER_WRITE_READ: {
            struct binder_write_read bwr;
            if (size != sizeof(struct binder_write_read)) {
                ret = -EINVAL;
                goto err;
            }
            if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
                ret = -EFAULT;
                goto err;
            }
            if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
                printk(KERN_INFO "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
                proc->pid, thread->pid, bwr.write_size, bwr.write_buffer, bwr.read_size, bwr.read_buffer);
            if (bwr.write_size > 0) {
                ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
                if (ret < 0) {
                    bwr.read_consumed = 0;
                    if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                        ret = -EFAULT;
                    goto err;
                }
            }
            if (bwr.read_size > 0) {
                ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
                if (!list_empty(&proc->todo))
                    wake_up_interruptible(&proc->wait);
                if (ret < 0) {
                    if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                        ret = -EFAULT;
                    goto err;
                }
            }
            if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
                printk(KERN_INFO "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
                proc->pid, thread->pid, bwr.write_consumed, bwr.write_size, bwr.read_consumed, bwr.read_size);
            if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
                ret = -EFAULT;
                goto err;
            }
            break;
        }
        ......
        }
        ret = 0;
    err:
        ......
        return ret;
    }

The function first copies the parameters passed in from user space into the local variable struct binder_write_read bwr. Here bwr.write_size > 0 is true, so we enter binder_thread_write; we only care about the BC_TRANSACTION logic:

int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
					void __user *buffer, int size, signed long *consumed)
{
	uint32_t cmd;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;
 
	while (ptr < end && thread->return_error == BR_OK) {
		if (get_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
			binder_stats.bc[_IOC_NR(cmd)]++;
			proc->stats.bc[_IOC_NR(cmd)]++;
			thread->stats.bc[_IOC_NR(cmd)]++;
		}
		switch (cmd) {
	        .....
		case BC_TRANSACTION:
		case BC_REPLY: {
			struct binder_transaction_data tr;
 
			if (copy_from_user(&tr, ptr, sizeof(tr)))
				return -EFAULT;
			ptr += sizeof(tr);
			binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
			break;
		}
		......
		}
		*consumed = ptr - buffer;
	}
	return 0;
}

It first copies the transaction parameters passed in from user space into the local variable struct binder_transaction_data tr, and then calls binder_transaction for further processing; irrelevant code is omitted here:

static void
binder_transaction(struct binder_proc *proc, struct binder_thread *thread,
struct binder_transaction_data *tr, int reply)
{
	struct binder_transaction *t;
	struct binder_work *tcomplete;
	size_t *offp, *off_end;
	struct binder_proc *target_proc;
	struct binder_thread *target_thread = NULL;
	struct binder_node *target_node = NULL;
	struct list_head *target_list;
	wait_queue_head_t *target_wait;
	struct binder_transaction *in_reply_to = NULL;
	struct binder_transaction_log_entry *e;
	uint32_t return_error;
 
        ......
 
	if (reply) {
         ......
	} else {
		if (tr->target.handle) {
            ......
		} else {
			target_node = binder_context_mgr_node;
			if (target_node == NULL) {
				return_error = BR_DEAD_REPLY;
				goto err_no_context_mgr_node;
			}
		}
		......
		target_proc = target_node->proc;
		if (target_proc == NULL) {
			return_error = BR_DEAD_REPLY;
			goto err_dead_binder;
		}
		......
	}
	if (target_thread) {
		......
	} else {
		target_list = &target_proc->todo;
		target_wait = &target_proc->wait;
	}
 
	......
 
	/* TODO: reuse incoming transaction for reply */
	t = kzalloc(sizeof(*t), GFP_KERNEL);
	if (t == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_t_failed;
	}
	......
 
	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
	if (tcomplete == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_tcomplete_failed;
	}
 
	......
 
	if (!reply && !(tr->flags & TF_ONE_WAY))
		t->from = thread;
	else
		t->from = NULL;
	t->sender_euid = proc->tsk->cred->euid;
	t->to_proc = target_proc;
	t->to_thread = target_thread;
	t->code = tr->code;
	t->flags = tr->flags;
	t->priority = task_nice(current);
	t->buffer = binder_alloc_buf(target_proc, tr->data_size,
		tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
	if (t->buffer == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_binder_alloc_buf_failed;
	}
	t->buffer->allow_user_free = 0;
	t->buffer->debug_id = t->debug_id;
	t->buffer->transaction = t;
	t->buffer->target_node = target_node;
	if (target_node)
		binder_inc_node(target_node, 1, 0, NULL);
 
	offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));
 
	if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
		......
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
		......
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	......
 
	off_end = (void *)offp + tr->offsets_size;
	for (; offp < off_end; offp++) {
		struct flat_binder_object *fp;
		......
		fp = (struct flat_binder_object *)(t->buffer->data + *offp);
		switch (fp->type) {
		case BINDER_TYPE_BINDER:
		case BINDER_TYPE_WEAK_BINDER: {
			struct binder_ref *ref;
			struct binder_node *node = binder_get_node(proc, fp->binder);
			if (node == NULL) {
				node = binder_new_node(proc, fp->binder, fp->cookie);
				if (node == NULL) {
					return_error = BR_FAILED_REPLY;
					goto err_binder_new_node_failed;
				}
				node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
				node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
			}
			if (fp->cookie != node->cookie) {
				......
				goto err_binder_get_ref_for_node_failed;
			}
			ref = binder_get_ref_for_node(target_proc, node);
			if (ref == NULL) {
				return_error = BR_FAILED_REPLY;
				goto err_binder_get_ref_for_node_failed;
			}
			if (fp->type == BINDER_TYPE_BINDER)
				fp->type = BINDER_TYPE_HANDLE;
			else
				fp->type = BINDER_TYPE_WEAK_HANDLE;
			fp->handle = ref->desc;
			binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE, &thread->todo);
			......
 
		} break;
		......
		}
	}
 
	if (reply) {
		......
	} else if (!(t->flags & TF_ONE_WAY)) {
		BUG_ON(t->buffer->async_transaction != 0);
		t->need_reply = 1;
		t->from_parent = thread->transaction_stack;
		thread->transaction_stack = t;
	} else {
		......
	}
	t->work.type = BINDER_WORK_TRANSACTION;
	list_add_tail(&t->work.entry, target_list);
	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
	list_add_tail(&tcomplete->entry, &thread->todo);
	if (target_wait)
		wake_up_interruptible(target_wait);
	return;
    ......
}

Note that the reply parameter passed in is 0, and tr->target.handle is also 0. Therefore target_node, target_proc, target_list and target_wait take the following values (target_thread remains NULL):

    target_node = binder_context_mgr_node;
    target_proc = target_node->proc;
    target_list = &target_proc->todo;
    target_wait = &target_proc->wait;

Next, a pending transaction t and a pending work item tcomplete are allocated and initialized:

    /* TODO: reuse incoming transaction for reply */
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    if (t == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_t_failed;
    }
    ......  
 
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
    if (tcomplete == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_tcomplete_failed;
    }  
 
    ......  
 
    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;
    else
        t->from = NULL;
    t->sender_euid = proc->tsk->cred->euid;
    t->to_proc = target_proc;
    t->to_thread = target_thread;
    t->code = tr->code;
    t->flags = tr->flags;
    t->priority = task_nice(current);
    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
        tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
    if (t->buffer == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_binder_alloc_buf_failed;
    }
    t->buffer->allow_user_free = 0;
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;
    if (target_node)
        binder_inc_node(target_node, 1, 0, NULL);  
 
    offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));  
 
    if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
        ......
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }
    if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
        ......
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }

Note that the transaction t is to be processed by target_proc, which in this scenario is the Service Manager. Therefore the statement:

    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
            tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));

allocates a block of memory in the Service Manager's process space, which is then used to hold the parameters passed in from user space:

    if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
        ......
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }
    if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
        ......
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }

Since target_node is now about to be used, its reference count is incremented:

if (target_node)
        binder_inc_node(target_node, 1, 0, NULL);

The for loop that follows processes the Binder objects carried in the transferred data. In our scenario there is one Binder entity of type BINDER_TYPE_BINDER, the MediaPlayerService:

    switch (fp->type) {
    case BINDER_TYPE_BINDER:
    case BINDER_TYPE_WEAK_BINDER: {
        struct binder_ref *ref;
        struct binder_node *node = binder_get_node(proc, fp->binder);
        if (node == NULL) {
            node = binder_new_node(proc, fp->binder, fp->cookie);
            if (node == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_new_node_failed;
            }
            node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
            node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
        }
        if (fp->cookie != node->cookie) {
            ......
            goto err_binder_get_ref_for_node_failed;
        }
        ref = binder_get_ref_for_node(target_proc, node);
        if (ref == NULL) {
            return_error = BR_FAILED_REPLY;
            goto err_binder_get_ref_for_node_failed;
        }
        if (fp->type == BINDER_TYPE_BINDER)
            fp->type = BINDER_TYPE_HANDLE;
        else
            fp->type = BINDER_TYPE_WEAK_HANDLE;
        fp->handle = ref->desc;
        binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE, &thread->todo);
        ......

    } break;

Since this is the first time this MediaPlayerService is transferred through the Binder driver, binder_get_node returns NULL when looking up this Binder entity, so binder_new_node creates a new binder_node in proc; next time it can be used directly.

Now, because this Binder entity MediaPlayerService is being handed to target_proc, i.e. the Service Manager, to manage, which means the Service Manager is going to reference it, binder_get_ref_for_node creates a reference to MediaPlayerService for it, and binder_inc_ref increments the reference count so the reference cannot be destroyed while it is still in use. Note that by this point the type of the flat_binder_object in t->buffer has been changed to BINDER_TYPE_HANDLE and its handle has been changed to ref->desc, different from the original values, because this flat_binder_object is ultimately destined for the Service Manager, and the Service Manager can only refer to this Binder entity through a handle value.

Finally, the pending transaction is appended to the target_list:

list_add_tail(&t->work.entry, target_list);

and the pending work item is appended to the current thread's todo list:

    list_add_tail(&tcomplete->entry, &thread->todo);

Now the target process has work to do, so it is woken up:

    if (target_wait)
        wake_up_interruptible(target_wait);

This is what wakes up the Service Manager process. Recall from the earlier article "How Service Manager Becomes the Daemon of Android's Binder IPC Mechanism" that at this moment the Service Manager is asleep inside binder_thread_read in a wait_event_interruptible_exclusive call.

We will set aside the scene where the Service Manager wakes up for the moment, continue with the MediaPlayerService startup, and then come back to it.

Back in binder_ioctl, bwr.read_size > 0 is true, so we enter binder_thread_read:

    static int
    binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,
                       void  __user *buffer, int size, signed long *consumed, int non_block)
    {
        void __user *ptr = buffer + *consumed;
        void __user *end = buffer + size;  
 
        int ret = 0;
        int wait_for_proc_work;  
 
        if (*consumed == 0) {
            if (put_user(BR_NOOP, (uint32_t __user *)ptr))
                return -EFAULT;
            ptr += sizeof(uint32_t);
        }  
 
    retry:
        wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo);  
 
        .......  
 
        if (wait_for_proc_work) {
            .......
        } else {
            if (non_block) {
                if (!binder_has_thread_work(thread))
                    ret = -EAGAIN;
            } else
                ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread));
        }  
 
        ......  
 
        while (1) {
            uint32_t cmd;
            struct binder_transaction_data tr;
            struct binder_work *w;
            struct binder_transaction *t = NULL;  
 
            if (!list_empty(&thread->todo))
                w = list_first_entry(&thread->todo, struct binder_work, entry);
            else if (!list_empty(&proc->todo) && wait_for_proc_work)
                w = list_first_entry(&proc->todo, struct binder_work, entry);
            else {
                if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */
                    goto retry;
                break;
            }  
 
            if (end - ptr < sizeof(tr) + 4)
                break;  
 
            switch (w->type) {
            ......
            case BINDER_WORK_TRANSACTION_COMPLETE: {
                cmd = BR_TRANSACTION_COMPLETE;
                if (put_user(cmd, (uint32_t __user *)ptr))
                    return -EFAULT;
                ptr += sizeof(uint32_t);  
 
                binder_stat_br(proc, thread, cmd);
                if (binder_debug_mask & BINDER_DEBUG_TRANSACTION_COMPLETE)
                    printk(KERN_INFO "binder: %d:%d BR_TRANSACTION_COMPLETE\n",
                    proc->pid, thread->pid);  
 
                list_del(&w->entry);
                kfree(w);
                binder_stats.obj_deleted[BINDER_STAT_TRANSACTION_COMPLETE]++;
                                                   } break;
            ......
            }  
 
            if (!t)
                continue;  
 
            ......
        }  
 
    done:
        ......
        return 0;
    }

Here thread->transaction_stack and thread->todo are both non-empty, so wait_for_proc_work is false. Because thread->todo is non-empty, binder_has_thread_work returns true, so even though the thread calls wait_event_interruptible it does not sleep, and execution continues.

Since thread->todo is non-empty, the following statements execute:

    if (!list_empty(&thread->todo))
         w = list_first_entry(&thread->todo, struct binder_work, entry);

w->type is BINDER_WORK_TRANSACTION_COMPLETE, which was set in the binder_transaction function above, so this branch executes:

    switch (w->type) {
    ......
    case BINDER_WORK_TRANSACTION_COMPLETE: {
        cmd = BR_TRANSACTION_COMPLETE;
        if (put_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);

        ......
        list_del(&w->entry);
        kfree(w);

    } break;
    ......
    }

This removes w from thread->todo. Since t is NULL here, the while loop runs again; there is nothing left to do, so it finally returns to binder_ioctl. Note that, in total, two integers were written into the user-supplied buffer: BR_NOOP and BR_TRANSACTION_COMPLETE.

Before binder_ioctl returns to user space, it copies the consumption counts back into user space:

    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto err;
    }

Finally we return to IPCThreadState::talkWithDriver and execute the following statements:

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < (ssize_t)mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        ......
        return NO_ERROR;
    }

First the mOut data is cleared:

mOut.setDataSize(0);

Then the size of the data that has been read is recorded:

    mIn.setDataSize(bwr.read_consumed);
    mIn.setDataPosition(0);

We then return to IPCThreadState::waitForResponse. There, an integer is first read from mIn; this is the BR_NOOP, a no-op that does nothing. Execution then goes back into IPCThreadState::talkWithDriver.
This time, after the following statement executes:

    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

needRead is false, because mIn still holds one unread integer, the BR_TRANSACTION_COMPLETE.

And after the following statement executes:

    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

outAvail is 0. Therefore both bwr.write_size and bwr.read_size end up 0, IPCThreadState::talkWithDriver does nothing and returns straight to IPCThreadState::waitForResponse, which then reads another integer from mIn, this time the BR_TRANSACTION_COMPLETE:

    switch (cmd) {
    case BR_TRANSACTION_COMPLETE:
           if (!reply && !acquireResult) goto finish;
           break;
    ......
    }

reply is not NULL, so the loop in IPCThreadState::waitForResponse does not finish; it continues and enters IPCThreadState::talkWithDriver again.

This time needRead is true while outAvail is still 0, so bwr.read_size is non-zero and bwr.write_size is 0. Then, via:

ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr)

we enter the Binder driver's binder_ioctl function. Since bwr.write_size is 0 and bwr.read_size is non-zero, this time we go straight into binder_thread_read. Now thread->transaction_stack is non-NULL but thread->todo is empty, so the thread goes to sleep via:

    wait_event_interruptible(thread->wait, binder_has_thread_work(thread));

and waits to be woken up by the Service Manager.

Now we can come back to the scene where the Service Manager is woken up, picking up from the end of the earlier article "How Service Manager Becomes the Daemon of Android's Binder IPC Mechanism". At that point the Service Manager is asleep inside binder_thread_read in a wait_event_interruptible_exclusive call. Once it is woken up by the MediaPlayerService startup described above, it continues executing binder_thread_read:

    static int
    binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,
                       void  __user *buffer, int size, signed long *consumed, int non_block)
    {
        void __user *ptr = buffer + *consumed;
        void __user *end = buffer + size;  
 
        int ret = 0;
        int wait_for_proc_work;  
 
        if (*consumed == 0) {
            if (put_user(BR_NOOP, (uint32_t __user *)ptr))
                return -EFAULT;
            ptr += sizeof(uint32_t);
        }  
 
    retry:
        wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo);  
 
        ......  
 
        if (wait_for_proc_work) {
            ......
            if (non_block) {
                if (!binder_has_proc_work(proc, thread))
                    ret = -EAGAIN;
            } else
                ret = wait_event_interruptible_exclusive(proc->wait, binder_has_proc_work(proc, thread));
        } else {
            ......
        }  
 
        ......  
 
        while (1) {
            uint32_t cmd;
            struct binder_transaction_data tr;
            struct binder_work *w;
            struct binder_transaction *t = NULL;  
 
            if (!list_empty(&thread->todo))
                w = list_first_entry(&thread->todo, struct binder_work, entry);
            else if (!list_empty(&proc->todo) && wait_for_proc_work)
                w = list_first_entry(&proc->todo, struct binder_work, entry);
            else {
                if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */
                    goto retry;
                break;
            }  
 
            if (end - ptr < sizeof(tr) + 4)
                break;  
 
            switch (w->type) {
            case BINDER_WORK_TRANSACTION: {
                t = container_of(w, struct binder_transaction, work);
                                          } break;
            ......
            }  
 
            if (!t)
                continue;  
 
            BUG_ON(t->buffer == NULL);
            if (t->buffer->target_node) {
                struct binder_node *target_node = t->buffer->target_node;
                tr.target.ptr = target_node->ptr;
                tr.cookie =  target_node->cookie;
                ......
                cmd = BR_TRANSACTION;
            } else {
                ......
            }
            tr.code = t->code;
            tr.flags = t->flags;
            tr.sender_euid = t->sender_euid;  
 
            if (t->from) {
                struct task_struct *sender = t->from->proc->tsk;
                tr.sender_pid = task_tgid_nr_ns(sender, current->nsproxy->pid_ns);
            } else {
                tr.sender_pid = 0;
            }  
 
            tr.data_size = t->buffer->data_size;
            tr.offsets_size = t->buffer->offsets_size;
            tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset;
            tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *));  
 
            if (put_user(cmd, (uint32_t __user *)ptr))
                return -EFAULT;
            ptr += sizeof(uint32_t);
            if (copy_to_user(ptr, &tr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);  
 
            ......  
 
            list_del(&t->work.entry);
            t->buffer->allow_user_free = 1;
            if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
                t->to_parent = thread->transaction_stack;
                t->to_thread = thread;
                thread->transaction_stack = t;
            } else {
                t->buffer->transaction = NULL;
                kfree(t);
                binder_stats.obj_deleted[BINDER_STAT_TRANSACTION]++;
            }
            break;
        }  
 
    done:  
 
        ......
        return 0;
    }

After the Service Manager wakes up, it enters the while loop and starts processing the transaction. Here wait_for_proc_work is 1 and proc->todo is non-empty, so the first work item is taken from the proc->todo list:

    w = list_first_entry(&proc->todo, struct binder_work, entry);

From the description above, we know this work item has type BINDER_WORK_TRANSACTION, so the transaction is obtained with:

    t = container_of(w, struct binder_transaction, work);

Then the data in the transaction t is copied into the local variable struct binder_transaction_data tr:

    if (t->buffer->target_node) {
        struct binder_node *target_node = t->buffer->target_node;
        tr.target.ptr = target_node->ptr;
        tr.cookie =  target_node->cookie;
        ......
        cmd = BR_TRANSACTION;
    } else {
        ......
    }
    tr.code = t->code;
    tr.flags = t->flags;
    tr.sender_euid = t->sender_euid;  
 
    if (t->from) {
        struct task_struct *sender = t->from->proc->tsk;
        tr.sender_pid = task_tgid_nr_ns(sender, current->nsproxy->pid_ns);
    } else {
        tr.sender_pid = 0;
    }  
 
    tr.data_size = t->buffer->data_size;
    tr.offsets_size = t->buffer->offsets_size;
    tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset;
    tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *));

There is one very important detail here, which is the essence of Binder inter-process communication:

    tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset;
    tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t->buffer->data_size, sizeof(void *));

The address held in t->buffer->data is a kernel-space address, but the data now has to be returned to the Service Manager process's user space, and user space cannot access kernel-space data, so some handling is needed. How? When learning object-oriented languages, we distinguish deep copies from shallow copies: a deep copy allocates a new block of memory and moves the original contents into it, while a shallow copy allocates no new space for the new object and merely creates a reference that points at the original. Binder uses something akin to a shallow copy: it allocates a virtual address in user space and arranges for that user-space virtual address and the kernel-space virtual address t->buffer->data to point to the same physical memory. How can a user-space and a kernel-space virtual address refer to the same physical address? See the earlier article "How Service Manager Becomes the Daemon of Android's Binder IPC Mechanism" for a detailed explanation. Here, simply adding the offset proc->user_buffer_offset to t->buffer->data yields the user-space virtual address corresponding to t->buffer->data. After adjusting tr.data.ptr.buffer, don't forget that tr.data.ptr.offsets must be adjusted along with it.
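Expressed as code, the conversion is nothing more than adding a per-process constant. The helper below is a simplified sketch written for illustration, not the literal driver code:

    /* The kernel buffer and the receiving process's mmap'ed Binder region are
     * backed by the same physical pages, so converting a kernel address into
     * the matching user-space address is a single addition. */
    static void __user *binder_kernel_to_user(struct binder_proc *proc, void *kaddr)
    {
        return (void __user *)((uintptr_t)kaddr + proc->user_buffer_offset);
    }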

Next, the contents of tr are copied into the buffer passed in by the user; the pointer ptr points to the address of this user buffer:

    if (put_user(cmd, (uint32_t __user *)ptr))
        return -EFAULT;
    ptr += sizeof(uint32_t);
    if (copy_to_user(ptr, &tr, sizeof(tr)))
        return -EFAULT;
    ptr += sizeof(tr);

As can be seen, only a shallow copy is made here: tr.data.ptr.buffer and tr.data.ptr.offsets carry pointers to the existing data rather than a new copy of it.

Finally, since this transaction has now been handled, it has to be removed from the todo list:

    list_del(&t->work.entry);
    t->buffer->allow_user_free = 1;
    if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
        t->to_parent = thread->transaction_stack;
        t->to_thread = thread;
        thread->transaction_stack = t;
    } else {
        t->buffer->transaction = NULL;
        kfree(t);
        binder_stats.obj_deleted[BINDER_STAT_TRANSACTION]++;
    }

Note that cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY) is true here, which means that although the driver has finished with this transaction, it still has to wait for Service Manager to complete its work and acknowledge it to the driver; in other words, a reply is expected, so the current transaction t is placed at the head of the thread->transaction_stack list:

    t->to_parent = thread->transaction_stack;
    t->to_thread = thread;
    thread->transaction_stack = t;

If cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY) were false, no reply would be needed and the transaction t would simply be deleted.

The while loop is then exited through a break, and control finally returns to the binder_ioctl function:

    static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
    {
        int ret;
        struct binder_proc *proc = filp-&gt;private_data;
        struct binder_thread *thread;
        unsigned int size = _IOC_SIZE(cmd);
        void __user *ubuf = (void __user *)arg;  
 
        ......  
 
        switch (cmd) {
        case BINDER_WRITE_READ: {
            struct binder_write_read bwr;
            if (size != sizeof(struct binder_write_read)) {
                ret = -EINVAL;
                goto err;
            }
            if (copy_from_user(&amp;bwr, ubuf, sizeof(bwr))) {
                ret = -EFAULT;
                goto err;
            }
            ......
            if (bwr.read_size &gt; 0) {
                ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &amp;bwr.read_consumed, filp-&gt;f_flags &amp; O_NONBLOCK);
                if (!list_empty(&amp;proc-&gt;todo))
                    wake_up_interruptible(&amp;proc-&gt;wait);
                if (ret &lt; 0) {
                    if (copy_to_user(ubuf, &amp;bwr, sizeof(bwr)))
                        ret = -EFAULT;
                    goto err;
                }
            }
            ......
            if (copy_to_user(ubuf, &amp;bwr, sizeof(bwr))) {
                ret = -EFAULT;
                goto err;
            }
            break;
            }
        ......
        default:
            ret = -EINVAL;
            goto err;
        }
        ret = 0;
    err:
        ......
        return ret;
    }

After binder_thread_read returns, the function checks whether proc->todo still has transactions waiting to be handled; if so, it wakes up the threads sleeping on the proc->wait queue to handle them. Finally, the contents of the local variable struct binder_write_read bwr are copied back into the buffer passed in by the user, and the function returns.

Execution now returns to the binder_loop function in frameworks/base/cmds/servicemanager/binder.c:

    void binder_loop(struct binder_state *bs, binder_handler func)
    {
        int res;
        struct binder_write_read bwr;
        unsigned readbuf[32];  
 
        bwr.write_size = 0;
        bwr.write_consumed = 0;
        bwr.write_buffer = 0;  
 
        readbuf[0] = BC_ENTER_LOOPER;
        binder_write(bs, readbuf, sizeof(unsigned));  
 
        for (;;) {
            bwr.read_size = sizeof(readbuf);
            bwr.read_consumed = 0;
            bwr.read_buffer = (unsigned) readbuf;  
 
            res = ioctl(bs-&gt;fd, BINDER_WRITE_READ, &amp;bwr);  
 
            if (res &lt; 0) {
                LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
                break;
            }  
 
            res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
            if (res == 0) {
                LOGE("binder_loop: unexpected reply?!\n");
                break;
            }
            if (res &lt; 0) {
                LOGE("binder_loop: io error %d %s\n", res, strerror(errno));
                break;
            }
        }
    }

The returned data is all placed in readbuf, and binder_parse is then called to parse it:

    int binder_parse(struct binder_state *bs, struct binder_io *bio,
                     uint32_t *ptr, uint32_t size, binder_handler func)
    {
        int r = 1;
        uint32_t *end = ptr + (size / 4);  
 
        while (ptr &lt; end) {
            uint32_t cmd = *ptr++;
            ......
            case BR_TRANSACTION: {
                struct binder_txn *txn = (void *) ptr;
                if ((end - ptr) * sizeof(uint32_t) &lt; sizeof(struct binder_txn)) {
                    LOGE("parse: txn too small!\n");
                    return -1;
                }
                binder_dump_txn(txn);
                if (func) {
                    unsigned rdata[256/4];
                    struct binder_io msg;
                    struct binder_io reply;
                    int res;  
 
                    bio_init(&amp;reply, rdata, sizeof(rdata), 4);
                    bio_init_from_txn(&amp;msg, txn);
                    res = func(bs, txn, &amp;msg, &amp;reply);
                    binder_send_reply(bs, &amp;reply, txn-&gt;data, res);
                }
                ptr += sizeof(*txn) / sizeof(uint32_t);
                break;
                                 }
            ......
            default:
                LOGE("parse: OOPS %d\n", cmd);
                return -1;
            }
        }  
 
        return r;
    }

The data read back from the Binder driver is first interpreted as a struct binder_txn and saved in the local variable txn. struct binder_txn is defined in frameworks/base/cmds/servicemanager/binder.h:

    struct binder_txn
    {
        void *target;
        void *cookie;
        uint32_t code;
        uint32_t flags;

        uint32_t sender_pid;
        uint32_t sender_euid;

        uint32_t data_size;
        uint32_t offs_size;
        void *data;
        void *offs;
    };
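
For comparison, struct binder_txn is servicemanager's own mirror of the kernel's struct binder_transaction_data that binder_thread_read just copied into readbuf. A sketch of that kernel structure, as it looked in the 2.6-era driver headers (abridged from memory, so take the exact field names with a grain of salt), shows the fields lining up one for one with binder_txn above:

    struct binder_transaction_data {
        union {
            size_t  handle;   /* target of a BC_TRANSACTION (command)  */
            void    *ptr;     /* target of a BR_TRANSACTION (return)   */
        } target;
        void         *cookie;
        unsigned int code;

        unsigned int flags;
        pid_t        sender_pid;
        uid_t        sender_euid;
        size_t       data_size;     /* bytes of payload                 */
        size_t       offsets_size;  /* bytes of the offsets array       */

        union {
            struct {
                const void *buffer;   /* payload                        */
                const void *offsets;  /* offsets to flat_binder_object  */
            } ptr;
            uint8_t buf[8];
        } data;
    };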

The function also uses another data structure, struct binder_io, which is likewise defined in frameworks/base/cmds/servicemanager/binder.h:

    struct binder_io
    {
        char *data;            /* pointer to read/write from */
        uint32_t *offs;        /* array of offsets */
        uint32_t data_avail;   /* bytes available in data buffer */
        uint32_t offs_avail;   /* entries available in offsets array */

        char *data0;           /* start of data buffer */
        uint32_t *offs0;       /* start of offsets buffer */
        uint32_t flags;
        uint32_t unused;
    };

Reading on, the function calls bio_init to initialize the reply variable:

    void bio_init(struct binder_io *bio, void *data,
                  uint32_t maxdata, uint32_t maxoffs)
    {
        uint32_t n = maxoffs * sizeof(uint32_t);

        if (n > maxdata) {
            bio->flags = BIO_F_OVERFLOW;
            bio->data_avail = 0;
            bio->offs_avail = 0;
            return;
        }

        bio->data = bio->data0 = data + n;
        bio->offs = bio->offs0 = data;
        bio->data_avail = maxdata - n;
        bio->offs_avail = maxoffs;
        bio->flags = 0;
    }
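
A quick way to see what this does for the reply buffer above: with rdata[256/4] and maxoffs equal to 4, the first 16 bytes of rdata are reserved for the offsets array and the remaining 240 bytes for payload. The following stand-alone snippet (an illustration only, not servicemanager code) reproduces that arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned rdata[256/4];
        uint32_t maxdata = sizeof(rdata);        /* 256                        */
        uint32_t maxoffs = 4;
        uint32_t n = maxoffs * sizeof(uint32_t); /* 16: space for the offsets  */

        uint32_t *offs0 = rdata;                 /* what bio->offs0 would be   */
        char *data0 = (char *)rdata + n;         /* what bio->data0 would be   */

        printf("offsets area: %u bytes, payload area: %u bytes\n", n, maxdata - n);
        printf("data0 starts %ld bytes after offs0\n", (long)(data0 - (char *)offs0));
        return 0;
    }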

It then calls bio_init_from_txn to initialize the msg variable:

    void bio_init_from_txn(struct binder_io *bio, struct binder_txn *txn)
    {
        bio->data = bio->data0 = txn->data;
        bio->offs = bio->offs0 = txn->offs;
        bio->data_avail = txn->data_size;
        bio->offs_avail = txn->offs_size / 4;
        bio->flags = BIO_F_SHARED;
    }

Finally, the real processing is done by the function pointer func passed in as a parameter, which here is the svcmgr_handler function defined in frameworks/base/cmds/servicemanager/service_manager.c:

    int svcmgr_handler(struct binder_state *bs,
                       struct binder_txn *txn,
                       struct binder_io *msg,
                       struct binder_io *reply)
    {
        struct svcinfo *si;
        uint16_t *s;
        unsigned len;
        void *ptr;
        uint32_t strict_policy;  
 
        if (txn-&gt;target != svcmgr_handle)
            return -1;  
 
        // Equivalent to Parcel::enforceInterface(), reading the RPC
        // header with the strict mode policy mask and the interface name.
        // Note that we ignore the strict_policy and don't propagate it
        // further (since we do no outbound RPCs anyway).
        strict_policy = bio_get_uint32(msg);
        s = bio_get_string16(msg, &amp;len);
        if ((len != (sizeof(svcmgr_id) / 2)) ||
            memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
                fprintf(stderr,"invalid id %s\n", str8(s));
                return -1;
        }  
 
        switch(txn-&gt;code) {
        ......
        case SVC_MGR_ADD_SERVICE:
            s = bio_get_string16(msg, &amp;len);
            ptr = bio_get_ref(msg);
            if (do_add_service(bs, s, len, ptr, txn-&gt;sender_euid))
                return -1;
            break;
        ......
        }  
 
        bio_put_uint32(reply, 0);
        return 0;
    }

Recall that in BpServiceManager::addService the parameters handed to the Binder driver were written as:

    writeInt32(IPCThreadState::self()->getStrictModePolicy() | STRICT_MODE_PENALTY_GATHER);
    writeString16("android.os.IServiceManager");
    writeString16("media.player");
    writeStrongBinder(new MediaPlayerService());

The statements here:

    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &amp;len);
    s = bio_get_string16(msg, &amp;len);
    ptr = bio_get_ref(msg);

read them back out one by one. Here we only need to look at the implementation of bio_get_ref. First, the definition of the data structure struct binder_object:

    struct binder_object
    {
        uint32_t type;
        uint32_t flags;
        void *pointer;
        void *cookie;
    };
This structure actually corresponds to struct flat_binder_object.
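
For reference, the kernel-side structure it mirrors is declared roughly as follows in the binder driver headers of this era (an abridged sketch rather than an exact copy): type and flags line up with type and flags, the binder/handle union lines up with pointer, and cookie with cookie:

    struct flat_binder_object {
        unsigned long    type;
        unsigned long    flags;
        union {
            void        *binder;   /* local object  */
            signed long  handle;   /* remote object */
        };
        void            *cookie;   /* extra data for a local object */
    };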
 
        接着看bio_get_ref實現:

    void *bio_get_ref(struct binder_io *bio)
    {
        struct binder_object *obj;

        obj = _bio_get_obj(bio);
        if (!obj)
            return 0;

        if (obj->type == BINDER_TYPE_HANDLE)
            return obj->pointer;

        return 0;
    }

We will not step into _bio_get_obj here; its job is to fetch from the binder_io the first binder_object that has not yet been retrieved. In this scenario that is the flat_binder_object we originally passed in to represent MediaPlayerService. The original flat_binder_object had type BINDER_TYPE_BINDER, with binder holding the address of a weak reference to MediaPlayerService. As mentioned earlier, inside the Binder driver this flat_binder_object has its type changed to BINDER_TYPE_HANDLE and its handle changed to a handle value. That handle value is what obj->pointer holds here.
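
For readers who do want the details, the helper is short. A sketch of _bio_get_obj along the lines of the servicemanager source of this era (bio_get is a helper in the same file that simply advances bio->data and returns the old position) looks like this:

    static struct binder_object *_bio_get_obj(struct binder_io *bio)
    {
        unsigned n;
        unsigned off = bio->data - bio->data0;

        /* An object is only valid if an entry in the offsets array
         * points at the current read position. */
        for (n = 0; n < bio->offs_avail; n++) {
            if (bio->offs[n] == off)
                return bio_get(bio, sizeof(struct binder_object));
        }

        bio->data_avail = 0;
        bio->flags |= BIO_F_OVERFLOW;
        return 0;
    }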
 
Returning to the svcmgr_handler function, do_add_service is called for further processing:
    int do_add_service(struct binder_state *bs,
                       uint16_t *s, unsigned len,
                       void *ptr, unsigned uid)
    {
        struct svcinfo *si;
    //    LOGI("add_service('%s',%p) uid=%d\n", str8(s), ptr, uid);  
 
        if (!ptr || (len == 0) || (len &gt; 127))
            return -1;  
 
        if (!svc_can_register(uid, s)) {
            LOGE("add_service('%s',%p) uid=%d - PERMISSION DENIED\n",
                 str8(s), ptr, uid);
            return -1;
        }  
 
        si = find_svc(s, len);
        if (si) {
            if (si-&gt;ptr) {
                LOGE("add_service('%s',%p) uid=%d - ALREADY REGISTERED\n",
                     str8(s), ptr, uid);
                return -1;
            }
            si-&gt;ptr = ptr;
        } else {
            si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
            if (!si) {
                LOGE("add_service('%s',%p) uid=%d - OUT OF MEMORY\n",
                     str8(s), ptr, uid);
                return -1;
            }
            si-&gt;ptr = ptr;
            si-&gt;len = len;
            memcpy(si-&gt;name, s, (len + 1) * sizeof(uint16_t));
            si-&gt;name[len] = '\0';
            si-&gt;death.func = svcinfo_death;
            si-&gt;death.ptr = si;
            si-&gt;next = svclist;
            svclist = si;
        }  
 
        binder_acquire(bs, ptr);
        binder_link_to_death(bs, ptr, &amp;si-&gt;death);
        return 0;
    }

The implementation of this function is straightforward: it records the reference to the MediaPlayerService Binder entity in a struct svcinfo, essentially its name and its handle value, and inserts it at the head of the svclist linked list. Later, when a Client asks Service Manager for a service interface, Service Manager only needs the service name to return the corresponding handle value.
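
To make that bookkeeping concrete, here is a sketch of the svcinfo record and the find_svc lookup along the lines of service_manager.c from this era (abridged; details may differ). The lookup that a Client's later query triggers walks the same svclist and hands back si->ptr, the handle value stored here:

    struct svcinfo
    {
        struct svcinfo *next;
        void *ptr;                  /* the handle value for the service          */
        struct binder_death death;  /* used for death notification               */
        unsigned len;
        uint16_t name[0];           /* UTF-16 service name, e.g. "media.player"  */
    };

    struct svcinfo *svclist = 0;

    struct svcinfo *find_svc(uint16_t *s16, unsigned len)
    {
        struct svcinfo *si;

        for (si = svclist; si; si = si->next) {
            if ((len == si->len) &&
                !memcmp(s16, si->name, len * sizeof(uint16_t))) {
                return si;
            }
        }
        return 0;
    }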

When this function finishes it returns to svcmgr_handler, and at the end of that function an error code of 0 is written into the reply variable to indicate that everything went well:

bio_put_uint32(reply, 0);

After svcmgr_handler completes, control returns to binder_parse, which executes the following statement:

    binder_send_reply(bs, &reply, txn->data, res);

Let us look at the implementation of binder_send_reply; as the name suggests, it tells the Binder driver that the task the driver handed over has now been completed.

    void binder_send_reply(struct binder_state *bs,
                           struct binder_io *reply,
                           void *buffer_to_free,
                           int status)
    {
        struct {
            uint32_t cmd_free;
            void *buffer;
            uint32_t cmd_reply;
            struct binder_txn txn;
        } __attribute__((packed)) data;  
 
        data.cmd_free = BC_FREE_BUFFER;
        data.buffer = buffer_to_free;
        data.cmd_reply = BC_REPLY;
        data.txn.target = 0;
        data.txn.cookie = 0;
        data.txn.code = 0;
        if (status) {
            data.txn.flags = TF_STATUS_CODE;
            data.txn.data_size = sizeof(int);
            data.txn.offs_size = 0;
            data.txn.data = &amp;status;
            data.txn.offs = 0;
        } else {
            data.txn.flags = 0;
            data.txn.data_size = reply-&gt;data - reply-&gt;data0;
            data.txn.offs_size = ((char*) reply-&gt;offs) - ((char*) reply-&gt;offs0);
            data.txn.data = reply-&gt;data0;
            data.txn.offs = reply-&gt;offs0;
        }
        binder_write(bs, &amp;data, sizeof(data));
    }

As can be seen, binder_send_reply asks the Binder driver to execute the BC_FREE_BUFFER and BC_REPLY commands. The former frees the space previously allocated in binder_transaction, at address buffer_to_free; this address is the kernel-space address that the Binder driver converted to a user-space address before handing it to Service Manager, so when the driver gets it back it knows how to free that space. The latter tells MediaPlayerService that its addService operation has completed, with error code 0 stored in data.txn.data.

Now look at the binder_write function:

    int binder_write(struct binder_state *bs, void *data, unsigned len)
    {
        struct binder_write_read bwr;
        int res;
        bwr.write_size = len;
        bwr.write_consumed = 0;
        bwr.write_buffer = (unsigned) data;
        bwr.read_size = 0;
        bwr.read_consumed = 0;
        bwr.read_buffer = 0;
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        if (res < 0) {
            fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                    strerror(errno));
        }
        return res;
    }

As can be seen, this is a pure write with no read, i.e. read_size is 0.

This is again an ioctl BINDER_WRITE_READ operation. It enters the driver's binder_ioctl function and executes the BINDER_WRITE_READ command, which we will not go over again.

Finally, execution goes from binder_ioctl into the binder_thread_write function. Let us first look at the first command, BC_FREE_BUFFER:

    int
    binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
                        void __user *buffer, int size, signed long *consumed)
    {
        uint32_t cmd;
        void __user *ptr = buffer + *consumed;
        void __user *end = buffer + size;  
 
        while (ptr &lt; end &amp;&amp; thread-&gt;return_error == BR_OK) {
            if (get_user(cmd, (uint32_t __user *)ptr))
                return -EFAULT;
            ptr += sizeof(uint32_t);
            if (_IOC_NR(cmd) &lt; ARRAY_SIZE(binder_stats.bc)) {
                binder_stats.bc[_IOC_NR(cmd)]++;
                proc-&gt;stats.bc[_IOC_NR(cmd)]++;
                thread-&gt;stats.bc[_IOC_NR(cmd)]++;
            }
            switch (cmd) {
            ......
            case BC_FREE_BUFFER: {
                void __user *data_ptr;
                struct binder_buffer *buffer;  
 
                if (get_user(data_ptr, (void * __user *)ptr))
                    return -EFAULT;
                ptr += sizeof(void *);  
 
                buffer = binder_buffer_lookup(proc, data_ptr);
                if (buffer == NULL) {
                    binder_user_error("binder: %d:%d "
                        "BC_FREE_BUFFER u%p no match\n",
                        proc-&gt;pid, thread-&gt;pid, data_ptr);
                    break;
                }
                if (!buffer-&gt;allow_user_free) {
                    binder_user_error("binder: %d:%d "
                        "BC_FREE_BUFFER u%p matched "
                        "unreturned buffer\n",
                        proc-&gt;pid, thread-&gt;pid, data_ptr);
                    break;
                }
                if (binder_debug_mask &amp; BINDER_DEBUG_FREE_BUFFER)
                    printk(KERN_INFO "binder: %d:%d BC_FREE_BUFFER u%p found buffer %d for %s transaction\n",
                    proc-&gt;pid, thread-&gt;pid, data_ptr, buffer-&gt;debug_id,
                    buffer-&gt;transaction ? "active" : "finished");  
 
                if (buffer-&gt;transaction) {
                    buffer-&gt;transaction-&gt;buffer = NULL;
                    buffer-&gt;transaction = NULL;
                }
                if (buffer-&gt;async_transaction &amp;&amp; buffer-&gt;target_node) {
                    BUG_ON(!buffer-&gt;target_node-&gt;has_async_transaction);
                    if (list_empty(&amp;buffer-&gt;target_node-&gt;async_todo))
                        buffer-&gt;target_node-&gt;has_async_transaction = 0;
                    else
                        list_move_tail(buffer-&gt;target_node-&gt;async_todo.next, &amp;thread-&gt;todo);
                }
                binder_transaction_buffer_release(proc, buffer, NULL);
                binder_free_buf(proc, buffer);
                break;
                                 }  
 
            ......
            *consumed = ptr - buffer;
        }
        return 0;
    }

First look at this statement:

get_user(data_ptr, (void * __user *)ptr)

It obtains the user-space address of the buffer to be freed, and the following statement then finds the struct binder_buffer that corresponds to this address:

buffer = binder_buffer_lookup(proc, data_ptr);

Because this space was allocated earlier in binder_transaction, it is guaranteed to be found here.
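
binder_buffer_lookup undoes exactly the user_buffer_offset adjustment made on the way out and then looks the buffer up in the per-process red-black tree of allocated buffers. A sketch along the lines of the 2.6-era driver (abridged; treat it as an outline rather than the exact source):

    static struct binder_buffer *binder_buffer_lookup(struct binder_proc *proc,
                                                      void __user *user_ptr)
    {
        struct rb_node *n = proc->allocated_buffers.rb_node;
        struct binder_buffer *buffer;
        struct binder_buffer *kern_ptr;

        /* Convert the user-space address back into the kernel-space address
         * of the enclosing binder_buffer header. */
        kern_ptr = user_ptr - proc->user_buffer_offset
            - offsetof(struct binder_buffer, data);

        while (n) {
            buffer = rb_entry(n, struct binder_buffer, rb_node);
            BUG_ON(buffer->free);

            if (kern_ptr < buffer)
                n = n->rb_left;
            else if (kern_ptr > buffer)
                n = n->rb_right;
            else
                return buffer;
        }
        return NULL;
    }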

Finally, this block of memory can be released:

    binder_transaction_buffer_release(proc, buffer, NULL);
    binder_free_buf(proc, buffer);

Now look at the other command, BC_REPLY:

    int
    binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
                        void __user *buffer, int size, signed long *consumed)
    {
        uint32_t cmd;
        void __user *ptr = buffer + *consumed;
        void __user *end = buffer + size;  
 
        while (ptr &lt; end &amp;&amp; thread-&gt;return_error == BR_OK) {
            if (get_user(cmd, (uint32_t __user *)ptr))
                return -EFAULT;
            ptr += sizeof(uint32_t);
            if (_IOC_NR(cmd) &lt; ARRAY_SIZE(binder_stats.bc)) {
                binder_stats.bc[_IOC_NR(cmd)]++;
                proc-&gt;stats.bc[_IOC_NR(cmd)]++;
                thread-&gt;stats.bc[_IOC_NR(cmd)]++;
            }
            switch (cmd) {
            ......
            case BC_TRANSACTION:
            case BC_REPLY: {
                struct binder_transaction_data tr;  
 
                if (copy_from_user(&amp;tr, ptr, sizeof(tr)))
                    return -EFAULT;
                ptr += sizeof(tr);
                binder_transaction(proc, thread, &amp;tr, cmd == BC_REPLY);
                break;
                           }  
 
            ......
            *consumed = ptr - buffer;
        }
        return 0;
    }

This once again enters the binder_transaction function:

    static void
    binder_transaction(struct binder_proc *proc, struct binder_thread *thread,
    struct binder_transaction_data *tr, int reply)
    {
        struct binder_transaction *t;
        struct binder_work *tcomplete;
        size_t *offp, *off_end;
        struct binder_proc *target_proc;
        struct binder_thread *target_thread = NULL;
        struct binder_node *target_node = NULL;
        struct list_head *target_list;
        wait_queue_head_t *target_wait;
        struct binder_transaction *in_reply_to = NULL;
        struct binder_transaction_log_entry *e;
        uint32_t return_error;  
 
        ......  
 
        if (reply) {
            in_reply_to = thread-&gt;transaction_stack;
            if (in_reply_to == NULL) {
                ......
                return_error = BR_FAILED_REPLY;
                goto err_empty_call_stack;
            }
            binder_set_nice(in_reply_to-&gt;saved_priority);
            if (in_reply_to-&gt;to_thread != thread) {
                .......
                goto err_bad_call_stack;
            }
            thread-&gt;transaction_stack = in_reply_to-&gt;to_parent;
            target_thread = in_reply_to-&gt;from;
            if (target_thread == NULL) {
                return_error = BR_DEAD_REPLY;
                goto err_dead_binder;
            }
            if (target_thread-&gt;transaction_stack != in_reply_to) {
                ......
                return_error = BR_FAILED_REPLY;
                in_reply_to = NULL;
                target_thread = NULL;
                goto err_dead_binder;
            }
            target_proc = target_thread-&gt;proc;
        } else {
            ......
        }
        if (target_thread) {
            e-&gt;to_thread = target_thread-&gt;pid;
            target_list = &amp;target_thread-&gt;todo;
            target_wait = &amp;target_thread-&gt;wait;
        } else {
            ......
        }  
 
        /* TODO: reuse incoming transaction for reply */
        t = kzalloc(sizeof(*t), GFP_KERNEL);
        if (t == NULL) {
            return_error = BR_FAILED_REPLY;
            goto err_alloc_t_failed;
        }  
 
        tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
        if (tcomplete == NULL) {
            return_error = BR_FAILED_REPLY;
            goto err_alloc_tcomplete_failed;
        }  
 
        if (!reply &amp;&amp; !(tr-&gt;flags &amp; TF_ONE_WAY))
            t-&gt;from = thread;
        else
            t-&gt;from = NULL;
        t-&gt;sender_euid = proc-&gt;tsk-&gt;cred-&gt;euid;
        t-&gt;to_proc = target_proc;
        t-&gt;to_thread = target_thread;
        t-&gt;code = tr-&gt;code;
        t-&gt;flags = tr-&gt;flags;
        t-&gt;priority = task_nice(current);
        t-&gt;buffer = binder_alloc_buf(target_proc, tr-&gt;data_size,
            tr-&gt;offsets_size, !reply &amp;&amp; (t-&gt;flags &amp; TF_ONE_WAY));
        if (t-&gt;buffer == NULL) {
            return_error = BR_FAILED_REPLY;
            goto err_binder_alloc_buf_failed;
        }
        t-&gt;buffer-&gt;allow_user_free = 0;
        t-&gt;buffer-&gt;debug_id = t-&gt;debug_id;
        t-&gt;buffer-&gt;transaction = t;
        t-&gt;buffer-&gt;target_node = target_node;
        if (target_node)
            binder_inc_node(target_node, 1, 0, NULL);  
 
        offp = (size_t *)(t-&gt;buffer-&gt;data + ALIGN(tr-&gt;data_size, sizeof(void *)));  
 
        if (copy_from_user(t-&gt;buffer-&gt;data, tr-&gt;data.ptr.buffer, tr-&gt;data_size)) {
            binder_user_error("binder: %d:%d got transaction with invalid "
                "data ptr\n", proc-&gt;pid, thread-&gt;pid);
            return_error = BR_FAILED_REPLY;
            goto err_copy_data_failed;
        }
        if (copy_from_user(offp, tr-&gt;data.ptr.offsets, tr-&gt;offsets_size)) {
            binder_user_error("binder: %d:%d got transaction with invalid "
                "offsets ptr\n", proc-&gt;pid, thread-&gt;pid);
            return_error = BR_FAILED_REPLY;
            goto err_copy_data_failed;
        }  
 
        ......  
 
        if (reply) {
            BUG_ON(t-&gt;buffer-&gt;async_transaction != 0);
            binder_pop_transaction(target_thread, in_reply_to);
        } else if (!(t-&gt;flags &amp; TF_ONE_WAY)) {
            ......
        } else {
            ......
        }
        t-&gt;work.type = BINDER_WORK_TRANSACTION;
        list_add_tail(&amp;t-&gt;work.entry, target_list);
        tcomplete-&gt;type = BINDER_WORK_TRANSACTION_COMPLETE;
        list_add_tail(&amp;tcomplete-&gt;entry, &amp;thread-&gt;todo);
        if (target_wait)
            wake_up_interruptible(target_wait);
        return;
        ......
    }

Note that reply is 1 here; unrelated code has been omitted.

Earlier, after Service Manager had been woken up in binder_thread_read by the MediaPlayerService process, at the end it placed the transaction it had just handled onto thread->transaction_stack:

    if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
        t->to_parent = thread->transaction_stack;
        t->to_thread = thread;
        thread->transaction_stack = t;
    }

So here that binder_transaction is first fetched back and placed in the local variable in_reply_to:

    in_reply_to = thread->transaction_stack;

Through in_reply_to we can then obtain the thread and process that originally issued this transaction request:

    target_thread = in_reply_to->from;
    target_proc = target_thread->proc;

Then target_list and target_wait are obtained:

    target_list = &target_thread->todo;
    target_wait = &target_thread->wait;

The following piece of code:

    /* TODO: reuse incoming transaction for reply */
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    if (t == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_t_failed;
    }  
 
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
    if (tcomplete == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_tcomplete_failed;
    }  
 
    if (!reply &amp;&amp; !(tr-&gt;flags &amp; TF_ONE_WAY))
        t-&gt;from = thread;
    else
        t-&gt;from = NULL;
    t-&gt;sender_euid = proc-&gt;tsk-&gt;cred-&gt;euid;
    t-&gt;to_proc = target_proc;
    t-&gt;to_thread = target_thread;
    t-&gt;code = tr-&gt;code;
    t-&gt;flags = tr-&gt;flags;
    t-&gt;priority = task_nice(current);
    t-&gt;buffer = binder_alloc_buf(target_proc, tr-&gt;data_size,
        tr-&gt;offsets_size, !reply &amp;&amp; (t-&gt;flags &amp; TF_ONE_WAY));
    if (t-&gt;buffer == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_binder_alloc_buf_failed;
    }
    t-&gt;buffer-&gt;allow_user_free = 0;
    t-&gt;buffer-&gt;debug_id = t-&gt;debug_id;
    t-&gt;buffer-&gt;transaction = t;
    t-&gt;buffer-&gt;target_node = target_node;
    if (target_node)
        binder_inc_node(target_node, 1, 0, NULL);  
 
    offp = (size_t *)(t-&gt;buffer-&gt;data + ALIGN(tr-&gt;data_size, sizeof(void *)));  
 
    if (copy_from_user(t-&gt;buffer-&gt;data, tr-&gt;data.ptr.buffer, tr-&gt;data_size)) {
        binder_user_error("binder: %d:%d got transaction with invalid "
            "data ptr\n", proc-&gt;pid, thread-&gt;pid);
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }
    if (copy_from_user(offp, tr-&gt;data.ptr.offsets, tr-&gt;offsets_size)) {
        binder_user_error("binder: %d:%d got transaction with invalid "
            "offsets ptr\n", proc-&gt;pid, thread-&gt;pid);
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }

has already been analysed above, so it will not be repeated here. One thing to note, however, is that target_node is NULL this time, and therefore t->buffer->target_node is also NULL.

The function normally has a for loop that processes the Binder objects embedded in the data; since there are no Binder objects here, it is skipped. We then reach this line of code:

binder_pop_transaction(target_thread, in_reply_to);

Let us see what it does:

    static void
    binder_pop_transaction(
        struct binder_thread *target_thread, struct binder_transaction *t)
    {
        if (target_thread) {
            BUG_ON(target_thread->transaction_stack != t);
            BUG_ON(target_thread->transaction_stack->from != target_thread);
            target_thread->transaction_stack =
                target_thread->transaction_stack->from_parent;
            t->from = NULL;
        }
        t->need_reply = 0;
        if (t->buffer)
            t->buffer->transaction = NULL;
        kfree(t);
        binder_stats.obj_deleted[BINDER_STAT_TRANSACTION]++;
    }

By this point the in_reply_to transaction is no longer needed, so it is deleted.

Back in the binder_transaction function:

    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);

As before, t and tcomplete are placed on the target_list and thread->todo queues respectively. Here target_list is the thread->todo queue of the MediaPlayerService Server main thread that originally called IServiceManager::addService, while thread->todo belongs to the Service Manager thread that is replying to the IServiceManager::addService request.

Finally, the thread waiting on the target_wait queue is woken up; this is the MediaPlayerService Server main thread that originally called IServiceManager::addService. It went to sleep on thread->wait inside binder_thread_read, and that is the target_wait here:

    if (target_wait)
        wake_up_interruptible(target_wait);

With that, Service Manager has finished replying to the IServiceManager::addService request and goes back to the binder_loop function in frameworks/base/cmds/servicemanager/binder.c to wait for the next Client request. In fact, when Service Manager returns to binder_loop and executes ioctl again, it re-enters binder_thread_read, and this time it finds that thread->todo is not empty, because we just called:

    list_add_tail(&tcomplete->entry, &thread->todo);

which placed a work item tcomplete on thread->todo. This tcomplete has type BINDER_WORK_TRANSACTION_COMPLETE, so the Binder driver performs the following:

    switch (w->type) {
    case BINDER_WORK_TRANSACTION_COMPLETE: {
        cmd = BR_TRANSACTION_COMPLETE;
        if (put_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);

        list_del(&w->entry);
        kfree(w);

        } break;
        ......
    }

Only after binder_loop has finished handling this ioctl call will its next ioctl call enter the Binder driver and go to sleep, waiting for the next Client request.

As mentioned above, the MediaPlayerService Server main thread that called IServiceManager::addService has now been woken up, so it resumes executing the binder_thread_read function:

    static int
    binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,
                       void  __user *buffer, int size, signed long *consumed, int non_block)
    {
        void __user *ptr = buffer + *consumed;
        void __user *end = buffer + size;  
 
        int ret = 0;
        int wait_for_proc_work;  
 
        if (*consumed == 0) {
            if (put_user(BR_NOOP, (uint32_t __user *)ptr))
                return -EFAULT;
            ptr += sizeof(uint32_t);
        }  
 
    retry:
        wait_for_proc_work = thread-&gt;transaction_stack == NULL &amp;&amp; list_empty(&amp;thread-&gt;todo);  
 
        ......  
 
        if (wait_for_proc_work) {
            ......
        } else {
            if (non_block) {
                if (!binder_has_thread_work(thread))
                    ret = -EAGAIN;
            } else
                ret = wait_event_interruptible(thread-&gt;wait, binder_has_thread_work(thread));
        }  
 
        ......  
 
        while (1) {
            uint32_t cmd;
            struct binder_transaction_data tr;
            struct binder_work *w;
            struct binder_transaction *t = NULL;  
 
            if (!list_empty(&amp;thread-&gt;todo))
                w = list_first_entry(&amp;thread-&gt;todo, struct binder_work, entry);
            else if (!list_empty(&amp;proc-&gt;todo) &amp;&amp; wait_for_proc_work)
                w = list_first_entry(&amp;proc-&gt;todo, struct binder_work, entry);
            else {
                if (ptr - buffer == 4 &amp;&amp; !(thread-&gt;looper &amp; BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */
                    goto retry;
                break;
            }  
 
            ......  
 
            switch (w-&gt;type) {
            case BINDER_WORK_TRANSACTION: {
                t = container_of(w, struct binder_transaction, work);
                                          } break;
            ......
            }  
 
            if (!t)
                continue;  
 
            BUG_ON(t-&gt;buffer == NULL);
            if (t-&gt;buffer-&gt;target_node) {
                ......
            } else {
                tr.target.ptr = NULL;
                tr.cookie = NULL;
                cmd = BR_REPLY;
            }
            tr.code = t-&gt;code;
            tr.flags = t-&gt;flags;
            tr.sender_euid = t-&gt;sender_euid;  
 
            if (t-&gt;from) {
                ......
            } else {
                tr.sender_pid = 0;
            }  
 
            tr.data_size = t-&gt;buffer-&gt;data_size;
            tr.offsets_size = t-&gt;buffer-&gt;offsets_size;
            tr.data.ptr.buffer = (void *)t-&gt;buffer-&gt;data + proc-&gt;user_buffer_offset;
            tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(t-&gt;buffer-&gt;data_size, sizeof(void *));  
 
            if (put_user(cmd, (uint32_t __user *)ptr))
                return -EFAULT;
            ptr += sizeof(uint32_t);
            if (copy_to_user(ptr, &amp;tr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);  
 
            ......  
 
            list_del(&amp;t-&gt;work.entry);
            t-&gt;buffer-&gt;allow_user_free = 1;
            if (cmd == BR_TRANSACTION &amp;&amp; !(t-&gt;flags &amp; TF_ONE_WAY)) {
                ......
            } else {
                t-&gt;buffer-&gt;transaction = NULL;
                kfree(t);
                binder_stats.obj_deleted[BINDER_STAT_TRANSACTION]++;
            }
            break;
        }  
 
    done:
        ......
        return 0;
    }

In the while loop, w is taken from thread->todo; its type is BINDER_WORK_TRANSACTION, so t is obtained from it. From the above we know that Service Manager returned a 0, written into t->buffer->data. Adding proc->user_buffer_offset to t->buffer->data yields the user-space address, which is stored in tr.data.ptr.buffer so that user space can access this return code. Since cmd is not BR_TRANSACTION, t can be deleted at this point, as it will never be needed again.

After this function completes, execution returns to binder_ioctl, which runs the following statement to hand the data back to user space:

    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto err;
    }

Control then returns to the user-space function IPCThreadState::talkWithDriver, and from there to IPCThreadState::waitForResponse, where the following code is eventually executed:

    status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
    {
        int32_t cmd;
        int32_t err;  
 
        while (1) {
            if ((err=talkWithDriver()) &lt; NO_ERROR) break;  
 
            ......  
 
            cmd = mIn.readInt32();  
 
            ......  
 
            switch (cmd) {
            ......
            case BR_REPLY:
                {
                    binder_transaction_data tr;
                    err = mIn.read(&amp;tr, sizeof(tr));
                    LOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                    if (err != NO_ERROR) goto finish;  
 
                    if (reply) {
                        if ((tr.flags &amp; TF_STATUS_CODE) == 0) {
                            reply-&gt;ipcSetDataReference(
                                reinterpret_cast(tr.data.ptr.buffer),
                                tr.data_size,
                                reinterpret_cast(tr.data.ptr.offsets),
                                tr.offsets_size/sizeof(size_t),
                                freeBuffer, this);
                        } else {
                            ......
                        }
                    } else {
                        ......
                    }
                }
                goto finish;  
 
            ......
            }
        }  
 
    finish:
        ......
        return err;
    }

Note that tr.flags equals 0 here, which was set in the binder_send_reply function above. The result finally ends up in reply:

    reply->ipcSetDataReference(
           reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
           tr.data_size,
           reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
           tr.offsets_size/sizeof(size_t),
           freeBuffer, this);

We will not go through this function here; interested readers can study it on their own.

From here the calls unwind level by level, finally returning to MediaPlayerService::instantiate.

At this point, IServiceManager::addService has finally finished executing. The process is quite involved, but a thorough understanding of it goes a long way towards understanding the design ideas and the implementation of the Binder mechanism. To recap the interaction between MediaPlayerService, Service Manager and the Binder driver during IServiceManager::addService:

1. MediaPlayerService packages the "media.player" name and its Binder entity into a Parcel and issues a BC_TRANSACTION to the Binder driver, then waits for the reply.

2. The driver builds a binder_transaction, copies the data into a buffer allocated from Service Manager's mapped memory, converts the flat_binder_object into a handle, queues the work item on Service Manager's todo list and wakes it up.

3. Service Manager reads the BR_TRANSACTION, registers the service name and handle in do_add_service, and then writes BC_FREE_BUFFER and BC_REPLY back to the driver.

4. The driver frees the transaction buffer, queues the reply on the todo list of the calling MediaPlayerService thread and wakes it up; that thread reads the BR_REPLY, and addService returns.

Returning to the main function in frameworks/base/media/mediaserver/main_mediaserver.cpp, two more functions remain to be executed:

    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();

First look at the implementation of ProcessState::startThreadPool:

    void ProcessState::startThreadPool()
    {
        AutoMutex _l(mLock);
        if (!mThreadPoolStarted) {
            mThreadPoolStarted = true;
            spawnPooledThread(true);
        }
    }

This calls spawnPooledThread:

    void ProcessState::spawnPooledThread(bool isMain)
    {
        if (mThreadPoolStarted) {
            int32_t s = android_atomic_add(1, &mThreadPoolSeq);
            char buf[32];
            sprintf(buf, "Binder Thread #%d", s);
            LOGV("Spawning new pooled thread, name=%s\n", buf);
            sp<Thread> t = new PoolThread(isMain);
            t->run(buf);
        }
    }

This mainly creates a thread. PoolThread inherits from the Thread class, which is defined in frameworks/base/libs/utils/Threads.cpp; its run function eventually calls the subclass's threadLoop function, which here is PoolThread::threadLoop:

    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }

Just like the main function in frameworks/base/media/mediaserver/main_mediaserver.cpp, this ultimately calls IPCThreadState::joinThreadPool; the only difference is that one passes true as the parameter while the other uses the default value false. Let us look at the implementation of this function:

    void IPCThreadState::joinThreadPool(bool isMain)
    {
        LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());  
 
        mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);  
 
        ......  
 
        status_t result;
        do {
            int32_t cmd;  
 
            .......  
 
            // now get the next command to be processed, waiting if necessary
            result = talkWithDriver();
            if (result &gt;= NO_ERROR) {
                size_t IN = mIn.dataAvail();
                if (IN &lt; sizeof(int32_t)) continue;
                cmd = mIn.readInt32();
                ......
                }  
 
                result = executeCommand(cmd);
            }  
 
            ......
        } while (result != -ECONNREFUSED &amp;&amp; result != -EBADF);  
 
        .......  
 
        mOut.writeInt32(BC_EXIT_LOOPER);
        talkWithDriver(false);
    }

This function ends up in an infinite loop, interacting with the Binder driver through talkWithDriver; in effect it calls talkWithDriver to wait for a Client request and then calls executeCommand to handle it, and inside executeCommand it is ultimately BBinder::transact that really processes the Client's request:

    status_t IPCThreadState::executeCommand(int32_t cmd)
    {
        BBinder* obj;
        RefBase::weakref_type* refs;
        status_t result = NO_ERROR;  
 
        switch (cmd) {
        ......  
 
        case BR_TRANSACTION:
            {
                binder_transaction_data tr;
                result = mIn.read(&amp;tr, sizeof(tr));  
 
                ......  
 
                Parcel reply;  
 
                ......  
 
                if (tr.target.ptr) {
                    sp b((BBinder*)tr.cookie);
                    const status_t error = b-&gt;transact(tr.code, buffer, &amp;reply, tr.flags);
                    if (error &lt; NO_ERROR) reply.setError(error);  
 
                } else {
                    const status_t error = the_context_object-&gt;transact(tr.code, buffer, &amp;reply, tr.flags);
                    if (error &lt; NO_ERROR) reply.setError(error);
                }  
 
                ......
            }
            break;  
 
        .......
        }  
 
        if (result != NO_ERROR) {
            mLastError = result;
        }  
 
        return result;
    }

Next, look at the implementation of BBinder::transact:

    status_t BBinder::transact(
        uint32_t code, const Parcel&amp; data, Parcel* reply, uint32_t flags)
    {
        data.setDataPosition(0);  
 
        status_t err = NO_ERROR;
        switch (code) {
            case PING_TRANSACTION:
                reply-&gt;writeInt32(pingBinder());
                break;
            default:
                err = onTransact(code, data, reply, flags);
                break;
        }  
 
        if (reply != NULL) {
            reply-&gt;setDataPosition(0);
        }  
 
        return err;
    }

In the end onTransact is called to do the handling. In this scenario BnMediaPlayerService inherits from BBinder and overrides onTransact, so what actually gets called is BnMediaPlayerService::onTransact, which is defined in frameworks/base/media/libmedia/IMediaPlayerService.cpp:

    status_t BnMediaPlayerService::onTransact(
        uint32_t code, const Parcel&amp; data, Parcel* reply, uint32_t flags)
    {
        switch(code) {
            case CREATE_URL: {
                ......
                             } break;
            case CREATE_FD: {
                ......
                            } break;
            case DECODE_URL: {
                ......
                             } break;
            case DECODE_FD: {
                ......
                            } break;
            case CREATE_MEDIA_RECORDER: {
                ......
                                        } break;
            case CREATE_METADATA_RETRIEVER: {
                ......
                                            } break;
            case GET_OMX: {
                ......
                          } break;
            default:
                return BBinder::onTransact(code, data, reply, flags);
        }
    }

With that, using MediaPlayerService as the example, we have walked through the complete Server startup process in the Android Binder inter-process communication mechanism. Once the Server is up, it waits for Client requests in an infinite loop. In the next article we will look at how a Client obtains the Server's remote interface through the Service Manager remote interface and then invokes it to use the services the Server provides; stay tuned.
Original article: http://blog.csdn.net/luoshengyang/article/details/6629298