An Introduction to Binder Communication:
Linux offers several inter-process communication (IPC) mechanisms: sockets, named pipes, message queues, signals, and shared memory. Java likewise provides IPC through sockets, named pipes, and so on, so Android applications could in principle use the standard Java IPC mechanisms. Reading the Android source code, however, you will find almost none of these mechanisms used for communication between applications on the same device; Binder communication is used instead. Google chose this approach because of Binder's efficiency.
Binder communication is implemented through the Linux binder driver, and a Binder transaction resembles thread migration: an IPC between two processes looks as if one process entered the other, executed some code there, and came back with the result. Binder's user-space layer maintains a pool of available threads for each process; the pool handles incoming IPC requests as well as the process's local messages. Binder communication is synchronous, not asynchronous.
Binder communication in Android follows a Service/Client model: every process that wants to communicate over Binder must create an IBinder interface. One process in the system manages all system services, and Android does not allow users to add unauthorized system services; now that the source code is open, though, we can modify it to add low-level system services of our own. User programs likewise create a server, or Service, for inter-process communication. At the Java application layer, ActivityManagerService manages the creation, connection, and disconnection of all services; every Activity is also started and loaded through this service. ActivityManagerService itself is loaded in the System Service process.
Before the Android virtual machine starts, the system launches the Service Manager process. The Service Manager opens the binder driver and notifies the binder kernel driver that this process will act as the System Service Manager; the process then enters a loop, waiting to handle data from other processes. After creating a system service, a user program calls defaultServiceManager to obtain an interface to the remote ServiceManager, and through this interface calls addService to register the system service with the Service Manager process. A client can then call getService to obtain the IBinder object of the service it wants to connect to. This IBinder is a reference, inside the binder kernel, to the service's BBinder, so no two identical IBinder objects for a service exist in the binder kernel. Every client process must likewise open the binder driver. Once a user program holds this object, it can invoke the service's methods through the binder kernel. Client and Service live in different processes, yet this mechanism achieves something like thread migration: once the program holds the IBinder interface returned for a service, calling the service's methods feels just like calling its own functions.
The figure below illustrates how a client establishes a connection with a Service.
Let us start with the ServiceManager registration process and analyze step by step how the above is implemented.
Source analysis of the ServiceManager registration process:
Service Manager Process (Service_manager.c):
Service_manager manages the services of other processes. It must be running before the Android runtime comes up; otherwise the Android Java VM's ActivityManagerService cannot register.
int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024); // open the /dev/binder driver

    if (binder_become_context_manager(bs)) { // register as service manager in the binder kernel
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);
    return 0;
}
The code first opens the binder driver, then calls binder_become_context_manager, which uses ioctl to tell the binder kernel driver that this is the service-management process, and finally calls binder_loop to wait for data from other processes. BINDER_SERVICE_MANAGER is the handle of the service-management process, defined as:
#define BINDER_SERVICE_MANAGER ((void*) 0)
If the handle a client process uses when requesting a service does not match this value, the Service Manager will reject the client's request. How the client sets this handle is described below.
Registration of the CameraService (Main_mediaservice.c):
int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    LOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();               // audio service
    MediaPlayerService::instantiate();         // media player service
    CameraService::instantiate();              // camera service
    ProcessState::self()->startThreadPool();   // start the process's thread pool
    IPCThreadState::self()->joinThreadPool();  // join this thread to the pool
}
CameraService.cpp
void CameraService::instantiate() {
    defaultServiceManager()->addService(
        String16("media.camera"), new CameraService());
}
This creates a CameraService object and registers it with the ServiceManager process.
How the client obtains the remote IServiceManager IBinder interface:
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }

    return gDefaultServiceManager;
}
The first time any process calls defaultServiceManager, gDefaultServiceManager is NULL, so the process obtains a ProcessState instance via ProcessState::self. ProcessState opens the binder driver.
ProcessState.cpp
sp<ProcessState> ProcessState::self()
{
    if (gProcess != NULL) return gProcess;

    AutoMutex _l(gProcessMutex);
    if (gProcess == NULL) gProcess = new ProcessState;
    return gProcess;
}

ProcessState::ProcessState()
    : mDriverFD(open_driver()) // open the /dev/binder driver
    ...........................
{
}
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
    if (supportsProcesses()) {
        return getStrongProxyForHandle(0);
    } else {
        return getContextObject(String16("default"), caller);
    }
}
Android supports the binder driver, so the program calls getStrongProxyForHandle. Here handle is 0, which matches BINDER_SERVICE_MANAGER in Service_manager.
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one. See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder; // on the first call b is NULL
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
On the first call b is NULL, so a BpBinder object is created for it:
BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    LOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle);
}
void IPCThreadState::incWeakHandle(int32_t handle)
{
    LOG_REMOTEREFS("IPCThreadState::incWeakHandle(%d)\n", handle);
    mOut.writeInt32(BC_INCREFS);
    mOut.writeInt32(handle);
}
getContextObject thus returns a BpBinder object.
interface_cast<IServiceManager>(
    ProcessState::self()->getContextObject(NULL));

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
After expanding the macro, we ultimately get:
sp<IServiceManager> IServiceManager::asInterface(const sp<IBinder>& obj)
{
    sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}
This returns a BpServiceManager object; here obj is the BpBinder object we created earlier.
How the client obtains a service's remote IBinder interface
Taking CameraService as an example (camera.cpp):
const sp<ICameraService>& Camera::getCameraService()
{
    Mutex::Autolock _l(mLock);
    if (mCameraService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.camera"));
            if (binder != 0)
                break;
            LOGW("CameraService not published, waiting...");
            usleep(500000); // 0.5 s
        } while(true);
        if (mDeathNotifier == NULL) {
            mDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(mDeathNotifier);
        mCameraService = interface_cast<ICameraService>(binder);
    }
    LOGE_IF(mCameraService==0, "no CameraService!?");
    return mCameraService;
}
From the analysis above, sm is a BpServiceManager object, so getService runs in BpServiceManager:
virtual sp<IBinder> getService(const String16& name) const
{
    unsigned n;
    for (n = 0; n < 5; n++) {
        sp<IBinder> svc = checkService(name);
        if (svc != NULL) return svc;
        LOGI("Waiting for service %s...\n", String8(name).string());
        sleep(1);
    }
    return NULL;
}
virtual sp<IBinder> checkService( const String16& name) const
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
    return reply.readStrongBinder();
}
Here remote() is the BpBinder object we obtained earlier, so checkService calls BpBinder's transact function:
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
mHandle is 0. BpBinder then calls down into IPCThreadState::transact, which sends the data to the Service Manager process associated with mHandle.
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    ............................................................
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    ..............................
    return err;
}
The data to be sent is assembled by writeTransactionData:
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.handle = handle; // this handle is passed on to service_manager
    tr.code = code;
    tr.flags = binderFlags;
    ..............
}
waitForResponse calls talkWithDriver to read from and write to the binder kernel. When the binder kernel receives the data, the service_manager thread pool wakes up; service_manager looks up the CameraService and calls binder_send_reply to write the returned data back into the binder kernel.
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        ..............................................
}
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ............................................
#if defined(HAVE_ANDROID_OS)
    if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
        err = NO_ERROR;
    else
        err = -errno;
#else
    err = INVALID_OPERATION;
#endif
    ...................................................
}
The ioctl system call with BINDER_WRITE_READ above performs the actual read and write against the binder kernel.
Source: http://blog.sina.com.cn/s/blog_55b1b0d50100fdfp.html