http://blog.csdn.net/andyhuabing/article/details/7489776
GraphicBuffer is the buffer-management class the Surface system uses for shared graphics (GDI) memory. It encapsulates the hardware-specific details and thereby simplifies the logic the application layer has to handle.
SurfaceFlinger is a server, and every application that requests its services corresponds to a Client. Drawing into a Surface is done by the Client, while SurfaceFlinger composites the images drawn by all Clients and outputs the result. So how do the two sides share the graphics buffer's memory? In short, with mmap/munmap. How is this actually structured in the Android system?
Class definition in frameworks\base\include\ui\GraphicBuffer.h:
class GraphicBuffer
    : public EGLNativeBase<
          android_native_buffer_t,
          GraphicBuffer,
          LightRefBase<GraphicBuffer> >,
      public Flattenable
EGLNativeBase is a template class:
template <typename NATIVE_TYPE, typename TYPE, typename REF>
class EGLNativeBase : public NATIVE_TYPE, public REF
The GraphicBuffer class inherits from LightRefBase, which gives it lightweight reference counting.
It also derives from Flattenable, which serializes its data for transmission over Binder.
Now look at the android_native_buffer_t structure in android_native_buffer.h:
typedef struct android_native_buffer_t
{
#ifdef __cplusplus
    android_native_buffer_t() {
        common.magic = ANDROID_NATIVE_BUFFER_MAGIC;
        common.version = sizeof(android_native_buffer_t);
        memset(common.reserved, 0, sizeof(common.reserved));
    }
#endif
    struct android_native_base_t common;
    int width;
    int height;
    int stride;
    int format;
    int usage;
    void* reserved[2];
    buffer_handle_t handle;
    void* reserved_proc[8];
} android_native_buffer_t;
Note the key member here: buffer_handle_t handle. This is the private data structure through which display memory is allocated and managed.
1. native_handle_t as a wrapper around private_handle_t
typedef struct
{
    int version;  /* sizeof(native_handle_t) */
    int numFds;   /* number of file-descriptors at &data[0] */
    int numInts;  /* number of ints at &data[numFds] */
    int data[0];  /* numFds + numInts ints; a flexible (zero-length) array, a GCC idiom */
} native_handle_t;
/* keep the old definition for backward source-compatibility */
typedef native_handle_t native_handle;
typedef const native_handle* buffer_handle_t;
native_handle_t is the abstract structure the upper layers use to pass the handle between processes. For Gralloc, its content is:
data[0] holds the concrete object's content, where:
static const int sNumInts = 8;
static const int sNumFds = 1;
sNumFds = 1 means there is one file descriptor: fd.
sNumInts = 8 means it is followed by 8 ints: magic, flags, size, offset, base, lockState, writeOwner, pid.
Because the upper layers of the system do not need to care about the concrete contents of data in buffer_handle_t, passing a buffer_handle_t (native_handle_t) between processes really means transferring the handle's contents to the Client side. The client reads it over Binder in readNativeHandle (Parcel.cpp) and constructs a new native_handle:
native_handle* Parcel::readNativeHandle() const
{
    int numFds, numInts;
    status_t err;
    err = readInt32(&numFds);
    err = readInt32(&numInts);
    native_handle* h = native_handle_create(numFds, numInts);
    for (int i=0 ; err==NO_ERROR && i<numFds ; i++) {
        h->data[i] = dup(readFileDescriptor());
        if (h->data[i] < 0) err = BAD_VALUE;
    }
    err = read(h->data + numFds, sizeof(int)*numInts);
    ...
}
When constructing the client-side native_handle, each fd is dup()ed (the two sides are different processes); everything else is read and copied directly.
magic, flags, size, offset, base, lockState, writeOwner, pid are thus copied to the client, giving it the information it needs to share the buffer.
When an fd is written, it is tagged with the special Binder type BINDER_TYPE_FD, which tells the Binder driver that this is a file descriptor:
status_t Parcel::writeFileDescriptor(int fd)
{
    flat_binder_object obj;
    obj.type = BINDER_TYPE_FD;
    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    obj.handle = fd;
    obj.cookie = (void*)0;
    return writeObject(obj, true);
}
2. GraphicBuffer memory allocation
There are three ways to construct one:
GraphicBuffer();
// creates w * h buffer
GraphicBuffer(uint32_t w, uint32_t h, PixelFormat format, uint32_t usage);
// create a buffer from an existing handle
GraphicBuffer(uint32_t w, uint32_t h, PixelFormat format, uint32_t usage,
uint32_t stride, native_handle_t* handle, bool keepOwnership);
Ultimately the allocation goes through the function initSize:
status_t GraphicBuffer::initSize(uint32_t w, uint32_t h, PixelFormat format,
        uint32_t reqUsage)
{
    if (format == PIXEL_FORMAT_RGBX_8888)
        format = PIXEL_FORMAT_RGBA_8888;
    GraphicBufferAllocator& allocator = GraphicBufferAllocator::get();
    status_t err = allocator.alloc(w, h, format, reqUsage, &handle, &stride);
    if (err == NO_ERROR) {
        this->width  = w;
        this->height = h;
        this->format = format;
        this->usage  = reqUsage;
        mVStride = 0;
    }
    return err;
}
Memory is allocated through the GraphicBufferAllocator class:
It first loads the libGralloc.hwXX.so dynamic library and allocates a block of memory usable for display, hiding the differences between hardware platforms.
GraphicBufferAllocator::GraphicBufferAllocator()
    : mAllocDev(0)
{
    hw_module_t const* module;
    int err = hw_get_module(GRALLOC_HARDWARE_MODULE_ID, &module);
    LOGE_IF(err, "FATAL: can't find the %s module", GRALLOC_HARDWARE_MODULE_ID);
    if (err == 0) {
        gralloc_open(module, &mAllocDev);
    }
}
There are two allocation paths:
status_t GraphicBufferAllocator::alloc(uint32_t w, uint32_t h, PixelFormat format,
        int usage, buffer_handle_t* handle, int32_t* stride)
{
    if (usage & GRALLOC_USAGE_HW_MASK) {
        err = mAllocDev->alloc(mAllocDev, w, h, format, usage, handle, stride);
    } else {
        err = sw_gralloc_handle_t::alloc(w, h, format, usage, handle, stride);
    }
    ...
}
Concretely, the hardware path (usage includes GRALLOC_USAGE_HW_MASK) allocates through the gralloc HAL device, while the software path falls back to sw_gralloc_handle_t::alloc, which allocates ashmem-backed memory.
3. Passing the shared handle
frameworks\base\libs\surfaceflinger_client\ISurface.cpp
Client-side request handling, in the BpSurface class:
virtual sp<GraphicBuffer> requestBuffer(int bufferIdx, int usage)
{
    Parcel data, reply;
    data.writeInterfaceToken(ISurface::getInterfaceDescriptor());
    data.writeInt32(bufferIdx);
    data.writeInt32(usage);
    remote()->transact(REQUEST_BUFFER, data, &reply);
    sp<GraphicBuffer> buffer = new GraphicBuffer();
    reply.read(*buffer);
    return buffer;
}
Here a fresh object is created with sp<GraphicBuffer> buffer = new GraphicBuffer(); then reply.read(*buffer) unflattens the data into it, and this locally allocated GraphicBuffer is returned. But where was that data written in the first place?
Server-side handling, in the BnSurface class:
status_t BnSurface::onTransact(
        uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        case REQUEST_BUFFER: {
            CHECK_INTERFACE(ISurface, data, reply);
            int bufferIdx = data.readInt32();
            int usage = data.readInt32();
            sp<GraphicBuffer> buffer(requestBuffer(bufferIdx, usage));
            if (buffer == NULL)
                return BAD_VALUE;
            return reply->write(*buffer);
        }
        ...
Server-side call flow of requestBuffer:
requestBuffer @ surfaceflinger\Layer.cpp
sp<GraphicBuffer> Layer::requestBuffer(int index, int usage)
{
    buffer = new GraphicBuffer(w, h, mFormat, effectiveUsage);
    ...
    return buffer;
}
So the client uses its newly constructed GraphicBuffer() to read a native_handle and its contents from the Parcel, while on the server side requestBuffer returns a real GraphicBuffer. How is the data serialized between the two?
flatten @ GraphicBuffer.cpp
status_t GraphicBuffer::flatten(void* buffer, size_t size,
        int fds[], size_t count) const
{
    ...
    if (handle) {
        buf[6] = handle->numFds;
        buf[7] = handle->numInts;
        native_handle_t const* const h = handle;
        memcpy(fds, h->data, h->numFds*sizeof(int));
        memcpy(&buf[8], h->data + h->numFds, h->numInts*sizeof(int));
    }
flatten's job is to write the GraphicBuffer's handle into the Parcel; the receiving side reads it back with unflatten:
status_t GraphicBuffer::unflatten(void const* buffer, size_t size,
        int fds[], size_t count)
{
    native_handle* h = native_handle_create(numFds, numInts);
    memcpy(h->data, fds, numFds*sizeof(int));
    memcpy(h->data + numFds, &buf[8], numInts*sizeof(int));
    handle = h;
}
After these operations the client holds an equivalent GraphicBuffer object. Next we look at how both sides operate on the same block of memory.
4. Managing the shared memory -- the Graphic Mapper
How do two processes share the memory, and how do they get at it? That is the Mapper's job. It needs two pieces of information: the shared buffer's device fd and the offset recorded at allocation time. When a client wants to operate on a shared buffer, it first registers the buffer_handle_t with registerBuffer, then uses lock to obtain the buffer's base address for drawing; that is, lock and unlock map the memory in and out of use.
The key code is in mapper.cpp:
static int gralloc_map(gralloc_module_t const* module,
        buffer_handle_t handle, void** vaddr)
{
    private_handle_t* hnd = (private_handle_t*)handle;
    size_t size = hnd->size;
    void* mappedAddress = mmap(0, size,
            PROT_READ|PROT_WRITE, MAP_SHARED, hnd->fd, 0);
    if (mappedAddress == MAP_FAILED) {
        LOGE("Could not mmap %s", strerror(errno));
        return -errno;
    }
    hnd->base = intptr_t(mappedAddress) + hnd->offset;
    *vaddr = (void*)hnd->base;
    return 0;
}
static int gralloc_unmap(gralloc_module_t const* module,
        buffer_handle_t handle)
{
    private_handle_t* hnd = (private_handle_t*)handle;
    void* base = (void*)hnd->base;
    size_t size = hnd->size;
    munmap(base, size);
    hnd->base = 0;
    return 0;
}
Sharing data between processes is thus accomplished through the buffer_handle_t / private_handle_t handles.
Summary:
In this part of the system, Android uses shared memory to manage the display-related buffers. The design has two layers: the upper layer is GraphicBuffer, the proxy that manages the buffer, together with the related android_native_buffer_t; the lower layer is the concrete allocation and management of the buffers themselves. The upper-layer objects can be passed between processes over Binder, but what crosses the process boundary is not the buffer itself: each process uses mmap to obtain its own mapping of the same physical memory.