Introduction to OpenGL ES (Part 2): The OpenGL ES Programming Model: An Analogy (translated from Beginning Android Games)

The OpenGL ES Programming Model: An Analogy

 

OpenGL ES is, in general, a 3D graphics programming API. As such, it has a fairly nice and easy-to-understand programming model, which we can illustrate with a simple analogy.

OpenGL ES works much like a camera. To take a picture, you first have to go to the scene you want to photograph. The scene is composed of objects, say a table with more objects on it. Each of them has a position and orientation relative to your camera, as well as different materials and textures: glass is translucent and slightly reflective, the table is probably made of wood, a magazine carries the latest photo of some politician, and so on. Some of the objects may even move around the scene (e.g., a fruit fly you can't get rid of). Your camera also has properties of its own, such as focal length, field of view, the resolution and size of the photo it will take, and its own position and orientation within the scene (relative to some origin). Even if both the objects and the camera are moving, the moment you press the shutter you capture a still image of the scene (for now we ignore shutter speed, which could cause a blurry image). At that instant everything stands still and is well defined, and the picture reflects exactly that configuration of positions, orientations, textures, materials, and lighting. Figure 7-1 shows a still scene containing a camera, a light, and three objects with different materials.

Each object has a position and orientation relative to the scene's origin. The camera, indicated by the eye in the figure, likewise has a position relative to the scene's origin. The pyramid in the figure is the so-called view volume, or view frustum, which shows how much of the scene the camera captures and how the camera is oriented. The little white ball with the rays is the light source in the scene; it, too, has a position relative to the origin.

We can map this scene directly to OpenGL ES, but before we do so we need to define a few things:

1. Objects (also called models): These generally consist of two things: their geometry, and their color, texture, and material. The geometry is specified as a set of triangles (OpenGL ES builds its geometry mainly out of triangles). Each triangle has three vertices in 3D space, so each vertex gets x-, y-, and z-coordinates defined relative to the origin of the coordinate system, as in Figure 7-1. Note that the positive z-axis points toward us. Colors are usually specified as RGB values. Textures and materials are a bit more involved; we will get to them later. (A minimal code sketch of a colored triangle follows this list.)

2. Lights: OpenGL ES lets us define several different kinds of lights with various attributes. They are just mathematical objects with a position and/or direction in 3D space, plus attributes such as color. (A light setup sketch also follows the list.)

3. Camera: The camera is also a mathematical object with a position and orientation in 3D space. In addition, it has parameters that govern how much of the scene we see, just like a real camera. Everything together defines a view volume, or view frustum (the view region shown as the pyramid with its top cut off in Figure 7-1). The camera can see anything inside this frustum; anything outside of it will not make it into the final picture.

4. Viewport: This defines the size and resolution of the final image. Think of it as the film you put into an analog camera, or the pixel resolution of the pictures you get from a digital camera. (The camera and viewport setup are sketched together below.)
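As promised in item 1, here is a minimal sketch of how a single triangle with per-vertex colors might be defined and drawn. It is written in Java against the fixed-function GL10 interface from Android's OpenGL ES 1.x bindings; the class name ColoredTriangle, the vertex positions, and the color values are made-up examples, and textures and materials are left out, just as the text defers them.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

import javax.microedition.khronos.opengles.GL10;

public class ColoredTriangle {
    // One triangle: three vertices, each with an x/y/z position and an RGBA color.
    // z = 0 keeps the triangle in the x/y plane; the positive z-axis points toward us.
    private static final float[] VERTICES = {
        //   x,     y,    z,     r,    g,    b,    a
         0.0f,  0.5f, 0.0f,  1.0f, 0.0f, 0.0f, 1.0f,
        -0.5f, -0.5f, 0.0f,  0.0f, 1.0f, 0.0f, 1.0f,
         0.5f, -0.5f, 0.0f,  0.0f, 0.0f, 1.0f, 1.0f,
    };

    private final FloatBuffer buffer;

    public ColoredTriangle() {
        // Vertex data must live in a direct buffer with native byte order.
        ByteBuffer bb = ByteBuffer.allocateDirect(VERTICES.length * 4);
        bb.order(ByteOrder.nativeOrder());
        buffer = bb.asFloatBuffer();
        buffer.put(VERTICES);
        buffer.position(0);
    }

    public void draw(GL10 gl) {
        int stride = 7 * 4; // 7 floats per vertex, 4 bytes per float

        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_COLOR_ARRAY);

        buffer.position(0); // positions start at float 0 of each vertex
        gl.glVertexPointer(3, GL10.GL_FLOAT, stride, buffer);
        buffer.position(3); // colors start at float 3 of each vertex
        gl.glColorPointer(4, GL10.GL_FLOAT, stride, buffer);

        gl.glDrawArrays(GL10.GL_TRIANGLES, 0, 3); // one triangle = 3 vertices
    }
}
```

The positions and colors are interleaved in one buffer and separated by the stride argument, which keeps each vertex's data together in memory.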

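Item 2 is just as small in code. Under the same GL10 API, a light boils down to a position (or direction) plus color attributes. The class name SceneLight and the position and color values below are assumptions made for illustration.

```java
import javax.microedition.khronos.opengles.GL10;

public final class SceneLight {
    // A white point light above and in front of the scene's origin; w = 1 in the
    // position makes it a positional light (w = 0 would make it directional).
    private static final float[] LIGHT_POSITION = { 0f, 3f, 2f, 1f };
    private static final float[] LIGHT_COLOR    = { 1f, 1f, 1f, 1f };

    public static void enable(GL10 gl) {
        gl.glEnable(GL10.GL_LIGHTING);
        gl.glEnable(GL10.GL_LIGHT0);
        gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, LIGHT_POSITION, 0);
        gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_DIFFUSE, LIGHT_COLOR, 0);
    }
}
```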
Given all of this, OpenGL ES can construct a 2D bitmap of the scene from the camera's point of view. Note that everything we have defined lives in 3D space. So how does OpenGL ES map that onto two dimensions?
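Items 3 and 4, together with the paragraph above, boil down to a projection (the view frustum), a camera transform, and a viewport. Below is one plausible way to set these up with GL10 and the android.opengl.GLU helper; the class and method names, field of view, near and far planes, and eye position are assumptions chosen for illustration. On Android, this kind of setup would typically run once the drawing surface has a size, for example from GLSurfaceView.Renderer.onSurfaceChanged().

```java
import javax.microedition.khronos.opengles.GL10;

import android.opengl.GLU;

public final class SceneSetup {
    // Typically called when the surface size is known, e.g. from
    // GLSurfaceView.Renderer.onSurfaceChanged(gl, width, height).
    public static void setUpViewportAndCamera(GL10 gl, int width, int height) {
        // Viewport: the size and resolution of the final image (the "film").
        gl.glViewport(0, 0, width, height);

        // Projection: the view frustum. A 67-degree vertical field of view with
        // near and far planes at 0.1 and 100 units (arbitrary example values).
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glLoadIdentity();
        float aspectRatio = (float) width / height;
        GLU.gluPerspective(gl, 67f, aspectRatio, 0.1f, 100f);

        // Camera: eye at (0, 2, 5), looking at the origin, with y as "up".
        // Objects outside the resulting frustum will not appear in the picture.
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        gl.glLoadIdentity();
        GLU.gluLookAt(gl, 0f, 2f, 5f, 0f, 0f, 0f, 0f, 1f, 0f);
    }
}
```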

 

Up next: Projections.

 

Original text:

The Programming Model: An Analogy
OpenGL ES is in general a 3D graphics programming API. As such it has a pretty nice
and (hopefully) easy-to-understand programming model that we can illustrate with a
simple analogy.


Think of OpenGL ES as working like a camera. To take a picture you have to first go to
the scene you want to photograph. Your scene is composed of objects—say, a table
with more objects on it. They all have a position and orientation relative to your camera,
as well as different materials and textures. Glass is translucent and a little reflective, a
table is probably made out of wood, a magazine has the latest photo of some politician
on it, and so on. Some of the objects might even move around (e.g., a fruit fly you can’t
get rid of). Your camera also has some properties, such as focal length, field of view,
image resolution and size the photo will be taken at, and its own position and orientation
within the world (relative to some origin). Even if both objects and the camera are
moving, when you press the button to take the photo you catch a still image of the
scene (for now we’ll neglect the shutter speed, which might cause a blurry image). For that infinitely small moment everything stands still and is well defined, and the picture
reflects exactly all those configurations of positions, orientations, textures, materials,
and lighting. Figure 7–1 shows an abstract scene with a camera, a light, and three
objects with different materials.

 

Each object has a position and orientation relative to the scene’s origin. The camera,
indicated by the eye, also has a position in relation to the scene’s origin. The pyramid in
Figure 7–1 is the so-called view volume or view frustum, which shows how much of the
scene the camera captures and how the camera is oriented. The little white ball with the
rays is our light source in the scene, which also has a position relative to the origin. 
We can directly map this scene to OpenGL ES, but to do so we need to define a couple
of things:


Objects (aka models): These are generally composed of two things: their
geometry, as well as their color, texture, and material. The geometry is
specified as a set of triangles. Each triangle is composed of three
points in 3D space, so we have x-, y-, and z-coordinates defined
relative to the coordinate system origin, as in Figure 7–1. Note that the
z-axis points toward us. The color is usually specified as an RGB triple, as we are
already used to. Textures and materials are a little bit more involved. We'll get to
those later on.


Lights: OpenGL ES offers us a couple of different light types with
various attributes. They are just mathematical objects with a position
and/or direction in 3D space, plus attributes such as color.


Camera: This is also a mathematical object that has a position and
orientation in 3D space. Additionally it has parameters that govern how
much of the image we see, similar to a real camera. All these things
together define a view volume, or view frustum (indicated as the
pyramid with the top cut off in Figure 7–1). Anything inside this
pyramid can be seen by the camera; anything outside will not make it
into the final picture.


Viewport: This defines the size and resolution of the final image. Think
of it as the type of film you put into your analog camera or the image
resolution you get for pictures taken with your digital camera.

Given all this, OpenGL ES can construct a 2D bitmap of our scene from the point of view
of the camera. Notice that we define everything in 3D space. So how can OpenGL ES
map that to two dimensions?
