 MONDAY, APRIL 20, 2009

OpenGL ES From the Ground Up, Part 2: A Look at Simple Drawing

Okay, there's still a lot of theory to talk about, but before we spend too much time getting bogged down in complex math or difficult concepts, let's get our feet wet with some very basic drawing in OpenGL ES.

If you haven't already done so, grab a copy of
my Empty OpenGL Xcode project template. We'll use this template as a starting point rather than Apple's provided one. You can install it by copying the unzipped folder to this location:

/Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Project Templates/Application/
This template is designed for a full-screen OpenGL application, and has an OpenGL view and a corresponding view controller. The view is designed to be pretty hands-off and you shouldn't need to touch it most of the time. It handles some of the gnarly stuff we'll be talking about later on, like buffer swapping, but it calls out to its controller class for two things.

First, it calls out to the controller when the view is being set up. The view controller's
setupView: method gets called once to let the controller do any setup work it needs to. This is where you would set up your viewport, add lights, and do other setup relevant to your project. For today, ignore that method; there's a very basic setup already in place that will let you do simple drawing. Which brings us to the other method.

The controller's
drawView: method will get called at regular intervals based on the value of a constant called
kRenderingFrequency. The initial value of this is
15.0, which means that
drawView: will get called fifteen times a second. If you want to change the rendering frequency, you can find this constant defined in the file called
ConstantsAndMacros.h.

For our first trick, let's add the following code to the existing
drawView: method in
GLViewController.m:

- (void)drawView:(GLView*)view;
{
    Vertex3D    vertex1 = Vertex3DMake(0.0, 1.0, -3.0);
    Vertex3D    vertex2 = Vertex3DMake(1.0, 0.0, -3.0);
    Vertex3D    vertex3 = Vertex3DMake(-1.0, 0.0, -3.0);
    Triangle3D  triangle = Triangle3DMake(vertex1, vertex2, vertex3);

    glLoadIdentity();
    glClearColor(0.7, 0.7, 0.7, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColor4f(1.0, 0.0, 0.0, 1.0);
    glVertexPointer(3, GL_FLOAT, 0, &triangle);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}

Before we talk about what's going on, go ahead and run it, and you should get something that looks like this:

It's a simple method; you could probably figure out what's going on if you try, but let's walk through it together. Since our method draws a triangle, we need three vertices, so we create three of those
Vertex3D objects we talked about in the
previous posting in this series:

Vertex3D    vertex1 = Vertex3DMake(0.0, 1.0, -3.0);
Vertex3D    vertex2 = Vertex3DMake(1.0, 0.0, -3.0);
Vertex3D    vertex3 = Vertex3DMake(-1.0, 0.0, -3.0);

You should notice that the z value for all three vertices is the same, and that the value (-3.0) is "behind" the origin. Because we haven't done anything to change it, we're looking into our virtual world as if we were standing at the origin, which is the default starting location. By placing the triangle at a z position of -3, we ensure that it sits in front of us and we can see it on the screen.

After that, we create a
Triangle3D object made up of those three vertices.

Triangle3D  triangle = Triangle3DMake(vertex1, vertex2, vertex3);

Now, that's pretty easy code to understand, right? But, behind the scenes, what it looks like to the computer is an array of 9
GLfloats. We could have accomplished the same thing by doing this:

GLfloat  triangle[] = {0.0, 1.0, -3.0, 1.0, 0.0, -3.0, -1.0, 0.0, -3.0};

Well, not quite exactly the same thing - there's one very minor but important difference. In our first example, we have to pass the address of our
Triangle3D object into OpenGL (e.g.
&triangle), but in the second example using the array, we'd simply pass in the array, because a C array's name already acts as a pointer to its first element. But don't worry too much about that, because this example will be the last time we declare a
Triangle3D object this way. I'll explain why in a moment, but let's finish going through our code. The next thing we do is load the identity matrix with glLoadIdentity(). I'll devote at least one whole posting to transformation matrices, what they are and how they are used, but for now, just think of this call as a "reset button" for OpenGL. It gets rid of any rotations, movement, or other changes to the virtual world and puts us back at the origin, standing upright.

After that, we tell OpenGL that all drawing should be done over a light grey background. OpenGL generally expects colors to be defined using four clamped values. Remember from the previous post, clamped floats are floating point values that run from 0.0 to 1.0. So, we define colors by their red, green, and blue components, along with another component called alpha, which defines how much of what's behind the color shows through. Don't worry about alpha for now - for the time being, we'll just always set alpha to 1.0, which defines an opaque color.

glClearColor(0.7, 0.7, 0.7, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

To define white in OpenGL, we'd pass 1.0 for all four components. To define an opaque black, we'd pass 0.0 for the red, green, and blue components and 1.0 for alpha. The second line of code in that last example is the one that actually tells OpenGL to clear out anything that's been drawn before and erases everything to the clear color.

You're probably wondering what the two arguments to the
glClear() call are. Well, again, we don't want to get too far ahead of ourselves, but those are constants that refer to values stored in a bitfield. OpenGL maintains a number of
buffers, which are just chunks of memory used for different aspects of drawing. By logical or'ing these two particular values together, we tell OpenGL to clear two different buffers - the
color buffer and the
depth buffer. The color buffer stores the color for each pixel of the current frame; this is basically what you see on the screen. The depth buffer (sometimes also called the "z-buffer") holds information about how near to or far from the viewer each potential pixel is, and OpenGL uses that information to determine whether a pixel needs to be drawn at all. Those are the two buffers you'll see most often in OpenGL. There are others, such as the stencil buffer and the accumulation buffer, but we're not going to talk about those, at least for a while. For now, just remember that before you draw a frame, you need to clear these two buffers so that the previous frame's contents don't mess things up for you.

After that, we enable one of OpenGL's features called
vertex arrays. This feature could probably just be turned on once in the
setupView: method, but as a general rule, I like to enable and disable the functionality I use; you never know when another piece of code might be doing things differently. If you turn on what you need and then turn it off again, the chances of problems are greatly reduced. In this example, say we had another class that didn't use vertex arrays to draw, but used vertex buffer objects instead. If either chunk of code left something enabled, or didn't explicitly enable something it needed, one or both of the methods could end up with unexpected results.

glEnableClientState(GL_VERTEX_ARRAY);

The next thing we do is set the color that we're going to draw in. This line of code sets the drawing color to a bright red.

glColor4f(1.0, 0.0, 0.0, 1.0);

Now, all drawing done until another call to
glColor4f() will be done using the color red. There are some exceptions to that, such as code that draws a textured shape, but basically, setting the color like this sets the color for all calls that follow it.

Since we're drawing with vertex arrays, we have to tell OpenGL where the array of vertices is. Remember, an array of vertices is just a C array of
GLfloats, with each set of three values representing one vertex. We created a
Triangle3D object, but in memory, it's exactly the same as nine consecutive
GLfloats, so we can just pass in the address of the
triangle.

glVertexPointer(3, GL_FLOAT, 0, &triangle);

The first parameter to
glVertexPointer() indicates how many
GLfloats represent each vertex. You can pass either 2 or 3 here depending on whether you're doing two-dimensional or three-dimensional drawing. Even though our object exists in a plane, we're drawing it in a three-dimensional virtual world and have defined it using three values per vertex, so we pass in
3 here. Next, we pass in an
enum that tells OpenGL that our vertices are made up of
GLfloats. They don't have to be - OpenGL ES is quite happy to let you use most any datatype in a vertex array, but it's rare to see anything other than
GL_FLOAT. The next parameter... well, don't worry about the next parameter. That's a topic for future discussion. For now, it will always, always, always be
0. In a future posting, I'll show you how to use this parameter to interleave different types of data about the same object into a single data structure, but that's heavier juju than I'm ready to talk about now.

After that, we tell OpenGL to draw triangles between the vertices in the array we previously submitted.

glDrawArrays(GL_TRIANGLES, 0, 3);

As you probably guessed, the first parameter is an
enum that tells OpenGL what to draw; the second is the index of the first vertex to use, and the third is the number of vertices to draw - three, for our single triangle - not the number of GLfloats. Although OpenGL ES doesn't support quads or any other polygon besides triangles, it does still support a variety of drawing modes, including the ability to draw points, lines, line loops, triangle strips, and triangle fans. We'll talk about the various drawing modes later. For now, let's just stick with triangles.

Finally, we disable the one feature that we enabled earlier so we don't mess up other code elsewhere. Again, there is no other code in this example, but usually when you're using OpenGL, drawing is potentially happening from multiple objects.

glDisableClientState(GL_VERTEX_ARRAY);

And that's it. It works. It's not very impressive, and it's not very efficient, but it works. You're drawing in OpenGL. Yay! A certain number of times every second, this method is getting called, and it's drawing. Don't believe me? Add the rotation code shown below (the static variable, the glRotatef() call, and the increment at the end) and run it again:

- (void)drawView:(GLView*)view;
{
    static GLfloat rotation = 0.0;

    Vertex3D    vertex1 = Vertex3DMake(0.0, 1.0, -3.0);
    Vertex3D    vertex2 = Vertex3DMake(1.0, 0.0, -3.0);
    Vertex3D    vertex3 = Vertex3DMake(-1.0, 0.0, -3.0);
    Triangle3D  triangle = Triangle3DMake(vertex1, vertex2, vertex3);

    glLoadIdentity();
    glRotatef(rotation, 0.0, 0.0, 1.0);
    glClearColor(0.7, 0.7, 0.7, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColor4f(1.0, 0.0, 0.0, 1.0);
    glVertexPointer(3, GL_FLOAT, 0, &triangle);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);

    rotation += 0.5;
}

When you run it again, the triangle should slowly revolve around the origin. Don't worry too much about the mechanics of rotation; I just wanted to show you that your drawing code is getting called many times a second.

What if we want to draw a square? Well, OpenGL ES doesn't have squares, so we have to define a square out of triangles. That's easy enough to do - a square can be created out of two right triangles. How do we tweak the code above to draw two triangles, rather than one? Can we create two
Triangle3Ds and submit those? Well, yeah, we could. But that would be inefficient. It would be better if we submitted both triangles as part of the same vertex array. We can do that by declaring an array of
Triangle3D objects, or by allocating a chunk of memory that happens to be the same size as two
Triangle3Ds or eighteen
GLfloats.

Here's one way:

- (void)drawView:(GLView*)view;
{
    Triangle3D  triangle[2];
    triangle[0].v1 = Vertex3DMake(0.0, 1.0, -3.0);
    triangle[0].v2 = Vertex3DMake(1.0, 0.0, -3.0);
    triangle[0].v3 = Vertex3DMake(-1.0, 0.0, -3.0);
    triangle[1].v1 = Vertex3DMake(-1.0, 0.0, -3.0);
    triangle[1].v2 = Vertex3DMake(1.0, 0.0, -3.0);
    triangle[1].v3 = Vertex3DMake(0.0, -1.0, -3.0);

    glLoadIdentity();
    glClearColor(0.7, 0.7, 0.7, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColor4f(1.0, 0.0, 0.0, 1.0);
    glVertexPointer(3, GL_FLOAT, 0, triangle);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    glDisableClientState(GL_VERTEX_ARRAY);
}

Run it now, and you should get something like this:

That code is less than ideal, however, because we're allocating our geometry on the stack, and we're using extra memory, because our
Vertex3DMake() function creates a new
Vertex3D on the stack and then copies its values into the array.

For a simple example like this, that works fine. In more complex projects, though, the geometry defining your 3D objects will be large enough that you won't want it on the stack, and you won't want to allocate memory more than once for a given vertex. So, it's a good idea to get in the habit of allocating your vertices on the heap using our old friend
malloc() (although I sometimes like to use
calloc() instead, because zeroing out all the values makes some errors easier to track down). First, we need a function that sets the values of an existing vertex instead of creating a new one the way
Vertex3DMake() does. This'll work:

static inline void Vertex3DSet(Vertex3D *vertex, CGFloat inX, CGFloat inY, CGFloat inZ)
{
    vertex->x = inX;
    vertex->y = inY;
    vertex->z = inZ;
}

Now, here's the exact same code re-written to allocate the two triangles on the heap using this new function:

- (void)drawView:(GLView*)view;
{
    Triangle3D  *triangles = malloc(sizeof(Triangle3D) * 2);

    Vertex3DSet(&triangles[0].v1, 0.0, 1.0, -3.0);
    Vertex3DSet(&triangles[0].v2, 1.0, 0.0, -3.0);
    Vertex3DSet(&triangles[0].v3, -1.0, 0.0, -3.0);
    Vertex3DSet(&triangles[1].v1, -1.0, 0.0, -3.0);
    Vertex3DSet(&triangles[1].v2, 1.0, 0.0, -3.0);
    Vertex3DSet(&triangles[1].v3, 0.0, -1.0, -3.0);

    glLoadIdentity();
    glClearColor(0.7, 0.7, 0.7, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColor4f(1.0, 0.0, 0.0, 1.0);
    glVertexPointer(3, GL_FLOAT, 0, triangles);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    glDisableClientState(GL_VERTEX_ARRAY);

    if (triangles != NULL)
        free(triangles);
}

Okay, we've covered a lot of ground, but let's go a little further. Remember how I said that OpenGL ES has more than one drawing mode? Well, this square shape, which currently requires six vertices (18
GLfloats) to draw, can actually be drawn with just four vertices (12
GLfloats) using the drawing mode known as
triangle strips (
GL_TRIANGLE_STRIP).

Here's the basic idea behind a triangle strip: the first triangle in the strip is made up of the first three vertices in the array (indices 0, 1, 2). Each triangle after that is made up of the last two vertices of the previous triangle plus the next vertex in the array, and so on through the array. This picture (which numbers the vertices starting at 1) might make more sense - the first triangle is vertices 1, 2, 3, the next is vertices 2, 3, 4, etc.:

So, our square can be made like this:

The code to do it this way looks like this:

- (void)drawView:(GLView*)view;
{
    Vertex3D  *vertices = malloc(sizeof(Vertex3D) * 4);

    Vertex3DSet(&vertices[0], 0.0, 1.0, -3.0);
    Vertex3DSet(&vertices[1], 1.0, 0.0, -3.0);
    Vertex3DSet(&vertices[2], -1.0, 0.0, -3.0);
    Vertex3DSet(&vertices[3], 0.0, -1.0, -3.0);

    glLoadIdentity();
    glClearColor(0.7, 0.7, 0.7, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    glColor4f(1.0, 0.0, 0.0, 1.0);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glDisableClientState(GL_VERTEX_ARRAY);

    if (vertices != NULL)
        free(vertices);
}

Let's go back to the first code sample for a moment. Remember how we drew that first triangle? We used
glColor4f() to set a color, and I said it would set the color for all the calls that follow. Does that mean every object defined in a vertex array has to be drawn in a single, solid color? That would be pretty limiting, wouldn't it?

Well, no. Just as OpenGL ES will allow you to pass vertices all at once in an array, it will also let you pass in a
color array to specify the color to be used for each vertex. If you choose to use a color array, you need to have one color (four
GLfloats) for each vertex. Color arrays have to be turned on using

glEnableClientState(GL_COLOR_ARRAY);

Otherwise, the process is basically the same as passing in vertex arrays. We can use the same trick by defining a
Color3D struct that contains four
GLfloat members. Here's how you could pass in a different color for each vertex of that original triangle we drew:

- (void)drawView:(GLView*)view;
{
    Vertex3D    vertex1 = Vertex3DMake(0.0, 1.0, -3.0);
    Vertex3D    vertex2 = Vertex3DMake(1.0, 0.0, -3.0);
    Vertex3D    vertex3 = Vertex3DMake(-1.0, 0.0, -3.0);
    Triangle3D  triangle = Triangle3DMake(vertex1, vertex2, vertex3);

    Color3D     *colors = malloc(sizeof(Color3D) * 3);
    Color3DSet(&colors[0], 1.0, 0.0, 0.0, 1.0);
    Color3DSet(&colors[1], 0.0, 1.0, 0.0, 1.0);
    Color3DSet(&colors[2], 0.0, 0.0, 1.0, 1.0);

    glLoadIdentity();
    glClearColor(0.7, 0.7, 0.7, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, &triangle);
    glColorPointer(4, GL_FLOAT, 0, colors);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);

    if (colors != NULL)
        free(colors);
}

If you run that, it should create a triangle that looks like this:

Let's look at one more thing today. One problem with the way we've been doing things is that if a vertex is used more than once (except by adjoining triangles in a triangle strip or triangle fan), you have to pass the same vertex into OpenGL multiple times. That's not a big deal here, but you generally want to minimize the amount of data you push into OpenGL, so sending the same twelve-byte vertex (three GLfloats) over and over is less than ideal. In some meshes, a single vertex could conceivably be shared by seven or more triangles, so your vertex array could be many times larger than it needs to be.

When dealing with these more complex geometries, there's a way to avoid sending the same vertex multiple times: something called
elements, which refer to vertices by their index in the vertex array. Here's how it works. You create a vertex array that contains each vertex once and only once. Then you create a second array of integers, using the smallest unsigned integer datatype that can index every vertex you have. In other words, if your vertex array has 256 or fewer vertices, use
GLubytes; if it has more than 256 but no more than 65,536, use
GLushorts. You build your triangles (or other shapes) in this second array by referring to vertices in the first array by index, counting from zero, so the first vertex in the array is referred to as
0, the second as 1, and so on. You submit your vertices exactly the same way you did before, but instead of calling
glDrawArrays(), you call a different function called
glDrawElements() and pass in the integer array.

Let's finish our tutorial with a real, honest-to-goodness 3D shape: an icosahedron. Everybody else does cubes, but we're going to be geeky and do a twenty-sided die (sans numbers). Replace
drawView: with this new version:

- (void)drawView:(GLView*)view;
{
    static GLfloat rot = 0.0;

    // This gives the same memory layout as building the array with the
    // Vertex3D functions, it's just faster to type and can be made
    // const this way.
    static const Vertex3D vertices[]= {
        {0, -0.525731, 0.850651},             // vertices[0]
        {0.850651, 0, 0.525731},              // vertices[1]
        {0.850651, 0, -0.525731},             // vertices[2]
        {-0.850651, 0, -0.525731},            // vertices[3]
        {-0.850651, 0, 0.525731},             // vertices[4]
        {-0.525731, 0.850651, 0},             // vertices[5]
        {0.525731, 0.850651, 0},              // vertices[6]
        {0.525731, -0.850651, 0},             // vertices[7]
        {-0.525731, -0.850651, 0},            // vertices[8]
        {0, -0.525731, -0.850651},            // vertices[9]
        {0, 0.525731, -0.850651},             // vertices[10]
        {0, 0.525731, 0.850651}               // vertices[11]
    };

    static const Color3D colors[] = {
        {1.0, 0.0, 0.0, 1.0},
        {1.0, 0.5, 0.0, 1.0},
        {1.0, 1.0, 0.0, 1.0},
        {0.5, 1.0, 0.0, 1.0},
        {0.0, 1.0, 0.0, 1.0},
        {0.0, 1.0, 0.5, 1.0},
        {0.0, 1.0, 1.0, 1.0},
        {0.0, 0.5, 1.0, 1.0},
        {0.0, 0.0, 1.0, 1.0},
        {0.5, 0.0, 1.0, 1.0},
        {1.0, 0.0, 1.0, 1.0},
        {1.0, 0.0, 0.5, 1.0}
    };

    static const GLubyte icosahedronFaces[] = {
        1, 2, 6,
        1, 7, 2,
        3, 4, 5,
        4, 3, 8,
        6, 5, 11,
        5, 6, 10,
        9, 10, 2,
        10, 9, 3,
        7, 8, 9,
        8, 7, 0,
        11, 0, 1,
        0, 11, 4,
        6, 2, 10,
        1, 6, 11,
        3, 5, 10,
        5, 4, 11,
        2, 7, 9,
        7, 1, 0,
        3, 9, 8,
        4, 8, 0,
    };

    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -3.0f);
    glRotatef(rot, 1.0f, 1.0f, 1.0f);
    glClearColor(0.7, 0.7, 0.7, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glColorPointer(4, GL_FLOAT, 0, colors);

    glDrawElements(GL_TRIANGLES, 60, GL_UNSIGNED_BYTE, icosahedronFaces);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);

    static NSTimeInterval lastDrawTime;
    if (lastDrawTime)
    {
        NSTimeInterval timeSinceLastDraw = [NSDate timeIntervalSinceReferenceDate] - lastDrawTime;
        rot += 50 * timeSinceLastDraw;
    }
    lastDrawTime = [NSDate timeIntervalSinceReferenceDate];
}

Before we talk about what's going on, let's run it and see the pretty shape spin:

It's not completely 3D looking because there are no lights and even if we had lights, we haven't told OpenGL what it needs to know to calculate how light should reflect off of our shape (that's a topic for a future posting - but if you want to, you can read some existing posts on the topic
here and
here).

So, what did we do here? First, we created a static variable to track the rotation of the object.

static GLfloat rot = 0.0;

Then we defined our vertex array. We did it a little differently than before, but the result is the same. Since our geometry never changes, we can make the array
const, rather than allocating and deallocating memory every frame, and provide the values between curly braces:

static const Vertex3D vertices[]= {
    {0, -0.525731, 0.850651},             // vertices[0]
    {0.850651, 0, 0.525731},              // vertices[1]
    {0.850651, 0, -0.525731},             // vertices[2]
    {-0.850651, 0, -0.525731},            // vertices[3]
    {-0.850651, 0, 0.525731},             // vertices[4]
    {-0.525731, 0.850651, 0},             // vertices[5]
    {0.525731, 0.850651, 0},              // vertices[6]
    {0.525731, -0.850651, 0},             // vertices[7]
    {-0.525731, -0.850651, 0},            // vertices[8]
    {0, -0.525731, -0.850651},            // vertices[9]
    {0, 0.525731, -0.850651},             // vertices[10]
    {0, 0.525731, 0.850651}               // vertices[11]
};

Then we create a color array the same way. This creates an array of
Color3D objects, one for each of the vertices in the previous array:

static const Color3D colors[] = {
    {1.0, 0.0, 0.0, 1.0},
    {1.0, 0.5, 0.0, 1.0},
    {1.0, 1.0, 0.0, 1.0},
    {0.5, 1.0, 0.0, 1.0},
    {0.0, 1.0, 0.0, 1.0},
    {0.0, 1.0, 0.5, 1.0},
    {0.0, 1.0, 1.0, 1.0},
    {0.0, 0.5, 1.0, 1.0},
    {0.0, 0.0, 1.0, 1.0},
    {0.5, 0.0, 1.0, 1.0},
    {1.0, 0.0, 1.0, 1.0},
    {1.0, 0.0, 0.5, 1.0}
};

Finally, we create the array that actually defines the shape of the icosahedron. Those twelve vertices above, by themselves, don't describe this shape. OpenGL needs to know how to connect them, so for that, we create an array of integers (in this case
GLubytes) that point to the vertices that make up each triangle.

static const GLubyte icosahedronFaces[] = {
    1, 2, 6,
    1, 7, 2,
    3, 4, 5,
    4, 3, 8,
    6, 5, 11,
    5, 6, 10,
    9, 10, 2,
    10, 9, 3,
    7, 8, 9,
    8, 7, 0,
    11, 0, 1,
    0, 11, 4,
    6, 2, 10,
    1, 6, 11,
    3, 5, 10,
    5, 4, 11,
    2, 7, 9,
    7, 1, 0,
    3, 9, 8,
    4, 8, 0,
};

So, the first three numbers in
icosahedronFaces are 1, 2, 6, which means to draw a triangle between the vertices at index 1 (
0.850651, 0, 0.525731), index 2 (
0.850651, 0, -0.525731), and index 6 (
0.525731, 0.850651, 0).

The next chunk is nothing new, we just load the identity matrix (reset all transformations), move the shape away from the camera and rotate it, set the background color, clear the buffers, enable vertex and color arrays, then feed OpenGL our vertex array. All that is just like in some of the earlier examples.

glLoadIdentity();
glTranslatef(0.0f, 0.0f, -3.0f);
glRotatef(rot, 1.0f, 1.0f, 1.0f);
glClearColor(0.7, 0.7, 0.7, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);

But then, instead of calling
glDrawArrays(), we call
glDrawElements():

glDrawElements(GL_TRIANGLES, 60, GL_UNSIGNED_BYTE, icosahedronFaces);

After that, we just disable everything, and then increment the rotation variable based on how much time has elapsed since the last frame was drawn:

glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
static NSTimeInterval lastDrawTime;
if (lastDrawTime)
{
    NSTimeInterval timeSinceLastDraw = [NSDate timeIntervalSinceReferenceDate] - lastDrawTime;
    rot += 50 * timeSinceLastDraw;
}
lastDrawTime = [NSDate timeIntervalSinceReferenceDate];

So, remember: if you provide the vertices in the right order to be drawn, you use
glDrawArrays(), but if you provide an array of vertices and then a separate array of indices to identify the order they need to be drawn in, then you use
glDrawElements().

Okay, that's enough for today. I covered a lot more ground than I intended to, and I probably got ahead of myself here, but hopefully this was helpful. In the next installment, we go back to conceptual stuff.

Please feel free to play with the drawing code, add more polygons, change colors, etc. There's a lot more to drawing in OpenGL than we covered here, but you've now seen the basic idea behind drawing 3D objects on the iPhone: You create a chunk of memory to hold all the vertices, pass the vertex array into OpenGL, and then tell it to draw those vertices.

POSTED BY JEFF LAMARCHE
AT 8:01 PM

LABELS: GAME PROGRAMMING, IPHONE SDK, OPENGL ES


 FRIDAY, APRIL 17, 2009

OpenGL ES From the Ground Up, Part 1: Basic Concepts

I've done a number of postings on programming OpenGL ES for the iPhone, but most of the posts I've done have been targeted at people who already know at least a little bit about 3D programming.

If you haven't already done so, grab a copy of my Empty OpenGL Xcode project template. We'll use
this template as a starting point rather than Apple's provided one. You can install it by copying the unzipped folder to this location:

/Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Project Templates/Application/

There are a number of good tutorials and books on OpenGL. Unfortunately, there aren't very many on OpenGL ES, and none (at least as I write this) that are specifically designed for learning 3D programming on the iPhone. Because most available material for learning OpenGL starts out teaching using what's called
direct mode, which is part of the functionality of OpenGL that's not in OpenGL ES, it can be really hard for an iPhone dev with no 3D background to get up and running using existing books and tutorials. I've had a number of people request it, so I've decided to start a series of blog posts designed for the absolute 3D beginner. This is the first in that series. If you've read and understood my previous OpenGL postings, you will probably find this series to be a little too basic.

OpenGL Datatypes
The first thing we'll talk about are OpenGL's datatypes. Because OpenGL is a cross-platform API, and the size of datatypes can vary depending on the programming language being used as well as the underlying processor (64-bit vs. 32-bit vs 16-bit), OpenGL declares its own custom datatypes. When passing values into OpenGL, you should always use these OpenGL datatypes to make sure that you are passing values of the right size or precision. Failure to do so could cause unexpected results or slowdowns caused by data conversion at runtime. Every implementation of OpenGL, regardless of platform or language, declares the standard OpenGL datatypes in such a way that they will be the same size on every platform, making porting OpenGL code from one platform to another easier.

Here are the OpenGL ES datatypes:

GLenum: An unsigned integer used for GL-specific enumerations. Most commonly used to tell OpenGL the type of data stored in an array passed by pointer (e.g. GL_FLOAT to indicate that the array is made up of GLfloats).

GLboolean: Used to hold a single boolean value. OpenGL ES also declares its own true and false values (GL_TRUE and GL_FALSE) to avoid platform and language differences. When passing booleans into OpenGL, use these rather than YES or NO (though it won't hurt if you accidentally use YES or TRUE, since they are actually defined the same; it's just good form to use the GL-defined values).

GLbitfield: A four-byte integer used to pack multiple boolean values (up to 32) into a single variable using bitwise operators. We'll discuss this more the first time we use a bitfield variable, but you can read up on the basic idea over at Wikipedia.

GLbyte: A signed, one-byte integer capable of holding a value from -128 to 127.

GLshort: A signed, two-byte integer capable of holding a value from -32,768 to 32,767.

GLint: A signed, four-byte integer capable of holding a value from -2,147,483,648 to 2,147,483,647.

GLsizei: A signed, four-byte integer used to represent the size (in bytes) of data, similar to size_t in C.

GLubyte: An unsigned, one-byte integer capable of holding a value from 0 to 255.

GLushort: An unsigned, two-byte integer capable of holding a value from 0 to 65,535.

GLuint: An unsigned, four-byte integer capable of holding a value from 0 to 4,294,967,295.

GLfloat: A four-byte IEEE 754-1985 floating point variable.

GLclampf: Also a four-byte floating point variable, but when OpenGL uses GLclampf, it is indicating that the value of this particular variable should always be between 0.0 and 1.0.

GLvoid: A void value used to indicate that a function has no return value, or takes no arguments.

GLfixed: Fixed-point numbers are a way of storing real numbers using integers. This was a common optimization in 3D systems, used because most processors are much faster at doing math with integers than with floating-point values. Because the iPhone has vector processors that OpenGL ES uses to do fast floating-point math, we will not be discussing fixed-point arithmetic or the GLfixed datatype.

GLclampx: Another fixed-point type, used to represent real numbers between 0.0 and 1.0 using fixed-point arithmetic. Like GLfixed, we won't be using or discussing this datatype.
OpenGL ES (at least the version used on the iPhone) does not support any 8-byte (64-bit) datatypes such as
long or
double. OpenGL does have these larger datatypes, but given the screen size of most embedded devices, and the types of applications you are likely to be writing for them, the decision was made to exclude them from OpenGL ES under the assumption that there would be little need for them, and that their use could have a detrimental effect on performance.

The Point or Vertex
The atomic unit in 3D graphics is called the
point or
vertex. These represent a single spot in three dimensional space and are used to build more complex objects. Polygons are built out of these points, and objects are built out of multiple polygons. Although regular OpenGL supports many types of polygons, OpenGL ES only supports the use of three-sided polygons (aka triangles).

If you remember back to high-school geometry, you probably remember something called
Cartesian Coordinates. The basic idea is that you select an arbitrary point in space and call it the
origin. You can then designate any point in space by referencing the origin and using three numbers, one for each of the three dimensions, which are represented by three imaginary lines running through the origin. The imaginary line running from left to right is called the x-axis. As you travel to the right along the x-axis, values get higher; as you go to the left, they get lower. Points to the left of the origin have negative x values, and those to the right have positive x values. The other two axes work exactly the same way. Going up along the y-axis, the value of y increases, and going down, it decreases. Points above the origin have a positive y value, and those below it have a negative y value. With z, as objects move away from the viewer, values get lower, and as they move toward the viewer (or continue behind the viewer), values get higher. Points in front of the origin have a positive z value, and those behind the origin have a negative z value. The following illustration might help those words make a little more sense:

Note: Core Graphics, another framework for doing graphics on the iPhone, uses a slightly different coordinate system: its y-axis values decrease going up from the origin and increase going down.

The values that increase or decrease along these axes are on an arbitrary scale: they don't represent any real measurement, like feet, inches, or meters. You can select any scale that makes sense for your own programs. If you want to design a game where each unit is a foot, you can do that; if you want to make each unit a micron, you can do that as well. OpenGL doesn't care what the units represent to the end user; it just thinks of them as units and makes sure they are all equal distances.

Since any object's location in three-dimensional space can be represented by three values, an object's position is generally represented in OpenGL by the use of three
GLfloat variables, usually using an array of three floats, where the first item in the array (index 0) is the x position, the second (index 1) is the y position, and the third (index 2) is the z position. Here's a very simple example of creating a vertex for use in OpenGL ES:

GLfloat vertex[3];
vertex[0] = 10.0;       // x
vertex[1] = 23.75;      // y
vertex[2] = -12.532;    // z

In OpenGL ES, you generally submit all the vertices that make up some or all of the objects in your scene as a
vertex array. A vertex array is simply an array of values (usually
GLfloats) that contains the vertex data for some or all of the objects in the world. We'll see how that process works in the next post in this series, but the thing to remember about vertex arrays is that their size is based on the number of vertices being submitted multiplied by either three (for drawing in three-dimensional space) or two (for drawing in two-dimensional space). So, a vertex array that holds six triangles in three-dimensional space would consist of an array of 54
GLfloats, because each triangle has three vertices, and each vertex has three coordinates and 6 x 3 x 3 = 54.

Dealing with all these
GLfloats can be a pain, however, because you're constantly having to multiply things in your head and try to think of these arrays in terms of the vertices and polygons that they represent. Fortunately, there's an easier way. We can define a data structure to hold a single vertex, like this:

typedef struct {
    GLfloat x;
    GLfloat y;
    GLfloat z;
} Vertex3D;

By doing this, our code becomes much more readable:

Vertex3D vertex;
vertex.x = 10.0;
vertex.y = 23.75;
vertex.z = -12.532;
Now, because our
Vertex3D struct is comprised of three
GLfloats, passing a pointer to a
Vertex3D is exactly the same as passing a pointer to an array of three
GLfloats. There's no difference to the computer; both have the same size and the same number of bytes in the same order as OpenGL expects them. Grouping the data into these structures just makes it easier for us as programmers to visualize and deal with the data. If you downloaded my Xcode template from the beginning of this article, this data structure and the supporting functions I'm going to be discussing next have already been defined in the file named
OpenGLCommon.h. There is also an inline function for creating single vertices:

static inline Vertex3D Vertex3DMake(CGFloat inX, CGFloat inY, CGFloat inZ)
{
    Vertex3D ret;
    ret.x = inX;
    ret.y = inY;
    ret.z = inZ;
    return ret;
}

If you remember back to geometry (or maybe you don't, which is okay), the distance between two points on a plane is calculated using this formula:
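The formula in question is the familiar Euclidean distance; extended to three dimensions for points $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$, it reads:

```latex
d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}
```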

We can implement this formula to calculate the straight-line distance between any two points in three-dimensional space with this simple inline function:

static inline GLfloat Vertex3DCalculateDistanceBetweenVertices (Vertex3D first, Vertex3D second)
{
    GLfloat deltaX = second.x - first.x;
    GLfloat deltaY = second.y - first.y;
    GLfloat deltaZ = second.z - first.z;
    return sqrtf(deltaX*deltaX + deltaY*deltaY + deltaZ*deltaZ);
}

Triangles
Since OpenGL ES only supports triangles, we can also create a data structure to group three vertices into a single triangle object.

typedef struct {
    Vertex3D v1;
    Vertex3D v2;
    Vertex3D v3;
} Triangle3D;

Again, a single Triangle3D is exactly the same as an array of nine GLfloats; it's just easier for us to deal with in our code, because we can build objects out of vertices and triangles rather than out of arrays of GLfloats.

There are a few more things you need to know about triangles, however. In OpenGL, there is a concept known as winding, which just means that the order in which the vertices are drawn matters. Unlike objects in the real world, polygons in OpenGL do not generally have two sides to them. They have one side, which is considered the front face, and a triangle can only be seen if its front face is facing the viewer. While it is possible to configure OpenGL to treat polygons as two-sided, by default, triangles have only one visible side. By knowing which is the front or visible side of the polygon, OpenGL can do half the amount of calculations per polygon that it would have to do if both sides were visible.

Although there are times when a polygon will stand on its own, and you might very well want the back drawn, usually a triangle is part of a larger object, and one side of the polygon will be facing the inside of the object and will never be seen. The side that isn't drawn is called a backface, and OpenGL determines which is the front face and which is the backface by looking at the drawing order of the vertices. The front face is the one whose vertices are drawn in counter-clockwise order (by default; this can be changed). Since OpenGL can easily determine which triangles are visible to the user, it can use a process called backface culling to avoid doing work for polygons that aren't facing the front of the viewport and, therefore, can't be seen. We'll discuss the viewport in the next posting, but you can think of it as the virtual camera, or a virtual window looking into the OpenGL world.

In the illustration above, the cyan triangle on the left is a backface and won't be drawn because the order that the vertices would be drawn in relation to the viewer is clockwise. On the other hand, the triangle on the right is a frontface that will be drawn because the order of the vertices is counter-clockwise in relation to the viewer.

In the next posting in this series, we'll look at setting up the virtual world in OpenGL and do some simple drawing using
Vertex3D and
Triangle3D. In the post after that, we'll look at
transformations which are a way of using linear algebra to move objects around in the virtual world.

POSTED BY JEFF LAMARCHE
AT 10:17 AM

LABELS: COCOA TOUCH, GAME PROGRAMMING, IPHONE SDK, OPENGL ES


 FRIDAY, MAY 1, 2009

OpenGL ES From the Ground Up, Part 4: Let There Be Light!

Continuing on with OpenGL ES for the iPhone, let's talk about light. So far, we haven't done anything with light. Fortunately, OpenGL still lets us see what's going on if we don't configure any lights. It just provides a very flat overall lighting so that we can see stuff. But without defined lights, things have a tendency to look rather flat, as you saw in apps we wrote back in
part 2.

Shade Models
Before we get into how OpenGL ES deals with light, it's important to know that OpenGL ES actually defines two shade models: GL_FLAT and GL_SMOOTH. We're not even going to really talk about GL_FLAT, because you'd only use it if you wanted your apps to look like they came from the 1990s:

GL_FLAT rendering of an icosahedron. State of the art real-time rendering... 15 years ago.

GL_FLAT treats every pixel on a given triangle exactly the same in terms of lighting. Every pixel on a polygon will have the same color, shade, everything. It gives enough visual cues to have some three-dimensionality, and it's computationally a lot cheaper than calculating every pixel differently, but flat-shaded objects tend not to look very real. Now, there might be a certain retro value to using GL_FLAT at times, but to make your 3D objects look as realistic as possible, you're going to want to go with the GL_SMOOTH drawing mode, which uses a smooth but fairly fast shading algorithm called Gouraud shading. GL_SMOOTH is the default value.

The only reason I bring it up at all is simply because some of the stuff discussed toward the end of this article doesn't work exactly the same if you're using GL_FLAT shading. Given how antiquated and unnecessary GL_FLAT is for most purposes, I didn't want to spend time covering it.

Enabling Lights
For purposes of this article, I'm going to assume that you're continuing on with the final project from part 2, the flat-looking spinning icosahedron. If you didn't create the project then, and want to follow along,
you can grab the Xcode project here.

The first thing we need to do is enable lights. By default, manually-specified lights are disabled. Let's turn that feature on now. Go into GLViewController.m and add the glEnable(GL_LIGHTING) call shown below to the setupView: method:

-(void)setupView:(GLView*)view
{
    const GLfloat zNear = 0.01, zFar = 1000.0, fieldOfView = 45.0;
    GLfloat size;
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
    CGRect rect = view.bounds;
    glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size /
               (rect.size.width / rect.size.height), zNear, zFar);
    glViewport(0, 0, rect.size.width, rect.size.height);
    glMatrixMode(GL_MODELVIEW);

    glEnable(GL_LIGHTING);    // the new line to add

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
}
Typically, lights are something you will enable once during setup and not touch again. It's not a feature you should be turning on and off before and after drawing. There might be some odd cases where you'd turn them on or off during your program's execution, but most of the time you'll just enable them when your app launches. That single line of code is all it takes to enable lighting in OpenGL ES. Any idea what happens if we run the program now?

Egads! We've enabled lighting, but haven't created any lights. Everything we draw, except the grey color we used to clear the buffers, will be rendered as absolute pitch black. That's not much of an improvement, is it? Let's add a light to the scene.

The way lights are enabled is a little odd. OpenGL ES allows you to create up to eight light sources. There is a constant that corresponds to each of these possible lights, running from GL_LIGHT0 to GL_LIGHT7. You can use any combination of these eight lights, though it is customary to start with GL_LIGHT0 for your first light source, then go to GL_LIGHT1 for the next, and so on as you need more lights. Here's how you would "turn on" the first light, GL_LIGHT0:

glEnable(GL_LIGHT0);
Once you've enabled a light, you have to specify some attributes of that light. For starters, there are three different component colors that are used to define a light. Let's look at those first.

The Three Components of Light
In OpenGL ES, lights are made up of three component colors called the ambient component, the diffuse component, and the specular component. It may seem odd that we're using a color to specify components of a light, but it actually works quite well, because it allows you to specify both the color and relative intensity of each component of the light at the same time. A bright white light would be defined as white ({1.0, 1.0, 1.0, 1.0}), whereas a dim white light would be specified as a shade of grey ({0.3, 0.3, 0.3, 1.0}). You can also give your light a color cast by varying the percentages of the red, green, and blue components.

The following illustration shows what effect each component has on the resulting image.

Source image from Wikimedia Commons, modified as allowed by license.

The specular component defines light that is very direct and focused, and which tends to reflect back to the viewer such that it forms a "hot spot" or shine on the object. The size of the spot can vary based on a number of factors, but if you see an area of more pronounced light, such as the two white dots on the yellow sphere above, that's generally going to be coming from the specular component of one or more lights.

The diffuse component defines flat, fairly even directional light that shines on the side of objects that faces the light source.

Ambient light is light with no apparent source. It is light that has bounced off of so many things that its original source can't be identified. The ambient component defines light that shines evenly off of all sides of all objects in the scene.

Ambient Component
The more ambient component your light has, the less dramatic the lighting effects you achieve will be. Ambient components of all lights are added together, meaning that the scene's overall ambient light will be the combined ambient light from all of the enabled light sources. If you're using more than one light, you may wish to specify your ambient component on only one of them and leave the ambient component of all the others at black ({0.0, 0.0, 0.0, 1.0}), so that it's easier to adjust the amount of ambient light in the scene, because you'll only have to adjust it in one place.

Here's how you would set the ambient component of a light to a very dim white:

const GLfloat light0Ambient[] = {0.05, 0.05, 0.05, 1.0};
glLightfv(GL_LIGHT0, GL_AMBIENT, light0Ambient);
Using a low ambient value like this will make the lighting in your scene more dramatic, but it also means that the side of an object that is not facing a light, or objects that have other objects between them and the light will not be seen very well in your scene.

Diffuse Component
The second component of light that you can specify in OpenGL ES is the diffuse component. In the real world, diffuse light would be light, for example, that has passed through a light fabric, or bounced off of a white wall. The rays of diffuse light have scattered, which gives a softer appearance than direct light, with less chance of hot spots or shininess. If you've ever watched a professional photographer using studio lights, you've probably seen them using softboxes or reflecting their lights into umbrellas. Both passing through a light material like white cloth and reflecting off of a light-colored material will diffuse the light to give a more pleasing photograph. In OpenGL ES, the diffuse component is similar, in that it is light that reflects very evenly off of the object. It's not the same as ambient, however, because it's directional light, so only the sides of an object that face the light source will reflect diffuse light, whereas all polygons in the scene are hit by ambient light.

Here's an example of specifying the diffuse component for the first light in a scene:

const GLfloat light0Diffuse[] = {0.5, 0.5, 0.5, 1.0};
glLightfv(GL_LIGHT0, GL_DIFFUSE, light0Diffuse);

Specular Component
Finally, we get to the specular component. This is the component of light that is very direct and tends to bounce back to the viewer in the form of hot spots and glare. If you wanted to give the appearance of a spotlight, you would specify a very high specular component, and very low diffuse and ambient components (you'd also define some other parameters, as you'll see in a moment).

Note: the specular value of the light is only one factor in determining the size of the specular highlight, as you'll see in the next installment. The first version of this blog posting incorrectly listed shininess as a light attribute; it's actually a material attribute only, sorry about that. Shininess is something we'll discuss next time when we start creating materials.

Here's an example of setting the specular component:

const GLfloat light0Specular[] = {0.7, 0.7, 0.7, 1.0};
glLightfv(GL_LIGHT0, GL_SPECULAR, light0Specular);

Position
There's another important attribute of a light that needs to be set: the light source's position in 3D space. This doesn't affect the ambient component, but the other two components are directional in nature and can only be used in light calculations if OpenGL knows where the light is in relation to the objects in the scene. We can specify the light's position like this:

const GLfloat light0Position[] = {10.0, 10.0, 10.0, 0.0};
glLightfv(GL_LIGHT0, GL_POSITION, light0Position);
That position would place the first light behind the viewer a little to the right and a little ways up (or a lot to the right and up if the OpenGL units represent large values in your virtual world).

Those are the attributes that you'll set for nearly all lights. If you don't set a value for one of the components, that component will default to black ({0.0, 0.0, 0.0, 1.0}). If you don't define a position, the light will fall at the origin, which isn't usually what you want.

You might be wondering what the alpha component does for lights. The answer, for ambient and specular light, is not a darn thing. It is used, however, in the calculations for diffuse light to determine how the light reflects back. We'll discuss how that works after we start talking about materials, because both material and light values go into that equation. We'll discuss materials next time, so for now, don't worry too much about alpha; just set it to 1.0. Changing it won't have any effect on the program we're writing in this installment, but it could, at least for the diffuse component, in future installments.

There are some other light components you might optionally consider setting.

Creating a Spotlight
If you want to create a directional spotlight, you need to set two additional parameters. A spotlight points in a particular direction and only shines light within a certain angle of view, in essence creating a light that shines in the shape of a frustum (remember what that is?) rather than shining out in all directions like a bare light bulb would. Setting GL_SPOT_DIRECTION allows you to identify the direction the light is pointing in. Setting GL_SPOT_CUTOFF defines the spread of the light, similar to the angle used to calculate the field of vision in the last installment. A narrow angle creates a very tight spotlight; a broader angle creates something more like a floodlight.

Specifying Light Direction
The way GL_SPOT_DIRECTION works is that you specify x, y, and z components to define the direction the light is pointing in. The light isn't aimed at the point in space you define, however. The three coordinates you provide are a vector, not a vertex. This is a subtle but important distinction. A vector is represented by a data structure that's identical to a vertex: it takes three GLfloats, one for each of the three Cartesian axes, to define a vector. However, the numbers are used to represent a direction of travel rather than a point in space.

Now, of course, everybody knows that it takes two points to define a line segment, so how can a single point in space possibly identify a direction? It's because there's a second implied point that's used as the starting point: the origin. If you draw a line from the origin to the point defined in the vector, that's the direction represented by the vector. Vectors can also be used to represent velocity or distance, with a point further away from the origin representing a faster velocity or a greater distance. In most uses in OpenGL, the distance from the origin is not actually used. In fact, in most cases where we're going to use a vector, we're going to normalize that vector so that it has a length of 1.0. We'll talk about vectors more as we move on, but for now, if you want to define a directional light, you have to create a vector to define the direction it is shining in. You might do that like this, which would create a light that is shining straight down the Z axis:

const GLfloat light0Direction[] = {0.0, 0.0, -1.0};
glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, light0Direction);
Now, what if you want the light pointing at a specific object? That's actually pretty easy to figure out. Take the position of the light and the position of the object and feed those to the function Vector3DMakeWithStartAndEndPoints() that can be found in OpenGLCommon.h, provided with my OpenGL project template, and it will return a normalized vector pointing from the light to the specified point. Then you can feed that in as the GL_SPOT_DIRECTION value.

Specifying Light Angle
Specifying the direction of the light won't have any noticeable effect unless you also limit the angle that the light shines in. The angle you specify for GL_SPOT_CUTOFF defines how many degrees from the centerline the light spreads, on BOTH sides of the centerline, so if you specify 45° you are actually creating a spotlight with a total field of 90°. OpenGL limits the cutoff angle to the range 0° to 90°, plus the special value 180°, which means no cutoff at all. Here's an illustration of the concept:

Here's how you would restrict the light to a 90° field of vision (using a 45° cutoff angle):

glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0);

There are three more light attributes that can be set. They work together, and they are WAY beyond the scope of this blog posting; however, I may do a future post on light attenuation, which is how light "falls off" as it moves further away from its source. You can create some very neat effects by playing with the attenuation values, but that will have to wait for a future post.

Putting it All Together
Let's take all that we've learned and use it to set up a light in the setupView: method. Replace your setupView: method with this new one:

-(void)setupView:(GLView*)view
{
const GLfloat zNear = 0.01, zFar = 1000.0, fieldOfView = 45.0;
GLfloat size;
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
CGRect rect = view.bounds;
glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size /
(rect.size.width / rect.size.height), zNear, zFar);
glViewport(0, 0, rect.size.width, rect.size.height);
glMatrixMode(GL_MODELVIEW);

// Enable lighting
glEnable(GL_LIGHTING);

// Turn the first light on
glEnable(GL_LIGHT0);

// Define the ambient component of the first light
const GLfloat light0Ambient[] = {0.1, 0.1, 0.1, 1.0};
glLightfv(GL_LIGHT0, GL_AMBIENT, light0Ambient);

// Define the diffuse component of the first light
const GLfloat light0Diffuse[] = {0.7, 0.7, 0.7, 1.0};
glLightfv(GL_LIGHT0, GL_DIFFUSE, light0Diffuse);

// Define the specular component of the first light
const GLfloat light0Specular[] = {0.7, 0.7, 0.7, 1.0};
glLightfv(GL_LIGHT0, GL_SPECULAR, light0Specular);

// Define the position of the first light
const GLfloat light0Position[] = {0.0, 10.0, 10.0, 0.0};
glLightfv(GL_LIGHT0, GL_POSITION, light0Position);

// Define a direction vector for the light, this one points right down the Z axis
const GLfloat light0Direction[] = {0.0, 0.0, -1.0};
glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, light0Direction);

// Define a cutoff angle. This defines a 90° field of vision, since the cutoff
// is number of degrees to each side of an imaginary line drawn from the light's
// position along the vector supplied in GL_SPOT_DIRECTION above
glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0);

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
}
Pretty straightforward, right? We just put all the pieces from above together, so we should be good to go, right? Well, try running it and see.

Aw, now, come on! That's not fair! We've got lights, so why can't we see anything now? The shape pulses between black and grey, but it doesn't look any more like a 3D shape than it did before.

Don't Worry, It's Normal
And by normal, I don't mean typical, or average. I mean normal in the math sense, meaning "perpendicular to". That's what we're missing: normals. A surface normal (or polygon normal) is nothing more than a vector (or line) that is perpendicular to the surface of a given polygon. Here's a nice illustration of the concept (this one is from Wikipedia, not from me):

OpenGL doesn't need to know the normals to render a shape, but it does need them when you start using directional lighting. OpenGL needs the surface normals to know how the light interacts with the individual polygons.

OpenGL requires us to provide a normal for each vertex we use. Now, calculating surface normals for triangles is pretty easy: it's simply the cross product of two sides of the triangle. In code, it might look like this:

static inline Vector3D Triangle3DCalculateSurfaceNormal(Triangle3D triangle)
{
    Vector3D u = Vector3DMakeWithStartAndEndPoints(triangle.v2, triangle.v1);
    Vector3D v = Vector3DMakeWithStartAndEndPoints(triangle.v3, triangle.v1);

    Vector3D ret;
    ret.x = (u.y * v.z) - (u.z * v.y);
    ret.y = (u.z * v.x) - (u.x * v.z);
    ret.z = (u.x * v.y) - (u.y * v.x);
    return ret;
}

As you saw before, Vector3DMakeWithStartAndEndPoints() takes two vertices and calculates a normalized vector from them. So, if it's so easy to calculate surface normals, why doesn't OpenGL ES do it for us? Well, two reasons. First and foremost, it's costly. There's quite a bit of floating-point multiplication and division going on, plus a costly call to sqrtf() for every single polygon.

Secondly, because we're using GL_SMOOTH rendering, OpenGL ES needs vertex normals, not the surface normals that the function above calculates. Calculating vertex normals is even more costly, because it requires you to calculate a vector that is the average of the surface normals of all the polygons in which a vertex is used.

Let's look at an example (this is recycled from an earlier post; feel free to skip ahead if you're already comfortable with what vertex normals are).

That's not a cube, by the way. For simplicity, we're looking at a flat, two-dimensional mesh of six triangles. There are a total of seven vertices used to make the shape. The vertex marked A is shared by all six triangles, so the vertex normal for that vertex is the average of the surface normals of all six triangles that it's part of. Averaging is done per vector element, so the x values are averaged, the y values are averaged, and the z values are averaged, and the resulting values are re-combined to make the average vector.

So, how do we get normals for our icosahedron's vertices? Well, this is a simple enough shape that we could actually get away with calculating the vertex normals at run time without a noticeable delay. Usually, though, you won't be working with so few vertices; you'll be dealing with far more complex objects, and more of them. As a result, you want to avoid calculating normals at runtime except when there's no alternative. In this case, I decided to write a little command-line program to loop through the vertices and triangle indices and calculate the vertex normal for each of the vertices in the icosahedron. This program dumps the results to the console as a C array, and I just copied that into my OpenGL program.

Note: Most 3D programs will calculate normals for you, though be careful about using those - most 3D file formats store surface normals, not vertex normals, so you'll still usually be responsible at least for averaging the surface normals to create vertex normals. We'll look at creating and loading objects in later installments, or you can go read some of my
earlier posts about writing a loader for the Wavefront OBJ file format.
Here's the command line program I wrote to calculate the vertex normals for our icosahedron:

#import <Foundation/Foundation.h>
#import "OpenGLCommon.h"

int main (int argc, const char * argv[]) {
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

NSMutableString *result = [NSMutableString string];

static const Vertex3D vertices[]= {
{0, -0.525731, 0.850651},             // vertices[0]
{0.850651, 0, 0.525731},              // vertices[1]
{0.850651, 0, -0.525731},             // vertices[2]
{-0.850651, 0, -0.525731},            // vertices[3]
{-0.850651, 0, 0.525731},             // vertices[4]
{-0.525731, 0.850651, 0},             // vertices[5]
{0.525731, 0.850651, 0},              // vertices[6]
{0.525731, -0.850651, 0},             // vertices[7]
{-0.525731, -0.850651, 0},            // vertices[8]
{0, -0.525731, -0.850651},            // vertices[9]
{0, 0.525731, -0.850651},             // vertices[10]
{0, 0.525731, 0.850651}               // vertices[11]
};

static const GLubyte icosahedronFaces[] = {
1, 2, 6,
1, 7, 2,
3, 4, 5,
4, 3, 8,
6, 5, 11,
5, 6, 10,
9, 10, 2,
10, 9, 3,
7, 8, 9,
8, 7, 0,
11, 0, 1,
0, 11, 4,
6, 2, 10,
1, 6, 11,
3, 5, 10,
5, 4, 11,
2, 7, 9,
7, 1, 0,
3, 9, 8,
4, 8, 0,
};

Vector3D *surfaceNormals = calloc(20, sizeof(Vector3D));

// Calculate the surface normal for each triangle

for (int i = 0; i < 20; i++)
{
Vertex3D vertex1 = vertices[icosahedronFaces[(i*3)]];
Vertex3D vertex2 = vertices[icosahedronFaces[(i*3)+1]];
Vertex3D vertex3 = vertices[icosahedronFaces[(i*3)+2]];
Triangle3D triangle = Triangle3DMake(vertex1, vertex2, vertex3);
Vector3D surfaceNormal = Triangle3DCalculateSurfaceNormal(triangle);
Vector3DNormalize(&surfaceNormal);
surfaceNormals[i] = surfaceNormal;
}

Vertex3D *normals = calloc(12, sizeof(Vertex3D));
[result appendString:@"static const Vector3D normals[] = {\n"];
for (int i = 0; i < 12; i++)
{
int faceCount = 0;
for (int j = 0; j < 20; j++)
{
BOOL contains = NO;
for (int k = 0; k < 3; k++)
{ if (icosahedronFaces[(j * 3) + k] == i) contains = YES; }
if (contains)
{ faceCount++; normals[i] = Vector3DAdd(normals[i], surfaceNormals[j]); }
}

normals[i].x /= (GLfloat)faceCount;
normals[i].y /= (GLfloat)faceCount;
normals[i].z /= (GLfloat)faceCount;
[result appendFormat:@"\t{%f, %f, %f},\n", normals[i].x, normals[i].y, normals[i].z];
}
[result appendString:@"};\n"];
NSLog(@"%@", result);
[pool drain];
return 0;
}

A little crude, perhaps, but it gets the job done and allows us to pre-calculate the vertex normals so that we don't have to do that calculation more than once or at runtime. When the program is run, the output is this:

static const Vector3D normals[] = {
{0.000000, -0.417775, 0.675974},
{0.675973, 0.000000, 0.417775},
{0.675973, -0.000000, -0.417775},
{-0.675973, 0.000000, -0.417775},
{-0.675973, -0.000000, 0.417775},
{-0.417775, 0.675974, 0.000000},
{0.417775, 0.675973, -0.000000},
{0.417775, -0.675974, 0.000000},
{-0.417775, -0.675974, 0.000000},
{0.000000, -0.417775, -0.675973},
{0.000000, 0.417775, -0.675974},
{0.000000, 0.417775, 0.675973},
};
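Triangle3DCalculateSurfaceNormal isn't shown in the listing above, but the conventional way to get a face normal is the cross product of two edge vectors of the triangle. Here is a quick sketch of that math (in Python purely for illustration; the helper names here are mine, not part of the article's library):

```python
import math

def subtract(a, b):
    """Component-wise difference of two 3D vectors."""
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    """Cross product; the result is perpendicular to both inputs."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / length, v[1] / length, v[2] / length)

# The first face of the icosahedron uses vertices 1, 2, and 6
v1 = (0.850651, 0.0, 0.525731)
v2 = (0.850651, 0.0, -0.525731)
v6 = (0.525731, 0.850651, 0.0)

# Surface normal: cross the two edges leaving vertex 1, then normalize
normal = normalize(cross(subtract(v2, v1), subtract(v6, v1)))
```

For this face the resulting normal points away from the origin, which is what we want for lighting a shape centered at the origin.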

Specifying Vertex Normals

You saw above the array of normals that we will provide to OpenGL. Before we can do that, though, we have to enable normal arrays. This is done with the following call:

glEnableClientState(GL_NORMAL_ARRAY);
We feed the normal array to OpenGL using this call:

glNormalPointer(GL_FLOAT, 0, normals);
And that's all there is to it. Let's add these elements to our
drawView: method, which gives us this:

- (void)drawView:(GLView*)view;
{

static GLfloat rot = 0.0;

// This is the same result as using Vertex3D, just faster to type and
// can be made const this way
static const Vertex3D vertices[]= {
{0, -0.525731, 0.850651},             // vertices[0]
{0.850651, 0, 0.525731},              // vertices[1]
{0.850651, 0, -0.525731},             // vertices[2]
{-0.850651, 0, -0.525731},            // vertices[3]
{-0.850651, 0, 0.525731},             // vertices[4]
{-0.525731, 0.850651, 0},             // vertices[5]
{0.525731, 0.850651, 0},              // vertices[6]
{0.525731, -0.850651, 0},             // vertices[7]
{-0.525731, -0.850651, 0},            // vertices[8]
{0, -0.525731, -0.850651},            // vertices[9]
{0, 0.525731, -0.850651},             // vertices[10]
{0, 0.525731, 0.850651}               // vertices[11]
};

static const Color3D colors[] = {
{1.0, 0.0, 0.0, 1.0},
{1.0, 0.5, 0.0, 1.0},
{1.0, 1.0, 0.0, 1.0},
{0.5, 1.0, 0.0, 1.0},
{0.0, 1.0, 0.0, 1.0},
{0.0, 1.0, 0.5, 1.0},
{0.0, 1.0, 1.0, 1.0},
{0.0, 0.5, 1.0, 1.0},
{0.0, 0.0, 1.0, 1.0},
{0.5, 0.0, 1.0, 1.0},
{1.0, 0.0, 1.0, 1.0},
{1.0, 0.0, 0.5, 1.0}
};

static const GLubyte icosahedronFaces[] = {
1, 2, 6,
1, 7, 2,
3, 4, 5,
4, 3, 8,
6, 5, 11,
5, 6, 10,
9, 10, 2,
10, 9, 3,
7, 8, 9,
8, 7, 0,
11, 0, 1,
0, 11, 4,
6, 2, 10,
1, 6, 11,
3, 5, 10,
5, 4, 11,
2, 7, 9,
7, 1, 0,
3, 9, 8,
4, 8, 0,
};

static const Vector3D normals[] = {
{0.000000, -0.417775, 0.675974},
{0.675973, 0.000000, 0.417775},
{0.675973, -0.000000, -0.417775},
{-0.675973, 0.000000, -0.417775},
{-0.675973, -0.000000, 0.417775},
{-0.417775, 0.675974, 0.000000},
{0.417775, 0.675973, -0.000000},
{0.417775, -0.675974, 0.000000},
{-0.417775, -0.675974, 0.000000},
{0.000000, -0.417775, -0.675973},
{0.000000, 0.417775, -0.675974},
{0.000000, 0.417775, 0.675973},
};

glLoadIdentity(); // reset the modelview matrix so the transforms don't accumulate each frame
glTranslatef(0.0f,0.0f,-3.0f);
glRotatef(rot,1.0f,1.0f,1.0f);
glClearColor(0.7, 0.7, 0.7, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glNormalPointer(GL_FLOAT, 0, normals);
glDrawElements(GL_TRIANGLES, 60, GL_UNSIGNED_BYTE, icosahedronFaces);

glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
static NSTimeInterval lastDrawTime;
if (lastDrawTime)
{
NSTimeInterval timeSinceLastDraw = [NSDate timeIntervalSinceReferenceDate] - lastDrawTime;
rot+=50 * timeSinceLastDraw;
}
lastDrawTime = [NSDate timeIntervalSinceReferenceDate];

}

Voilà, Almost

Now, if we run it, we do indeed get a rotating shape that looks like a real, gosh-honest-to-goodness three-dimensional object.

But what happened to our colors?

That, my friends, is our segue into the next installment of this series: OpenGL ES Materials. When you're using lighting and smooth shading, OpenGL expects you to provide
materials (or textures, but we're not ready to segue there yet) for the polygons. Materials are more complex than the simple colors that we provided in the color array. Materials, like lights, are made of multiple components, and can be used to create a variety of different surface treatments. The actual appearance of objects is determined by the attributes of the scene's lights and the polygons' materials.

But, we don't want to leave off with a grey icosahedron, so in the meantime, I'll introduce you to another OpenGL ES configuration parameter that can be enabled:
GL_COLOR_MATERIAL. By enabling this option like so:

glEnable(GL_COLOR_MATERIAL);
OpenGL will use the provided color array to create simple materials for our polygons, giving a result more like this:

If you don't feel like typing everything in (or copying and pasting), you can check out the final project
right here.

POSTED BY JEFF LAMARCHE
AT 8:28 AM

LABELS: GAME PROGRAMMING, OPENGL ES


THE CLOUD FROM COCOA TOUCH
by Matt Long


Everything is moving toward the cloud and unless you're building calculators, unit converters, or miniature golf score keepers, your iPhone app needs to know how to get data from it. In this blog post I intend to demonstrate how to set up a simple server application and how to retrieve data from it and post data to it using Cocoa Touch. I have chosen to use PHP on the server side because of its simplicity and ubiquity, and because I know it, somewhat. You should, however, be able to implement something similar using your server side language of choice.
In many cases when you go to access remote data, you do so through a web service API. While services based on such technologies as SOAP or XML-RPC are standards that provide reasonable methods for retrieving and updating data, REST seems to be the methodology gaining the most ground lately. For our purpose in this post I won't get into great detail of how to implement a REST-based web service as, again, REST is not a specific implementation but rather a methodology. (Read up on it elsewhere if you don't understand what this means.) However, I will talk about it briefly so that you can get on the right path for doing your own REST implementation.
What Is The Cloud
It seems that the term 'cloud' in this context has been around for a pretty long time; however, you can just think of it as, well, the Internet. If you access your data "in the cloud", you are accessing your data that is hosted on some server somewhere in the world. The idea behind it being that you can always access it no matter where you are. If your data is not "in the cloud", then it is hosted locally and only accessible from that location.
Amazon.com, among other technology leaders, has helped to make this metaphor easier to understand. If you are familiar with Amazon S3 (Simple Storage Service), then you know what it means to store your data "in the cloud". Amazon has made it very cheap and very easy to access data that you store on their servers from anywhere in the world. They provide a RESTful web service through which you can securely add, remove, and update data that you have stored there. If you have image or video assets, for example, that get accessed a lot through your website, you may find that hosting those files on S3 is cheaper than paying your own web host to provide the bandwidth and storage for them. This is a perfect example of what cloud computing is and provides.
On a more generic level, cloud computing can mean setting up your own service and providing access to resources or data in the same fashion. The difference in this case, however, is that you are managing it all on your own. Let's take a look at how you might do this.
Sending Arbitrary Data
The simplest PHP script can really teach you a great deal about what is going on between client and server. Remember that since we are depending on the Apache web server to serve up our PHP responses, a huge portion of the work is already done. We don't have to worry about low-level networking APIs and sockets. Instead we can create a simple connection using a URL and the NSURLConnection class and we're most of the way there. Consider the following PHP code sample.

<?php
print $HTTP_RAW_POST_DATA;
?>

With one line of code (not including the tags), we have just implemented an echo server. Whatever you send to this script on the server will be sent right back to you. You should understand though that it is the body of the request that gets stored in this $HTTP_RAW_POST_DATA variable. So what does that mean?
This really means that you can send anything you want as the body of the request and this script will store it in $HTTP_RAW_POST_DATA and then print it back as the response. You just specify in your request the type of data you're sending. Say you want to send raw XML, for example: you specify "text/xml" as the content-type of the request that you will hand off to your NSURLConnection, as in the following code snippet.

NSMutableURLRequest *request = [[NSMutableURLRequest alloc] initWithURL:[NSURL URLWithString:@"http://www.cimgf.com/testpost.php"]];
[request setHTTPMethod:@"POST"];
[request setValue:@"text/xml" forHTTPHeaderField:@"Content-type"];
NSString *xmlString = @"<data><item>Item 1</item><item>Item 2</item></data>";
[request setValue:[NSString stringWithFormat:@"%d", [xmlString length]] forHTTPHeaderField:@"Content-length"];
[request setHTTPBody:[xmlString dataUsingEncoding:NSUTF8StringEncoding]];
[[NSURLConnection alloc] initWithRequest:request delegate:self];

When this request finishes, the same XML in the xmlString variable will be sent right back to our application and will be available in our delegate method, -(void)connectionDidFinishLoading:(NSURLConnection *)connection;, assuming we've been appending the data to an NSMutableData object in our delegate method, -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)d;. This example is just echoing back whatever we send, but if we wanted to get a little fancier, we could load and parse the XML with the PHP XML parser and respond back to the client with something more useful.

What is also interesting about this code is that you can replace the body of the request with any data type you're interested in POSTing to your server. If you post image data, for example, you can save that image data on the server side using something like this PHP script:

<?php
$handle = fopen("image.png", "wb"); // write binary
fwrite($handle, $HTTP_RAW_POST_DATA);
fclose($handle);
print "Received image file.";
?>

Keep in mind that this is very primitive and it does no sanity checking on the body data. You would need to add that in order to implement any real world server side application. That being said, these few lines of code demonstrate how you can send most any data as the request body and process it on the server side. Our Objective-C code from earlier, modified to support sending a PNG image, would look like this:

NSData *imageData = UIImagePNGRepresentation([UIImage imageNamed:@"localimage.png"]);
NSMutableURLRequest *request = [[NSMutableURLRequest alloc] initWithURL:[NSURL URLWithString:@"http://www.cimgf.com/testpostimage.php"]]; // Not a real URL.
[request setHTTPMethod:@"POST"];
[request setValue:@"image/png" forHTTPHeaderField:@"Content-type"];
[request setValue:[NSString stringWithFormat:@"%d", [imageData length]] forHTTPHeaderField:@"Content-length"];
[request setHTTPBody:imageData];
[[NSURLConnection alloc] initWithRequest:request delegate:self];

The request we are sending will be asynchronous so our UI will not hang; however, there is no accurate progress monitoring capability, so you would need to implement that if you want to have an idea of how long it is going to take to post the image up to the web service. See the section called Simplifying Cloud Access below for one solution to providing progress.

Working With Web Forms

A lot of web applications began life as web forms in which the user can obtain a list of records or a detail record based on input the user provides. You can post form data programmatically using a fairly trivial implementation not much different from the code we demonstrated above. In the previous section we discussed that you can send any arbitrary data to a web service or script by placing that data in the body of the request. This is also true for sending form data.
If you send form data, which is the default when you create an NSURLConnection object and use it to talk to your server, you will see a string of key value pairs in the same format you would normally see in a GET request, something like key1=value1&key2=value2&key3=value3, etc. This is what gets sent in the body for web form requests.

Consider the following Bug Reporter web form. Yes, yes, I know, it's beautiful. Looks like something you would see circa 1995. Stick with me here as I'm trying to keep things simple. The form takes two fields and just formats the input and responds to the user with what the user entered. This same form can also be submitted using code similar to what we've already shown. Here is how you would programmatically post data to this form using an NSURLConnection/NSURLRequest.

NSMutableURLRequest *request = [[NSMutableURLRequest alloc] initWithURL:[NSURL URLWithString:@"http://www.cimgf.com/test/testform.php"]];
[request setHTTPMethod:@"POST"];
NSString *postString = @"go=1&name=Bad Bad Bug&description=This bug is really really super bad.";
[request setValue:[NSString stringWithFormat:@"%d", [postString length]] forHTTPHeaderField:@"Content-length"];
[request setHTTPBody:[postString dataUsingEncoding:NSUTF8StringEncoding]];
[[NSURLConnection alloc] initWithRequest:request delegate:self];

Notice that we are passing a variable called go in the postString variable. This tells our PHP script to see the request as if the submit button was clicked. You'll notice in the PHP script that we are checking whether the submit button was clicked with the call to isset($_POST['go']). Take a look at the complete PHP web form script.

<html>
<body>
<h2>Fancy Bug Reporter</h2>
<hr />
<?php
if (isset($_POST['go'])) {
    // Form was posted
    print "User submitted bug: " . $_POST['name'] . ": " . $_POST['description'];
} else {
    // Form was not posted, display form
?>
<form method="POST" action="<?php echo $PHP_SELF; ?>">
Bug Name:<br /><input type="text" name="name" maxlength="100" /><br />
Bug Description:<br /><input type="text" name="description" maxlength="1000" size="80" /><br />
<input type="submit" name="go" value="Report Bug" /><!-- submit button; its name is what isset($_POST['go']) checks -->
</form>
<?php
}
?>
</body>
</html>

When the request finishes posting to this form, you will have the same HTML code in your data object as what you would see if you were to view the source in the actual web page after posting the form.
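One aside about the postString used above: a proper application/x-www-form-urlencoded body should have its keys and values percent-encoded, so the literal spaces in "Bad Bad Bug" are technically invalid even though many server stacks tolerate them. The encoding itself is easy to see; a sketch in Python, purely to illustrate the format (not part of the original post):

```python
from urllib.parse import urlencode

# The same form fields posted by the Objective-C example
params = {
    "go": "1",
    "name": "Bad Bad Bug",
    "description": "This bug is really really super bad.",
}

# urlencode percent-encodes each key and value (spaces become '+')
# and joins the pairs with '&'
post_string = urlencode(params)
print(post_string)
```

On the Cocoa side you would build the equivalent string before handing it to setHTTPBody:, so that $_POST decodes cleanly on the server.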
Simplifying Cloud Access
Ben Copsey developed a networking library called ASIHTTPRequest that is a really good replacement for NSURLConnection and related classes. There are a couple reasons I prefer it to NSURLConnection. These include:
• You can specify a delegate selector when you create your request object that will get called back when the request fails or succeeds. This is really just a preference over NSURLConnection/NSURLRequest as it just seems simpler to me.

• If you do want to see accurate progress for either downloads or uploads, you can pass in a reference to a UIActivityIndicatorView that will be updated automatically.

• Different types of classes are available for different types of requests. For example, to post form data to a web form, you use ASIFormDataRequest instead of the more generic ASIHTTPRequest. It handles setting up the request body correctly and even enables you to post a file to the form if the form accepts one.

• While Ben still considers it experimental code, there is also support for access to Amazon's S3. You create an ASIS3Request object and provide your S3 credentials to the request. Then you can create, update, or delete any assets you may be storing in your S3 buckets.
Using the example I mentioned earlier of sending XML to our PHP echo script, the request would now look like the following using the ASIHTTPRequest object:

ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
NSString *xmlString = @"<data><item>Item 1</item><item>Item 2</item></data>";
[request appendPostData:[xmlString dataUsingEncoding:NSUTF8StringEncoding]];
[request setDelegate:self];
[request setDidFinishSelector:@selector(requestFinished:)];
[request setDidFailSelector:@selector(requestFailed:)];
[request startAsynchronous];

Then you implement your delegate selector as in the following:

- (void) requestFinished:(ASIHTTPRequest *)request
{
NSString *response = [request responseString];
// response contains the data returned from the server.
}

- (void) requestFailed:(ASIHTTPRequest *)request
{
NSError *error = [request error];
// Do something with the error.
}

Similarly, if we want to submit data to a form like we did earlier using the NSURLConnection/NSURLRequest combination, we can instead use the ASIFormDataRequest class.

ASIFormDataRequest *request = [ASIFormDataRequest requestWithURL:url];
[request setPostValue:@"1" forKey:@"go"];
[request setPostValue:@"This bug is really really super bad." forKey:@"description"];
[request setDelegate:self]; // without a delegate, the finish/fail selectors are never called
[request setDidFinishSelector:@selector(requestFinished:)];
[request setDidFailSelector:@selector(requestFailed:)];
[request startAsynchronous];

When this request completes, we will have the formatted HTML in the responseString of the ASIHTTPRequest object:

- (void) requestFinished:(ASIHTTPRequest *)request
{
NSString *response = [request responseString];
// response contains the HTML response from the form.
}

I have come to prefer using ASIHTTPRequest for network access; however, this is a matter of taste. It is a very clean and well written library of classes, so I highly recommend it, but your mileage may vary.
As I said at the beginning, I'm not going to go into a lot of detail about how to set up a REST web service as implementation of the REST philosophy is really up to the developer. However, here are a few points about how I go about it:
• Create individual server side scripts for each of the functions you want to implement. Instead of creating one master script, create one for each function you want to implement. This helps you maintain the code, as when you need to update or fix something, it can be very targeted.

• Make heavy use of mod_rewrite in your Apache server. Just use Apache. If you're using something else, I'm sorry but you're on your own. The rewrite mechanism that mod_rewrite provides enables you to take difficult to read URLs and make them easy to understand. WordPress, which we use for this blog, changes a URL that looks like this: http://www.cimgf.com/?p=235 into something that looks like this: http://www.cimgf.com/2010/01/28/fun-with-uibuttons-and-core-animation-layers/ using mod_rewrite. It also happens to provide a great way to implement a REST web service. Say you want to look up a bug by its unique ID in a bug tracker database. Your request would normally look like this: http://www.cimgf.com/showbug.php?id=1234. The mod_rewrite module allows you to define rewrite rules that would change this URL to http://www.cimgf.com/bugs/1234. The rule to do this is very simple.

RewriteEngine On
RewriteRule ^bugs/([[:alnum:]]+)$ /showbug.php?id=$1
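Apache's rewrite patterns are ordinary regular expressions, so you can sanity-check one before committing it to .htaccess. The sketch below performs the equivalent substitution in Python, purely for illustration (Python's re module doesn't accept POSIX classes like [[:alnum:]], so the character class is spelled out):

```python
import re

# Equivalent of the RewriteRule pattern: capture the alphanumeric ID after "bugs/"
pattern = r"^bugs/([0-9A-Za-z]+)$"

pretty_url = "bugs/1234"
# \1 substitutes whatever the parenthesized group captured
real_url = re.sub(pattern, r"showbug.php?id=\1", pretty_url)
print(real_url)
```

A URL that doesn't match the pattern passes through unchanged, which is also how an unmatched RewriteRule behaves.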

The RewriteRule line uses a regular expression to define how to translate from the pretty URL to the real one. This regex means the following: "look for the word bugs at the beginning of the parameters string, followed by a forward slash, followed by one or more alphanumeric characters until you reach the end of the line." Notice the $1 in the second section. If you are not familiar with perl compatible regular expressions, this simply means use whatever was found between the parens in the regex. In our case the parens are "capturing" the ID of the bug the user is requesting. The rewrite rule is then passing that along to the real showbug.php script. You just add this to the .htaccess file in the directory where you are hosting your server side scripts and you will be able to load your data using the REST-looking URL, assuming of course your Apache web server has mod_rewrite enabled. Even if you are using shared hosting, there is no excuse for a web host not to have this enabled. If it is not enabled, you need to find a new web host.

• Drive your web service with a scripted backend that connects to a commodity database like MySQL. Sanity check all of the input you get from your user by looking for vulnerabilities such as SQL injection attacks, but then insert the data into the database using standard SQL calls. If you receive the data as XML, for example, you can simply parse the XML into a DOM object and insert it from there. Better yet, use a (PHP) server side module that converts from XML to SQL for you.

• Use a framework like Rails. Most of the folks I know who have used it have nothing to say but good about implementing web services using Ruby on Rails. I can't recommend it personally, but people I respect swear by it. It handles a lot of the heavy lifting of developing a web service by constructing the underlying functionality for you after you have specified basic metadata that defines the entities you want to manage. The down side of this option for me is that I need to learn yet another programming language, Ruby. While I'm getting around to it, it hasn't been at the top of my priority list.

• Use S3 for assets. This is simply a suggestion; however, I think it's a good one. If all you need is to access assets from your iPhone app, keep a structured data document around, like an XML document, in your S3 bucket. Let it map out where the assets are stored in your S3 buckets. Then all you have to do is maintain that one XML document and upload a new one each time your content needs to change. It will define for your app where the various assets are located along with any other information you might want to provide about that asset.

Conclusion

Networking code has gotten much simpler to implement in recent years, so accessing your resources in the cloud is a great solution to many application development problems. There are some great tools out there these days regardless of whether you are developing the client or the server. If you need to access the cloud, use the various libraries available to you and keep your designs simple. Doing so will yield great results.

If you have some suggestions for how to implement REST web services for iPhone apps, leave them in the comments section below. Remember that all comments are moderated, so if you don't see your comment appear right away, just be patient. We'll approve it as soon as possible. Until next time.