Windows Phone Entering the Exciting World of 3D Models #part2

Creating a terrain with texture mapping

In most modern outdoor games, such as Delta Force and Crysis, you will see trees, rivers, mountains, and so on, all intended to simulate the real world, because the game developer wants to surround you with a realistic environment while you play. A key technique used to achieve this is called terrain rendering. In the following recipe, you will learn how to use this technique in your game.

Getting ready

In this recipe, we will build the terrain model from a height map. In computer graphics, a heightmap or heightfield is a raster image used to store values such as surface elevation data. A heightmap contains one channel interpreted as a distance of displacement or height from the floor of a surface, and is often visualized as the luma of a grayscale image, with black representing minimum height and white representing maximum height. Before rendering, the terrain application processes the grayscale image to obtain the gray value of each pixel, each of which represents a vertex in the terrain model, and then calculates the height from that value: higher values produce greater heights, and vice versa. When this processing is done, the application has a set of terrain vertices, each with a specified height (the Y-axis or Z-axis value). Finally, the application reads and processes the vertex set to render the terrain.
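The grayscale-to-height mapping just described can be sketched outside XNA. The following standalone Python snippet (Python is used here only for illustration; the recipe's real code is C#) maps 8-bit gray values to heights, with a hypothetical height_scale parameter playing the role of terrainHeightScale:

```python
# Map 8-bit grayscale values (0 = black, 255 = white) linearly to
# terrain heights; height_scale is a hypothetical parameter, not an
# XNA API.
def heights_from_grayscale(pixels, height_scale):
    """pixels: 2D list of gray values in 0..255.
    Returns a 2D list of heights, black -> 0, white -> height_scale."""
    return [[(value / 255.0) * height_scale for value in row]
            for row in pixels]

# Black stays at 0, white reaches the full scale.
print(heights_from_grayscale([[0, 255]], 64.0))  # [[0.0, 64.0]]
```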

How to do it…

Follow the steps below to master the technique for creating a texture-mapped terrain:

  1. Create a Windows Phone Game project named TerrainGeneration and change Game1.cs to TerrainGenerationGame.cs. In the content project, add two images, Grass.dds and HeightMap.png. Then add a Content Pipeline Extension Project named TerrainProcessor to the solution, replacing ContentProcessor1.cs with TerrainProcessor.cs in the content pipeline library.
  2. Implement the TerrainProcessor class for the terrain processor in TerrainProcessor.cs. At the beginning, put the following code into the class field:
    [code]
    // Scale of the terrain
    const float terrainScale = 4;
    // The terrain height scale
    const float terrainHeightScale = 64;
    // The texture coordinate scale
    const float texCoordScale = 0.1f;
    // The texture file name
    const string terrainTexture = "grass.dds";
    [/code]
  3. Next, the Process() method is the main method of the TerrainProcessor class:
    [code]
    // Generate the terrain mesh from the heightmap image
    public override ModelContent Process(Texture2DContent input,
    ContentProcessorContext context)
    {
    // Initialize a MeshBuilder
    MeshBuilder builder = MeshBuilder.StartMesh("terrain");
    // Define the data type of every pixel
    input.ConvertBitmapType(typeof(PixelBitmapContent<float>));
    // Get the bitmap object from the imported image.
    PixelBitmapContent<float> heightmap =
    (PixelBitmapContent<float>)input.Mipmaps[0];
    // Create the terrain vertices.
    for (int y = 0; y < heightmap.Height; y++)
    {
    for (int x = 0; x < heightmap.Width; x++)
    {
    Vector3 position;
    // Put the terrain in the center of game
    //world and scale it to the designated size
    position.X = (x - heightmap.Width / 2) *
    terrainScale;
    position.Z = (y - heightmap.Height / 2) *
    terrainScale;
    // Set the Y factor for the vertex
    position.Y = (heightmap.GetPixel(x, y) - 1) *
    terrainHeightScale;
    // Create the vertex in MeshBuilder
    builder.CreatePosition(position);
    }
    }
    // Create a vertex channel for holding texture coordinates.
    int texCoordId = builder.CreateVertexChannel<Vector2>(
    VertexChannelNames.TextureCoordinate(0));
    // Create a material and map it on the terrain
    // texture.
    BasicMaterialContent material = new BasicMaterialContent();
    // Get the full path of texture file
    string directory =
    Path.GetDirectoryName(input.Identity.SourceFilename);
    string texture = Path.Combine(directory, terrainTexture);
    // Set the texture to the meshbuilder
    material.Texture = new
    ExternalReference<TextureContent>(texture);
    // Set the material of mesh
    builder.SetMaterial(material);
    // Create the individual triangles that make up our terrain.
    for (int y = 0; y < heightmap.Height - 1; y++)
    {
    for (int x = 0; x < heightmap.Width - 1; x++)
    {
    // Draw a rectangle with two triangles, one at the top
    // right, one at the bottom left
    AddVertex(builder, texCoordId, heightmap.Width,
    x, y);
    AddVertex(builder, texCoordId, heightmap.Width,
    x + 1, y);
    AddVertex(builder, texCoordId, heightmap.Width,
    x + 1, y + 1);
    AddVertex(builder, texCoordId, heightmap.Width,
    x, y);
    AddVertex(builder, texCoordId, heightmap.Width,
    x + 1, y + 1);
    AddVertex(builder, texCoordId, heightmap.Width,
    x, y + 1);
    }
    }
    // Finish creating the terrain mesh.
    MeshContent terrainMesh = builder.FinishMesh();
    // Convert the terrain from MeshContent to ModelContent
    return context.Convert<MeshContent,
    ModelContent>(terrainMesh, "ModelProcessor");
    }
    [/code]
  4. From this step on, we will render the terrain model to the screen in the game. In this step, declare the terrain model in the TerrainGenerationGame class field:
    [code]
    // Terrain model
    Model terrain;
    // Camera view and projection matrices
    Matrix view;
    Matrix projection;
    [/code]
  5. Create the projection matrix in the Initialize() method with the following code:
    [code]
    projection =
    Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio, 1, 10000);
    [/code]
  6. Then, load the height map image. Insert the following code into LoadContent():
    [code]
    terrain = Content.Load<Model>("HeightMap");
    [/code]
  7. Rotate the camera around a circle:
    [code]
    float time = (float)gameTime.TotalGameTime.TotalSeconds * 0.2f;
    // Rotate the camera around a circle
    float cameraX = (float)Math.Cos(time) * 64;
    float cameraY = (float)Math.Sin(time) * 64;
    Vector3 cameraPosition = new Vector3(cameraX, 0, cameraY);
    view =
    Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
    [/code]
  8. Draw the terrain on-screen. First, we should define the DrawTerrain() method for drawing the terrain model.
    [code]
    void DrawTerrain(Matrix view, Matrix projection)
    {
    foreach (ModelMesh mesh in terrain.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.View = view;
    effect.Projection = projection;
    effect.EnableDefaultLighting();
    // Set the ambient color after EnableDefaultLighting(),
    // which would otherwise overwrite it
    effect.AmbientLightColor = Color.White.ToVector3();
    // Set the specular lighting
    effect.SpecularColor = new Vector3(
    0.6f, 0.4f, 0.2f);
    effect.SpecularPower = 8;
    effect.FogEnabled = true;
    effect.FogColor = Color.White.ToVector3();
    effect.FogStart = 100;
    effect.FogEnd = 500;
    }
    mesh.Draw();
    }
    }
    [/code]
  9. Then, call the DrawTerrain() method in the Draw() method:
    [code]
    DrawTerrain(view, projection);
    [/code]
  10. The whole project is complete. Build and run the example. The application should run as shown in the following screenshots:
    texture mapping

How it works…

In step 2, terrainScale defines the size of the terrain in the 3D world; terrainHeightScale amplifies height changes when generating the terrain; texCoordScale controls what portion of the texture image is displayed when sampling; terrainTexture is the name of the texture file.

In step 3, since the terrain processor generates a terrain mesh from an image, the input of the content processor is Texture2DContent and the output is ModelContent. The first line in the method body initializes the MeshBuilder. MeshBuilder is a helper class that eases creating a mesh object with the internal MeshContent and GeometryContent classes. A general procedure to build a mesh consists of the following steps:

  1. Call the StartMesh() method to instance a MeshBuilder object.
  2. Call the CreatePosition() method to fill the position’s buffer with data.
  3. Call the CreateVertexChannel()method to get the types of vertex channels and create a vertex data channel for use by the mesh. Typically, the data channel holds texture coordinates, normals, and other per-vertex data. A vertex channel is a list of arbitrary data with one value for each vertex. The types of vertex channels include:
    1. Binormal
    2. Color
    3. Normal
    4. Tangent
    5. TextureCoordinate
    6. Weights
  4. After building the position and vertex data channel buffers, start creating the triangles. Use the SetMaterial() method to set the material applied to each triangle, and the SetVertexChannelData() method to set the individual vertex data of each triangle.
  5. Call the AddTriangleVertex() method to add a vertex to the index collection to form a triangle. MeshBuilder supports triangle lists only; therefore, calls to the AddTriangleVertex() method must occur in groups of three. That means the code snippet should look similar to the following:
    [code]
    // Create a Triangle
    AddTriangleVertex(…);
    AddTriangleVertex(…);
    AddTriangleVertex(…);
    [/code]
  6. In addition, MeshBuilder automatically determines which GeometryContent object receives the current triangle based on the state data. This data is set by the last calls to SetMaterial() and SetOpaqueData().
  7. Call the FinishMesh() method to finish the mesh building. All of the vertices in the mesh will be optimized with calls to the MergeDuplicateVertices() method for merging any duplicate vertices and to the CalculateNormals() method for computing the normals from the specified mesh.

So far, you have seen the procedure to create a mesh using MeshBuilder. Now, let's continue looking into the Process() method. After creating the MeshBuilder object, we use the input.ConvertBitmapType() method to convert the image color information to float, because we want to use the different color values to determine the height of every vertex of the terrain mesh. The following for loop sets the position of every vertex; x - heightmap.Width / 2 and y - heightmap.Height / 2 define the X and Z positions so that the terrain model is centered in the 3D world. The heightmap.GetPixel(x, y) call is the key method for reading the height data from the image pixels. With this value, we can set the Y value of the vertex position. After defining the vertex position, we call MeshBuilder.CreatePosition() to create the vertex position data in the MeshBuilder:

[code]
builder.CreateVertexChannel<Vector2>(
VertexChannelNames.TextureCoordinate(0));
[/code]

This code creates a vertex texture coordinate channel for the terrain mesh to use. Then we get the texture file's absolute path, set it on the terrain mesh material, and assign the material to the MeshBuilder. Once the material is assigned, we begin building the textured triangles from the vertices defined earlier. In the following for loop, every iteration creates two triangles, one at the top right and one at the bottom left. We will discuss the AddVertex() method later. When the mesh triangles are created, we call MeshBuilder.FinishMesh(). Finally, we call ContentProcessorContext.Convert(), which converts the MeshContent to ModelContent.
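The two-triangles-per-cell pattern of that loop can be sketched independently of XNA. This standalone Python snippet (illustrative only; grid_cell_triangles is not an XNA API) emits the six vertex indices per grid cell using the same x + y * width formula that AddVertex() relies on:

```python
# For each (width - 1) x (height - 1) grid cell, emit two triangles,
# addressing vertices with the x + y * width formula from the recipe.
def grid_cell_triangles(width, height):
    idx = lambda x, y: x + y * width
    indices = []
    for y in range(height - 1):
        for x in range(width - 1):
            # Triangle 1: top-left, top-right, bottom-right
            indices += [idx(x, y), idx(x + 1, y), idx(x + 1, y + 1)]
            # Triangle 2: top-left, bottom-right, bottom-left
            indices += [idx(x, y), idx(x + 1, y + 1), idx(x, y + 1)]
    return indices

# A 2 x 2 grid has one cell -> two triangles -> six indices.
print(grid_cell_triangles(2, 2))  # [0, 1, 3, 0, 3, 2]
```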

Now it’s time to explain the AddVertex() method:

[code]
// Adding a new triangle vertex to a MeshBuilder,
// along with an associated texture coordinate value.
static void AddVertex(MeshBuilder builder, int texCoordId, int w,
int x, int y)
{
// Set the vertex channel data to tell the MeshBuilder how to
// map the texture
builder.SetVertexChannelData(texCoordId,
new Vector2(x, y) * texCoordScale);
// Add the triangle vertices to the indices array.
builder.AddTriangleVertex(x + y * w);
}
[/code]

The MeshBuilder.SetVertexChannelData() method sets the texture coordinate for the specified vertex, defining which part of the texture maps to it. MeshBuilder.AddTriangleVertex() adds the triangle vertex to the MeshBuilder indices buffer.

In step 7, the camera rotation follows the rule shown in the following diagram:

P.X = CosA * Radius, P.Y = SinA * Radius

The previous formula is easy to understand: CosA is the cosine of angle A, and multiplying it by the Radius produces the horizontal X value; similarly, SinA * Radius produces the vertical Y value. Since the radius is constant while rotating around the center, evaluating the formula over angle A generates a set of points that form a circle.
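The same point can be shown with a minimal standalone Python sketch (illustrative, not XNA code): every point produced by the formula lies at a constant distance from the center.

```python
import math

# The circular camera path from step 7 as plain trigonometry:
# for a fixed radius, (cos(a) * r, sin(a) * r) traces a circle.
def circle_point(angle, radius):
    return (math.cos(angle) * radius, math.sin(angle) * radius)

# Every generated point stays exactly `radius` from the center.
x, y = circle_point(1.25, 64.0)
print(math.hypot(x, y))
```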

Customizing vertex formats

In XNA 4.0, a vertex format describes how the data is stored in a vertex, which allows the system to easily locate each piece of data. The XNA framework provides some built-in vertex formats, such as VertexPositionColor and VertexPositionNormalTexture. Sometimes these built-in vertex formats are too limited for special effects, such as particles with a limited lifetime. In that case, you will need to define a custom vertex format. In this recipe, you will learn how to define a custom vertex format.

How to do it…

Now let’s begin to program our sample application:

  1. Create a Windows Phone Game project named CustomVertexFormat and change Game1.cs to CustomVertexFormatGame.cs. Add a new class file, CustomVertexPositionColor.cs, to the project.
  2. Define the CustomVertexPositionColor class in the CustomVertexPositionColor.cs file:
    [code]
    // Define the CustomVertexPositionColor class
    public struct CustomVertexPositionColor : IVertexType
    {
    public Vector3 Position;
    public Color Color;
    public CustomVertexPositionColor(Vector3 Position,
    Color Color)
    {
    this.Position = Position;
    this.Color = Color;
    }
    // Define the vertex declaration
    public static readonly VertexDeclaration
    VertexDeclaration = new
    Microsoft.Xna.Framework.Graphics.VertexDeclaration
    (
    new VertexElement(0, VertexElementFormat.Vector3,
    VertexElementUsage.Position, 0),
    new VertexElement(12, VertexElementFormat.Color,
    VertexElementUsage.Color, 0)
    );
    // Implement the IVertexType.VertexDeclaration property
    VertexDeclaration IVertexType.VertexDeclaration
    {
    get { return VertexDeclaration; }
    }
    }
    [/code]
  3. From this step on, we will use a CustomVertexPositionColor array to create a cube and render it on the Windows Phone 7 screen. First, declare the variables in the CustomVertexFormatGame class field:
    [code]
    // CustomVertexPositionColor array
    CustomVertexPositionColor[] vertices;
    // VertexBuffer stores the custom vertex data
    VertexBuffer vertexBuffer;
    // BasicEffect for rendering the vertex array
    BasicEffect effect;
    // Camera position
    Vector3 cameraPosition;
    // Camera view matrix
    Matrix view;
    // Camera projection matrix
    Matrix projection;
    // Render state that keeps solid fill but disables culling
    // so both sides of each triangle are visible
    static RasterizerState WireFrame = new RasterizerState
    {
    FillMode = FillMode.Solid,
    CullMode = CullMode.None
    };
    [/code]
  4. Define the faces of the cube and initialize the camera. Add the following code to the Initialize() method:
    [code]
    // Allocate the CustomVertexPositionColor array
    vertices = new CustomVertexPositionColor[24];
    // Initialize the vertices of the cube's front, right, left,
    // and bottom faces
    int i = 0;
    // Front Face
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, 0), Color.Blue);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, 10, 0), Color.Blue);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, 10, 0), Color.Blue);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, 10, 0), Color.Blue);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, 0), Color.Blue);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, -10, 0), Color.Blue);
    // Right Face
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, 10, 0), Color.Red);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, -10, 0), Color.Red);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, 10, -20), Color.Red);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, -10, 0), Color.Red);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, -10, -20), Color.Red);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, 10, -20), Color.Red);
    // Left Face
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, 10, 0), Color.Green);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, 0), Color.Green);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, 10, -20), Color.Green);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, 0), Color.Green);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, -20), Color.Green);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, 10, -20), Color.Green);
    // Bottom Face
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, 0), Color.Yellow);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, -10, 0), Color.Yellow);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, -20), Color.Yellow);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, -20), Color.Yellow);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, -10, 0), Color.Yellow);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, -10, -20), Color.Yellow);
    // Initialize the vertex buffer for loading the vertex array
    vertexBuffer = new VertexBuffer(GraphicsDevice,
    CustomVertexPositionColor.VertexDeclaration,
    vertices.Length, BufferUsage.WriteOnly);
    // Set the vertex array data to vertex buffer
    vertexBuffer.SetData<CustomVertexPositionColor>(vertices);
    // Initialize the camera
    cameraPosition = new Vector3(0, 0, 100);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000.0f);
    // Initialize the basic effect for drawing
    effect = new BasicEffect(GraphicsDevice);
    [/code]
  5. Draw the cube on the Windows Phone 7 screen. Insert the following code into the Draw() method:
    [code]
    GraphicsDevice device = GraphicsDevice;
    // Set the render state
    device.BlendState = BlendState.Opaque;
    device.RasterizerState = WireFrame;
    // Rotate the cube
    effect.World *=
    Matrix.CreateRotationY(MathHelper.ToRadians(1));
    // Set the basic effect parameters for drawing the cube
    effect.View = view;
    effect.Projection = projection;
    effect.VertexColorEnabled = true;
    // Set the vertex buffer to device
    device.SetVertexBuffer(vertexBuffer);
    // Draw the triangles of the cube from the vertex buffer on
    // screen
    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
    pass.Apply();
    // The count of triangles = vertices.Length / 3 = 24 / 3
    // = 8
    device.DrawPrimitives(PrimitiveType.TriangleList, 0, 8);
    }
    [/code]
  6. Now, build and run the application. It runs as shown in the following screenshots:
    Customizing vertex formats

How it works…

In step 2, CustomVertexPositionColor implements the IVertexType interface, which requires the struct to expose a VertexDeclaration property describing the layout of the custom vertex format data and its usage. The custom vertex format CustomVertexPositionColor is a customized version of the built-in vertex format VertexPositionColor; it also has the Position and Color data members, which the constructor CustomVertexPositionColor() initializes. The key data member is VertexDeclaration. Here, the VertexElement class defines the properties of Position and Color, including the offset in memory, the VertexElementFormat, and the vertex usage. Position is a Vector3 object with three float components occupying 12 bytes. Because the Color variable follows Position, the offset of Color begins at the end of Position, at the 12th byte in memory. Finally, the IVertexType.VertexDeclaration property returns the VertexDeclaration data when initializing the VertexBuffer, or you can read it manually.
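The offset arithmetic can be written out as a tiny standalone calculation. The following Python sketch (illustrative only; the sizes 12 and 4 mirror a Vector3 and the packed Color format) accumulates declaration-order offsets and the resulting vertex stride:

```python
# Given element sizes in declaration order, compute each element's
# byte offset and the total vertex stride.
def layout_offsets(element_sizes):
    offsets, running = [], 0
    for size in element_sizes:
        offsets.append(running)
        running += size
    return offsets, running

# Position (3 floats = 12 bytes), then Color (4 bytes)
offsets, stride = layout_offsets([12, 4])
print(offsets, stride)  # [0, 12] 16
```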

In step 3, vertices is a CustomVertexPositionColor array that will store the vertices of the cube faces; vertexBuffer stores the CustomVertexPositionColor array data for rendering; effect defines the rendering method. The following three variables, cameraPosition, view, and projection, will be used to initialize the camera. WireFrame specifies the device render state; because every face of the cube is composed of two triangles, we disable culling so the triangles can be seen from the back.

In step 4, as we want to draw four faces of the cube and every face is made up of two triangles, the number of CustomVertexPositionColor elements is 4 * 2 * 3 = 24. After initializing the triangle vertices with position and color information, we create the vertex buffer to store the defined vertex array and assign the vertex array to the vertex buffer for rendering. The next part of the code establishes the camera and instantiates the BasicEffect object.

In step 5, the code assigns the WireFrame state defined in the class field to disable culling, so that you can see the geometry from any perspective. The effect settings rotate the cube and color the vertices. After that, the iteration over the EffectPass collection draws the triangles of the cube on screen using GraphicsDevice.DrawPrimitives(). Since the PrimitiveType is TriangleList, the third parameter of the DrawPrimitives() method is 8, the total count of triangles, which comes from the equation total vertex count / 3 = 24 / 3 = 8.

Calculating the normal vectors from a model vertex

In mathematics, a normal is a vector perpendicular to a plane or a surface. In computer graphics, normals are often used for lighting calculations, tilt angles, and collision detection. In this recipe, you will learn how to calculate normals from vertices.

Getting ready

A 3D model mesh is made up of triangles, and every triangle lies in a plane that has a normal vector, which is stored in the vertex. (You can find more information about normal vectors in any computer graphics or linear algebra book.) Some typical, realistic lighting techniques use the average normal vector of a vertex shared by several triangles. Calculating the normal of a triangle is not hard: suppose the triangle has three points, A, B, and C. Choose point A as the root; vector AB equals B - A, vector AC equals C - A, and the normal vector N is the cross product of the two vectors AB and AC. Our example will illustrate the actual working code.
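That recipe for N can be checked with a few lines of standalone code. The following Python sketch (illustrative only; plain tuples stand in for XNA's Vector3) computes cross(B - A, C - A):

```python
# Triangle normal as the cross product of two edge vectors.
def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def triangle_normal(a, b, c):
    # N = AB x AC, with A as the root point
    return cross(sub(b, a), sub(c, a))

# A triangle lying flat in the XZ plane gets a normal along the Y axis.
print(triangle_normal((0, 0, 0), (0, 0, 1), (1, 0, 0)))  # (0, 1, 0)
```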

How to do it…

The following steps will show you a handy way to get the normal vectors of a model:

  1. Create a Windows Phone Game project named NormalGeneration and change Game1.cs to NormalGenerationGame.cs.
  2. Add the GenerateNormalsForTriangleStrip() method for normal calculation to the NormalGenerationGame class:
    [code]
    private VertexPositionNormalTexture[]
    GenerateNormalsForTriangleStrip(
    VertexPositionNormalTexture[] vertices, short[] indices)
    {
    // Reset the Normal factor of every vertex
    for (int i = 0; i < vertices.Length; i++)
    vertices[i].Normal = new Vector3(0, 0, 0);
    // Compute the length of the indices array
    int indiceLength = indices.Length;
    // The winding sign
    bool IsNormalUp = false;
    // Calculate the normal vector of every triangle
    for (int i = 2; i < indiceLength; i++)
    {
    Vector3 firstVec = vertices[indices[i - 1]].Position -
    vertices[indices[i]].Position;
    Vector3 secondVec = vertices[indices[i - 2]].Position -
    vertices[indices[i]].Position;
    Vector3 normal = Vector3.Cross(firstVec, secondVec);
    normal.Normalize();
    // Let the normal of every triangle face up
    if (IsNormalUp)
    normal *= -1;
    // Validate the normal vector
    if (!float.IsNaN(normal.X))
    {
    // Assign the generated normal vector to the
    // current triangle vertices
    vertices[indices[i]].Normal += normal;
    vertices[indices[i - 1]].Normal += normal;
    vertices[indices[i - 2]].Normal += normal;
    }
    // Swap the winding sign for the next triangle when
    // creating the mesh as a TriangleStrip
    IsNormalUp = !IsNormalUp;
    }
    return vertices;
    }
    [/code]

How it works…

In step 2, this method receives the arrays of mesh vertices and indices. The indices tell the drawing system how to index and draw triangles from the vertices. The for loop starts from the third vertex: together with the two previous indices, i - 1 and i - 2, each index forms a triangle, and the indices are used to create two vectors in the same plane representing two sides of the current triangle.

Then we call the Vector3.Cross() method to compute the normal perpendicular to the triangle's plane. After that, we normalize the normal for accurate computations such as lighting. Since the indices are organized as a TriangleStrip, every newly added index generates a new triangle, but the winding of the new triangle is opposite to the previous one. We reverse the direction of the new normal by multiplying by -1 when IsNormalUp is true.
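The strip unrolling and winding flip can be sketched on their own. In this standalone Python snippet (illustrative, not part of the recipe), every index from the third onward closes a triangle with its two predecessors, and alternate triangles have two corners swapped to keep a consistent winding:

```python
# Enumerate the triangles encoded by a triangle-strip index list,
# flipping the winding of every other triangle.
def strip_triangles(indices):
    triangles = []
    flip = False
    for i in range(2, len(indices)):
        tri = (indices[i - 2], indices[i - 1], indices[i])
        if flip:  # swap two corners to restore consistent winding
            tri = (tri[1], tri[0], tri[2])
        triangles.append(tri)
        flip = not flip
    return triangles

print(strip_triangles([0, 1, 2, 3]))  # [(0, 1, 2), (2, 1, 3)]
```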

Next, we validate the normal using float.IsNaN(), which returns true when the specified value is not a number (NaN). When the two vectors used to compute the normal have exactly the same direction (a degenerate triangle), the cross product returns a zero vector, and normalizing it yields a Vector3 with three NaN values; this invalid data must be eliminated. Finally, the method returns the processed vertices with correct normal vectors.
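The degenerate case is easy to reproduce in a standalone sketch. In the following Python snippet (illustrative only; C#'s Vector3.Normalize() produces the NaN components by dividing zero by zero, which is mimicked here explicitly), parallel edge vectors give a zero cross product, and normalizing it yields NaN:

```python
import math

def cross(u, v):
    # Component-wise 3D cross product
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def normalize(v):
    # Normalizing a zero vector yields all-NaN components
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length if length else float('nan') for c in v)

# Two parallel edge vectors -> zero cross product -> NaN normal
degenerate = normalize(cross((1, 0, 0), (2, 0, 0)))
print(any(math.isnan(c) for c in degenerate))  # True
```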

Simulating an ocean on your CPU

Ocean simulation is an interesting and challenging topic in computer graphics rendering that has been covered in many papers and books. On Windows, it is easier to render a decent ocean or water body using GPU shader languages such as HLSL or Cg. In Windows Phone 7 XNA, custom HLSL shaders are not yet supported, so the only way to solve the problem is to do the simulation on the Windows Phone 7 CPU. In this recipe, you will learn how to realize an ocean effect on the Windows Phone 7 CPU.

How to do it…

The following steps demonstrate one approach to emulating an ocean on the Windows Phone CPU:

  1. Create a Windows Phone Game project named OceanGenerationCPU and change Game1.cs to OceanGenerationCPUGame.cs. Then add a new file, Ocean.cs, to the project, and the water texture image to the content project.
  2. Define the Ocean class in the Ocean.cs file. Add the following lines to the class field as a data member:
    [code]
    // The graphics device object
    GraphicsDevice device;
    // Ocean width and height
    int PlainWidth = 64;
    int PlainHeight = 64;
    // Random object for randomly generating wave height
    Random random = new Random();
    // BasicEffect for drawing the ocean
    BasicEffect basicEffect;
    // Texture2D object loads the water texture
    Texture2D texWater;
    // Ocean vertex buffer
    VertexBuffer oceanVertexBuffer;
    // Ocean vertices
    VertexPositionNormalTexture[] oceanVertices;
    // The index array of the ocean vertices
    short[] oceanIndices;
    // Ocean index buffer
    IndexBuffer oceanIndexBuffer;
    // The max height of wave
    int MaxHeight = 2;
    // The wave speed
    float Speed = 0.02f;
    // Wave directions
    protected int[] directions;
    [/code]
  3. Next, implement the Ocean constructor as follows:
    [code]
    public Ocean(Texture2D texWater, GraphicsDevice device)
    {
    this.device = device;
    this.texWater = texWater;
    basicEffect = new BasicEffect(device);
    // Create the ocean vertices
    oceanVertices = CreateOceanVertices();
    // Create the ocean indices
    oceanIndices = CreateOceanIndices();
    // Generate the normals of ocean vertices for lighting
    oceanVertices =
    GenerateNormalsForTriangleStrip(oceanVertices,
    oceanIndices);
    // Create the vertex buffer and index buffer to load the
    // ocean vertices and indices
    CreateBuffers(oceanVertices, oceanIndices);
    }
    [/code]
  4. Define the Update() method of the Ocean class.
    [code]
    // Update the ocean height for the waving effect
    public void Update(GameTime gameTime)
    {
    for (int i = 0; i < oceanVertices.Length; i++)
    {
    oceanVertices[i].Position.Y += directions[i] * Speed;
    // Change direction if the Y component has exceeded the
    // limit
    if (Math.Abs(oceanVertices[i].Position.Y) > MaxHeight)
    {
    oceanVertices[i].Position.Y =
    Math.Sign(oceanVertices[i].Position.Y) *
    MaxHeight;
    directions[i] *= -1;
    }
    }
    oceanVertices =
    GenerateNormalsForTriangleStrip(oceanVertices,
    oceanIndices);
    }
    [/code]
  5. Implement the Draw() method of the Ocean class:
    [code]
    public void Draw(Matrix view, Matrix projection)
    {
    // Draw Ocean
    basicEffect.World = Matrix.Identity;
    basicEffect.View = view;
    basicEffect.Projection = projection;
    basicEffect.Texture = texWater;
    basicEffect.TextureEnabled = true;
    basicEffect.EnableDefaultLighting();
    basicEffect.AmbientLightColor = Color.Blue.ToVector3();
    basicEffect.SpecularColor = Color.White.ToVector3();
    foreach (EffectPass pass in
    basicEffect.CurrentTechnique.Passes)
    {
    pass.Apply();
    oceanVertexBuffer.SetData<VertexPositionNormalTexture>
    (oceanVertices);
    device.SetVertexBuffer(oceanVertexBuffer, 0);
    device.Indices = oceanIndexBuffer;
    device.DrawIndexedPrimitives(
    PrimitiveType.TriangleStrip, 0, 0, PlainWidth *
    PlainHeight, 0,
    PlainWidth * 2 * (PlainHeight - 1) - 2);
    // This is important, because you need to update the
    // vertices
    device.SetVertexBuffer(null);
    }
    }
    [/code]
  6. From this step on, we will use the Ocean class to draw the CPU-simulated ocean on the Windows Phone 7 screen. Add the following code to the OceanGenerationCPUGame class field:
    [code]
    // Ocean water texture
    Texture2D texWater;
    // Ocean object
    Ocean ocean;
    // Camera view and projection matrices
    Matrix view;
    Matrix projection;
    [/code]
  7. Initialize the camera in the Initialize() method:
    [code]
    Vector3 camPosition = new Vector3(80, 20, -100);
    view = Matrix.CreateLookAt(camPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000);
    [/code]
  8. Load the ocean water texture and instantiate the ocean object. Insert the following code into the LoadContent() method:
    [code]
    texWater = Content.Load<Texture2D>("Water");
    ocean = new Ocean(texWater, GraphicsDevice);
    [/code]
  9. Update the ocean state. Add the following line to the Update() method:
    [code]
    ocean.Update(gameTime);
    [/code]
  10. Draw the ocean on the Windows Phone 7 screen.
    [code]
    ocean.Draw(view, projection);
    [/code]
  11. Now, build and run the application. You will see the ocean as shown in the following screenshot:
    Simulating an ocean

How it works…

In step 2, PlainWidth and PlainHeight define the dimensions of the ocean; the random object is used to generate a random height for every ocean vertex; texWater loads the ocean texture; oceanVertexBuffer stores the ocean vertices; oceanVertices is the VertexPositionNormalTexture array holding all the ocean vertices; oceanIndices holds the indices of the ocean vertices, and it is a short array because XNA supports only the 16-bit index format; oceanIndexBuffer is the IndexBuffer that stores the ocean indices; directions indicates the waving direction of every vertex.
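The 16-bit limit is easy to check against the grid size. A standalone Python sketch (illustrative only) confirms that a 64 x 64 ocean grid stays far below the 65,536 vertices a short index can address:

```python
# A 16-bit index addresses at most 2**16 = 65536 distinct vertices.
def fits_16bit_indices(width, height):
    return width * height <= 65536

print(fits_16bit_indices(64, 64))    # True  (4096 vertices)
print(fits_16bit_indices(256, 257))  # False (65792 vertices)
```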

In step 3, the constructor calls the CreateOceanVertices(), CreateOceanIndices(), GenerateNormalsForTriangleStrip(), and CreateBuffers() methods, which we will discuss in turn.

  1. Define the CreateOceanVertices() method:
    [code]
    // Create the ocean vertices
    private VertexPositionNormalTexture[] CreateOceanVertices()
    {
        // Create the local ocean vertices
        VertexPositionNormalTexture[] oceanVertices =
            new VertexPositionNormalTexture[PlainWidth *
            PlainHeight];
        directions = new int[PlainHeight * PlainWidth];
        // Initialize the ocean vertices and wave direction array
        int i = 0;
        for (int z = 0; z < PlainHeight; z++)
        {
            for (int x = 0; x < PlainWidth; x++)
            {
                // Generate the vertex position with random
                // height
                Vector3 position = new
                    Vector3(x, random.Next(0, 4), -z);
                Vector3 normal = new Vector3(0, 0, 0);
                Vector2 texCoord =
                    new Vector2((float)x / PlainWidth,
                    (float)z / PlainWidth);
                // Set the initial wave direction of the vertex
                // (up or down) depending on its height
                directions[i] = position.Y > 2 ? -1 : 1;
                // Set the position, normal, and texCoord of
                // every element of the ocean vertex array
                oceanVertices[i++] = new
                    VertexPositionNormalTexture(position, normal,
                    texCoord);
            }
        }
        return oceanVertices;
    }
    [/code]
    First, the code reads the width and height of the ocean and creates an array that stores all PlainWidth * PlainHeight vertices of the ocean. After that, two nested for loops initialize the necessary information for every ocean vertex. The height of each vertex is randomly generated; the normal is zero and will be recalculated in the Update() method; texCoord specifies how the texture is mapped onto the ocean vertices, with both coordinates divided by PlainWidth so that the water texture spans PlainWidth vertices before repeating. Next, the test on position.Y determines the initial wave direction of every ocean vertex. Once all the needed information is ready, the last line in the inner loop initializes the vertices one by one.
  2. The CreateOceanIndices() method: when the ocean vertices are ready, it is time to define the indices array to build the ocean mesh in triangle strip mode.
    [code]
    // Create the ocean indices
    private short[] CreateOceanIndices()
    {
        // Define the resolution of the ocean indices
        short width = (short)PlainWidth;
        short height = (short)PlainHeight;
        short[] oceanIndices =
            new short[(width) * 2 * (height - 1)];
        short i = 0;
        short z = 0;
        // Create the indices row by row
        while (z < height - 1)
        {
            for (int x = 0; x < width; x++)
            {
                oceanIndices[i++] = (short)(x + z * width);
                oceanIndices[i++] = (short)(x + (z + 1) * width);
            }
            z++;
            if (z < height - 1)
            {
                for (short x = (short)(width - 1); x >= 0; x--)
                {
                    oceanIndices[i++] = (short)
                        (x + (z + 1) * width);
                    oceanIndices[i++] = (short)(x + z * width);
                }
            }
            z++;
        }
        return oceanIndices;
    }
    [/code]
  3. In this code, we store all the indices of the ocean vertices. Each pair of adjacent vertex rows is stitched together with PlainWidth * 2 indices, and there are PlainHeight - 1 such pairs, so the total comes to PlainWidth * 2 * (PlainHeight - 1) indices. In TriangleStrip drawing mode, every index after the first two defines a new triangle based on that index and its previous two indices.

The z variable tracks the current vertex row, starting from 0. The first pass builds a row of indices from left to right; then z is incremented and the next pass runs from right to left, so that the strip stays connected. The process repeats until all rows are built, that is, until z reaches PlainHeight - 1.
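The zig-zag ordering is easier to see on a small grid. The following standalone sketch (hypothetical class and method names, no XNA types) reproduces the same index-generation loop for an arbitrary grid size and prints the strip for a 3 x 3 vertex grid:

```csharp
using System;

static class StripIndexDemo
{
    // Mirrors the CreateOceanIndices() loop from the recipe.
    public static short[] Create(short width, short height)
    {
        short[] indices = new short[width * 2 * (height - 1)];
        short i = 0, z = 0;
        while (z < height - 1)
        {
            // Left-to-right pass over one pair of vertex rows
            for (short x = 0; x < width; x++)
            {
                indices[i++] = (short)(x + z * width);
                indices[i++] = (short)(x + (z + 1) * width);
            }
            z++;
            if (z < height - 1)
            {
                // Right-to-left pass, so the strip stays connected
                for (short x = (short)(width - 1); x >= 0; x--)
                {
                    indices[i++] = (short)(x + (z + 1) * width);
                    indices[i++] = (short)(x + z * width);
                }
            }
            z++;
        }
        return indices;
    }

    static void Main()
    {
        short[] idx = Create(3, 3); // 3 x 3 vertex grid
        Console.WriteLine(string.Join(",", idx));
        // prints 0,3,1,4,2,5,8,5,7,4,6,3
    }
}
```

Note how the second pass revisits vertex 5 and walks backwards; that reversal is what lets a single continuous strip cover every row.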

  1. GenerateNormalsForTriangleStrip() method: we use this method to calculate the normal of each vertex of the ocean mesh triangles. For a more detailed explanation, please refer to the Calculating the normal vectors from a model vertex recipe.
  2. CreateBuffers() method: this method creates the vertex buffer for the ocean vertices and the index buffer for the ocean indices, used for rendering the ocean on the Windows Phone 7 screen. The code is as follows:
    [code]
    // Create the vertex buffer and index buffer for the ocean
    // vertices and indices
    private void CreateBuffers(VertexPositionNormalTexture[]
        vertices, short[] indices)
    {
        oceanVertexBuffer = new VertexBuffer(device,
            VertexPositionNormalTexture.VertexDeclaration,
            vertices.Length, BufferUsage.WriteOnly);
        oceanVertexBuffer.SetData(vertices);
        oceanIndexBuffer = new IndexBuffer(device, typeof(short),
            indices.Length, BufferUsage.WriteOnly);
        oceanIndexBuffer.SetData(indices);
    }
    [/code]

In step 4, the code iterates over all the ocean vertices and changes the height of each one. Once the absolute value of a vertex's height is greater than MaxHeight, the direction of that vertex is reversed to simulate the wave effect. After the ocean vertices are updated, we must compute the vertex normals again, since the vertex positions have changed.
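Stripped of the XNA types, the per-vertex logic of step 4 boils down to the following sketch. The MaxHeight limit, the per-frame step, and the UpdateVertex helper are all assumed illustration values, not the recipe's actual code:

```csharp
using System;

static class WaveDemo
{
    const float MaxHeight = 4f;   // assumed height limit
    const float Step = 0.5f;      // assumed per-frame displacement

    // Advance one vertex height by one frame; returns the
    // (possibly reversed) direction for the next frame.
    public static int UpdateVertex(ref float height, int direction)
    {
        height += direction * Step;
        // Reverse the wave direction once the height leaves
        // the allowed band, as step 4 describes
        if (Math.Abs(height) > MaxHeight)
            direction = -direction;
        return direction;
    }

    static void Main()
    {
        float h = 0f;
        int dir = 1;
        for (int frame = 0; frame < 20; frame++)
            dir = UpdateVertex(ref h, dir);
        // After 20 frames the vertex has bounced off MaxHeight
        // and is heading back down
        Console.WriteLine("height=" + h + ", direction=" + dir);
    }
}
```

Each vertex runs this oscillation independently, which is why the directions array in the Ocean class stores one direction value per vertex.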

In step 5, when rendering a 3D object manually with a mapped texture, basicEffect.TextureEnabled should be set to true and the Texture2D object assigned to the BasicEffect.Texture property. Then, we enable the lighting to highlight the ocean. Finally, the foreach loop over the effect passes draws the ocean on the Windows Phone 7 screen. Note that we must set the updated ocean vertices into the vertex buffer every frame.
