Windows Phone: Entering the Exciting World of 3D Models (Part 1)


Controlling a model with the help of trackball rotation

Letting the player rotate a model in any direction gives a Windows Phone 7 game extra ways to view that model. For the programmer, a trackball viewer is also a convenient way to check whether a model exported from modeling software renders correctly. In this recipe, you will learn how to control a model with trackball rotation.

How to do it…

Follow these steps to control a model in trackball rotation:

  1. Create a Windows Phone Game project named ModelTrackBall and rename Game1.cs to ModelTrackBallGame.cs. Then add the tree.fbx model file to the content project.
  2. Declare the variables for rotating and rendering the model in the ModelTrackBallGame class fields:
    [code]
    // Tree model
    Model modelTree;
    // Tree model world position
    Matrix worldTree = Matrix.Identity;
    // Camera Position
    Vector3 cameraPosition;
    // Camera look at target
    Vector3 cameraTarget;
    // Camera view matrix
    Matrix view;
    // Camera projection matrix
    Matrix projection;
    // Angle for trackball rotation
    Vector2 angle;
    [/code]
  3. Initialize the camera and enable the FreeDrag gesture. Add the following code to the Initialize() method:
    [code]
    // Initialize the camera
    cameraPosition = new Vector3(0, 40, 40);
    cameraTarget = Vector3.Zero + new Vector3(0, 10, 0);
    view = Matrix.CreateLookAt(cameraPosition, cameraTarget,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000.0f);
    // Instance the angle
    angle = new Vector2();
    // Enable the FreeDrag gesture
    TouchPanel.EnabledGestures = GestureType.FreeDrag;
    [/code]
  4. Rotate the tree model. Insert the following code into the Update() method:
    [code]
    // Check whether any gesture is available
    if (TouchPanel.IsGestureAvailable)
    {
    // Read the on-going gesture
    GestureSample gesture = TouchPanel.ReadGesture();
    if (gesture.GestureType == GestureType.FreeDrag)
    {
    // If the gesture is FreeDrag, read the delta value
    // for model rotation
    angle.Y = gesture.Delta.X * 0.001f;
    angle.X = gesture.Delta.Y * 0.001f;
    }
    }
    // Rotate the tree model around axis Y
    worldTree *= Matrix.CreateRotationY(angle.Y);
    // Rotate the tree model around axis X
    worldTree *= Matrix.CreateRotationX(angle.X);
    [/code]
  5. Render the rotating tree model to the screen. First, we define the DrawModel() method:
    [code]
    // Draw the model on screen
    public void DrawModel(Model model, Matrix world, Matrix view,
    Matrix projection)
    {
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);
    foreach (ModelMesh mesh in model.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.EnableDefaultLighting();
    effect.World = transforms[mesh.ParentBone.Index] *
    world;
    effect.View = view;
    effect.Projection = projection;
    }
    mesh.Draw();
    }
    }
    [/code]
  6. Then call DrawModel() from the Draw() method:
    [code]
    DrawModel(modelTree, worldTree, view, projection);
    [/code]
  7. Build and run the application. It should run as shown in the following screenshots:
    trackball rotation

How it works…

In step 2, modelTree will load and store the tree model for rendering; worldTree represents the world position of the model tree. The following four variables, cameraPosition, cameraTarget, view, and projection are responsible for initializing and manipulating the camera; the last variable angle specifies the angle value when GestureType.FreeDrag takes place.

In step 3, the first two lines define the camera's world position and its look-at target. Then we create the view and projection matrices for the camera. After that, we initialize the angle object and enable the FreeDrag gesture through TouchPanel.EnabledGestures.
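The look-at construction can be sketched outside XNA. The following framework-independent Python sketch (all names are illustrative, not part of the recipe) builds a right-handed view matrix the way Matrix.CreateLookAt does, and checks that the look-at target lands straight ahead on the negative Z axis:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def normalize(a):
    length = math.sqrt(dot(a, a))
    return [x / length for x in a]

def look_at(camera, target, up):
    """Right-handed look-at basis, mirroring Matrix.CreateLookAt's math."""
    zaxis = normalize([c - t for c, t in zip(camera, target)])  # camera looks down -Z
    xaxis = normalize(cross(up, zaxis))
    yaxis = cross(zaxis, xaxis)
    # Translation row that moves the camera to the origin
    translation = [-dot(xaxis, camera), -dot(yaxis, camera), -dot(zaxis, camera)]
    return [xaxis, yaxis, zaxis], translation

def to_view_space(view, point):
    """Apply the view transform to a point (w = 1)."""
    axes, translation = view
    return [dot(axes[i], point) + translation[i] for i in range(3)]

# Same camera as the recipe: at (0, 40, 40), looking at (0, 10, 0)
view = look_at([0, 40, 40], [0, 10, 0], [0, 1, 0])
print(to_view_space(view, [0, 10, 0]))   # approximately [0, 0, -50]
```

The target is 50 units from the camera, so it maps to (0, 0, -50) in view space, and the camera position itself maps to the origin.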

In step 4, the first part of the code, before the rotation, reads the delta value of the FreeDrag gesture. We use TouchPanel.IsGestureAvailable to check whether any gestures are available, then call TouchPanel.ReadGesture() to get the ongoing gesture. After that, we determine whether the gesture is a FreeDrag; if so, we assign Delta.X to angle.Y for rotating the model around the Y-axis and Delta.Y to angle.X for rotating it around the X-axis. Once the latest angle values are known, it is time to rotate the tree model: we use Matrix.CreateRotationY() and Matrix.CreateRotationX() to rotate it around the Y- and X-axes.
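Because each frame multiplies another small rotation matrix into worldTree, the rotations accumulate over time. The composition rule can be checked with a short, framework-independent Python sketch (rotation_y mirrors the row-vector layout of Matrix.CreateRotationY; the names are illustrative):

```python
import math

def rotation_y(angle):
    """3x3 rotation about the Y axis (row-vector convention,
    like XNA's Matrix.CreateRotationY)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0.0, -s],
            [0.0, 1.0, 0.0],
            [s, 0.0, c]]

def matmul(a, b):
    """Multiply two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Multiplying ten 0.1-radian rotations equals one 1.0-radian rotation
world = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
for _ in range(10):
    world = matmul(world, rotation_y(0.1))

expected = rotation_y(1.0)
```

This is exactly why the recipe keeps multiplying worldTree by a small per-frame rotation: each drag delta adds a little more spin on top of the orientation already stored in the matrix.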

Translating the model in world coordinates

Translating a model through the 3D world is a basic operation in Windows Phone 7 games; it lets you move a game object from one place to another. Jumping, running, and crawling are all based on translation. In this recipe, you will learn how to translate a model along an axis.

How to do it…

The following steps will show you how to perform one of the most basic and useful operations on 3D models: translation.

  1. Create a Windows Phone Game project named TranslateModel and rename Game1.cs to TranslateModelGame.cs. Next, add the model file ball.fbx and the font file gameFont.spritefont to the content project.
  2. Declare the variables for ball translation. Add the following lines to the TranslateModelGame class:
    [code]
    // Sprite font for showing the notice message
    SpriteFont font;
    // The beginning offset at axis X
    float begin;
    // The ending offset at axis X
    float end;
    // the translation value at axis X
    float translation;
    // Ball model
    Model modelBall;
    // Ball model position
    Matrix worldBall = Matrix.Identity;
    // Camera position
    Vector3 cameraPosition;
    // Camera view and projection matrix
    Matrix view;
    Matrix projection;
    // Indicate the screen tapping state
    bool Tapped;
    [/code]
  3. Initialize the camera, and define the start and end position for the ball. Insert the following code to the Initialize() method:
    [code]
    // Initialize the camera position
    cameraPosition = new Vector3(0, 5, 10);
    // Initialize the camera view and projection matrices
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000.0f);
    // Define the offset of beginning position from Vector.Zero at
    // axis X.
    begin = -5;
    // Define the offset of ending position from Vector.Zero at
    // axis X.
    end = 5;
    // Translate the ball to the beginning position
    worldBall *= Matrix.CreateTranslation(begin, 0, 0);
    [/code]
  4. In this step, you will translate the model smoothly when you touch the phone screen. Add the following code into the Update() method:
    [code]
    // Check whether the screen is tapped
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    if (GraphicsDevice.Viewport.Bounds.Contains
    ((int)touches[0].Position.X, (int)touches[0].
    Position.Y))
    {
    Tapped = true;
    }
    }
    // If the screen is tapped, move the ball in a straight
    // line along the axis X
    if (Tapped)
    {
    float next = MathHelper.SmoothStep(begin, end, 0.1f);
    translation = next - begin;
    begin = next;
    worldBall *= Matrix.CreateTranslation(translation, 0, 0);
    }
    [/code]
  5. Draw the ball model and display the instructions on screen. Paste the following code to the Draw() method:
    [code]
    // Draw the ball model
    DrawModel(modelBall, worldBall, view, projection);
    // Draw the text
    spriteBatch.Begin();
    spriteBatch.DrawString(font, "Please Tap the Screen",
    new Vector2(0, 0), Color.White);
    spriteBatch.End();
    [/code]
  6. We still need to add the DrawModel() method to the TranslateModelGame class:
    [code]
    public void DrawModel(Model model, Matrix world, Matrix view,
    Matrix projection)
    {
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);
    foreach (ModelMesh mesh in model.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.EnableDefaultLighting();
    effect.World = transforms[mesh.ParentBone.Index] *
    world;
    effect.View = view;
    effect.Projection = projection;
    }
    mesh.Draw();
    }
    }
    [/code]
  7. Now build and run the application. It will look similar to the following screenshots:
    world coordinates

How it works…

In step 2, the SpriteFont is used to render text on screen; begin and end specify the beginning and ending offsets along the X-axis; translation holds the actual value for the ball's translation along the X-axis; modelBall loads and stores the ball model; worldBall represents the ball's world position in 3D; the following three variables, cameraPosition, view, and projection, are used to initialize the camera. The bool value Tapped indicates whether the screen was tapped.

In step 4, the first part, before if (Tapped), checks whether the tapped position lies inside the screen bounds; if it does, Tapped is set to true. Once the screen is tapped, MathHelper.SmoothStep() moves the begin value toward the end value frame by frame using cubic interpolation, and the per-frame step is stored in translation. Matrix.CreateTranslation() then generates a translation matrix from that step to move the ball model through the 3D world.
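MathHelper.SmoothStep() is a clamped cubic (Hermite) interpolation; feeding its result back in as the new begin value moves the ball a shrinking fraction of the remaining distance every frame. A small Python sketch of this feedback loop (the smooth_step formula matches XNA's documented behavior; the variable names are illustrative):

```python
def smooth_step(v1, v2, amount):
    """Clamped cubic Hermite interpolation between v1 and v2,
    as in XNA's MathHelper.SmoothStep."""
    amount = max(0.0, min(1.0, amount))
    t = amount * amount * (3.0 - 2.0 * amount)
    return v1 + (v2 - v1) * t

begin, end = -5.0, 5.0          # same offsets as the recipe
path = []
for _ in range(200):            # one iteration per frame
    step = smooth_step(begin, end, 0.1) - begin   # per-frame delta
    begin += step
    path.append(begin)
```

With amount = 0.1, each frame covers 0.1 * 0.1 * (3 - 0.2) = 2.8% of the remaining gap, so the ball starts fast relative to the distance left and eases in toward end without ever overshooting it.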

Scaling a model

By changing the scale of a model, you can adjust it to fit the scene, or build special effects, such as a little sprite that suddenly becomes much stronger and bigger after taking magical water. In this recipe, you will learn how to change the size of a model at runtime.

How to do it…

Follow these steps to scale a 3D model:

  1. Create a Windows Phone Game project named ScaleModel and rename Game1.cs to ScaleModelGame.cs. Then add the model file ball.fbx and the font file gameFont.spritefont to the content project.
  2. Declare the necessary variables. Add the following lines to the ScaleModel class field:
    [code]
    // SpriteFont for showing the scale value on screen
    SpriteFont font;
    // Ball model
    Model modelBall;
    // Ball model world position
    Matrix worldBall = Matrix.Identity;
    // Camera Position
    Vector3 cameraPosition;
    // Camera view matrix
    Matrix view;
    // Camera projection matrix
    Matrix projection;
    // Scale factor
    float scale = 1;
    // The size the model will scale to
    float NewSize = 5;
    [/code]
  3. Initialize the camera. Insert the following code into the Initialize() method:
    [code]
    // Initialize the camera
    cameraPosition = new Vector3(0, 5, 10);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000.0f);
    [/code]
  4. Load the ball model and game font. Paste the following code into the LoadContent() method:
    [code]
    // Load the ball model
    modelBall = Content.Load<Model>("ball");
    // Load the game font
    font = Content.Load<SpriteFont>("gameFont");
    [/code]
  5. This step will change the scale value of the ball model to the designated size. Add the following lines to the Update() method:
    [code]
    scale = MathHelper.SmoothStep(scale, NewSize, 0.1f);
    worldBall = Matrix.Identity;
    worldBall *= Matrix.CreateScale(scale);
    [/code]
  6. Draw the ball and font on the Windows Phone 7 screen. Add the following code to the Draw() method:
    [code]
    // Draw the ball
    DrawModel(modelBall, worldBall, view, projection);
    // Draw the scale value
    spriteBatch.Begin();
    spriteBatch.DrawString(font, "scale: " + scale.ToString(), new
    Vector2(0, 0), Color.White);
    spriteBatch.End();
    [/code]
  7. The DrawModel() method should be as follows:
    [code]
    // Draw the model on screen
    public void DrawModel(Model model, Matrix world, Matrix view,
    Matrix projection)
    {
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);
    foreach (ModelMesh mesh in model.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.EnableDefaultLighting();
    effect.World = transforms[mesh.ParentBone.Index] *
    world;
    effect.View = view;
    effect.Projection = projection;
    }
    mesh.Draw();
    }
    }
    [/code]
  8. Build and run the application. It will run similar to the following screenshots:
    Scaling a model

How it works…

In step 2, the font variable is responsible for drawing the scale value on screen; modelBall loads the ball model; worldBall is the key matrix that specifies the world position and scale of the ball model; scale stands for the size factor of the ball model, and its initial value of 1 means the ball is at its original size; NewSize indicates the new size the ball model will scale to.

In step 5, the MathHelper.SmoothStep() method uses cubic interpolation to change the current scale to the new value smoothly. Before calling Matrix.CreateScale() to create the scale matrix and multiplying it into the worldBall matrix, we must reset worldBall to Matrix.Identity; otherwise each frame's scale would be multiplied on top of the previous frame's, compounding the scale instead of setting it absolutely.
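The need for the reset can be shown with a scalar stand-in for the matrix diagonal: rebuilding the matrix from Identity each frame keeps the scale absolute, while multiplying into the old matrix compounds it. A hypothetical Python sketch (names are illustrative):

```python
def smooth_step(v1, v2, amount):
    """Clamped cubic interpolation, as in MathHelper.SmoothStep."""
    amount = max(0.0, min(1.0, amount))
    t = amount * amount * (3.0 - 2.0 * amount)
    return v1 + (v2 - v1) * t

scale, new_size = 1.0, 5.0
absolute = 1.0      # worldBall reset to Identity, then scaled
compounded = 1.0    # worldBall scaled without the reset (the bug)
for _ in range(50):
    scale = smooth_step(scale, new_size, 0.1)
    absolute = scale          # CreateScale applied to a fresh Identity
    compounded *= scale       # CreateScale multiplied into the old matrix
```

After 50 frames the absolute version sits a little under the target size of 5, while the compounded version has exploded far beyond it, which is exactly the artifact the Matrix.Identity reset prevents.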

Viewing the model hierarchy information

In Windows Phone 7 3D game programming, models usually come from modeling software such as 3DS MAX or Maya. Sometimes you do not want to control a complete animation, only part of it, and for that you need to know how the model is subdivided. A model is organized as a tree, and you can locate a specific mesh or bone by searching that tree; there is no need to write the search yourself, as the XNA framework has done it for you. As a handy reference, you should know the hierarchy of the model and the name of every part. In this recipe, you will learn how to get the model hierarchy information.

How to do it…

  1. Create a Windows Phone Game project named ModelHierarchy, and rename Game1.cs to ModelHierarchyGame.cs. Then, add the tank.fbx model file from the XNA App Hub samples to the content project. After that, create a Content Pipeline Extension Library named ModelHierarchyProcessor and rename ContentProcessor1.cs to ModelHierarchyProcessor.cs.
  2. Create the ModelHierarchyProcessor class in the ModelHierarchyProcessor.cs file.
    [code]
    [ContentProcessor(DisplayName = "ModelHierarchyProcessor")]
    public class ModelHierarchyProcessor : ModelProcessor
    {
    public override ModelContent Process(NodeContent input,
    ContentProcessorContext context)
    {
    context.Logger.LogImportantMessage(
    "---- Model Bone Hierarchy ----");
    // Show the model hierarchy
    DemonstrateNodeTree(input, context, "");
    return base.Process(input, context);
    }
    private void DemonstrateNodeTree(NodeContent input,
    ContentProcessorContext context, string start)
    {
    // Output the name and type of current model part
    context.Logger.LogImportantMessage(
    start + "- Name: [{0}] - {1}", input.Name,
    input.GetType().Name);
    // Iterate all of the sub content of current
    //NodeContent
    foreach (NodeContent node in input.Children)
    DemonstrateNodeTree(node, context, start + "- ");
    }
    }
    [/code]
  3. Add ModelHierarchyProcessor.dll to the content project's reference list and set the Content Processor of the tank model to ModelHierarchyProcessor, as shown in the following screenshot:
    ModelHierarchyProcessor
  4. Build the ModelHierarchy project. In the Output window, the model hierarchy information will show up as follows:
    [code]
    ---- Model Bone Hierarchy ----
    - Name: [tank_geo] - MeshContent
    - - Name: [r_engine_geo] - MeshContent
    - - - Name: [r_back_wheel_geo] - MeshContent
    - - - Name: [r_steer_geo] - MeshContent
    - - - - Name: [r_front_wheel_geo] - MeshContent
    - - Name: [l_engine_geo] - MeshContent
    - - - Name: [l_back_wheel_geo] - MeshContent
    - - - Name: [l_steer_geo] - MeshContent
    - - - - Name: [l_front_wheel_geo] - MeshContent
    - - Name: [turret_geo] - MeshContent
    - - - Name: [canon_geo] - MeshContent
    - - - Name: [hatch_geo] - MeshContent
    [/code]
    Compare it to the model information in 3DS MAX, as shown in the following screenshot; they should match completely. To look up this information in 3DS MAX, click Tools | Open Container Explorer.
    Tools | Open Container Explorer

How it works…

In step 2, ModelHierarchyProcessor inherits directly from ModelProcessor because we just need to print out the model hierarchy. In the DemonstrateNodeTree() method, which is the key method for showing the model's mesh and bone tree, context.Logger.LogImportantMessage() prints the name and type of the current NodeContent. During the model processing phase of a build, the NodeContent is usually a MeshContent or a BoneContent. The recursion then checks whether the current NodeContent has child node contents; if so, we process the children one by one at the next level down. Finally, the Process() method calls DemonstrateNodeTree() before returning the processed ModelContent from base.Process().
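The depth-first walk that DemonstrateNodeTree() performs can be mimicked on a toy tree. This framework-independent Python sketch (Node is a stand-in for NodeContent; all names are illustrative) reproduces the indentation scheme of the log output above:

```python
class Node:
    """Minimal stand-in for a NodeContent in the model tree."""
    def __init__(self, name, kind, children=()):
        self.name, self.kind, self.children = name, kind, list(children)

def demonstrate_node_tree(node, start="", out=None):
    """Depth-first walk, emitting one '- Name: [...] - Type' line per node."""
    out = [] if out is None else out
    out.append(f"{start}- Name: [{node.name}] - {node.kind}")
    for child in node.children:
        demonstrate_node_tree(child, start + "- ", out)
    return out

# A cut-down version of the tank hierarchy from the recipe
tank = Node("tank_geo", "MeshContent", [
    Node("r_engine_geo", "MeshContent", [
        Node("r_back_wheel_geo", "MeshContent"),
    ]),
    Node("turret_geo", "MeshContent", [
        Node("canon_geo", "MeshContent"),
    ]),
])
lines = demonstrate_node_tree(tank)
```

Each level of nesting adds one more "- " prefix, which is exactly how the Output window listing encodes the parent-child relationships.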

Highlighting individual meshes of a model

A 3D game model object is made up of different meshes. In real 3D game development, you sometimes want to locate a moving mesh and see its bounding wireframe; this helps you control the designated mesh more accurately. In this recipe, you will learn how to draw and highlight each mesh of a model individually.

How to do it…

The following steps will help you understand how to highlight different parts of a model for better comprehension of model vertex structure:

  1. Create a Windows Phone Game project named HighlightModelMesh and rename Game1.cs to HighlightModelMeshGame.cs. Then, add a new MeshInfo.cs file to the project. Next, add the model file tank.fbx and the font file gameFont.spritefont to the content project. After that, create a Content Pipeline Extension Library named MeshVerticesProcessor and rename ContentProcessor1.cs to MeshVerticesProcessor.cs.
  2. Define the MeshVerticesProcessor class in MeshVerticesProcessor.cs of the MeshVerticesProcessor Content Pipeline Extension Library project. The processor is an extension of ModelProcessor:
    [code]
    // This custom processor attaches vertex position data of
    // every mesh to a model's tag property.
    [ContentProcessor]
    public class MeshVerticesProcessor : ModelProcessor
    [/code]
  3. In the MeshVerticesProcessor class, we add a tagData dictionary in the class field:
    [code]
    Dictionary<string, List<Vector3>> tagData =
    new Dictionary<string, List<Vector3>>();
    [/code]
  4. Next, we define the Process() method:
    [code]
    // The main method in charge of processing the content.
    public override ModelContent Process(NodeContent input,
    ContentProcessorContext context)
    {
    FindVertices(input);
    ModelContent model = base.Process(input, context);
    model.Tag = tagData;
    return model;
    }
    [/code]
  5. Build the MeshVerticesProcessor project. Add a reference to MeshVerticesProcessor.dll in the content project and change the Content Processor of tank.fbx, as shown in the following screenshot:
    Content Processor
  6. Define the MeshInfo class in MeshInfo.cs.
    [code]
    public class MeshInfo
    {
    public string MeshName;
    public List<Vector3> Positions;
    public MeshInfo(string name, List<Vector3> positions)
    {
    this.MeshName = name;
    this.Positions = positions;
    }
    }
    [/code]
  7. From this step, we will start to render the individual wireframe mesh and the whole tank object on the Windows Phone 7 screen. First, declare the necessary variables in the HighlightModelMeshGame class fields:
    [code]
    // SpriteFont for showing the model mesh name
    SpriteFont font;
    // Tank model
    Model modelTank;
    // Tank model world position
    Matrix worldTank = Matrix.Identity;
    // Camera position
    Vector3 cameraPosition;
    // Camera view and projection matrix
    Matrix view;
    Matrix projection;
    // Indicate the screen tapping state
    bool Tapped;
    // The model mesh index in MeshInfo list
    int Index = 0;
    // Dictionary for mesh name and vertices
    Dictionary<string, List<Vector3>> meshVerticesDictionary;
    // Store the current mesh vertices
    List<Vector3> meshVertices;
    // Mesh Info list
    List<MeshInfo> MeshInfoList;
    // Vertex array for drawing the mesh vertices on screen
    VertexPositionColor[] vertices;
    // Vertex buffer store the vertex buffer
    VertexBuffer vertexBuffer;
    // The WireFrame render state
    static RasterizerState WireFrame = new RasterizerState
    {
    FillMode = FillMode.WireFrame,
    CullMode = CullMode.None
    };
    // The normal render state
    static RasterizerState Normal = new RasterizerState
    {
    FillMode = FillMode.Solid,
    CullMode = CullMode.None
    };
    [/code]
  8. Initialize the camera. Insert the code into the Initialize() method:
    [code]
    cameraPosition = new Vector3(35, 15, 35);
    // Initialize the camera view and projection matrices
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000.0f);
    meshVertices = new List<Vector3>();
    [/code]
  9. Load the tank model and font in the game. Then, map the model Tag dictionary data with mesh info to MeshInfo list. Insert the following code to the LoadContent() method:
    [code]
    // Create a new SpriteBatch, which can be used to draw
    // textures.
    spriteBatch = new SpriteBatch(GraphicsDevice);
    // Load the font
    font = Content.Load<SpriteFont>("gameFont");
    // Load the tank model
    modelTank = Content.Load<Model>("tank");
    // Get the dictionary data with mesh name and its vertices
    meshVerticesDictionary = (Dictionary<string, List<Vector3>>)
    modelTank.Tag;
    // Get the mapped MeshInfo list
    MeshInfoList = MapMeshDictionaryToList(meshVerticesDictionary);
    // Set the mesh for rendering
    SetMeshVerticesToVertexBuffer(Index);
    [/code]
  10. Change the mesh for rendering. Add the following code to the Update() method:
    [code]
    // Check whether the screen is tapped and change the rendering mesh
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    if (GraphicsDevice.Viewport.Bounds.Contains(
    (int)touches[0].Position.X, (int)touches[0].
    Position.Y))
    {
    // Clamp the Index value within the number of model
    // meshes
    Index = ++Index % MeshInfoList.Count;
    // Set the mesh index for rendering
    SetMeshVerticesToVertexBuffer(Index);
    }
    }
    [/code]
  11. Draw the tank model, the current mesh, and its name on the Windows Phone 7 screen. Paste the following code into the Draw() method:
    [code]
    GraphicsDevice device = graphics.GraphicsDevice;
    device.Clear(Color.CornflowerBlue);
    // Set the render state for drawing the tank model
    device.BlendState = BlendState.Opaque;
    device.RasterizerState = Normal;
    device.DepthStencilState = DepthStencilState.Default;
    DrawModel(modelTank, worldTank, view, projection);
    // Set the render state for drawing the current mesh
    device.RasterizerState = WireFrame;
    device.DepthStencilState = DepthStencilState.Default;
    // Declare a BasicEffect object to draw the mesh wireframe
    BasicEffect effect = new BasicEffect(device);
    effect.View = view;
    effect.Projection = projection;
    // Enable the vertex color
    effect.VertexColorEnabled = true;
    // Begin to draw
    effect.CurrentTechnique.Passes[0].Apply();
    // Set the VertexBuffer to GraphicDevice
    device.SetVertexBuffer(vertexBuffer);
    // Draw the mesh in TriangleList mode
    device.DrawPrimitives(PrimitiveType.TriangleList, 0,
    meshVertices.Count / 3);
    // Draw the mesh name on screen
    spriteBatch.Begin();
    spriteBatch.DrawString(font, "Current Mesh Name: " +
    MeshInfoList[Index].MeshName, new Vector2(0, 0),
    Color.White);
    spriteBatch.End();
    [/code]
  12. Now, build and run the application. It should run as shown in the following screenshots. When you tap the screen, the highlighted mesh changes to the next one.
    Highlighting individual meshes

How it works…

In step 3, the tagData receives the mesh name as the key and the corresponding mesh vertices as the value.

In step 4, input, a NodeContent object, represents the root NodeContent of the input model. The key call is to the FindVertices() method, which iterates over the meshes in the input model and stores each mesh's vertices in tagData under the mesh name. The method should be as follows:

[code]
// Extracting a list of all the vertex positions in
// a model.
void FindVertices(NodeContent node)
{
// Transform the current NodeContent to MeshContent
MeshContent mesh = node as MeshContent;
if (mesh != null)
{
string meshName = mesh.Name;
List<Vector3> meshVertices = new List<Vector3>();
// Look up the absolute transform of the mesh.
Matrix absoluteTransform = mesh.AbsoluteTransform;
// Loop over all the pieces of geometry in the mesh.
foreach (GeometryContent geometry in mesh.Geometry)
{
// Loop over all the indices in this piece of
// geometry. Every group of three indices
// represents one triangle.
foreach (int index in geometry.Indices)
{
// Look up the position of this vertex.
Vector3 vertex =
geometry.Vertices.Positions[index];
// Transform from local into world space.
vertex = Vector3.Transform(vertex,
absoluteTransform);
// Store this vertex.
meshVertices.Add(vertex);
}
}
tagData.Add(meshName, meshVertices);
}
// Recursively scan over the children of this node.
foreach (NodeContent child in node.Children)
{
FindVertices(child);
}
}
[/code]

The first line casts the current NodeContent to MeshContent so that we can get at the mesh vertices. If the current NodeContent is a MeshContent, we declare the meshName variable to hold the current mesh name and meshVertices to collect the mesh vertices, and store the node's absolute world transformation in the absoluteTransform matrix using MeshContent.AbsoluteTransform. The foreach loops then iterate over every index of the model's geometries, look up the corresponding vertex, transform it from object coordinates to world coordinates, and store it in meshVertices. When all the vertices of the current mesh have been processed, we add meshVertices to the tagData dictionary with meshName as the key. The last part recursively processes the children of the current node.
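The core of FindVertices(), expanding indexed geometry into a flat world-space triangle list, can be sketched without the content pipeline. In this hypothetical Python version the absolute transform is reduced to a plain translation, and the names are illustrative:

```python
def find_vertices(positions, indices, offset):
    """Expand indexed geometry into a triangle list (three entries per
    triangle), translating each vertex into world space."""
    world = []
    for i in indices:              # every 3 indices form one triangle
        x, y, z = positions[i]
        world.append((x + offset[0], y + offset[1], z + offset[2]))
    return world

# A unit quad split into two triangles, placed 10 units along X
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
indices = [0, 1, 2, 0, 2, 3]
triangle_list = find_vertices(quad, indices, (10, 0, 0))
```

Note that shared vertices are duplicated in the output (the quad's four positions become six entries), which is why the later Draw() call can use meshVertices.Count / 3 directly as the primitive count.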

In step 6, the MeshInfo class assembles the mesh name and its vertices.

In step 7, the font will be used to render the current mesh name on screen; modelTank loads the tank model; worldTank indicates the tank world position; Index determines which mesh will be rendered; meshVerticesDictionary stores the model Tag information, which stores the mesh name and mesh vertices; meshVertices saves the vertices of the current mesh for rendering; MeshInfoList will hold the mesh information mapped from meshVerticesDictionary; vertices represents the VertexPositionColor array for rendering the current mesh vertices on screen; vertexBuffer will allocate the space for the current mesh vertex array. WireFrame and Normal specify the render state for the individual mesh and the tank model.

In step 9, we call two other methods: MapMeshDictionaryToList() and SetMeshVerticesToVertexBuffer().

The MapMeshDictionaryToList() method is to map the mesh info from the dictionary to the MeshInfo list, as follows:
[code]
// Map mesh info dictionary to MeshInfo list
public List<MeshInfo> MapMeshDictionaryToList(
Dictionary<string, List<Vector3>> meshVerticesDictionary)
{
MeshInfo meshInfo;
List<MeshInfo> list = new List<MeshInfo>();
// Iterate the item in dictionary
foreach (KeyValuePair<string, List<Vector3>> item in
meshVerticesDictionary)
{
// Initialize the MeshInfo object with mesh name and
// vertices
meshInfo = new MeshInfo(item.Key, item.Value);
// Add the MeshInfo object to MeshInfoList
list.Add(meshInfo);
}
return list;
}
[/code]

We iterate and read the item of meshVerticesDictionary to meshInfo with the mesh name and vertices. Then, add the mesh info to the MeshInfoList.

The SetMeshVerticesToVertexBuffer() method is to set the current mesh vertices to vertex buffer. The code is as follows:

[code]
// Set the mesh index for rendering
private void SetMeshVerticesToVertexBuffer(int MeshIndex)
{
if (MeshInfoList.Count > 0)
{
// Get the mesh vertices
meshVertices = MeshInfoList[MeshIndex].Positions;
// Declare the VertexPositionColor array
vertices = new VertexPositionColor[meshVertices.Count];
// Initialize the VertexPositionColor array with the
// mesh vertices data
for (int i = 0; i < meshVertices.Count; i++)
{
vertices[i].Position = meshVertices[i];
vertices[i].Color = Color.Red;
}
// Initialize the VertexBuffer for VertexPositionColor
// array
vertexBuffer = new VertexBuffer(GraphicsDevice,
VertexPositionColor.VertexDeclaration,
meshVertices.Count, BufferUsage.WriteOnly);
// Set VertexPositionColor array to VertexBuffer
vertexBuffer.SetData(vertices);
}
}
[/code]

We use MeshIndex to get the current vertices from MeshInfoList. Then allocate the space for vertices—a VertexPositionColor array—and initialize the array data using meshVertices. After that, we initialize the vertexBuffer to store the VertexPositionColor array for drawing the current mesh on screen.
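The packing step can be sketched generically: given the current mesh's positions, build an array of (position, color) pairs, which is what the VertexPositionColor array holds before VertexBuffer.SetData() uploads it. A hypothetical Python sketch (names are illustrative):

```python
RED = (255, 0, 0)

def build_colored_vertices(mesh_positions, color=RED):
    """Pair every mesh vertex with a highlight color, like filling
    the VertexPositionColor array before calling SetData."""
    return [(position, color) for position in mesh_positions]

# One triangle's worth of positions from the current mesh
mesh = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
vertices = build_colored_vertices(mesh)
```

Giving every vertex the same solid color is what makes the selected mesh stand out once VertexColorEnabled is switched on in the BasicEffect.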

In step 10, the code reacts to a valid tap and advances the mesh index, cycling through the meshes so that a different mesh is shown on each tap.

In step 11, the first part of the code is to draw the tank model in Normal render state defined in the class field. The second part is responsible for rendering the current mesh in WireFrame render state. For rendering the current mesh, we declare a new BasicEffect object and enable the VertexColorEnabled attribute to highlight the selected mesh. The following is the code snippet for the DrawModel() method:

[code]
//Draw the model
public void DrawModel(Model model, Matrix world, Matrix view, Matrix
projection)
{
Matrix[] transforms = new Matrix[model.Bones.Count];
model.CopyAbsoluteBoneTransformsTo(transforms);
foreach (ModelMesh mesh in model.Meshes)
{
foreach (BasicEffect effect in mesh.Effects)
{
effect.EnableDefaultLighting();
effect.World = transforms[mesh.ParentBone.Index] * world;
effect.View = view;
effect.Projection = projection;
}
mesh.Draw();
}
}
[/code]

Implementing a rigid model animation

Since 1997, 3D animation has made modern games more fun and given them more possibilities, such as letting the player take different actions in role-playing games. 3D model animation makes a game more engaging and realistic. In this recipe, you will learn how to process and play a rigid model animation in Windows Phone 7.

How to do it…

The following steps will help you look into detail on implementing a rigid model animation:

  1. Create a Windows Phone Game project named RigidModelAnimationGame, rename Game1.cs to RigidAnimationGame.cs, and add the 3D animated model file Fan.FBX to the content project. Then create a Windows Phone Class Library project called RigidModelAnimationLibrary to define the animation data, and add the class files ModelAnimationClip.cs, AnimationClip.cs, AnimationPlayerBase.cs, ModelData.cs, Keyframe.cs, RigidAnimationPlayer.cs, and RootAnimationPlayer.cs to this project.
  2. Next, build a new Content Pipeline Extension Library project named RigidAnimationModelProcessor to process the animated model and return the model animation data to the Model object when initializing the game.
  3. Define the Keyframe class in Keyframe.cs of the RigidModelAnimationLibrary project. The Keyframe class stores one animation frame for a bone in the model. Every animation frame must refer to a corresponding bone; if you have not created a bone, or a mesh has no bone of its own, the XNA Framework automatically creates one so that the system can locate the mesh. The class should be as follows:
    [code]
    // Indicate the position of a bone of a model mesh
    public class Keyframe
    {
    public Keyframe() { }
    // Gets the index of the target bone that is animated by
    // this keyframe.
    [ContentSerializer]
    public int Bone;
    // Gets the time offset from the start of the animation to
    // this keyframe.
    [ContentSerializer]
    public TimeSpan Time;
    // Gets the bone transform for this keyframe.
    [ContentSerializer]
    public Matrix Transform;
    // Constructs a new Keyframe object.
    public Keyframe(int bone, TimeSpan time, Matrix transform)
    {
    Bone = bone;
    Time = time;
    Transform = transform;
    }
    }
    [/code]
  4. Implement the AnimationClip class in AnimationClip.cs of the RigidModelAnimationLibrary project. AnimationClip is the runtime equivalent of the Microsoft.Xna.Framework.Content.Pipeline.Graphics.AnimationContent type; it holds all the keyframes needed to describe a single model animation. The class is as follows:
    [code]
    public class AnimationClip
    {
    private AnimationClip() { }
    // The total length of the model animation
    [ContentSerializer]
    public TimeSpan Duration;
    // The collection of key frames, sorted by time, for all
    // bones
    [ContentSerializer]
    public List<Keyframe> Keyframes;
    // Animation clip constructor
    public AnimationClip(TimeSpan duration, List<Keyframe>
    keyframes)
    {
    Duration = duration;
    Keyframes = keyframes;
    }
    }
    [/code]
  5. Implement the AnimationPlayerBase class in AnimationPlayerBase.cs of the RigidModelAnimationLibrary project. This class is the base class for rigid animation players. It handles playing a clip back at a given speed, notifying clients of completion, and so on. Add the following lines to the class field:
    [code]
    // Clip currently being played
    AnimationClip currentClip;
    // Current time position and keyframe in the clip
    TimeSpan currentTime;
    int currentKeyframe;
    // Speed of playback
    float playbackRate = 1.0f;
    // The amount of time for which the animation will play.
    // TimeSpan.MaxValue will loop forever. TimeSpan.Zero will
    // play once.
    TimeSpan duration = TimeSpan.MaxValue;
    // Amount of time elapsed while playing
    TimeSpan elapsedPlaybackTime = TimeSpan.Zero;
    // Whether or not playback is paused
    bool paused;
    // Invoked when playback has completed.
    public event EventHandler Completed;
    [/code]
  6. Define the properties of the AnimationPlayerBase class:
    [code]
    // Gets the current clip
    public AnimationClip CurrentClip
    {
    get { return currentClip; }
    }
    // Current key frame index
    public int CurrentKeyFrame
    {
    get { return currentKeyframe; }
    set
    {
    IList<Keyframe> keyframes = currentClip.Keyframes;
    TimeSpan time = keyframes[value].Time;
    CurrentTime = time;
    }
    }
    // Get and set the current playing position.
    public TimeSpan CurrentTime
    {
    get { return currentTime; }
    set
    {
    TimeSpan time = value;
    // If the position moved backwards, reset the keyframe
    // index.
    if (time < currentTime)
    {
    currentKeyframe = 0;
    InitClip();
    }
    currentTime = time;
    // Read keyframe matrices.
    IList<Keyframe> keyframes = currentClip.Keyframes;
    while (currentKeyframe < keyframes.Count)
    {
    Keyframe keyframe = keyframes[currentKeyframe];
    // Stop when we've read up to the current time
    // position.
    if (keyframe.Time > currentTime)
    break;
    // Use this keyframe
    SetKeyframe(keyframe);
    currentKeyframe++;
    }
    }
    }
    [/code]
  7. Give the definition of the StartClip() method to the AnimationPlayerBase class:
    [code]
    // Starts the specified animation clip.
    public void StartClip(AnimationClip clip)
    {
    StartClip(clip, 1.0f, TimeSpan.MaxValue);
    }
    // Starts playing a clip, duration (max is loop, 0 is once)
    public void StartClip(AnimationClip clip, float playbackRate,
    TimeSpan duration)
    {
    if (clip == null)
    throw new ArgumentNullException("Clip required");
    // Store the clip and reset playing data
    currentClip = clip;
    currentKeyframe = 0;
    CurrentTime = TimeSpan.Zero;
    elapsedPlaybackTime = TimeSpan.Zero;
    // Store the data about how we want to playback
    this.playbackRate = playbackRate;
    this.duration = duration;
    // Call the virtual to allow initialization of the clip
    InitClip();
    }
    [/code]
  8. Add the implementation of Update() to the AnimationPlayerBase class:
    [code]
    // Called during the update loop to move the animation forward
    public virtual void Update(GameTime gameTime)
    {
    if (currentClip == null)
    return;
    TimeSpan time = gameTime.ElapsedGameTime;
    // Adjust for the rate
    if (playbackRate != 1.0f)
    time = TimeSpan.FromMilliseconds(
    time.TotalMilliseconds * playbackRate);
    elapsedPlaybackTime += time;
    // Check whether the animation has ended
    if (elapsedPlaybackTime > duration && duration !=
    TimeSpan.Zero ||
    elapsedPlaybackTime > currentClip.Duration &&
    duration == TimeSpan.Zero)
    {
    if (Completed != null)
    Completed(this, EventArgs.Empty);
    currentClip = null;
    return;
    }
    // Update the animation position.
    time += currentTime;
    CurrentTime = time;
    }
    [/code]
  9. Implement two virtual methods that subclasses override to customize their behavior:
    [code]
    // Subclass initialization when the clip is
    // initialized.
    protected virtual void InitClip()
    {
    }
    // For subclasses to set the associated data of a particular
    // keyframe.
    protected virtual void SetKeyframe(Keyframe keyframe)
    {
    }
    [/code]
  10. Define the RigidAnimationPlayer class in RigidAnimationPlayer.cs of the RigidModelAnimationLibrary project. This animation player knows how to play an animation on a rigid model, applying transformations to each of the objects in the model over time. The class is as follows:
    [code]
    public class RigidAnimationPlayer : AnimationPlayerBase
    {
    // This is an array of the transforms to each object in the
    // model
    Matrix[] boneTransforms;
    // Create a new rigid animation player, receive count of
    // bones
    public RigidAnimationPlayer(int count)
    {
    if (count <= 0)
    throw new Exception("Bad arguments to model animation player");
    this.boneTransforms = new Matrix[count];
    }
    // Initializes all the bone transforms to the identity
    protected override void InitClip()
    {
    int boneCount = boneTransforms.Length;
    for (int i = 0; i < boneCount; i++)
    this.boneTransforms[i] = Matrix.Identity;
    }
    // Sets the key frame for a bone to a transform
    protected override void SetKeyframe(Keyframe keyframe)
    {
    this.boneTransforms[keyframe.Bone] =
    keyframe.Transform;
    }
    // Gets the current bone transform matrices for the
    // animation
    public Matrix[] GetBoneTransforms()
    {
    return boneTransforms;
    }
    }
    [/code]
  11. Define the RootAnimationPlayer class in RootAnimationPlayer.cs of the RigidModelAnimationLibrary project. The root animation player contains a single transformation matrix to control the entire model. The class should be as follows:
    [code]
    public class RootAnimationPlayer : AnimationPlayerBase
    {
    Matrix currentTransform;
    // Initializes the transformation to the identity
    protected override void InitClip()
    {
    this.currentTransform = Matrix.Identity;
    }
    // Sets the key frame by storing the current transform
    protected override void SetKeyframe(Keyframe keyframe)
    {
    this.currentTransform = keyframe.Transform;
    }
    // Gets the current transformation being applied
    public Matrix GetCurrentTransform()
    {
    return this.currentTransform;
    }
    }
    [/code]
  12. Define the ModelData class in ModelData.cs of the RigidModelAnimationLibrary project. The ModelData class combines all the data needed to render an animated rigid model; a ModelData object will be used to store the animation data in the model's Tag property. The class looks similar to the following:
    [code]
    public class ModelData
    {
    [ContentSerializer]
    public Dictionary<string, AnimationClip>
    RootAnimationClips;
    [ContentSerializer]
    public Dictionary<string, AnimationClip>
    ModelAnimationClips;
    public ModelData(
    Dictionary<string, AnimationClip> modelAnimationClips,
    Dictionary<string, AnimationClip> rootAnimationClips
    )
    {
    ModelAnimationClips = modelAnimationClips;
    RootAnimationClips = rootAnimationClips;
    }
    private ModelData()
    {
    }
    }
    [/code]
  13. Now, build the RigidModelAnimationLibrary project and you will get RigidModelAnimationLibrary.dll.
  14. From this step onwards, we will create the RigidModelAnimationProcessor. It derives from ModelProcessor because we only need to extract the model animation data on top of the standard model processing.
    [code]
    [ContentProcessor(DisplayName = "Rigid Model Animation Processor")]
    public class RigidModelAnimationProcessor : ModelProcessor
    [/code]
  15. Define the maximum number of bones. Add the following line to the class field:
    [code]
    const int MaxBones = 59;
    [/code]
  16. Define the Process() method:
    [code]
    // The main Process method converts an intermediate format
    // content pipeline NodeContent tree to a ModelContent object
    // with embedded animation data.
    public override ModelContent Process(NodeContent input,
    ContentProcessorContext context)
    {
    ValidateMesh(input, context, null);
    List<int> boneHierarchy = new List<int>();
    // Chain to the base ModelProcessor class so it can
    // convert the model data.
    ModelContent model = base.Process(input, context);
    // Animation clips inside the object (mesh)
    Dictionary<string, AnimationClip> animationClips =
    new Dictionary<string, AnimationClip>();
    // Animation clips at the root of the object
    Dictionary<string, AnimationClip> rootClips =
    new Dictionary<string, AnimationClip>();
    // Process the animations
    ProcessAnimations(input, model, animationClips, rootClips);
    // Store the data for the model
    model.Tag = new ModelData(animationClips, rootClips);
    return model;
    }
    [/code]
  17. Define the ProcessAnimations() method:
    [code]
    // Converts an intermediate format content pipeline
    // AnimationContentDictionary object to our runtime
    // AnimationClip format.
    static void ProcessAnimations(
    NodeContent input,
    ModelContent model,
    Dictionary<string, AnimationClip> animationClips,
    Dictionary<string, AnimationClip> rootClips)
    {
    // Build up a table mapping bone names to indices.
    Dictionary<string, int> boneMap =
    new Dictionary<string, int>();
    for (int i = 0; i < model.Bones.Count; i++)
    {
    string boneName = model.Bones[i].Name;
    if (!string.IsNullOrEmpty(boneName))
    boneMap.Add(boneName, i);
    }
    // Convert each animation in the root of the object
    foreach (KeyValuePair<string, AnimationContent> animation
    in input.Animations)
    {
    AnimationClip processed = ProcessRootAnimation(
    animation.Value, model.Bones[0].Name);
    rootClips.Add(animation.Key, processed);
    }
    // Get the unique names of the animations on the mesh
    // children
    List<string> animationNames = new List<string>();
    AddAnimationNames(animationNames, input);
    // Now create those animations
    foreach (string key in animationNames)
    {
    AnimationClip processed = ProcessAnimation(key,
    boneMap, input);
    animationClips.Add(key, processed);
    }
    }
    [/code]
  18. Define the ProcessRootAnimation() method, in the RigidModelAnimationProcessor class, to convert an intermediate format content pipeline AnimationContent object to the runtime AnimationClip format. The code is as follows:
    [code]
    public static AnimationClip ProcessRootAnimation(
    AnimationContent animation, string name)
    {
    List<Keyframe> keyframes = new List<Keyframe>();
    // The root animation is controlling the root of the bones
    AnimationChannel channel = animation.Channels[name];
    // Add the transformations on the root of the model
    foreach (AnimationKeyframe keyframe in channel)
    {
    keyframes.Add(new Keyframe(0, keyframe.Time,
    keyframe.Transform));
    }
    // Sort the merged keyframes by time.
    keyframes.Sort(CompareKeyframeTimes);
    if (keyframes.Count == 0)
    throw new InvalidContentException("Animation has no"
    + " keyframes.");
    if (animation.Duration <= TimeSpan.Zero)
    throw new InvalidContentException("Animation has a"
    + " zero duration.");
    return new AnimationClip(animation.Duration, keyframes);
    }
    [/code]
  19. Define the AddAnimationNames() static method in the RigidModelAnimationProcessor class; it collects the unique animation names used to locate the different animations. It is as follows:
    [code]
    static void AddAnimationNames(List<string> animationNames,
    NodeContent node)
    {
    foreach (NodeContent childNode in node.Children)
    {
    // Gather every animation name used by this child
    // node, keeping each name only once
    foreach (string key in childNode.Animations.Keys)
    {
    if (!animationNames.Contains(key))
    animationNames.Add(key);
    }
    AddAnimationNames(animationNames, childNode);
    }
    }
    [/code]
  20. Define the ProcessAnimation() method, in the RigidModelAnimationProcessor class, to process the animations of individual model meshes. The method definition should be as follows:
    [code]
    // Converts an intermediate format content pipeline
    // AnimationContent object to the AnimationClip format.
    static AnimationClip ProcessAnimation(
    string animationName,
    Dictionary<string, int> boneMap,
    NodeContent input)
    {
    List<Keyframe> keyframes = new List<Keyframe>();
    TimeSpan duration = TimeSpan.Zero;
    // Get all of the key frames and duration of the input
    // animated model
    GetAnimationKeyframes(animationName, boneMap, input,
    ref keyframes, ref duration);
    // Sort the merged keyframes by time.
    keyframes.Sort(CompareKeyframeTimes);
    if (keyframes.Count == 0)
    throw new InvalidContentException("Animation has no"
    + " keyframes.");
    if (duration <= TimeSpan.Zero)
    throw new InvalidContentException("Animation has a"
    + " zero duration.");
    return new AnimationClip(duration, keyframes);
    }
    [/code]
  21. Define the GetAnimationKeyframes() method, referenced by ProcessAnimation(), in the RigidModelAnimationProcessor class. It processes the input animated model and collects all of its keyframes and its duration. The complete implementation of the method is as follows:
    [code]
    // Get all of the key frames and duration of the input
    // animated model
    static void GetAnimationKeyframes(
    string animationName,
    Dictionary<string, int> boneMap,
    NodeContent input,
    ref List<Keyframe> keyframes,
    ref TimeSpan duration)
    {
    // Add the transformation on each of the meshes from the
    // animation key frames
    foreach (NodeContent childNode in input.Children)
    {
    // If this node doesn't have keyframes for this
    // animation we should just skip it
    if (childNode.Animations.ContainsKey(animationName))
    {
    AnimationChannel childChannel =
    childNode.Animations[animationName].Channels[
    childNode.Name];
    if(childNode.Animations[animationName].Duration !=
    duration)
    {
    if (duration < childNode.Animations[
    animationName].Duration)
    duration = childNode.Animations[
    animationName].Duration;
    }
    int boneIndex;
    if(!boneMap.TryGetValue(childNode.Name,
    out boneIndex))
    {
    throw new InvalidContentException(
    string.Format("Found animation for"
    + " bone '{0}', which is not part of the"
    + " model.", childNode.Name));
    }
    foreach (AnimationKeyframe keyframe in
    childChannel)
    {
    keyframes.Add(new Keyframe(boneIndex,
    keyframe.Time, keyframe.Transform));
    }
    }
    // Get the child animation key frame by animation
    // name of current NodeContent
    GetAnimationKeyframes(animationName, boneMap,
    childNode, ref keyframes, ref duration);
    }
    }
    [/code]
  22. Define the CompareKeyframeTimes() method, which sorts the animation keyframes into ascending time order.
    [code]
    // Comparison function for sorting keyframes into ascending
    // time order.
    static int CompareKeyframeTimes(Keyframe a, Keyframe b)
    {
    return a.Time.CompareTo(b.Time);
    }
    [/code]
  23. Now that the RigidModelAnimationProcessor class is complete, build the RigidModelAnimationProcessor project to get the RigidModelAnimationProcessor.dll library file. Add RigidModelAnimationProcessor.dll to the content project reference list and change the content processor of Fan.fbx to RigidModelAnimationProcessor, as shown in the following screenshot:
    RigidModelAnimationProcessor
  24. From this step, you will begin to draw the animated model on the Windows Phone 7 screen. Add the following fields to the RigidAnimationGame class:
    [code]
    // Rigid model, animation players, clips
    Model rigidModel;
    Matrix rigidWorld;
    bool playingRigid;
    RootAnimationPlayer rigidRootPlayer;
    AnimationClip rigidRootClip;
    RigidAnimationPlayer rigidPlayer;
    AnimationClip rigidClip;
    // View and Projection matrices used for rendering
    Matrix view;
    Matrix projection;
    RasterizerState Solid = new RasterizerState()
    {
    FillMode = FillMode.Solid,
    CullMode = CullMode.None
    };
    [/code]
  25. Now, build and run the application. It runs as shown in the following screenshots:
    rigid model animation

How it works…

In step 3, keyframes change a bone from its original transformation to a new one. The transformation is stored in the Transform field as a Matrix, and the Time field, a TimeSpan, stores the time offset at which the keyframe is played. ContentSerializer is an attribute that marks a field or property for inclusion in content serialization and controls how it is serialized.

In step 5, currentClip is the animation currently being played; currentTime is the current playback position; currentKeyframe is the index of the keyframe currently being played; playbackRate controls how fast the animation plays; duration is the total length of time for which the current animation should play; elapsedPlaybackTime is the amount of time the current animation has been playing.

In step 6, the CurrentKeyFrame property returns the index of the current keyframe. When the property is set to an integer value, it reads the Time value of that frame and assigns it to the CurrentTime property, which in turn applies the bone transformations up to that time.
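The CurrentTime setter's scan can be sketched in language-neutral terms. The following is a minimal sketch (in Python; the function name and tuple layout are illustrative, not the book's C# API): keyframes are pre-sorted by time, seeking backwards rewinds the scan, and every keyframe up to the new time is handed to an `apply` callback that plays the role of the SetKeyframe() virtual method.

```python
# Hypothetical sketch of the CurrentTime setter's keyframe scan.
# Keyframes are (time, bone, transform) tuples pre-sorted by time.
def seek(keyframes, current_time, current_index, new_time, apply):
    if new_time < current_time:
        current_index = 0          # rewind; the C# code also calls InitClip() here
    while current_index < len(keyframes):
        time, bone, transform = keyframes[current_index]
        if time > new_time:        # stop once we pass the target time
            break
        apply(bone, transform)     # the SetKeyframe() equivalent
        current_index += 1
    return current_index
```

Note that seeking forward continues from the last applied keyframe, so per-frame updates never rescan the whole list.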

In step 7, the StartClip() method initializes the data needed to play the given clip: it stores the clip, resets the playback position, and records the requested playback rate and duration.

In step 8, the Update() method accumulates the elapsed playing time, scaled by playbackRate. It then checks whether elapsedPlaybackTime has exceeded the requested duration or, when duration is TimeSpan.Zero, the clip's own Duration. If so, the current animation has ended, and Completed is triggered when it has been initialized. Otherwise, the elapsed frame time is added to currentTime and assigned to the CurrentTime property, which advances the animation position.
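The timing bookkeeping above can be condensed into a short sketch (Python here; the function and names are illustrative, not the book's C# API). As in StartClip(), a duration of infinity stands in for TimeSpan.MaxValue (loop forever) and 0 stands in for TimeSpan.Zero (play once).

```python
import math

# Hypothetical sketch of Update()'s timing logic for one tick.
def advance(elapsed, frame_time, playback_rate, duration, clip_duration):
    """Return (new_elapsed, finished)."""
    frame_time *= playback_rate        # scale by playback speed
    elapsed += frame_time
    finished = ((duration != 0 and elapsed > duration) or
                (duration == 0 and elapsed > clip_duration))
    return elapsed, finished
```

With `duration = math.inf` the finished test never fires, mirroring how a TimeSpan.MaxValue duration keeps the clip alive indefinitely.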

In step 10, boneTransforms stores all of the transformation matrices of the model's bones. The SetKeyframe() method assigns the transformation matrix of the given keyframe to the corresponding element of boneTransforms, based on the keyframe's bone index. The GetBoneTransforms() method returns the processed boneTransforms array for the actual transformation computation on the model.
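The bone-transform table behaves like the following minimal sketch (Python; RigidPlayerSketch and the string "I" are illustrative stand-ins for the C# class and Matrix.Identity):

```python
# Hypothetical sketch of RigidAnimationPlayer's bone-transform table.
IDENTITY = "I"   # stands in for Matrix.Identity

class RigidPlayerSketch:
    def __init__(self, bone_count):
        if bone_count <= 0:
            raise ValueError("bad bone count for animation player")
        self.bone_transforms = [IDENTITY] * bone_count

    def init_clip(self):
        # reset every bone to the identity when a clip starts
        self.bone_transforms = [IDENTITY] * len(self.bone_transforms)

    def set_keyframe(self, bone, transform):
        # store the keyframe's matrix at the bone's index
        self.bone_transforms[bone] = transform
```

Only the bones touched by a keyframe move; every other entry keeps the identity set in init_clip().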

In step 11, this class transforms the entire model rather than an individual mesh with its own animation. Notice the difference between the RootAnimationPlayer and RigidAnimationPlayer classes: the RigidAnimationPlayer constructor receives a count parameter, but RootAnimationPlayer does not. The reason is that RootAnimationPlayer controls the transformation of the whole model and only needs the root bone information, which is supplied at runtime. RigidAnimationPlayer, on the other hand, plays the animation of every individual mesh, so it must know how many bones the meshes have in order to allocate enough space for the transformation matrices.

In step 16, the Process() method is the entry point that reads the animations from an animated model: the root animation for the whole-model transformation and the animations of every individual model mesh. Finally, it assigns a ModelData object holding the root animations and mesh animations to the Model.Tag property so the model can be animated at runtime.

In step 17, the code first creates the boneMap dictionary, which maps each bone name to its index in the model. ProcessAnimations() then processes the animated model into two sets of animation clips: rootClips, produced by ProcessRootAnimation() from the root bone, and animationClips, produced by ProcessAnimation() for the mesh animations.
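The boneMap construction amounts to the following sketch (Python; the bone names are made up for illustration): iterate the bones in order and record an index only for bones that actually carry a name, since unnamed bones cannot be matched to animation channels.

```python
# Hypothetical sketch of the bone-name-to-index table built at the
# start of ProcessAnimations().
def build_bone_map(bone_names):
    bone_map = {}
    for index, name in enumerate(bone_names):
        if name:                   # skip unnamed bones
            bone_map[name] = index
    return bone_map
```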

In step 18, the first line declares a Keyframe collection that stores all of the keyframes of the animation. The code then gets an AnimationChannel from AnimationContent.Channels by the root bone name; this channel holds the transformation data needed to transform the child bones of the root. Since XNA generates a corresponding root bone for every model to position it, the mesh is transformed when the root bone is transformed. After getting the animation channel data, the foreach loop reads every content pipeline AnimationKeyframe object from the current AnimationChannel and stores the keyframe information in the runtime Keyframe type defined in RigidModelAnimationLibrary; notice that the digit 0 in the Keyframe constructor's parameter list stands for the root bone index. Next, keyframes.Sort() sorts the keyframes collection into the actual animation running order; CompareKeyframeTimes is the keyframe time comparison method, which will be discussed later on. Two validation checks then make sure the keyframes and the animation duration are valid. Finally, the method returns a new AnimationClip built from the Duration and keyframes to the caller.

In step 19, the first foreach loop iterates over the child NodeContent objects of the input, and the second foreach loop examines every animation key of the current child node. The key is the animation name; for example, Take001 will be the animation name when you export an FBX format 3D model from Autodesk 3ds Max. The animationNames.Contains() method then checks whether the animation name is already in the collection; if not, the new animation name is added to the animationNames collection.
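The recursion above can be sketched as follows (Python; SceneNode is a hypothetical stand-in for NodeContent that holds animation names instead of full clips):

```python
# Hypothetical sketch of AddAnimationNames()'s recursive walk.
class SceneNode:
    def __init__(self, animations=(), children=()):
        self.animations = list(animations)   # animation names on this node
        self.children = list(children)

def add_animation_names(names, node):
    for child in node.children:
        for key in child.animations:
            if key not in names:             # keep each name only once
                names.append(key)
        add_animation_names(names, child)    # recurse into grandchildren
```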

In step 20, the two variables keyframes and duration hold all of the keyframes and the running time of the current model mesh animation. We call the GetAnimationKeyframes() method to collect the keyframes and duration of the animation identified by the animation name. After that, the keyframes.Sort() method sorts the keyframes into the animation running order, and the next two checks verify that the current animation is valid. At last, this method returns a new AnimationClip object corresponding to the input animation name.

In step 21, the foreach loop iterates over every child NodeContent of the input. In the loop body, the first job is to check whether the current NodeContent contains the input animation, using the NodeContent.Animations.ContainsKey() method. If it does, childNode.Animations[animationName].Channels[childNode.Name] finds the animation channel, which stores all of the keyframes of a mesh or bone, such as Plane001, for the specified animation of the current NodeContent. The next few lines keep the duration equal to the longest duration found among the child animations. So far, we have collected the AnimationChannel data and duration needed to create the runtime Keyframe collection. Before generating the Keyframe set, we need the bone index it will attach to; the boneMap.TryGetValue() method returns the boneIndex value for the current NodeContent name. After that, the following foreach loop goes through every AnimationKeyframe in childChannel, the AnimationChannel we obtained earlier, and adds a new Keyframe object to keyframes with the bone index, keyframe time, and the related transformation matrix. The last line recursively gathers the animation keyframes and duration of the current node's children.
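The recursive merge can be sketched like this (Python; ContentNode and ClipData are hypothetical stand-ins for the XNA content types): every child's keyframes for the named animation are appended with that child's bone index, and the returned duration is the longest duration seen anywhere in the subtree.

```python
# Hypothetical sketch of GetAnimationKeyframes().
class ClipData:
    def __init__(self, duration, keyframes):
        self.duration = duration
        self.keyframes = keyframes     # list of (time, transform)

class ContentNode:
    def __init__(self, name, animations=None, children=()):
        self.name = name
        self.animations = animations or {}
        self.children = list(children)

def get_keyframes(name, bone_map, node, keyframes, duration):
    for child in node.children:
        if name in child.animations:
            clip = child.animations[name]
            duration = max(duration, clip.duration)
            bone = bone_map[child.name]    # a missing bone raises here,
                                           # like InvalidContentException
            for time, transform in clip.keyframes:
                keyframes.append((time, bone, transform))
        duration = get_keyframes(name, bone_map, child,
                                 keyframes, duration)
    return duration
```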

In step 22, CompareKeyframeTimes() calls the CompareTo() method of the Keyframe's Time field, comparing the TimeSpan of keyframe a with that of keyframe b and returning an integer that indicates whether a's time is earlier than, equal to, or later than b's. This comparison lets the keyframes.Sort() calls in ProcessAnimation() and ProcessRootAnimation() know how to sort the keyframes.
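Expressed in Python terms, the comparator simply orders the merged keyframes by their time stamp (the tuples here are the same illustrative (time, bone, transform) layout, not the book's C# type):

```python
# Hypothetical sketch of sorting keyframes into ascending time order,
# as keyframes.Sort(CompareKeyframeTimes) does.
def sort_keyframes(keyframes):
    return sorted(keyframes, key=lambda kf: kf[0])
```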