# First Step into XNA Game, Coordinates and View

Drawing the axes for a 2D game

In 2D or 3D game programming, the axes are the basis for the position of objects. With the coordinate system, it is convenient for you to place or locate objects. In 2D games the coordinate system is made up of two axes, X and Y; in 3D games there is a third axis, the Z axis. Drawing the axes on your screen is usually a handy tool for game debugging. In this recipe, you will learn how to render the 2D axes in Windows Phone 7.

How to do it…

Follow the given steps to draw the 2D axis:

1. Create a Windows Phone Game project in Visual Studio 2010 and change the name from Game1.cs to Draw2DAxesGame.cs. Then add a new class named Axes2D.cs; this class is responsible for drawing the 2D lines on screen. We declare the field variables in the Axes2D class:
[code]
// Pixel Texture
Texture2D pixel;
public int Thickness = 5;
// Render depth of the primitive line object (0 = front, 1 =
// back)
public float Depth;
[/code]
2. Then, we define the overload constructor of the Axes2D class:
[code]
//Creates a new primitive line object.
public Axes2D(GraphicsDevice graphicsDevice, Color color)
{
// create pixels
pixel = new Texture2D(graphicsDevice, 1, 1);
Color[] pixels = new Color[1];
pixels[0] = color;
pixel.SetData<Color>(pixels);
Depth = 0;
}
[/code]
3. When the pixel data size and color are ready, the following code will draw the line object:
[code]
public void DrawLine(SpriteBatch spriteBatch, Vector2 start,
Vector2 end)
{
// calculate the distance between the two vectors
float distance = Vector2.Distance(start, end);
// calculate the angle between the two vectors
float angle = (float)Math.Atan2((double)(end.Y - start.Y),
(double)(end.X - start.X));
// stretch the pixel between the two vectors
spriteBatch.Draw(pixel,
start,
null,
Color.White,
angle,
Vector2.Zero,
new Vector2(distance, Thickness),
SpriteEffects.None,
Depth);
}
[/code]
4. Use the Axes2D class in the main game class and insert the following code at the top of the class:
[code]
// The axis X line object
Axes2D axisX;
// The axis Y line object
Axes2D axisY;
// The start and end of axis X line object
Vector2 vectorAxisXStart;
Vector2 vectorAxisXEnd;
// The start and end of axis Y line object
Vector2 vectorAxisYStart;
Vector2 vectorAxisYEnd;
[/code]
5. Initialize the axes objects and their start and end positions; add the following code to the Initialize() method:
[code]
// Set the color of axis X to red
axisX = new Axes2D(GraphicsDevice, Color.Red);
// Set the color of axis Y to green
axisY = new Axes2D(GraphicsDevice, Color.Green);
// Set the start and end positions of axis X
vectorAxisXStart = new Vector2(100,
GraphicsDevice.Viewport.Height / 2);
vectorAxisXEnd = new Vector2(700,
GraphicsDevice.Viewport.Height / 2);
// Set the start and end positions of axis Y
vectorAxisYStart = new Vector2(
GraphicsDevice.Viewport.Width / 2, 50);
vectorAxisYEnd = new Vector2(
GraphicsDevice.Viewport.Width /2, 450);
[/code]
6. Draw the two line objects on the screen and insert the following code into the Draw() method:
[code]
spriteBatch.Begin();
axisX.DrawLine(spriteBatch, vectorAxisXStart, vectorAxisXEnd);
axisY.DrawLine(spriteBatch, vectorAxisYStart, vectorAxisYEnd);
spriteBatch.End();
[/code]
7. Now, build and run the application, and it will run similar to the following screenshot. Please make sure that the Windows Phone screen has been rotated to landscape mode.

How it works…

In step 1, in the 2D line drawing for Windows Phone 7, we use a pixel texture to present the line point by point; the Thickness variable will be used to change the pixel size of the line object; the Depth value will be used to define the drawing order.

In step 2, the constructor receives a GraphicsDevice parameter and a Color parameter. We use them to create the pixel texture, which is one pixel in width and height, and set the color of the pixel texture through the SetData() method; this is another way of creating a texture in code.

In step 3, the SpriteBatch is the main object for drawing the line objects on the screen. The start parameter represents the start position of the line object; the end parameter indicates the end position. In the method body, the first line computes the distance between the start and end points; the second line computes the angle between the two positions; the third call stretches the single pixel from the start position along that angle for the computed distance. This is a generic method for drawing any line.
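The distance and angle math inside DrawLine can be verified outside XNA. The following Python sketch (the function name is mine, not part of XNA) mirrors Vector2.Distance and Math.Atan2:

```python
import math

def line_distance_angle(start, end):
    """Mirror of DrawLine's math: length of the segment and its
    angle from the positive X axis (screen Y grows downward)."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    distance = math.hypot(dx, dy)   # Vector2.Distance
    angle = math.atan2(dy, dx)      # Math.Atan2(end.Y - start.Y, end.X - start.X)
    return distance, angle

# A horizontal line 600 pixels long, like the X axis in step 5
d, a = line_distance_angle((100, 240), (700, 240))
```

SpriteBatch.Draw then scales the 1x1 pixel by (distance, Thickness) and rotates it by this angle, which is why a single texel suffices for any line.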

In step 5, the X axis is located in the middle of screen height, the Y axis is located in the middle of screen width.

In step 6, within the SpriteBatch rendering code, we call the axisX.DrawLine() and axisY.DrawLine() to draw the lines.

Setting up the position, direction, and field of view of a fixed camera

In the 2D world of Windows Phone 7 game programming, presenting images or animations with the X and Y axes is straightforward. You just need to know that the origin (0, 0) is located at the top-left of the touchscreen, along with the screen width and height. In a 3D world, things are different: there are now X, Y, and Z axes, and the origin is no longer simply sitting at the top-left of the touchscreen. In this recipe, you will learn how to deal with the new coordinate system.

In 3D programming, especially for Windows Phone 7, the first thing we must be sure of is the coordinate system, which is either right-handed or left-handed. In Windows Phone 7 XNA 3D programming, the coordinate system is right-handed, which means the positive Z axis points towards you when you are playing a Windows Phone 7 game.

The next step is to set the camera, like the eye, to make the objects in the 3D world visible. During the process, we save the position and direction in a matrix, which is called the View matrix. To create the View matrix, XNA uses the CreateLookAt() method. It needs Position, Target, and Up vectors of the camera:

[code]
public static Matrix CreateLookAt (
Vector3 Position,
Vector3 Target,
Vector3 Up
)
[/code]

The Position indicates the position of the camera in the 3D world; the Target defines the point the camera faces; the Up vector is very important because it represents the orientation of your camera: if it points upward, everything renders normally, otherwise the view appears inverted. In XNA, Vector3.Up is equal to (0, 1, 0); Vector3.Forward stands for (0, 0, -1); Vector3.Right is the same as (1, 0, 0); Vector3.Down stands for (0, -1, 0). These predefined vectors are easy to apply in your game. Once you have understood the View matrix, the next important matrix for the camera is called the Projection matrix. In the 3D world, every object has its own 3D position. If we want to render the 3D objects on the screen, which is a 2D plane, how do we do it? The Projection matrix gives us a hand.
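As a hedged illustration of how CreateLookAt builds the right-handed camera basis, the following Python sketch (a hand-rolled approximation of the math, not the XNA implementation) derives the camera's axes from Position, Target, and Up:

```python
def normalize(v):
    m = sum(c * c for c in v) ** 0.5
    return tuple(c / m for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def create_look_at(position, target, up):
    """Right-handed look-at basis: the camera looks down its
    negative Z axis, as in XNA's Matrix.CreateLookAt."""
    zaxis = normalize(tuple(p - t for p, t in zip(position, target)))
    xaxis = normalize(cross(up, zaxis))
    yaxis = cross(zaxis, xaxis)
    return xaxis, yaxis, zaxis   # rotation part of the View matrix

# Camera on the +Z axis looking at the origin
x, y, z = create_look_at((0, 0, 150), (0, 0, 0), (0, 1, 0))
```

For this simple placement the camera basis coincides with the world axes, which is a useful sanity check before moving the camera around.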

Before rendering the objects from the 3D environment to the 2D screen, we must know which objects need to be rendered and over what range. From the computer graphics perspective, this range is called the frustum, as shown in the following figure. The Near, Far, Left, Right, Top, and Bottom planes compose the frustum and determine whether an object is inside it. In Windows Phone 7, you can use Matrix.CreatePerspectiveFieldOfView() to create the Projection matrix:

[code]
public static Matrix CreatePerspectiveFieldOfView (
float fieldOfView,
float aspectRatio,
float nearPlaneDistance,
float farPlaneDistance
)
[/code]

The first parameter here is the field of view angle around the Y axis; similar to the human eye's view, a common value is 45 degrees. You can use MathHelper.PiOver4, a quarter of Pi, which is the radian value of 45 degrees. The aspectRatio parameter specifies the view width divided by the height, and this value should match the ratio of the back buffer. The last two parameters represent the near and far planes of the frustum. The near plane defines the beginning of the frustum: any object nearer than this plane will not be rendered, and likewise any object farther than the far plane will not be rendered.
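The math behind the first two parameters can be sketched in Python (illustrative only; XNA builds the full 4x4 matrix, while this derives just the two scale terms the field of view and aspect ratio produce):

```python
import math

def perspective_scales(field_of_view, aspect_ratio):
    """The two scale terms CreatePerspectiveFieldOfView derives from
    its first two parameters: a vertical zoom from the FOV angle,
    and a horizontal zoom corrected by the aspect ratio."""
    y_scale = 1.0 / math.tan(field_of_view / 2.0)
    x_scale = y_scale / aspect_ratio    # keeps pixels square
    return x_scale, y_scale

# MathHelper.PiOver4 with a typical 800x480 landscape back buffer
xs, ys = perspective_scales(math.pi / 4, 800 / 480)
```

A wider field of view shrinks these scales, so more of the scene fits on screen; dividing by the aspect ratio is what stops a landscape back buffer from stretching the image horizontally.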

In your Windows Phone 7 game, you can update the View matrix every frame, as an FPS game does, based on the screen input. In the drawing phase of your 3D game, you must pass the View and Projection matrices to the effect when rendering an object, so the rendering hardware knows how to transform the 3D objects to the proper positions on the screen.

Now that you have learned the essential ideas behind the camera, it's time to program your own application.

How to do it…

1. First, you need to create a Windows Phone Game project in Visual Studio 2010. Then change the name from Game1.cs to FixedCameraGame.cs and add Tree.fbx from the code bundle to your content project. For 3D model creation, you can use commercial tools such as Autodesk 3ds Max or Maya, or the free alternative Blender. Then, in the fields of the FixedCameraGame class, insert the following lines:
[code]
Matrix view;
Matrix projection;
Model model;
[/code]
2. Then, in the Initialize() method, we will add the following lines:
[code]
Vector3 position = new Vector3(0, 40, 50);
Vector3 target = new Vector3(0, 0, 0);
view = Matrix.CreateLookAt(position, target, Vector3.Up);
projection =
Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
GraphicsDevice.Viewport.AspectRatio,
1, 1000.0f);
[/code]
3. Next, we load and initialize the Tree.fbx 3D model from the associated content project into our game. Add the following code to the LoadContent() method (the asset name assumed here matches Tree.fbx):
[code]
// Load the tree model (asset name assumed from Tree.fbx)
model = Content.Load<Model>("Tree");
[/code]
4. The last step for our game is to draw the model on the screen. Insert the following lines into the Draw() method:
[code]
// Define and copy the transforms of model
Matrix[] transforms = new Matrix[this.model.Bones.Count];
this.model.CopyAbsoluteBoneTransformsTo(transforms);
// Draw the model. A model can have multiple meshes, so loop.
foreach (ModelMesh mesh in this.model.Meshes)
{
foreach (BasicEffect effect in mesh.Effects)
{
effect.EnableDefaultLighting();
// Get the transform information from its parent
effect.World = transforms[mesh.ParentBone.Index];
// Pass the View and Projection matrices to the effect so
// the rendering hardware knows how to transform the
// model
effect.View = view;
effect.Projection = projection;
}
mesh.Draw();
}
[/code]
5. Now, build and run the application. It will run similar to the following screenshot.

How it works…

In step 1, the model variable will be used to load and show the 3D model.

In step 2, we define the View matrix with position, target, and up vectors. After that, we define the Projection matrix.

In step 4, the first two lines define a matrix array whose size depends on the bone count. Then we use the CopyAbsoluteBoneTransformsTo() method to assign the actual values to the transforms array. In the foreach loop, we iterate over all of the meshes in the model. In the loop body, we use BasicEffect to render each mesh; Windows Phone 7 XNA programming currently supports five built-in effects, and here we just use the simplest one. For the effect, Effect.World indicates the mesh's position, Effect.View represents the View matrix, and Effect.Projection represents the Projection matrix. When all of the effects in the inner loop are set, mesh.Draw(), in the outer mesh loop, renders the mesh to the touchscreen.
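The idea behind CopyAbsoluteBoneTransformsTo() can be illustrated with a translation-only Python sketch (toy bone data of my own, not a real model; real XNA bones use full 4x4 matrices rather than plain offsets):

```python
def absolute_transforms(local_offsets, parents):
    """Translation-only sketch of CopyAbsoluteBoneTransformsTo:
    each bone's absolute offset is its local offset plus its
    parent's absolute offset. parents[i] is the parent bone
    index, or None for the root; bones are listed parent-first."""
    absolute = []
    for offset, parent in zip(local_offsets, parents):
        if parent is None:
            absolute.append(offset)
        else:
            pa = absolute[parent]
            absolute.append(tuple(o + p for o, p in zip(offset, pa)))
    return absolute

# Root at the origin, a child bone 10 up, a grandchild 5 to the right
abs_t = absolute_transforms(
    [(0, 0, 0), (0, 10, 0), (5, 0, 0)],
    [None, 0, 1])
```

This is why the draw loop indexes the array with mesh.ParentBone.Index: each mesh picks up the accumulated transform of its place in the hierarchy.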

Drawing the axes for a 3D game

The presentation of 3D lines is completely different from 2D; drawing the axes in 3D will give you an intuitive sense of the 3D world. The key to drawing the axes is the vertex format and the vertex buffer, which holds the vertex data for rendering the lines. A VertexBuffer is a sequence of allocated memory for storing vertices, each of which can carry a position, color, texture coordinates, and a normal vector for rendering shapes or models. In other words, you can think of a vertex buffer as an array of vertices. When the XNA application begins to render, it reads the vertex buffer and draws each vertex with the corresponding information saved into it. With a vertex buffer, rendering performance is much better than passing the vertices one by one on every request. In this recipe, you will learn how to use the vertex buffer to draw the axes in 3D. For a better view, the example runs in landscape mode.
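To picture a vertex buffer as "a sequence of allocated memory", the following Python sketch packs position-plus-color vertices into one contiguous byte buffer (the 16-byte layout is my assumption for illustration, chosen to resemble VertexPositionColor):

```python
import struct

# Three floats for position, four bytes for RGBA color:
# 16 bytes per vertex, little-endian
VERTEX_FMT = "<3f4B"
VERTEX_STRIDE = struct.calcsize(VERTEX_FMT)

def pack_vertices(vertices):
    """Pack (position, color) pairs into one contiguous byte
    buffer, the way a vertex buffer stores them for the GPU."""
    buf = bytearray()
    for (x, y, z), (r, g, b, a) in vertices:
        buf += struct.pack(VERTEX_FMT, x, y, z, r, g, b, a)
    return bytes(buf)

red = (255, 0, 0, 255)
buffer_bytes = pack_vertices([((0, 0, 0), red), ((50, 0, 0), red)])
```

The vertex declaration plays the role of VERTEX_FMT here: it tells the GPU the stride and the meaning of each field, so the whole buffer can be uploaded and drawn in one call.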

How to do it…

1. Create a Windows Phone Game in Visual Studio 2010. Change the name from Game1.cs to Draw3DAxesGame.cs and then add the following class-level variables:
[code]
// Basic Effect object
BasicEffect basicEffect;
// Vertex Data with Position and Color
VertexPositionColor[] pointList;
// Vertex Buffer to hold the vertex data for drawing
VertexBuffer vertexBuffer;
// Camera View and Projection matrix
Matrix viewMatrix;
Matrix projectionMatrix;
// The Left and right hit region on the screen for rotating the
// axes
Rectangle recLeft;
Rectangle recRight;
// The rotation value
float rotation = 45;
[/code]
2. Initialize the 3D world for the axes and axes vertex data. Insert the following code to the Initialize() method:
[code]
// Define the camera View matrix
viewMatrix = Matrix.CreateLookAt(
new Vector3(0.0f, 0.0f, 150f),Vector3.Zero,
Vector3.Up
);
// Define the camera Projection matrix
projectionMatrix = Matrix.CreatePerspectiveFieldOfView(
MathHelper.PiOver4,
GraphicsDevice.Viewport.AspectRatio,
0.5f,
1000.0f);
// Initialize the basic effect
basicEffect = new BasicEffect(GraphicsDevice);
// Initialize the axes world matrix of the position in 3D world
basicEffect.World = Matrix.Identity;
// Initialize the vertex data
pointList = new VertexPositionColor[6];
// Define the vertex data of axis X
pointList[0] = new VertexPositionColor(new Vector3(0, 0, 0),
Color.Red);
pointList[1] = new VertexPositionColor(new Vector3(50, 0, 0),
Color.Red);
// Define the vertex data of axis Y
pointList[2] = new VertexPositionColor(new Vector3(0, 0, 0),
Color.White);
pointList[3] = new VertexPositionColor(new Vector3(0, 50, 0),
Color.White);
// Define the vertex data of axis Z
pointList[4] = new VertexPositionColor(new Vector3(0, 0, 0),
Color.Blue);
pointList[5] = new VertexPositionColor(new Vector3(0, 0, 50),
Color.Blue);
// Initialize the vertex buffer and allocate the space in the
// vertex buffer for the vertex data
vertexBuffer = new VertexBuffer(GraphicsDevice,
VertexPositionColor.VertexDeclaration, 6,
BufferUsage.None);
// Set the vertex buffer data to the array of vertices.
vertexBuffer.SetData<VertexPositionColor>(pointList);
// Define the Left and Right hit region on the screen
recLeft = new Rectangle(0, 0,
GraphicsDevice.Viewport.Width / 2,
GraphicsDevice.Viewport.Height);
recRight = new Rectangle(GraphicsDevice.Viewport.Width / 2, 0,
GraphicsDevice.Viewport.Width / 2,
GraphicsDevice.Viewport.Height);
[/code]
3. Now, we need to check whether the user has tapped the screen so that we can rotate the axes. Add the following lines to the Update() method:
[code]
// Check whether the tapped position is in the Left or the
// Right hit region
TouchCollection touches = TouchPanel.GetState();
if (touches.Count > 0 && touches[0].State ==
TouchLocationState.Pressed)
{
Point point = new Point((int)touches[0].Position.X,
(int)touches[0].Position.Y);
// Rotate the axis in the landscape mode
if (recLeft.Contains(point))
{
rotation += 10f;
}
if (recRight.Contains(point))
{
rotation -= 10f;
}
}
[/code]
4. Draw the axes on screen. Insert the following code to the Draw() method:
[code]
// Rotate the axes around the Y axis
basicEffect.World =
Matrix.CreateRotationY(MathHelper.ToRadians(rotation));
// Give the view and projection to the basic effect
basicEffect.View = viewMatrix;
basicEffect.Projection = projectionMatrix;
// Enable the vertex color in Basic Effect
basicEffect.VertexColorEnabled = true;
// Draw the axes on screen, iterate the pass in Basic Effect
foreach (EffectPass pass in
basicEffect.CurrentTechnique.Passes)
{
// Begin Drawing
pass.Apply();
// Set the vertex buffer to graphic device
GraphicsDevice.SetVertexBuffer(vertexBuffer, 0);
// Draw the axes with LineList Type
GraphicsDevice.DrawUserPrimitives<VertexPositionColor>(
PrimitiveType.LineList, pointList, 0, 3);
}
[/code]
5. Let’s build and run the example. It will run similar to the following screenshot.

How it works…

In step 1, we declare the BasicEffect for axes drawing, the VertexPositionColor array to store the vertex information of the axes, and then use the declared VertexBuffer object to hold the VertexPositionColor data. The following two matrices indicate the camera view and projection. The other two rectangular objects will be used to define the left and right hit regions on the screen. The last rotation variable is the controlling factor for axes rotation in 3D.

In step 2, we first define the camera View and Projection matrices and then initialize the vertex data for the 3D axes. We use Vector3 and Color objects to initialize the VertexPositionColor structures and then define the VertexBuffer object. The VertexBuffer class has two overloaded constructors. The first one is:

[code]
public VertexBuffer (
GraphicsDevice graphicsDevice,
VertexDeclaration vertexDeclaration,
int vertexCount,
BufferUsage usage
)
[/code]

The first parameter is the graphics device; the second is the vertex declaration, which describes the per-vertex data: the size and usage of each vertex element. The vertexCount parameter indicates how many vertices the vertex buffer will store, and the last parameter, BufferUsage, defines the access rights for the vertex buffer: read-write or write-only.

The second overload differs only in its second parameter, which takes the type of the vertex; this is especially useful for custom vertex types:

[code]
public VertexBuffer (
GraphicsDevice graphicsDevice,
Type vertexType,
int vertexCount,
BufferUsage usage
)
[/code]

Here, we use the first overload and pass 6 as the total vertex count. When the vertex buffer is allocated, you need to set the vertex data into the vertex buffer for drawing. The last two statements define the left and right regions on the touchscreen.

In step 3, the code checks whether the tapped position is located within the left or right rectangle region and changes the rotation value according to the different rectangles.

In step 4, the first line is to rotate the axes around Y based on the value of rotation. Then you give the View and Projection matrices for the camera. Next, you can enable the BasicEffect.VertexColorEnabled to color the axes. The last foreach loop will draw the vertex data about the 3D axes on the screen. The DrawUserPrimitives() method has four parameters:

[code]
public void DrawUserPrimitives<T> (
PrimitiveType primitiveType,
T[] vertexData,
int vertexOffset,
int primitiveCount
)
[/code]

The PrimitiveType describes the type of primitive to render. Here, we use PrimitiveType.LineList, which draws line segments in the order of the vertex data. The vertexData parameter holds the vertex array; vertexOffset tells the rendering function the start index in the vertex data; primitiveCount indicates the number of primitives to render, which in this example is three, one for each axis.
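The LineList pairing rule can be sketched in Python (the helper is mine; it just shows why six vertices yield a primitiveCount of three):

```python
def line_list(vertices):
    """Pair consecutive vertices into segments, as
    PrimitiveType.LineList does: vertices 0-1, 2-3, 4-5
    each form one independent line."""
    assert len(vertices) % 2 == 0, "LineList needs an even vertex count"
    return [(vertices[i], vertices[i + 1])
            for i in range(0, len(vertices), 2)]

# The six axis vertices from step 2: origin->X, origin->Y, origin->Z
points = [(0, 0, 0), (50, 0, 0),
          (0, 0, 0), (0, 50, 0),
          (0, 0, 0), (0, 0, 50)]
segments = line_list(points)   # primitiveCount == 3
```

Contrast this with a LineStrip, where the same six vertices would instead produce five connected segments; LineList is the right choice here because the three axes must not be joined end to end.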

Implementing a first-person shooter (FPS) camera in your game

Have you ever played a first-person shooter (FPS) game, such as Counter-Strike, Quake, or Doom? In this kind of game, your eyes are the main view. While you play, the game updates the eye view and makes you feel like it is real. On a computer, it is easy to change the view using the mouse or the keyboard; the challenge for a Windows Phone 7 FPS camera is how to realize these typical behaviors without a keyboard or mouse. In this recipe, you will master the technique to overcome this.

It is amazing and exciting to play an FPS game on the PC, and in Windows Phone 7 you will want a similar experience, although it may differ: you just use the screen for everything. A Windows Phone FPS game also needs to define the camera first. The difference between this and a third-person shooter (TPS) camera is that for an FPS camera you update the position of the camera itself, while for a TPS camera you make the camera follow the updated position of the main player object at a reasonable distance. In a PC FPS game, you can use the arrow keys to move the player's position and the mouse to change the direction of your view. In Windows Phone 7, we can use different regions of the touchscreen to move and the FreeDrag gesture to update the view.

How to do it…

Now, let’s begin the exciting work:

1. Create a Windows Phone Game in Visual Studio 2010, and change the name from Game1.cs to FPSCameraGame.cs. Then, add the 3D models box.fbx and tree.fbx, and the XNA font object gameFont.font, to the content project. After the preparation work, you need to insert the variables in the class fields:
[code]
// Game Font
SpriteFont spriteFont;
// Camera View matrix
Matrix view;
// Camera Projection matrix
Matrix projection;
// Position of Camera
Vector3 position;
// Models
Model modelTree;
Model modelBox;
// Hit regions on the touchscreen
Rectangle recUp;
Rectangle recDown;
Rectangle recRight;
Rectangle recLeft;
// Angle for rotation
Vector3 angle;
// Gesture delta value
Vector2 gestureDelta;
[/code]
2. You need to initialize the camera View and Projection matrices and the hit regions on the touchscreen. Now, add the following lines to the Initialize() method:
[code]
angle = new Vector3();
// Enable the FreeDrag gesture
TouchPanel.EnabledGestures = GestureType.FreeDrag;
// Define the camera position and the target position
position = new Vector3(0, 40, 50);
Vector3 target = new Vector3(0, 0, 0);
// Create the camera View matrix and Projection matrix
view = Matrix.CreateLookAt(position, target, Vector3.Up);
projection = Matrix.CreatePerspectiveFieldOfView(
MathHelper.PiOver4,
GraphicsDevice.Viewport.AspectRatio, 1, 1000.0f);
// Define the four hit regions on touchscreen
recUp = new Rectangle(GraphicsDevice.Viewport.Width / 4, 0,
GraphicsDevice.Viewport.Width / 2,
GraphicsDevice.Viewport.Height / 2);
recDown = new Rectangle(GraphicsDevice.Viewport.Width / 4,
GraphicsDevice.Viewport.Height / 2,
GraphicsDevice.Viewport.Width / 2,
GraphicsDevice.Viewport.Height / 2);
recRight = new Rectangle(GraphicsDevice.Viewport.Width -
GraphicsDevice.Viewport.Width / 4, 0,
GraphicsDevice.Viewport.Width / 4,
GraphicsDevice.Viewport.Height);
recLeft = new Rectangle(0, 0,
GraphicsDevice.Viewport.Width -
GraphicsDevice.Viewport.Width / 4,
GraphicsDevice.Viewport.Height);
[/code]
3. Load the font and the models in the LoadContent() method (the asset names assumed here match the files added to the content project):
[code]
spriteFont = Content.Load<SpriteFont>("gameFont");
modelTree = Content.Load<Model>("tree");
modelBox = Content.Load<Model>("box");
[/code]
4. Add the core logic code for the FPS camera updating in the Update() method. This code reacts to the tap and flick gestures to change the camera view:
[code]
// Get the touch data
TouchCollection touches = TouchPanel.GetState();
// Check whether the tapped point is in the hit regions
if (touches.Count > 0 && touches[0].State ==
TouchLocationState.Pressed)
{
// Get the tapped position
Point point = new Point((int)touches[0].Position.X,
(int)touches[0].Position.Y);
// Check whether the point is inside the UP region
if(recUp.Contains(point))
{
// Move the camera forward
view.Translation += new Vector3(0, 0, 5);
}
// Check whether the point is inside the DOWN region
else if (recDown.Contains(point))
{
// Move the camera backward
view.Translation += new Vector3(0, 0, -5);
}
// Check whether the point is inside the LEFT region
else if (recLeft.Contains(point))
{
// Rotate the camera around Y clockwise
// (a step of 5 degrees is an assumed value)
view *= Matrix.CreateRotationY(MathHelper.ToRadians(5));
}
// Check whether the point is inside the RIGHT region
else if (recRight.Contains(point))
{
// Rotate the camera around Y counter-clockwise
// (a step of 5 degrees is an assumed value)
view *= Matrix.CreateRotationY(MathHelper.ToRadians(-5));
}
}
// Check the available gestures
while (TouchPanel.IsGestureAvailable)
{
// Read the next gesture sample from the queue
GestureSample gestures = TouchPanel.ReadGesture();
switch (gestures.GestureType)
{
// If the GestureType is FreeDrag
case GestureType.FreeDrag:
// Read the Delta.Y to angle.X, Delta.X to angle.Y
// Because the rotation value around axis Y
// depends on the Delta changing on axis X
angle.X = gestures.Delta.Y * 0.001f;
angle.Y = gestures.Delta.X * 0.001f;
gestureDelta = gestures.Delta;
// Identify the view and rotate it
view *= Matrix.Identity;
view *= Matrix.CreateRotationX(angle.X);
view *= Matrix.CreateRotationY(angle.Y);
// Reset the angle to next coming gesture.
angle.X = 0;
angle.Y = 0;
break;
}
}
[/code]
5. Render the models on screen. We define a DrawModel() method, which will be called in the main Draw() method for showing the models:
[code]
public void DrawModel(Model model)
{
Matrix[] transforms = new Matrix[model.Bones.Count];
model.CopyAbsoluteBoneTransformsTo(transforms);
// Draw the model. A model can have multiple meshes.
foreach (ModelMesh mesh in model.Meshes)
{
foreach (BasicEffect effect in mesh.Effects)
{
effect.EnableDefaultLighting();
effect.World = transforms[mesh.ParentBone.Index];
effect.View = view;
effect.Projection = projection;
}
mesh.Draw();
}
}
[/code]
6. Then insert the following code to the Draw() method:
[code]
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.BlendState = BlendState.Opaque;
DrawModel(modelTree);
DrawModel(modelBox);
spriteBatch.Begin();
spriteBatch.DrawString(spriteFont, gestureDelta.ToString(),
new Vector2(0, 0), Color.White);
spriteBatch.End();
[/code]
7. Now, build and run the application. It will look similar to the following screenshots. Flick on the screen and you will see a different view.

How it works…

In step 1, we declare two matrices: one for the camera View matrix and one for the Projection matrix. A Vector3 variable, position, indicates the camera position. The two Model variables, modelTree and modelBox, will be used to load the 3D models. The following four Rectangle variables represent the Up, Down, Right, and Left hit regions on the Windows Phone 7 touchscreen. The angle variable lets the game know how to rotate the View matrix, and the last variable, gestureDelta, shows the actual delta value of the gestures.

In step 2, in the initialization process, you need to enable GestureType.FreeDrag for the view rotation in the Update process, and then define the camera View and Projection matrices. The block of code after that defines the hit regions on the screen; you can understand the basic logic from the following figure. We use four rectangles to divide the screen into four parts: UP, LEFT, RIGHT, and DOWN. The width of the UP and DOWN rectangles is half of the screen width in landscape mode and their height is half of the screen height. The LEFT and RIGHT bands are each a quarter of the screen width and the full screen height (in the code, the LEFT rectangle is declared three quarters wide, but because the UP and DOWN regions are tested first in the else-if chain, only the leftmost quarter effectively reacts as LEFT). Knowing the width and height of each rectangle, it is easy to work out their start positions.
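The four-region layout can be checked with a small Python sketch mirroring the rectangles from step 2 (tuple-based rectangles of my own, assuming an 800x480 landscape screen):

```python
def fps_hit_regions(width, height):
    """The four hit rectangles from step 2 as (x, y, w, h) tuples:
    UP and DOWN are the middle half of the screen, split at mid
    height; RIGHT is the right quarter band, and LEFT mirrors the
    recipe's code by spanning the remaining three quarters."""
    up = (width // 4, 0, width // 2, height // 2)
    down = (width // 4, height // 2, width // 2, height // 2)
    right = (width - width // 4, 0, width // 4, height)
    left = (0, 0, width - width // 4, height)
    return up, down, right, left

def contains(rect, point):
    """Same test as Rectangle.Contains."""
    x, y, w, h = rect
    px, py = point
    return x <= px < x + w and y <= py < y + h

up, down, right, left = fps_hit_regions(800, 480)
```

Because the recipe tests UP and DOWN before LEFT and RIGHT, the overlap between the wide LEFT rectangle and the middle regions is harmless: only the leftmost band ever reaches the LEFT branch.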

In step 3, all the models and fonts must be loaded through the ContentManager, which you have already used for 2D image manipulation.

In step 4, the first part of the code, before the while loop, gets the tapped position, checks whether it is in one of the four hit regions, and performs the corresponding operation. If the tapped position is within the UP or DOWN rectangle, we translate the camera; if it is inside the LEFT or RIGHT rectangle, the camera view is rotated. The next part reacts to the FreeDrag gesture, which changes the direction of the camera freely. You need to read the gesture after checking gesture availability, and then determine which gesture type is taking place; here we deal with GestureType.FreeDrag. If you flick horizontally, the X delta changes and you rotate the camera around the Y axis by delta X; if you flick vertically, the Y delta changes and you rotate the camera around the X axis by delta Y. Following this rule, we assign Delta.X to angle.Y for yaw rotation around the Y axis and Delta.Y to angle.X for pitch rotation around the X axis. When all the necessary gesture delta values are ready, you can rotate the camera: starting from the current View matrix, use Matrix.CreateRotationX() and Matrix.CreateRotationY() to rotate the camera around the X and Y axes. Finally, you need to reset the angle for the next gesture delta value.
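The delta-to-angle mapping can be expressed as a tiny Python sketch (the function and parameter names are mine; 0.001 is the scale factor used in step 4):

```python
def drag_to_angles(delta_x, delta_y, sensitivity=0.001):
    """Map a FreeDrag delta to camera rotation angles as step 4
    does: horizontal drag (delta X) becomes yaw around the Y
    axis, vertical drag (delta Y) becomes pitch around the X
    axis."""
    angle_y = delta_x * sensitivity   # yaw
    angle_x = delta_y * sensitivity   # pitch
    return angle_x, angle_y

# A 120-pixel horizontal flick rotates only around Y
pitch, yaw = drag_to_angles(120, 0)
```

The small sensitivity keeps a full-screen drag within a comfortable fraction of a radian; tuning this one constant is the easiest way to adjust how "twitchy" the camera feels.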

In step 5, we again copy and transform the model's bone matrices and apply the effect to every mesh in the model.

In step 6, you may be curious about the DepthStencilState. As the XNA SDK explains, the depth-stencil state controls how the depth buffer and the stencil buffer are used.

During rendering, the z position (or depth) of each pixel is stored in the depth buffer. When rendering pixels more than once—such as when objects overlap—depth data is compared between the current pixel and the previous pixel to determine which pixel is closer to the camera. When a pixel passes the depth test, the pixel color is written to a render target and the pixel depth is written to the depth buffer.

A depth buffer may also contain stencil data, which is why a depth buffer is often called a depth-stencil buffer. Use a stencil function to compare a reference stencil value—a global value you set—to the per-pixel value in the stencil buffer to mask which pixels get saved and which are discarded.

The depth buffer stores floating-point depth or z data for each pixel while the stencil buffer stores integer data for each pixel. The depth-stencil state class, DepthStencilState, contains the state that controls how depth and stencil data impact rendering.
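The depth-test behavior described above can be sketched with a one-pixel Python model (a toy illustration of the default comparison, not how the GPU is actually driven):

```python
def depth_test(depth_buffer, color_buffer, idx, depth, color):
    """Sketch of the default depth test: a pixel is written only
    when its depth is less than (closer than) the stored depth;
    on a pass, both the color and the depth are updated."""
    if depth < depth_buffer[idx]:
        depth_buffer[idx] = depth
        color_buffer[idx] = color
        return True    # pixel passed and was written
    return False       # pixel discarded

# A one-pixel "screen", cleared to the far plane (depth 1.0)
depths, colors = [1.0], [None]
depth_test(depths, colors, 0, 0.8, "tree")  # closer: written
depth_test(depths, colors, 0, 0.9, "box")   # farther: discarded
```

This is why the recipe resets GraphicsDevice.DepthStencilState to Default before drawing the models: SpriteBatch disables depth testing, and without restoring it the overlap between the tree and the box would depend only on draw order.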

Implementing a round rotating camera in a 3D game

When a 3D game reaches its end, sometimes the camera will rise up and rotate around the player. Likewise, when a 3D game begins, a camera may fly very fast from a distant point to the player's position, like in a Hollywood movie. It's impressive and fantastic. In this recipe, you will learn how to create this effect.

How to do it…

1. First of all, we create a Windows Phone Game project and change the name from Game1.cs to RoundRotateCameraGame.cs. Then, add two 3D models, tree.fbx and box.fbx, to the content project.
2. Declare the variables used in the game in the RoundRotateCameraGame class:
[code]
// View matrix for camera
Matrix view;
// Projection matrix for camera
Matrix projection;
// Camera position
Vector3 position;
// Tree and box models
Model modelTree;
Model modelBox;
[/code]
3. Define the View and Projection matrices and add the following code to the Initialize() method:
[code]
// Camera position
position = new Vector3(0, 40, 50);
// Camera lookat target
Vector3 target = new Vector3(0, 0, 0);
// Define the View matrix
view = Matrix.CreateLookAt(position, target, Vector3.Up);
// Define the Projection matrix
projection = Matrix.CreatePerspectiveFieldOfView(
MathHelper.PiOver4,
GraphicsDevice.Viewport.AspectRatio, 1, 1000.0f);
[/code]
4. We load and initialize the 3D models and insert the following lines into LoadContent():
[code]
// Load the models (asset names assumed from tree.fbx and box.fbx)
modelTree = Content.Load<Model>("tree");
modelBox = Content.Load<Model>("box");
[/code]
5. This step is the most important for rotating the camera. Paste the following code into the Update() method:
[code]
// Get the game time
float time = (float)gameTime.TotalGameTime.TotalSeconds;
// Get the rotate value from -0.1 to +0.1 around the Sin(time)
Matrix rotate = Matrix.CreateRotationY(
(float)Math.Sin(time) * 0.1f);
// Update the camera's position according to the rotate
// value
position = (Matrix.CreateTranslation(position) *
rotate).Translation;
view = Matrix.CreateLookAt(position, Vector3.Zero,
Vector3.Up);
[/code]
6. The last step is to draw the models on the touchscreen. We define a model drawing method:
[code]
public void DrawModel(Model model)
{
Matrix[] transforms = new Matrix[model.Bones.Count];
model.CopyAbsoluteBoneTransformsTo(transforms);
// Draw the model. A model can have multiple meshes, so
// loop.
foreach (ModelMesh mesh in model.Meshes)
{
foreach (BasicEffect effect in mesh.Effects)
{
effect.EnableDefaultLighting();
effect.World =
transforms[mesh.ParentBone.Index];
effect.View = view;
effect.Projection = projection;
}
mesh.Draw();
}
}
[/code]
7. Then we add the other code to the Draw() method:
[code]
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.BlendState = BlendState.Opaque;
DrawModel(modelTree);
DrawModel(modelBox);
[/code]
8. All done! Build and run the application. You will see the camera rotating around the 3D objects, as shown in the following screenshots:

How it works…

In step 2, we declare the view and projection matrices for the camera and the position vector for the camera’s location. The last two model variables will be used to load the 3D model objects.

In step 3, the camera is located at (X:0, Y:40, Z:50), looking at (X:0, Y:0, Z:0) with an Up vector of (0, 1, 0). The Projection matrix defines a 45-degree field of view with near and far clipping planes at 1 and 1000.
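The view matrix math in step 3 can be sketched outside XNA. Below is a minimal Python version of the look-at computation that Matrix.CreateLookAt performs (the helper names are ours, not part of XNA; a right-handed, row-major convention is assumed):

```python
import math

def look_at(position, target, up):
    # Build a right-handed, row-major view matrix the way XNA's
    # Matrix.CreateLookAt does: z points from the target back to the
    # camera, x is the camera right vector, y the recomputed up vector.
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def cross(a, b):
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]
    def norm(v):
        l = math.sqrt(dot(v, v))
        return [c / l for c in v]

    z = norm(sub(position, target))   # camera backward axis
    x = norm(cross(up, z))            # camera right axis
    y = cross(z, x)                   # camera true up axis
    # Rotation in the upper 3x3, negated translation in the last row
    return [[x[0], y[0], z[0], 0.0],
            [x[1], y[1], z[1], 0.0],
            [x[2], y[2], z[2], 0.0],
            [-dot(x, position), -dot(y, position), -dot(z, position), 1.0]]

# The camera from the recipe: at (0, 40, 50), looking at the origin
view = look_at([0.0, 40.0, 50.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

Because the camera here sits straight behind and above the target, the computed right axis comes out as (1, 0, 0) and the translation row pushes the scene back by the camera's distance from the origin.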

In step 4, you always load content from the ContentManager with the type you need; here, the type is Model, a 3D object.

In step 5, we read the total game time to rotate the camera automatically as time passes. We feed the time to Math.Sin(), which keeps the result in the range -1 to +1; without it, the angle would keep increasing and the camera would rotate faster and faster. Matrix.CreateRotationY() takes a radian value for rotation around the Y axis; here, after multiplying by 0.1, the value swings between -0.1 and +0.1. The last part updates the view matrix: we translate and rotate the camera's position and build a new view matrix from it.
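The oscillation can be checked in isolation. A small Python sketch (our own helper, not XNA code) applies the same per-frame rotation around the Y axis and shows that the camera's distance from the origin never changes:

```python
import math

def rotate_y(p, radians):
    # Rotate a point around the Y axis, matching the transform that
    # XNA's Matrix.CreateRotationY applies to a position vector.
    c, s = math.cos(radians), math.sin(radians)
    x, y, z = p
    return (x * c + z * s, y, -x * s + z * c)

position = (0.0, 40.0, 50.0)          # camera start from the recipe
for frame in range(120):
    t = frame / 30.0                  # assumed total game time, seconds
    # The angle fed to the rotation is Sin(time) * 0.1, so it swings
    # between -0.1 and +0.1 radians instead of growing without bound
    position = rotate_y(position, math.sin(t) * 0.1)

radius = math.sqrt(sum(c * c for c in position))
```

The Y component stays fixed at 40 and the radius stays at sqrt(40^2 + 50^2), which is exactly the orbiting behaviour seen on screen.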

In step 6, the code is the basis for drawing a static model, as discussed in earlier recipes. Here we just need to pay attention to the View and Projection matrices, since these two matrices determine the rendered result.

In step 7, notice the GraphicsDevice.DepthStencilState, which is important for the rendering order.

Implementing a chase camera

A chase camera moves smoothly around a 3D object, and no matter how the camera view is changed, the camera restores to its original position behind the target. This kind of camera is useful for a racing game or an acceleration effect. In this recipe, you will learn how to make your own chase camera in Windows Phone 7.

How to do it…

1. Create a Windows Phone Game project in Visual Studio 2010, change the name from Game1.cs to ChaseCameraGame.cs. Then add the box.fbx 3D model to the content project. After the initial work, you should insert the following code into the ChaseCameraGame class as fields:
[code]
Model boxModel;
// Camera View and Projection matrix
Matrix view;
Matrix projection;
// Camera’s position
Vector3 position;
// Camera look at target
Vector3 target;
// Offset distance from the target.
Vector3 offsetDistance;
// Yaw, Pitch values
float yaw;
float pitch;
// Angle delta for GestureType.FreeDrag
Vector3 angle;
[/code]
2. Instantiate the variables. Add the following lines into the Initialize() method:
[code]
// Enable the FreeDrag gesture type
TouchPanel.EnabledGestures = GestureType.FreeDrag;
// Define the camera position and desired position
position = new Vector3(0, 1000, 1000);
// Define the target position and desired target position
target = new Vector3(0, 0, 0);
// the offset from target
offsetDistance = new Vector3(0, 50, 100);
yaw = 0.0f;
pitch = 0.0f;
// Identify the camera View matrix
view = Matrix.Identity;
// Define the camera Projection matrix
projection = Matrix.CreatePerspectiveFieldOfView(
MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
1f, 1000f);
// Initialize the angle
angle = new Vector3();
[/code]
3. Load the box model. Insert the following line into the LoadContent() method (the asset name matches the box.fbx file added in step 1):
[code]
boxModel = Content.Load<Model>("box");
[/code]
4. Update the chase camera. First, we define the UpdateView() method, as follows:
[code]
private void UpdateView(Matrix World)
{
// Matrix.Right and Matrix.Up return copies, so normalize
// local vectors rather than the property values
Vector3 right = World.Right;
right.Normalize();
Vector3 up = World.Up;
up.Normalize();
// Assign the actual world matrix translation to target
target = World.Translation;
// Offset the target along the right vector by the yaw value
target += right * yaw;
// Offset the target along the up vector by the pitch value
target += up * pitch;
// Interpolate the position every frame until it reaches
// the offset distance from the target
position = Vector3.SmoothStep(position, offsetDistance,
0.15f);
// Ease the yaw value back toward 0 every frame
yaw = MathHelper.SmoothStep(yaw, 0f, 0.1f);
// Ease the pitch value back toward 0 every frame
pitch = MathHelper.SmoothStep(pitch, 0f, 0.1f);
// Update the View matrix.
view = Matrix.CreateLookAt(position, target, up);
}
[/code]
5. In the Update() method, we insert the following code:
[code]
// Check the available gestures
while (TouchPanel.IsGestureAvailable)
{
// Read the next gesture sample from the queue
GestureSample gestures = TouchPanel.ReadGesture();
// Make sure which gesture type is taking place
switch (gestures.GestureType)
{
// If the gesture is GestureType.FreeDrag
case GestureType.FreeDrag:
// Read the Delta.Y to angle.X, Delta.X to angle.Y
// Because the rotation value around axis Y
// depends on the Delta changing on axis X
angle.Y += gestures.Delta.X ;
angle.X += gestures.Delta.Y ;
// assign the angle value to yaw and pitch
yaw = angle.Y;
pitch = angle.X;
// Reset the angle value for next FreeDrag gesture
angle.Y = 0;
angle.X = 0;
break;
}
}
// Update the viewMatrix
UpdateView(Matrix.Identity);
[/code]
6. The final step is to draw the model. The drawing code will be as follows:
[code]
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.Clear(Color.CornflowerBlue);
// The following three lines are to ensure that the
// models
// are drawn correctly
GraphicsDevice.DepthStencilState =
DepthStencilState.Default;
GraphicsDevice.BlendState = BlendState.AlphaBlend;
DrawModel(boxModel);
base.Draw(gameTime);
}
// Draw the model
private void DrawModel(Model model)
{
Matrix[] modelTransforms = new Matrix[model.Bones.Count];
model.CopyAbsoluteBoneTransformsTo(modelTransforms);
foreach (ModelMesh mesh in model.Meshes)
{
foreach (BasicEffect effect in mesh.Effects)
{
effect.EnableDefaultLighting();
effect.World =
modelTransforms[mesh.ParentBone.Index];
effect.View = view;
effect.Projection = projection;
}
mesh.Draw();
}
}
[/code]
7. Now, build and run the application. You will see the application run as shown in the following screenshots:

How it works…

In step 1, we declare a boxModel object for loading the box model and the view and projection matrices for camera. The position vector specifies the camera position. The offsetDistance indicates the distance from the target. The yaw and pitch variables represent the rotation value of the camera and the angle stores the actual value that the gesture generates.

In step 2, the initialization phase, you should enable the FreeDrag gesture, give the camera its startup position and target, and define the offset distance from the target. Assign the initial values to yaw and pitch, set the camera View matrix to the identity (it will be rebuilt as the camera rotates), specify the camera Projection matrix, and instantiate the angle variable.

In step 4, the UpdateView() method does the actual rotation and chase operations depending on the gesture values. First, we normalize the right and up vectors of the world matrix so the directions are unit length and accurate to compute with. Then we assign the world translation to the target variable, which is used as the camera look-at position. For camera rotation, we use the right vector times yaw to swing the look-at point around the Y axis and the up vector times pitch to swing it around the X axis. Next, every frame we interpolate the camera's position about 15 percent of the eased way toward the predefined offset distance; the SmoothStep() methods generate smooth values between the start and end settings. Then we ease the yaw value toward 0 by about 10 percent per frame, so the camera settles back to its original orientation around the Y axis, and we do the same for the pitch variable. The final step updates the view matrix from the position and target values.
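The easing used here is easy to reproduce. A Python sketch of the SmoothStep curve (mirroring the clamped Hermite cubic that MathHelper.SmoothStep applies; the helper name is ours) shows why yaw settles back to 0:

```python
def smooth_step(a, b, t):
    # Clamped cubic easing between a and b, mirroring what
    # MathHelper.SmoothStep does (Hermite-style 3t^2 - 2t^3 curve).
    t = max(0.0, min(1.0, t))
    t = t * t * (3.0 - 2.0 * t)
    return a + (b - a) * t

# Stepping 10% of the eased way toward 0 each frame, as the chase
# camera does for yaw, decays the value smoothly toward 0.
yaw = 2.0
history = [yaw]
for _ in range(200):
    yaw = smooth_step(yaw, 0.0, 0.1)
    history.append(yaw)
```

Each call moves the value a fixed eased fraction of the remaining distance, so the decay never overshoots: the sequence is monotone and approaches 0 asymptotically, which is what gives the camera its smooth "snap back" feel.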

In step 5, the first part handles the FreeDrag gesture. When a gesture is caught, the code stores the gesture delta in angle: angle.Y accumulates Delta.X, because rotation around the Y axis depends on horizontal movement, and angle.X accumulates Delta.Y for rotation around the X axis. Then we pass angle.Y to the yaw variable and angle.X to the pitch variable. After that, we reset the angle values for the next gesture. Eventually, we call UpdateView() to update the camera view.

Using culling to remove the unseen parts and texture mapping

In a real 3D game, a large number of objects exist in the game world, and every object has hundreds or thousands of faces. Rendering all of the faces carries a big performance cost. Therefore, we use the view frustum to filter out objects outside the view, and then use a culling algorithm to remove the unseen parts of the remaining objects. These approaches cut the unnecessary work in a 3D game, improving performance significantly. In this recipe, you will learn how to use the culling method in Windows Phone 7 game development.

In Windows Phone 7 XNA, the culling method uses the back-face culling algorithm to remove the unseen parts. It is based on the observation that if all objects in the world are closed, then the polygons that do not face the viewer cannot be seen. This translates directly to the angle between the vector from the surface toward the viewer and the normal of the face: if the angle is more than 90 degrees (their dot product is negative), the polygon faces away and can be discarded. Back-face culling is performed automatically by XNA and can be expected to cull roughly half of the polygons in the view frustum.
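The back-face test described above boils down to a single dot product. A minimal sketch (our own helper, not the XNA internals):

```python
def is_back_face(to_viewer, normal):
    # A face whose normal makes more than a 90-degree angle with the
    # direction from the surface toward the viewer (negative dot
    # product) points away from the camera, so it can be culled.
    d = sum(a * b for a, b in zip(to_viewer, normal))
    return d < 0.0

to_viewer = (0.0, 0.0, 1.0)              # camera sits on the +Z side
facing_camera = (0.0, 0.0, 1.0)          # normal toward the viewer
facing_away = (0.0, 0.0, -1.0)           # normal away from the viewer
```

With the camera on the +Z side, a face whose normal also points toward +Z is kept, while one pointing toward -Z is discarded before rasterization.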

How to do it…

Now, let’s see how Windows Phone 7 XNA performs the culling method:

1. Create a Windows Phone Game project in Visual Studio 2010, change the name from Game1.cs to CullingGame.cs, and add the Square.png file to the content project.
2. Declare the variables for the project. Add the following lines to the CullingGame class:
[code]
// Texture
Texture2D texSquare;
// Camera’s Position
Vector3 position;
// Camera look at target
Vector3 target;
//Camera World matrix
Matrix world;
//Camera View matrix
Matrix view;
//Camera Projection matrix
Matrix projection;
BasicEffect basicEffect;
// Vertex Structure
VertexPositionTexture[] vertexPositionTextures;
// Vertex Buffer
VertexBuffer vertexBuffer;
// Rotation for the texture
float rotation;
// Translation for the texture
Matrix translation;
// Whether to keep rotating the texture
bool KeepRotation = false;
[/code]
3. Initialize the basic effect, camera, vertexPositionTextures array, and set the culling mode in Windows Phone 7 XNA. Insert the following code to the Initialize() method:
[code]
// Initialize the basic effect
basicEffect = new BasicEffect(GraphicsDevice);
// Define the world matrix of texture
translation = Matrix.CreateTranslation(new Vector3(25, 0, 0));
// Initialize the camera position and look-at target
position = new Vector3(0, 0, 200);
target = Vector3.Zero;
// Initialize the camera transformation matrices
world = Matrix.Identity;
view = Matrix.CreateLookAt(position, target, Vector3.Up);
projection = Matrix.CreatePerspectiveFieldOfView(
MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
1, 1000);
// Allocate the VertexPositionTexture array
vertexPositionTextures = new VertexPositionTexture[6];
// Define the vertex information
vertexPositionTextures[0] = new VertexPositionTexture(
new Vector3(-25, -25, 0), new Vector2(0, 1));
vertexPositionTextures[1] = new VertexPositionTexture(
new Vector3(-25, 25, 0), new Vector2(0, 0));
vertexPositionTextures[2] = new VertexPositionTexture(
new Vector3(25, -25, 0), new Vector2(1, 1));
vertexPositionTextures[3] = new VertexPositionTexture(
new Vector3(-25, 25, 0), new Vector2(0, 0));
vertexPositionTextures[4] = new VertexPositionTexture(
new Vector3(25, 25, 0), new Vector2(1, 0));
vertexPositionTextures[5] = new VertexPositionTexture(
new Vector3(25, -25, 0), new Vector2(1, 1));
// Define the vertex buffer
vertexBuffer = new VertexBuffer(
GraphicsDevice,
VertexPositionTexture.VertexDeclaration, 6,
BufferUsage.None);
// Set the VertexPositionTexture array to vertex buffer
vertexBuffer.SetData<VertexPositionTexture>(
vertexPositionTextures);
// Set the cull mode
RasterizerState rasterizerState = new RasterizerState();
rasterizerState.CullMode = CullMode.CullCounterClockwiseFace;
GraphicsDevice.RasterizerState = rasterizerState;
// Set the graphics sampler state to PointClamp
graphics.GraphicsDevice.SamplerStates[0] =
SamplerState.PointClamp;
[/code]
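Whether CullMode.CullCounterClockwiseFace keeps or discards each of the two triangles above depends on their winding as seen from the camera. A Python sketch of the winding test (our own helper; screen coordinates here use y pointing up, as seen from the camera on the +Z side):

```python
def winding(p0, p1, p2):
    # The sign of the z component of the cross product of the two edge
    # vectors gives a triangle's winding in 2D (with y up):
    # positive -> counter-clockwise, negative -> clockwise.
    ax, ay = p1[0] - p0[0], p1[1] - p0[1]
    bx, by = p2[0] - p0[0], p2[1] - p0[1]
    return "ccw" if ax * by - ay * bx > 0 else "cw"

# First triangle of the square, projected onto the XY plane: it is
# clockwise as seen from the camera, so under CullCounterClockwiseFace
# it is treated as front-facing and survives culling.
first_triangle = winding((-25, -25), (-25, 25), (25, -25))
```

Reversing any two vertices flips the winding, which is why swapping the vertex order in the array would make the square disappear under the same cull mode.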
4. Load the texture image. Insert the following line into the LoadContent() method (the asset name matches the Square.png file added in step 1):
[code]
texSquare = Content.Load<Texture2D>("Square");
[/code]
5. React to the Tap to rotate the square texture. Add the following code to the Update() method:
[code]
TouchCollection touches = TouchPanel.GetState();
if (touches.Count > 0 && touches[0].State ==
TouchLocationState.Pressed)
{
Point point = new Point((int)touches[0].Position.X,
(int)touches[0].Position.Y);
if (GraphicsDevice.Viewport.Bounds.Contains(point))
{
KeepRotation = true;
}
}
if (KeepRotation)
{
rotation += 0.1f;
}
[/code]
6. Draw the texture on the screen and add the following lines to the Draw() method:
[code]
// Set the matrix information to basic effect
basicEffect.World = world * translation *
Matrix.CreateRotationY(rotation);
basicEffect.View = view;
basicEffect.Projection = projection;
// Set the texture
basicEffect.TextureEnabled = true;
basicEffect.Texture = texSquare;
// Iterate the passes in the basic effect
foreach (var pass in basicEffect.CurrentTechnique.Passes)
{
pass.Apply();
GraphicsDevice.SetVertexBuffer(vertexBuffer, 0);
GraphicsDevice.DrawUserPrimitives<VertexPositionTexture>
(PrimitiveType.TriangleList, vertexPositionTextures,
0, 2);
}
[/code]
7. Now, build and run the application. The application will run as shown in the following screenshot.
8. When you tap on the screen, the texture rotates as shown in the following screenshots. The last one is blank because the rotation is 90 degrees and the square is viewed edge-on.

How it works…

In step 2, the basicEffect represents the effect for rendering the texture. The VertexPositionTexture array will be used to locate and scale the texture on screen. The VertexBuffer holds the VertexPositionTexture data for the graphic device to draw the texture. The rotation variable will determine how much rotation will take place around the Y axis. The translation matrix indicates the world position of the texture. The bool value KeepRotation is a flag that signals to the object whether it has to keep rotating or not. This value could be changed by touching the Windows Phone 7 screen.

In step 3, notice the VertexPositionTexture array initialization. The texture is a square composed of two triangles with six vertices in total; we define the position and texture UV coordinates of every vertex. You can find a detailed explanation of texture coordinates in a computer graphics introduction such as Computer Graphics with OpenGL by Donald D. Hearn, M. Pauline Baker, and Warren Carithers. After the vertex initialization, we create the vertex buffer with the VertexPositionTexture type, pass 6 as the vertex count, and let the vertex buffer be read and written by setting BufferUsage to None. Next, we fill the vertex buffer with the VertexPositionTexture data defined previously. After that, the CullMode configuration in RasterizerState controls the culling applied to the texture; here, we set it to CullMode.CullCounterClockwiseFace, so by the back-face algorithm, polygons facing away from the camera are removed and never seen. The last setting, on GraphicsDevice.SamplerStates, is also important. The SamplerState class determines how texture data is sampled. When covering a 3D triangle mesh with a 2D texture, you supply 2D texture coordinates that range from (0, 0), the upper-left corner, to (1, 1), the lower-right corner. You can also supply texture coordinates outside that range; depending on the texture address mode, the image will then be clamped (the outside rim of pixels is simply repeated), tiled in a repeating pattern, or wrapped in a flip-flop mirror effect. SamplerState supports the Wrap, Mirror, and Clamp modes accordingly.
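The three address modes mentioned above can be sketched for a single texture coordinate. This Python helper (our own naming, not the XNA API) maps an out-of-range coordinate back into [0, 1] the way Clamp, Wrap, and Mirror behave:

```python
import math

def address(u, mode):
    # Resolve a texture coordinate outside [0, 1] per address mode.
    if mode == "clamp":
        return max(0.0, min(1.0, u))        # stick to the edge pixel
    if mode == "wrap":
        return u - math.floor(u)            # repeat the texture
    if mode == "mirror":
        f = u - 2.0 * math.floor(u / 2.0)   # fold onto a period of 2
        return 2.0 - f if f > 1.0 else f    # flip every other repeat
    raise ValueError("unknown address mode: " + mode)
```

For u = 1.25, Wrap samples at 0.25 (a fresh copy of the texture), Mirror at 0.75 (a flipped copy), and Clamp pins to 1.0 (the edge of the image).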

In step 5, the code first checks whether the tapped position falls within the viewport bounds. If so, it sets KeepRotation to true. The update then increases the rotation value by 0.1 every frame while KeepRotation is true.

In step 6, BasicEffect.World translates and rotates the texture in 3D, while BasicEffect.View and BasicEffect.Projection define the camera view. Then we assign the texture to the basic effect. When all the necessary settings are done, the foreach loop applies each Pass of the basic effect technique to draw the primitives. We set the previously defined vertexBuffer on the current graphics device and finally draw the textured primitives.
