Drawing with Z-Index Ordering

Prioritized Drawing

We already have the ability to perform prioritized “z-index” drawing of a sprite image with the SpriteBatch.Draw() method, and we have been using the z-index parameter all along, just set to a value of zero. This effectively gives every sprite the same priority. When that is the case, priority is based entirely on the order in which sprites are drawn (in gameplay code). By supplying meaningful values, SpriteBatch.Draw() can automatically prioritize the drawing of some sprites over the top of other sprites using this z-index value.

What does “z-index” mean, you may be wondering? When doing 2D sprite programming, we deal only with the X and Y coordinates on the screen. The Z coordinate, then, is the position of the sprite in relation to other sprites that overlap each other. The range for the z-index goes from 0.0 to 1.0. So, a sprite with a z-index priority of 0.6 will draw over a sprite with a z-index priority of 0.3. Note that 1.0 is the highest, and 0.0 is the lowest.

Sprite Class Changes

Adding Z-Buffering to the Sprite Class

Only a minor change is required to enable z-index drawing in our Sprite class. We’ll go over the changes here.

  1. Add a zindex variable to the class:
    [code]
    public float zindex;
    [/code]
  2. In the Sprite class constructor, initialize the new variable:
    [code]
    zindex = 0.0f;
    [/code]
  3. Replace the hard-coded zero with the zindex variable in the SpriteBatch.Draw() calls inside Sprite.Draw() (there are two of them):
    [code]
    public void Draw()
    {
        if (!visible) return;
        if (totalFrames > 1)
        {
            Rectangle source = new Rectangle();
            source.X = (frame % columns) * (int)size.X;
            source.Y = (frame / columns) * (int)size.Y;
            source.Width = (int)size.X;
            source.Height = (int)size.Y;
            p_spriteBatch.Draw(image, position, source, color,
                rotation, origin, scale, SpriteEffects.None, zindex);
        }
        else
        {
            p_spriteBatch.Draw(image, position, null, color, rotation,
                origin, scaleV, SpriteEffects.None, zindex);
        }
    }
    [/code]

Adding Rendering Support for Z-Buffering

To use a z-buffer for prioritized drawing, a change must be made to the call to SpriteBatch.Begin(), which gets drawing started in the main program code. No change is made to SpriteBatch.End(). There are actually five overloads of SpriteBatch.Begin()! The version we want requires just two parameters: SpriteSortMode and BlendState. Table 16.1 shows the SpriteSortMode enumeration values.

SpriteSortMode Enumeration

The default sorting mode is Deferred, which is what SpriteBatch uses when the parameterless Begin() is called. For z-index ordering, we will want to use either BackToFront or FrontToBack. There is very little difference between the two; they simply weight each sprite’s z-index in opposite directions.

When using BackToFront, smaller z-index values have higher priority: a sprite with a z-index of 0.0 is drawn over sprites with larger values (up to 1.0).

When using FrontToBack, the opposite is true: Larger z-index values (such as 1.0) are treated with higher priority than lower values (such as 0.0).

It works best if you just choose one and stick with it to avoid confusion. If you think of a z-index value of 1.0 being “higher” in the screen depth, use FrontToBack. If 0.0 seems to have a higher priority, use BackToFront.

In our example, we will use FrontToBack, with larger values for the z-index having higher priority.
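As a quick reference, the two orderings differ only in the SpriteSortMode value passed to Begin(); the FrontToBack form is the one used in this hour’s demo:

[code]
// FrontToBack: larger z-index values draw on top
spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.AlphaBlend);

// BackToFront: smaller z-index values draw on top
spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
[/code]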

Z-Index Demo

To demonstrate the effect z-index ordering has on a scene, I have prepared an example that draws a screen full of sprites and then moves a larger sprite across the screen. Based on the z-index value of each sprite, the larger sprite will appear either over or under the other sprites. In this example, the screen is filled with animated asteroid sprites, and the larger sprite is an image of a shuttle. Figure 16.1 shows the demo with the shuttle sprite drawn on top of the first half of the asteroids.

The shuttle appears over the first half of the rows.
FIGURE 16.1 The shuttle appears over the first half of the rows.

Figure 16.2 shows the shuttle sprite farther across the screen, where it appears under the second half of the rows of asteroid sprites (which have a higher z-index than the shuttle sprite).

The shuttle appears under the second half of the rows.
FIGURE 16.2 The shuttle appears under the second half of the rows.

Wrapping Around the Screen Edge

To assist with this demo, a new Animation subclass has been written to automatically wrap a sprite around the edges of the screen. This completely eliminates any gameplay code that would otherwise have to be added to the Update() method. This is called synergy—when a combination of simple things produces awesome results! We’re starting to see that happen with our sprite and animation code now. Wrapping around the edges of the screen might be considered more of a behavior than an animation.

[code]
public class WrapBoundary : Animation
{
    Rectangle boundary;

    public WrapBoundary(Rectangle boundary)
        : base()
    {
        animating = true;
        this.boundary = boundary;
    }

    public override Vector2 ModifyPosition(Vector2 original)
    {
        Vector2 pos = original;
        if (pos.X < boundary.Left)
            pos.X = boundary.Right;
        else if (pos.X > boundary.Right)
            pos.X = boundary.Left;
        if (pos.Y < boundary.Top)
            pos.Y = boundary.Bottom;
        else if (pos.Y > boundary.Bottom)
            pos.Y = boundary.Top;
        return pos;
    }
}
[/code]

Z-Index Demo Source Code

Listing 16.1 contains the source code for the Z-Index Demo. Note the code highlighted in bold—these lines are relevant to our discussion of z-buffering.

LISTING 16.1 Source Code for the Z-Index Demo

[code]
public class Game1 : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    TouchLocation oldTouch;
    Random rand;
    SpriteFont font;
    List<Sprite> objects;
    Texture2D asteroidImage;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
        TargetElapsedTime = TimeSpan.FromTicks(333333);
        oldTouch = new TouchLocation();
    }

    protected override void Initialize()
    {
        base.Initialize();
    }

    protected override void LoadContent()
    {
        rand = new Random();
        spriteBatch = new SpriteBatch(GraphicsDevice);
        font = Content.Load<SpriteFont>("WascoSans");
        //create object list
        objects = new List<Sprite>();
        //create shuttle sprite
        Sprite shuttle = new Sprite(Content, spriteBatch);
        shuttle.Load("shuttle");
        shuttle.scale = 0.4f;
        shuttle.position = new Vector2(0, 240);
        shuttle.rotation = MathHelper.ToRadians(90);
        shuttle.velocityLinear = new Vector2(4, 0);
        Rectangle bounds = new Rectangle(-80, 0, 800 + 180, 480);
        shuttle.animations.Add(new WrapBoundary(bounds));
        shuttle.zindex = 0.5f;
        objects.Add(shuttle);
        //load asteroid image
        asteroidImage = Content.Load<Texture2D>("asteroid");
        //create asteroid sprites with increasing z-index
        for (int row = 0; row < 13; row++)
        {
            float zindex = row / 12.0f;
            for (int col = 0; col < 8; col++)
            {
                Sprite spr = new Sprite(Content, spriteBatch);
                spr.image = asteroidImage;
                spr.columns = 8;
                spr.totalFrames = 64;
                spr.animations.Add(new FrameLoop(0, 63, 1));
                spr.position = new Vector2(30 + 60 * row, 30 + col * 60);
                spr.size = new Vector2(60, 60);
                spr.zindex = zindex;
                objects.Add(spr);
            }
        }
    }

    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back ==
                ButtonState.Pressed)
            this.Exit();
        foreach (Sprite spr in objects)
        {
            spr.Rotate();
            spr.Move();
        }
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.Black);
        spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.AlphaBlend);
        foreach (Sprite spr in objects)
        {
            spr.Animate();
            spr.Draw();
        }
        spriteBatch.End();
        base.Draw(gameTime);
    }
}
[/code]

Transforming Sprites

Translating (Moving) a Sprite

Sprite translation, or movement, can be done directly or indirectly. The direct approach involves writing code to move a sprite specifically to one location or another. The indirect approach involves setting up properties and variables that cause the sprite to move on its own based on those properties—we only write code that updates and draws the sprite, but the properties determine the movement. Although it is possible to cause a sprite to move in an arc or to follow a curve, that is not the type of movement we will be discussing in this hour. Instead, we’ll see how to move a sprite a short distance per frame (and remember, although an XNA game normally draws at 60 fps, a WP7 game runs at only 30). The “curves” our sprites will follow are actually complete circles, as you’ll see in the Solar System Demo coming up.

The math calculations we will perform in this hour to translate (move) a sprite are based on the 2D Cartesian coordinate system, represented in Figure 7.1. There are two axes, X and Y, which is why we refer to it as a “2D” system. Both the X and the Y axes start at the center, which is called the origin, residing at position (0,0). The X axis goes left (-) and right (+) from the origin. The Y axis goes down (-) and up (+) from the origin.

2D Cartesian coordinates are composed of two axes, X and Y.
FIGURE 7.1 2D Cartesian coordinates are composed of two axes, X and Y.

When making trigonometric calculations to transform a sprite’s position, the Y value must be reversed. If you look at the figure, you can see why! On a computer screen, as well as the WP7 screen, the origin (0,0) is located at the upper left, with X increasing right and Y increasing down. Since the Cartesian system represents Y positive going up, any Y coordinates we calculate must be reversed, or negated. This can be done by multiplying every Y result by -1.
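As a minimal sketch of the idea (CartesianToScreen and screenOrigin are illustrative names, not part of our Sprite class), converting a Cartesian point to screen coordinates simply negates Y relative to a chosen origin:

[code]
// Hypothetical helper: convert a Cartesian point to screen coordinates
Vector2 CartesianToScreen(Vector2 cartesian, Vector2 screenOrigin)
{
    return new Vector2(screenOrigin.X + cartesian.X,
                       screenOrigin.Y - cartesian.Y); // Y is negated
}
[/code]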

Moving the position of a sprite needn’t be overly complicated. This is the easiest of the three transforms that we can calculate, because the calculation can just be skipped and the point can be moved to the desired target position (at this early stage). See Figure 7.2.

Translation (movement) from one point to another.
FIGURE 7.2 Translation (movement) from one point to another.

Using Velocity as Movement Over Time

Velocity can be calculated as distance over time. Mathematically, the formula is V = D / T. This calculation is more useful after a certain distance has been traveled in a certain amount of time, to figure out the average speed. But it is not as useful when it comes to moving sprites across the screen. For that purpose, we need to set a certain velocity ahead of time, while the distance and time variables are not as important. Let’s add a new velocity property to our ever-evolving Sprite class. Since velocity is a public property, we can use it outside of the class. Just for fun, let’s add a new method to the class to move the sprite based on velocity, as shown in Listing 7.1.

LISTING 7.1 Adding a new property to our Sprite class.

[code]
public class Sprite
{
    private ContentManager p_content;
    private SpriteBatch p_spriteBatch;
    public Texture2D image;
    public Vector2 position;
    public Vector2 velocity;
    public Color color;

    public Sprite(ContentManager content, SpriteBatch spriteBatch)
    {
        p_content = content;
        p_spriteBatch = spriteBatch;
        image = null;
        position = Vector2.Zero;
        color = Color.White;
    }

    public bool Load(string assetName)
    {
        try
        {
            image = p_content.Load<Texture2D>(assetName);
        }
        catch (Exception) { return false; }
        return true;
    }

    public void Draw()
    {
        p_spriteBatch.Draw(image, position, color);
    }

    public void Move()
    {
        position += velocity;
    }
}
[/code]

The Move() method does something very simple: it adds the velocity to the sprite’s position. The position will be rounded to the nearest X,Y pixel coordinate on the screen, but the velocity definitely will not be represented with whole numbers. Considering a game running at 60 fps, the velocity might be a fraction of a pixel, such as 0.01 (1/100th or 1% of a pixel). At that rate, assuming that the game is running at 60 fps, the sprite will move roughly 0.6 pixels per second (about one pixel every 1.7 seconds), because 0.01 times 60 is 0.60. See Figure 7.3.
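The arithmetic can be spelled out as a quick sketch (the variable names here are illustrative only):

[code]
float velocity = 0.01f;                         // pixels per frame
float fps = 60.0f;                              // frames per second
float pixelsPerSecond = velocity * fps;         // 0.6 pixels per second
float secondsPerPixel = 1.0f / pixelsPerSecond; // about 1.7 seconds per pixel
[/code]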

Sprite velocity in relation to frame rate.
FIGURE 7.3 Sprite velocity in relation to frame rate.

An XNA WP7 program runs at only 30 fps, so expect a difference in performance compared to Windows and Xbox 360 projects (which usually run at 60 fps).

We can put this simple addition to the Sprite class to the test. Included with this hour is an example called Sprite Movement. Open the project and take it for a spin. You’ll see that a funny-looking spaceship is moving over a background image, as shown in Figure 7.4, wrapping from right to left as it reaches the edge of the screen. Two asset files are required to run this project: craters800x480.tga and fatship256.tga, also included with the project. The source code is found in Listing 7.2.

Testing velocity with sprite motion.
FIGURE 7.4 Testing velocity with sprite motion.

LISTING 7.2 Test project for the new and improved Sprite class with movement capability.

[code]
public class Game1 : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    Sprite background;
    Sprite ship;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
        TargetElapsedTime = TimeSpan.FromTicks(333333);
    }

    protected override void Initialize()
    {
        base.Initialize();
    }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        background = new Sprite(Content, spriteBatch);
        background.Load("craters800x480");
        ship = new Sprite(Content, spriteBatch);
        ship.Load("fatship256");
        ship.position = new Vector2(0, 240 - ship.image.Height / 2);
        ship.velocity = new Vector2(8.0f, 4.0f);
    }

    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back ==
                ButtonState.Pressed)
            this.Exit();
        //move the sprite
        ship.Move();
        //rebound from top and bottom
        if (ship.position.Y < -70 || ship.position.Y > 480 - 185)
        {
            ship.velocity.Y *= -1;
        }
        //wrap around right to left
        if (ship.position.X > 800)
            ship.position.X = -256;
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.Black);
        spriteBatch.Begin();
        //draw the background
        background.Draw();
        //draw the sprite
        ship.Draw();
        spriteBatch.End();
        base.Draw(gameTime);
    }
}
[/code]

Moving Sprites in a Circle

We’re going to use the sun and planet artwork introduced in the preceding hour to make a simple solar system simulation. The calculations will not be meaningful to an astronomy fan; we just want the planet sprites to rotate around the sun, with no relation to our actual solar system. For this project, we need several planet bitmaps with alpha channel transparency, but because the background will just be black, if the planets have a black background that will suffice.

The sun needs to be positioned at the center of the screen, with the planets moving around it in concentric circular orbits. To make a sprite move in a circle, we need to use basic trigonometry—sine and cosine functions.
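In outline, the position on a circular orbit comes straight from the trig functions (center, angle, and radius are illustrative variable names here; the demo code later computes the same thing inline):

[code]
// Point on a circle of the given radius around a center point
float x = center.X + (float)Math.Cos(angle) * radius;
float y = center.Y + (float)Math.Sin(angle) * radius;
[/code]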

To make the code in our Solar System Demo a little cleaner, I’ve created a quick subclass of Sprite called Planet that adds three properties: radius, angle, and velocity. Notice that the Planet constructor, Planet(), calls the Sprite constructor using the base() call, passing ContentManager and SpriteBatch as the two required parameters to Sprite. The syntax for inheritance from a base class is shown in the class definition that follows. This is very helpful as we can count on Sprite to initialize itself.

[code]
public class Planet : Sprite
{
    public float radius;
    public float angle;
    public float velocity;

    public Planet(ContentManager content, SpriteBatch spriteBatch)
        : base(content, spriteBatch)
    {
        radius = 0.0f;
        angle = 0.0f;
        velocity = 0.0f;
    }
}
[/code]

The rest of the source code for the Solar System Demo is up next. There are just four planets, but it is still efficient to use a List container to store them rather than using global variables. We’re slowly seeing a little more code added to our WP7 “programmer’s toolbox” with each new hour and each new example, which is a good thing. There’s a lot more code in the initialization part of LoadContent() than in either Update() or Draw(), but that is to be expected in a program with so few sprites on the screen. Figure 7.5 shows the program running, and the source code is found in Listing 7.3.

Solar System demo.
FIGURE 7.5 Solar System demo.

LISTING 7.3 The Solar System Demo project code.

[code]
public class Game1 : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    Viewport screen;
    Sprite sun;
    List<Planet> planets;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
        TargetElapsedTime = TimeSpan.FromTicks(333333);
    }

    protected override void Initialize()
    {
        base.Initialize();
    }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        screen = GraphicsDevice.Viewport;
        //create sun sprite
        sun = new Sprite(Content, spriteBatch);
        sun.Load("sun");
        sun.position = new Vector2((screen.Width - sun.image.Width) / 2,
            (screen.Height - sun.image.Height) / 2);
        //create planet sprites
        planets = new List<Planet>();
        //add 1st planet
        Planet planet = new Planet(Content, spriteBatch);
        planet.Load("planet1");
        planet.velocity = 0.2f;
        planet.radius = 100;
        planet.angle = MathHelper.ToRadians(30);
        planets.Add(planet);
        //add 2nd planet
        planet = new Planet(Content, spriteBatch);
        planet.Load("planet2");
        planet.velocity = 0.16f;
        planet.radius = 140;
        planet.angle = MathHelper.ToRadians(60);
        planets.Add(planet);
        //add 3rd planet
        planet = new Planet(Content, spriteBatch);
        planet.Load("planet3");
        planet.velocity = 0.06f;
        planet.radius = 195;
        planet.angle = MathHelper.ToRadians(90);
        planets.Add(planet);
        //add 4th planet
        planet = new Planet(Content, spriteBatch);
        planet.Load("planet4");
        planet.velocity = 0.01f;
        planet.radius = 260;
        planet.angle = MathHelper.ToRadians(120);
        planets.Add(planet);
    }

    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back ==
                ButtonState.Pressed)
            this.Exit();
        //update planet orbits
        foreach (Planet planet in planets)
        {
            //update planet's orbit position
            planet.angle += planet.velocity;
            //calculate position around sun with sine and cosine
            float orbitx = (float)(Math.Cos(planet.angle) * planet.radius);
            float orbity = (float)(Math.Sin(planet.angle) * planet.radius);
            //center the planet so it orbits around the sun
            float x = (screen.Width - planet.image.Width) / 2 + orbitx;
            float y = (screen.Height - planet.image.Height) / 2 + orbity;
            //save new position
            planet.position = new Vector2(x, y);
        }
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.Black);
        spriteBatch.Begin();
        //draw the sun
        sun.Draw();
        //draw the planets
        foreach (Planet planet in planets)
        {
            planet.Draw();
        }
        spriteBatch.End();
        base.Draw(gameTime);
    }
}
[/code]

Did you find the code in LoadContent() a bit disconcerting, given that the same planet variable was used over and over again for each planet sprite? The important thing is that each one was added to the planets list, which became a “group” of sprites. You should do this whenever possible with related sprites, because it greatly simplifies everything about your code!

We can also use these sine and cosine functions to fire projectiles (such as a missile) from a launcher toward any target at any position on the screen.
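A sketch of how that might look (launcher, target, and speed are hypothetical variables, not part of any class built in this hour): Math.Atan2() gives the angle from the launcher to the target, and sine and cosine turn that angle back into a velocity vector.

[code]
// Hypothetical sketch: aim a projectile from launcher toward target
float angle = (float)Math.Atan2(target.Y - launcher.Y,
                                target.X - launcher.X);
Vector2 velocity = new Vector2((float)Math.Cos(angle) * speed,
                               (float)Math.Sin(angle) * speed);
[/code]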

The Sprite Movement Demo showed how a sprite can be moved automatically based on its velocity property, although we still have to step in and handle the event when the sprite reaches a screen edge. Although rotation is a subject of the next hour, we did explore moving a sprite around in a circle. This is different from sprite rotation, which rotates the image itself as it is drawn. In our Solar System Demo, the sprite images were not rotated; they remained at a fixed orientation but followed a circular path around the sun using trigonometry.

Treating Bitmaps as Sprites

Bringing Bitmaps to Life

A sprite is a bitmap with benefits. XNA provides the SpriteBatch class to draw bitmaps with the SpriteBatch.Draw() method, of which there are several overloaded variants. But despite the name, SpriteBatch does not give us “sprite” capabilities from the gameplay perspective. SpriteBatch is a rendering class, used solely for drawing, not for managing the game entities traditionally known as “sprites.” The difference might seem subtle, but it is an important distinction; SpriteBatch is arguably a misleading name for the class. The “Batch” part of the name refers to the way in which bitmaps (not sprites) are drawn: in a batch. That is, all bitmap drawing via SpriteBatch.Draw() is put into a queue, and then all the drawing is done quickly when SpriteBatch.End() is called. It’s faster to perform many draw calls at once in this manner because the video card switches state only once. Every pair of SpriteBatch.Begin() and SpriteBatch.End() calls involves a state change (which is very slow in terms of rendering). The fewer state changes that happen, the better!

SpriteBatch.Draw() can handle animation, rotation, scaling, and translation (movement). But, without properties, we have to write custom code to do all of these things with just global variables. It can get tedious! We will see how tedious by writing an example using all global variables. Then, for comparison, we’ll write a simple Sprite class and convert the program. This is not just an illustration of how useful object-oriented programming can be (which is true) but to show why we need a Sprite class for gameplay.

SpriteBatch.Draw() works fine. We don’t need an alternative replacement because it can do anything we need. But the code to draw sprites is very specific. If we don’t want to manually draw every character, or vehicle, or avatar in the game, we have to automate the process in some manner. The key to making this work is via properties. A property is a trait or an attribute that partially describes something. In the case of a person, one property would be gender (male or female); other properties include race, height, weight, and age. No single property fully describes a person, but when all (or most) of the properties are considered, it gives you a pretty good idea of what that person looks like.

The simplest and most common property for a sprite is its position on the screen. In the previous hour, we used a variable called Vector2 shipPos to represent the position of the bitmap, and a variable called Texture2D shipImage to represent the image. These two variables were properties for a game object—a spaceship to be used in a sci-fi game. Wouldn’t it be easier to manage both of these properties inside a Sprite class? Before we do that, let’s see whether it’s really that big of a deal to keep track of properties with global variables.

Drawing Lots of Bitmaps

Let’s create a short example. Now, working with just one bitmap is a piece of cake, because there’s only one call to SpriteBatch.Draw(), only one position variable, and only one image variable. When things get messy is when about five or more bitmaps need to be manipulated and drawn. Up to that point, managing variables for four or so images isn’t so bad, but as the number climbs, the amount of manual code grows and becomes unwieldy (like a giant two-handed sword).

[code]
Vector2 position1, position2, position3, position4, position5;
Texture2D image1, image2, image3, image4, image5;
[/code]

It’s not just declaring and using the variables that is the problem; it’s that managing many more of them quickly becomes impractical in a game’s source code. But let’s give it a try anyway for the sake of argument. Figure 6.1 shows the output for this short program, which is a prototype for a solar system simulation we’ll be creating in this hour. The source code is found in Listing 6.1.

The Many Bitmaps Demo program.
FIGURE 6.1 The Many Bitmaps Demo program.

LISTING 6.1 Source code for the Many Bitmaps Demo program, a precursor to a larger project.

[code]
public class Game1 : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    Vector2 position1, position2, position3, position4, position5;
    Texture2D image1, image2, image3, image4, image5;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
        TargetElapsedTime = TimeSpan.FromTicks(333333);
    }

    protected override void Initialize()
    {
        base.Initialize();
    }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        image1 = Content.Load<Texture2D>("sun");
        image2 = Content.Load<Texture2D>("planet1");
        image3 = Content.Load<Texture2D>("planet3");
        image4 = Content.Load<Texture2D>("planet2");
        image5 = Content.Load<Texture2D>("planet4");
        position1 = new Vector2(100, 240 - 64);
        position2 = new Vector2(300, 240 - 32);
        position3 = new Vector2(400, 240 - 32);
        position4 = new Vector2(500, 240 - 16);
        position5 = new Vector2(600, 240 - 16);
    }

    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back ==
                ButtonState.Pressed)
            this.Exit();
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.Black);
        spriteBatch.Begin();
        spriteBatch.Draw(image1, position1, Color.White);
        spriteBatch.Draw(image2, position2, Color.White);
        spriteBatch.Draw(image3, position3, Color.White);
        spriteBatch.Draw(image4, position4, Color.White);
        spriteBatch.Draw(image5, position5, Color.White);
        spriteBatch.End();
        base.Draw(gameTime);
    }
}
[/code]

Running into Limits with Global Variables

The Many Bitmaps Demo program wasn’t too difficult to deal with, was it? I mean, there were only five bitmaps to draw, so we needed five position variables. But what if there were 20, or 50, or 100? With so many game objects, it would be impossible to manage them all with global variables. Furthermore, that’s bad programming style when there are better ways to do it. Obviously, I’m talking about arrays and collections. But not only is it a quantity issue with regard to the global variables, but if we want to add another property to each object, we’re talking about adding another 20, 50, or 100 variables for that new property!

Let’s rewrite the program using an array. Later in the hour, we’ll work with a list, which is a container class, but for this next step, an array is a little easier to follow. Here is a new version using arrays. The new source code is found in Listing 6.2.

LISTING 6.2 New source code for the program rewritten to more efficiently store the planet bitmaps and vectors in arrays.

[code]
public class Game1 : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    Texture2D[] images;
    Vector2[] positions;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
        TargetElapsedTime = TimeSpan.FromTicks(333333);
    }

    protected override void Initialize()
    {
        base.Initialize();
    }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        images = new Texture2D[5];
        images[0] = Content.Load<Texture2D>("sun");
        images[1] = Content.Load<Texture2D>("planet1");
        images[2] = Content.Load<Texture2D>("planet3");
        images[3] = Content.Load<Texture2D>("planet2");
        images[4] = Content.Load<Texture2D>("planet4");
        positions = new Vector2[5];
        positions[0] = new Vector2(100, 240 - 64);
        positions[1] = new Vector2(300, 240 - 32);
        positions[2] = new Vector2(400, 240 - 32);
        positions[3] = new Vector2(500, 240 - 16);
        positions[4] = new Vector2(600, 240 - 16);
    }

    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back ==
                ButtonState.Pressed)
            this.Exit();
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.Black);
        spriteBatch.Begin();
        for (int n = 0; n < 5; n++)
        {
            spriteBatch.Draw(images[n], positions[n], Color.White);
        }
        spriteBatch.End();
        base.Draw(gameTime);
    }
}
[/code]

As you examine the code in this version of the program, what stands out? I notice that in LoadContent(), the initialization code is actually a bit more complicated than it was previously, but the code in Draw() is shorter. We’re not counting lines of code, but in Draw(), the for loop could accommodate 100 or 1,000 objects with the same amount of code. This is the most significant difference between this and the previous program: we can now handle an arbitrary number of objects.

We could shorten the code in LoadContent() even further by building the filename for each planet. The key is to make the asset names consistent. So, if we rename "sun" to "planet0", then loading the assets becomes a simple for loop. Here is an improvement:

[code]
protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    images = new Texture2D[5];
    positions = new Vector2[5];
    for (int n = 0; n < 5; n++)
    {
        string filename = "planet" + n.ToString();
        images[n] = Content.Load<Texture2D>(filename);
    }
    positions[0] = new Vector2(100, 240 - 64);
    positions[1] = new Vector2(300, 240 - 32);
    positions[2] = new Vector2(400, 240 - 32);
    positions[3] = new Vector2(500, 240 - 16);
    positions[4] = new Vector2(600, 240 - 16);
}
[/code]

The positions array must still be set manually due to the differences in the sizes of the planet images. But I think we could automate that as well by looking at the width and height of each image. The point is not just to make the code shorter, but to find ways to improve the code, make it more versatile, more reusable, and easier to modify. Using a consistent naming convention for asset files will go a long way toward that end.
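For example, vertical centering could be automated from each image’s height once the textures are loaded. This is a sketch only; the horizontal spacing used here is an illustrative value, not the demo’s actual layout:

[code]
// Hypothetical sketch: center each planet vertically based on its height
for (int n = 0; n < 5; n++)
{
    float x = 100 + n * 125;                // illustrative spacing
    float y = (480 - images[n].Height) / 2; // vertical center on a 480-high screen
    positions[n] = new Vector2(x, y);
}
[/code]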

Creating a Simple Sprite Class

Now let’s experiment with some code that actually does something interesting besides drawing fixed images. We’ll start by creating a simple class to encapsulate a sprite, and then add some features to make the Sprite class useful. This will be a hands-on section where we build the Sprite class in stages. If you are an experienced programmer, you may skip this section.

Creating the Sprite Class

Let’s begin with a new project. The project type will be, as usual, Windows Phone (4.0), and the name is Sprite Demo. In the Game1.cs file, which is the main source code file for the game, we’re going to add the new Sprite class to the top of the file, above the Game1 class. After a while, the Sprite class can be moved into its own file called Sprite.cs. I find this more convenient while working on a new class. See Listing 6.3 for the complete source code.

Classes do not need to be stored in unique source code files; that’s a practice to keep a large project tidy and easier to maintain. But it is acceptable (and often practical) to define a new class inside an existing file. This is especially true when several classes are closely related and you want to better organize the project. The best practice, though, for large classes, is to define each class in its own file.

LISTING 6.3 Source code for the new project with included Sprite class.

[code]
public class Sprite
{
    public Texture2D image;
    public Vector2 position;
    public Color color;
}

public class Game1 : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    Viewport viewport;
    Sprite sun;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
        TargetElapsedTime = TimeSpan.FromTicks(333333);
    }

    protected override void Initialize()
    {
        base.Initialize();
    }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        //get screen dimensions
        viewport = GraphicsDevice.Viewport;
        //create sun sprite
        sun = new Sprite();
        sun.image = Content.Load<Texture2D>("sun");
        //center sun sprite on screen
        float x = (viewport.Width - sun.image.Width) / 2;
        float y = (viewport.Height - sun.image.Height) / 2;
        sun.position = new Vector2(x, y);
        //set color
        sun.color = Color.White;
    }

    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back ==
                ButtonState.Pressed)
            this.Exit();
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.Black);
        spriteBatch.Begin();
        //draw the sun
        spriteBatch.Draw(sun.image, sun.position, sun.color);
        spriteBatch.End();
        base.Draw(gameTime);
    }
}
[/code]

The code listings in this book omit the using statements that Visual Studio generates at the top of every XNA project, as well as the namespace line and its surrounding brackets, in order to focus on just the functional part of each program. You should assume that those lines are required to compile every example listed herein.

Scope and Clarity

This is how most classes begin life, with just some public properties that could have been equally well defined in a struct. In fact, if you just want to create a quick container for a few variables and don’t want to deal with a class, go ahead and do it. Structures (defined as struct) can even have constructor methods to initialize their properties, as well as normal methods. Note that in C# (unlike C++), members of a struct default to private scope, just as members of a class do, so you control visibility explicitly in both. This would work, for example:

[code]
struct Sprite
{
Texture2D image;
Vector2 position;
Color color;
}
[/code]

One advantage a class has over a struct, though, is room for growth: as a class evolves, we can adjust the scope of individual properties and methods (that is, change whether they are visible outside the class) and take advantage of inheritance, which structs do not support. At this simplistic level, there’s very little difference between a class and a struct. Some programmers differentiate between them by using structs only as property containers, with no methods, reserving methods for classes. It’s ultimately up to you!
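To illustrate the point that structs can carry constructors and methods, here is a small sketch of my own (SpriteData is a hypothetical name, not part of the demo project). Note that a C# struct constructor must assign every field:

[code]
//hypothetical struct variant with a constructor (not used in the demo)
struct SpriteData
{
    public Texture2D image;
    public Vector2 position;
    public Color color;

    public SpriteData(Texture2D image, Vector2 position, Color color)
    {
        this.image = image;
        this.position = position;
        this.color = color;
    }
}
[/code]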

An OOP (object-oriented programming) “purist” would demand that our image, position, and color properties be defined with private scope, and accessed via property methods. The line between property variables and property methods is blurry in C#, whereas the distinction is clear-cut in a stricter language such as C++. Let’s see what the class looks like when the three variables (image, position, and color) are converted into private fields with public accessor properties.

[code]
public class Sprite2 //"good" OOP version
{
Texture2D p_image;
Vector2 p_position;
Color p_color;
public Texture2D image
{
get { return p_image; }
set { p_image = value; }
}
public Vector2 position
{
get { return p_position; }
set { p_position = value; }
}
public Color color
{
get { return p_color; }
set { p_color = value; }
}
}
[/code]

In general, I prefer not to hide property variables (such as public Texture2D image in the Sprite class), because hiding them just requires extra code to access the property later. This is, again, a matter of preference, and might depend on the coding standards of your team or employer. If it’s up to you, just focus on writing clean, tight code, and don’t worry about making your code “OOP safe” for others.
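If you want the property syntax without the get/set boilerplate, C# auto-implemented properties offer a middle ground. This is an optional sketch of my own (Sprite3 is a hypothetical name), not the version we will use:

[code]
public class Sprite3 //optional: auto-implemented properties
{
    public Texture2D image { get; set; }
    public Vector2 position { get; set; }
    public Color color { get; set; }
}
[/code]

The compiler generates the hidden backing fields for you, so you keep the option of adding validation logic later without changing calling code.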

Initializing the Sprite Class with a Constructor

Compared to the original Sprite class defined in Game1.cs, what do you think of the “safe” version (renamed to Sprite2 to avoid confusion)? The three variable names have had “p_” added (to reflect that they are now private in scope), and in their place are three “properly defined” properties. Each property now has an accessor method (get) and a mutator method (set). If you prefer this more highly structured form of object-oriented C#, by all means use it; do what works best for you. But for the sake of clarity, I will use the original version of Sprite, with the simpler public property variables.

Let’s give the Sprite class more capabilities. Currently, it’s just a container for three variables. A constructor is a method that runs automatically when an object is created at runtime with the new operator, for example:

[code]
Sprite sun = new Sprite();
[/code]

The term Sprite(), with the parentheses, denotes a method—in this case, the default constructor, since it requires no parameters. Here is ours:

[code]
public class Sprite
{
public Texture2D image;
public Vector2 position;
public Color color;
public Sprite()
{
image = null;
position = Vector2.Zero;
color = Color.White;
}
}
[/code]

Here, we have a new constructor that initializes the class’s properties to some initial values. This is meant to avoid problems later if one forgets to initialize them manually. In our program now, since color is automatically set to Color.White, we no longer need to manually set it, which cleans up the code in LoadContent() a bit.
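If you like, a class can also offer overloaded constructors. The following variant is a hypothetical addition of mine (not used in the listings) that lets a sprite be positioned at creation time while keeping the defaults for everything else:

[code]
//hypothetical overload; the default constructor remains available
public Sprite(Vector2 startPosition)
{
    image = null;
    position = startPosition;
    color = Color.White;
}
[/code]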

[code]
protected override void LoadContent()
{
spriteBatch = new SpriteBatch(GraphicsDevice);
//get screen dimensions
viewport = GraphicsDevice.Viewport;
//create sun sprite
sun = new Sprite();
sun.image = Content.Load<Texture2D>("sun");
//center sun sprite on screen
float x = (viewport.Width - sun.image.Width) / 2;
float y = (viewport.Height - sun.image.Height) / 2;
sun.position = new Vector2(x,y);
}
[/code]

Writing Reusable Code with Abstraction

A usual goal for an important base game class like Sprite is to abstract the XNA code, at least somewhat, to make the class stand on its own as much as possible. This becomes a priority when you find yourself writing games on several platforms. Within the XNA family, we have Windows, Xbox 360, and Windows Phone. But on a larger scale, it’s fairly common to port games to other systems. After you have rewritten your Sprite class a few times for different platforms (and even languages, believe it or not!), you begin to see similarities among the different systems, and begin to take those similarities into account when writing game classes.

There are two aspects that I want to abstract in the Sprite class. First, there’s loading the image. This occurs in LoadContent() when we simply expose the image property to Content.Load(). Second, there’s drawing the sprite. This occurs in Draw(), also when we expose the image property. To properly abstract the class away from XNA, we need our own Load() and Draw() methods within Sprite itself. To do this, the Sprite class must have access to both ContentManager and SpriteBatch. We can do this by passing those necessary runtime objects to the Sprite class constructor. Listing 6.4 contains the new source code for the Sprite class.

LISTING 6.4 Source code for the expanded Sprite class.

[code]
public class Sprite
{
private ContentManager p_content;
private SpriteBatch p_spriteBatch;
public Texture2D image;
public Vector2 position;
public Color color;
public Sprite(ContentManager content, SpriteBatch spriteBatch)
{
p_content = content;
p_spriteBatch = spriteBatch;
image = null;
position = Vector2.Zero;
color = Color.White;
}
public bool Load(string assetName)
{
try
{
image = p_content.Load<Texture2D>(assetName);
}
catch (Exception) { return false; }
return true;
}
public void Draw()
{
p_spriteBatch.Draw(image, position, color);
}
}
[/code]

Putting our new changes into action (in Listing 6.5) reveals some very clean-looking code in LoadContent() and Draw(), with the output shown in Figure 6.2.

LISTING 6.5 Modifications to the project to support the new Sprite features.

[code]
protected override void LoadContent()
{
spriteBatch = new SpriteBatch(GraphicsDevice);
viewport = GraphicsDevice.Viewport;
//create sun sprite
sun = new Sprite(Content, spriteBatch);
sun.Load("sun");
//center sun sprite on screen
float x = (viewport.Width - sun.image.Width) / 2;
float y = (viewport.Height - sun.image.Height) / 2;
sun.position = new Vector2(x, y);
}
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin();
sun.Draw();
spriteBatch.End();
base.Draw(gameTime);
}
[/code]

Demonstrating the Sprite class.
FIGURE 6.2 Demonstrating the Sprite class.

Error Handling

The Sprite.Load() method has error handling built in via a try…catch block. Inside the try block is a call to Content.Load(). If the passed asset name is not found, XNA generates an exception error, as shown in Figure 6.3. We don’t want the end user to ever see an exception error, and in a very large game project, it is fairly common for asset files to be renamed and generate errors like this—it’s all part of the development process to track down and fix such common bugs.

To assist, the code in Sprite.Load() returns false if an asset is not found—rather than crashing with an exception error. The problem is, we can’t exit on an error condition from within LoadContent(); XNA is just not in a state that will allow the program to terminate at that point. What we need to do is set a flag and look for it after LoadContent() is finished running.

I have an idea. What if we add a feature to display an optional error message in a pre-shutdown process in the game loop? All it needs to do is check for this error state and then print whatever is in the global errorMessage variable. The user would then read the message and manually shut down the program by closing the window. Let’s just try it out; this won’t be a permanent fixture in future chapters, but you may continue to use it if you want to. First, we need some new variables.

 An exception error occurs when an asset cannot be found.
FIGURE 6.3 An exception error occurs when an asset cannot be found.

[code]
//experimental error handling variables
bool errorState;
string errorMessage;
SpriteFont ErrorFont;
[/code]

Next, we initialize them. (ErrorFont itself is loaded in LoadContent() like any other SpriteFont asset.)

[code]
public Game1()
{
graphics = new GraphicsDeviceManager(this);
Content.RootDirectory = "Content";
TargetElapsedTime = TimeSpan.FromTicks(333333);
errorMessage = "";
errorState = false;
}
[/code]

And in LoadContent(), we need to trap the exception error (note that “sun” was temporarily renamed to “sun1” to demonstrate the exception error).

[code]
//create sun sprite
sun = new Sprite(Content, spriteBatch);
if (!sun.Load("sun1"))
{
errorState = true;
errorMessage = "Asset file 'sun' not found.";
return;
}
[/code]

Draw() is where the error handling process comes into play.

[code]
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin();
//experimental error handler
if (errorState)
{
spriteBatch.DrawString(ErrorFont, “CRITICAL ERROR”,
Vector2.Zero, Color.Red);
spriteBatch.DrawString(ErrorFont, errorMessage,
new Vector2(0,100), Color.Red);
}
else
{
sun.Draw();
}
spriteBatch.End();
base.Draw(gameTime);
}
[/code]

When run again with the new error handling code in place, the previous exception error now becomes a nice in-game notification, as shown in Figure 6.4.

The exception error has been handled nicely.
FIGURE 6.4 The exception error has been handled nicely.

We’ve just scratched the surface of what will be possible with the new Sprite class in this hour. Over the next several chapters, the class will be enhanced significantly, making it possible—with new properties and methods—to perform transformations.

Getting User Input

Exploring Windows Phone Touchscreen Input

Programming a game’s input system really does require a lot of design consideration ahead of time because all we really can use is the touchscreen! Oh, there is an accelerometer that can be used for input, but it is a rare and often niche game that uses the accelerometer to read the phone’s orientation (the angle and position at which it is being held). The touchscreen, for all practical purposes, is treated like mouse input without a visible mouse cursor. Windows programmers will have a slightly harder time adjusting than someone who has been working with the Xbox 360 or another console, which already requires an adjustment in one’s assumptions about user input. Windows games are a cakewalk, with 100-plus keyboard keys, the mouse, and an optional controller! That’s a lot of input! On the Windows Phone, though, all we have is the touchscreen. So we need to make the most of it, being mindful that a user’s finger is rather large compared to the precise input of a mouse cursor.

For the purpose of detecting input for a game, the most obvious method might be to just detect the touch location and convert that to an average coordinate—like mouse input. But the Windows Phone touchscreen is capable of multitouch, not just single-touch input. Although the screen is very small compared to a tablet or PC screen, it is still capable of detecting input from up to four fingers at once. To develop a multitouch game, you will need actual Windows Phone hardware—either a real phone or an unlocked development model (with no phone service).

Multitouch is a significant feature for the new phone! For our purposes here, however, we will just be concerned with single “tap” input from one finger, simulated with mouse motion in the emulator. We can do a single tap with a mouse click, or a drag operation by touching the screen and moving the finger across the screen. Again, this will have to be done with your mouse for development on the emulator (unless your Windows system uses a touchscreen!).

You will not be able to test multitouch in the emulator without a multitouch screen on your Windows development system or an actual Windows Phone device.

Simulating Touch Input

The key to touch input with a WP7 device with XNA is a class called TouchPanel. The XNA services for mouse, keyboard, and Xbox 360 controller input are not available in a WP7 project. So, for most XNA programmers, TouchPanel will be a new experience. Not to worry; it’s similar to the mouse code if we don’t tap into the multitouch capabilities.

The TouchPanel class gives us one very interesting value that we can query: MaximumTouchCount, found in the TouchPanelCapabilities structure returned by TouchPanel.GetCapabilities(), represents the number of touch inputs that the device can handle at a time.

The second class that we need for touch input is called TouchCollection. As the name implies, this is a collection that is filled when we poll the TouchPanel while the game is running. Each TouchLocation in the collection has a State property, similar in spirit to a mouse button state, with the enumerated values Pressed, Moved, and Released.
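Putting the two together, a minimal sketch (assuming the standard XNA touch namespaces are imported) might query the device capabilities once and then inspect each touch point’s State:

[code]
//sketch: query capabilities, then react to touch state
TouchPanelCapabilities caps = TouchPanel.GetCapabilities();
if (caps.IsConnected)
{
    TouchCollection touches = TouchPanel.GetState();
    foreach (TouchLocation touch in touches)
    {
        if (touch.State == TouchLocationState.Pressed)
        {
            //finger just made contact at touch.Position
        }
    }
}
[/code]

In practice, GetCapabilities() would be called once at startup, while GetState() belongs in Update().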

The Touch Demo Project, Step by Step

Let’s create a sample project to try out some of this code. I’ll skip the usual new project instructions at this point since the steps should be familiar by now. Just create a new project and add a font so that we can print something on the screen. I’ve used Moire Bold 24 as the font in this example.

  1. Add some needed variables for working with text output. (The code for the touchscreen input system will be added shortly.)
    [code]
    public class Game1 : Microsoft.Xna.Framework.Game
    {
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    SpriteFont MoireBold24;
    Vector2 position;
    Vector2 size;
    string text = "Touch Screen Demo";
    [/code]
  2. Initialize the font and variables used in the program.
    [code]
    protected override void LoadContent()
    {
    spriteBatch = new SpriteBatch(GraphicsDevice);
    MoireBold24 = Content.Load<SpriteFont>("MoireBold24");
    size = MoireBold24.MeasureString(text);
    Viewport screen = GraphicsDevice.Viewport;
    position = new Vector2((screen.Width - size.X) / 2,
    (screen.Height - size.Y) / 2);
    }
    [/code]
  3. Write the update code to get touch input.
    [code]
    protected override void Update(GameTime gameTime)
    {
    if (GamePad.GetState(PlayerIndex.One).Buttons.Back ==
    ButtonState.Pressed)
    this.Exit();
    //get state of touch inputs
    TouchCollection touchInput = TouchPanel.GetState();
    //look at all touch points (usually 1)
    foreach(TouchLocation touch in touchInput)
    {
    position = new Vector2(touch.Position.X - size.X / 2,
    touch.Position.Y - size.Y / 2);
    }
    base.Update(gameTime);
    }
    [/code]
  4. Print the message on the screen at the touch coordinates.
    [code]
    protected override void Draw(GameTime gameTime)
    {
    GraphicsDevice.Clear(Color.CornflowerBlue);
    spriteBatch.Begin();
    spriteBatch.DrawString(MoireBold24, text, position, Color.White);
    spriteBatch.End();
    base.Draw(gameTime);
    }
    [/code]

Figure 4.1 shows the output of the program running in the WP7 emulator. Although the static figure doesn’t show movement, the message “Touch Screen Demo” moves on the screen with touch input! On a real Windows Phone device, this would work with finger input on the screen.

The Windows Phone hardware specification calls for at least four simultaneous touchscreen inputs!

The text message moves based on touch input.
FIGURE 4.1 The text message moves based on touch input.

How to Simulate More Inputs

What if you really do have a great design for a multitouch game but have no way to test it (that is, you have neither a WP7 device nor a touchscreen for your Windows PC)? One alternative is to develop your game with simulated input for each finger, and then test each input separately in the emulator. This actually works quite well! Suppose your game requires one input to move a ship left or right on the screen, and another input to fire a weapon. The easy approach is to leave the ship stationary (presumably at the center of the screen) while you test the firing input. When that is working satisfactorily, you can test left-to-right movement input with shooting happening at random or regular intervals. This is how most developers will approach the problem.

Using Gestures on the Touchscreen

A related capability of the TouchPanel class is a gesture interface. This is potentially a really great feature for WP7 games, so don’t pass it up! A gesture is essentially nonverbal communication with your hands. A gesture on a touchscreen might be to flick an object across the screen rather than dragging to the exact location where you want it to go. Instead of manually moving something one direction or another, one might just flick it in the general direction. So, it’s up to the game to interpret the gestures based on the game’s user interface design.

Since gesture input can be used in place of touch input, you might want to just use gesture input instead. If all you need for your game is single-touch input, a tap gesture would work in place of the mouselike touch code seen previously. The main difference between the two methods is that gesture input does not support multitouch.

XNA supports gestures with the same TouchPanel class used for touch input. Here are the contents of the GestureType enumeration:

  • None
  • Tap
  • DoubleTap
  • Hold
  • HorizontalDrag
  • VerticalDrag
  • FreeDrag
  • Pinch
  • Flick
  • DragComplete
  • PinchComplete

Gesture-based input does not support multitouch. Only a single gesture (with one finger) can be used at a time.

To use gesture input, we have to enable it, because gesture input is not automatic. There would be obvious input problems if all gestures were enabled by default: a game that relies heavily on “drag”-style input to move and interact would behave erratically when drags are misread as flicks. Even the simplest arcade-style or puzzle game will involve some dragging of the finger across the screen, and a drag can be misinterpreted as a flick if the intended movement is too fast. To enable both tap and flick gestures, add this line to the constructor, Game1(), or to LoadContent():

[code]
TouchPanel.EnabledGestures = GestureType.Tap | GestureType.Flick;
[/code]

A simple example can be added to the Update() method of the Touch Demo to see how it works. Note that the first if condition is required. Trying to read a gesture when none is available will cause an exception!

[code]
if (TouchPanel.IsGestureAvailable)
{
GestureSample gesture = TouchPanel.ReadGesture();
if (gesture.GestureType == GestureType.Tap)
{
position = new Vector2(gesture.Position.X - size.X / 2,
gesture.Position.Y - size.Y / 2);
}
}
[/code]

Running this code, you might be surprised to find that the tap gesture is more like a mouse click-and-release event. Click-dragging does not produce the gesture! You can see for yourself by commenting out the touch input code in the Touch Demo program, and running just the gesture code instead. If you’re really interested in gesture input, a real WP7 device is a must-have! The emulator just doesn’t satisfy. If you are doing WP7 development for profit, a dev device or an actual Windows Phone device (with carrier subscription) is a good investment.
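A flick could likewise nudge the text in the flicked direction. This sketch is my own addition (the 0.05f scale factor is arbitrary); it reads the gesture’s Delta property, which reports the flick velocity, and it assumes GestureType.Flick is included in TouchPanel.EnabledGestures:

[code]
//sketch: react to a flick by nudging the text (arbitrary scale factor)
while (TouchPanel.IsGestureAvailable)
{
    GestureSample gesture = TouchPanel.ReadGesture();
    if (gesture.GestureType == GestureType.Flick)
    {
        //Delta holds the flick velocity in pixels per second
        position += gesture.Delta * 0.05f;
    }
}
[/code]

Using while instead of if drains every pending gesture in the queue each frame, which avoids input lag when several gestures arrive between updates.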

Touch input can be quite satisfying to a game designer after working with the traditional keyboard and mouse, because it offers the potential for very creative forms of input in a game. Although regular multitouch input will be the norm for most Windows Phone games, the ability to use gestures also might prove to be fun.

Windows Phone Special Effects

Using dual texture effects

Dual texture is very useful when you want to map two textures onto a model. The Windows Phone 7 XNA built-in DualTextureEffect samples the pixel color from two texture images, which is why it is called a dual texture effect. Each texture used in the effect has its own texture coordinates and can be mapped, tiled, rotated, and so on, independently. The textures are mixed using the pattern:

[code]
finalTexture.color = texture1.Color * texture2.Color;
finalTexture.alpha = texture1.Alpha * texture2.Alpha;
[/code]

The color and alpha of the final texture come from separate computations. A classic use of DualTextureEffect is to apply a lightmap to a model. In computer graphics, computing lighting and shadows in real time carries a large performance cost, so the lightmap is pre-computed and stored as its own texture. A lightmap is a data set that records the lighting across the different surfaces of a 3D model, which saves the runtime cost of the lighting computation. Sometimes you might also want an ambient occlusion effect, which is costly; here, too, the result can be baked into a lightmap and mapped onto the relevant model or scene for a realistic effect. Because the lightmap is pre-computed in 3D modeling software (you will learn how to do this in 3DS MAX), even the most complicated lighting effects (shadows, ray-tracing, radiosity, and so on) become easy to use in a Windows Phone 7 game. You can use the dual texture effect if you just want the game scene to have baked shadows and lighting. In this recipe, you will learn how to create a lightmap and apply it to your game model using DualTextureEffect.
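One practical detail worth previewing: if you build geometry yourself rather than importing a model, the vertex format must carry two sets of texture coordinates, one per texture. The following vertex declaration is a sketch of my own (the type name VertexDualTexture is hypothetical), using XNA 4.0’s IVertexType interface and usage index 1 for the second UV channel:

[code]
//sketch: a custom vertex with two UV channels for DualTextureEffect
public struct VertexDualTexture : IVertexType
{
    public Vector3 Position;
    public Vector2 TexCoord1; //sampled by Texture
    public Vector2 TexCoord2; //sampled by Texture2

    public static readonly VertexDeclaration VertexDeclaration =
        new VertexDeclaration(
            new VertexElement(0, VertexElementFormat.Vector3,
                VertexElementUsage.Position, 0),
            new VertexElement(12, VertexElementFormat.Vector2,
                VertexElementUsage.TextureCoordinate, 0),
            new VertexElement(20, VertexElementFormat.Vector2,
                VertexElementUsage.TextureCoordinate, 1));

    VertexDeclaration IVertexType.VertexDeclaration
    {
        get { return VertexDeclaration; }
    }
}
[/code]

Models exported from 3DS MAX with a channel-2 mapping (as in step 5) already carry the second UV set, so no custom vertex type is needed in this recipe.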

How to do it…

The following steps show you the process for creating the lightmap in 3DS MAX and how to use the lightmap in your Windows Phone 7 game using DualTextureEffect:

  1. Create the Sphere lightmap in 3DS MAX 2011. Open 3DS MAX 2011, in the Create panel, click the Geometry button, then create a sphere by choosing the Sphere push button, as shown in the following screenshot:
    the Geometry button
  2. Add the texture to the Material Compact Editor and apply the material to the sphere. Click the following menu items of 3DS MAX 2011: Rendering | Material Editor | Compact Material Editor. Choose the first material ball and apply the texture you want to the material ball. Here, we use the tile1.png, a checker image, which you can find in the Content directory of the example bundle file. The applied material ball looks similar to the following screenshot:
    Material Compact Editor
  3. Apply the Target Direct Light to the sphere. In the Create panel—the same panel for creating sphere—click the Lights button and choose the Target Direct option. Then drag your mouse over the sphere in the Perspective viewport and adjust the Hotspot/Beam to let the light encompass the sphere, as shown in the following screenshot:
    the Perspective viewport
  4. Render the Lightmap. When the light is set as you want, the next step is to create the lightmap. After you click the sphere that you plan to build the lightmap for, click the following menu items in 3DS MAX: Rendering | Render To Texture. In the Output panel of the pop-up window, click the Add button. Another pop-up window will show up; choose the LightingMap option, and then click Add Elements, as shown in the following screenshot:
    Rendering | Render To Texture
  5. After that, change the setting of the lightmap:
    • Change the Target Map Slot to Self-Illumination in the Output panel.
    • Change the Baked Material Settings to Output Into Source in the Baked Material panel.
    • Change the Channel to 2 in the Mapping Coordinates panel.
    • Finally, click the Render button. The generated lightmap will look similar to the following screenshot:
      the Render button
      By default, the lightmap texture type is .tga, and the maps are placed in the images subfolder of the folder where you installed 3DS MAX. The new textures are flat; in other words, they are organized according to groups of object faces. In this example, the lightmap name is Sphere001LightingMap.tga.
  6. Open the Material Compact Editor again by clicking the menu items Rendering | Material Editor | Compact Material Editor. You will find that the first material ball has a mixed texture combining the original texture and the lightmap. You can also see that Self-Illumination is selected and its value is Sphere001LightingMap.tga. This means the lightmap for the sphere has been applied successfully.
  7. Select the sphere and export to an FBX model file named DualTextureBall.FBX, which will be used in our Windows Phone 7 game.
  8. From this step, we will render the lightmap of the sphere in our Windows Phone 7 XNA game using the new built-in effect DualTextureEffect. Now, create a Windows Phone Game project named DualTextureEffectBall in Visual Studio 2010 and change Game1.cs to DualTextureEffectBallGame.cs. Then, add the texture file tile1.png, the lightmap file Sphere001LightingMap.tga, and the model DualTextureBall.FBX to the content project.
  9. Declare the indispensable variables in the DualTextureEffectBallGame class. Add the following code to the class field:
    [code]
    // Ball Model
    Model modelBall;
    // Dual Texture Effect
    DualTextureEffect dualTextureEffect;
    // Camera
    Vector3 cameraPosition;
    Matrix view;
    Matrix projection;
    [/code]
  10. Initialize the camera. Insert the following code to the Initialize() method:
    [code]
    // Initialize the camera
    cameraPosition = new Vector3(0, 50, 200);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio,
    1.0f, 1000.0f);
    [/code]
  11. Load the ball model and initialize the DualTextureEffect. Paste the following code to the LoadContent() method:
    [code]
    // Load the ball model
    modelBall = Content.Load<Model>("DualTextureBall");
    // Initialize the DualTextureEffect
    dualTextureEffect = new DualTextureEffect(GraphicsDevice);
    dualTextureEffect.Projection = projection;
    dualTextureEffect.View = view;
    // Set the diffuse color
    dualTextureEffect.DiffuseColor = Color.Gray.ToVector3();
    // Set the first and second texture
    dualTextureEffect.Texture = Content.Load<Texture2D>("tile1");
    dualTextureEffect.Texture2 = Content.Load<Texture2D>("Sphere001LightingMap");
    [/code]
    Define the DrawModel() method in the class:
    [code]
    // Draw model
    private void DrawModel(Model m, Matrix world,
    DualTextureEffect effect)
    {
    foreach (ModelMesh mesh in m.Meshes)
    {
    // Iterate every part of current mesh
    foreach (ModelMeshPart meshPart in mesh.MeshParts)
    {
    // Change the original effect to the designed effect
    meshPart.Effect = effect;
    // Update the world matrix
    effect.World *= world;
    }
    mesh.Draw();
    }
    }
    [/code]
  12. Draw the ball model using DualTextureEffect on the Windows Phone 7 screen. Add the following lines to the Draw() method:
    [code]
    // Rotate the ball model around axis Y.
    float timer =
    (float)gameTime.ElapsedGameTime.TotalSeconds;
    DrawModel(modelBall, Matrix.CreateRotationY(timer),
    dualTextureEffect);
    [/code]
  13. Build and run the example. It should run as shown in the following screenshot:
    DualTextureEffect
  14. If you comment out the following line in LoadContent() to disable the lightmap texture, you will see the difference between having the lightmap on and off:
    [code]
    dualTextureEffect.Texture2 =
    Content.Load<Texture2D>("Sphere001LightingMap");
    [/code]
  15. Run the application without lightmap. The model will be in pure black as shown in the following screenshot:
    dual texture effects

How it works…

Steps 1–6 create the sphere and its lightmap in 3DS MAX 2011.

In step 9, modelBall is responsible for loading and holding the ball model. The dualTextureEffect object is the XNA 4.0 built-in effect DualTextureEffect, used for rendering the ball model with its original texture and the lightmap. The three variables cameraPosition, view, and projection represent the camera.

In step 11, the first line loads the ball model. The rest of the lines initialize the DualTextureEffect. Notice that we use tile1.png as the first (original) texture, and Sphere001LightingMap.tga, the lightmap, as the second texture.

Also in step 11, the DrawModel() method differs from the usual version: here, we need to replace the original effect of each mesh with the DualTextureEffect. As we iterate the mesh parts of every mesh of the current model, we assign the effect to meshPart.Effect, applying the DualTextureEffect to each mesh part.
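Note that effect.World *= world compounds the small per-frame rotation from step 12 into the world matrix, which is what keeps the ball spinning. An alternative sketch of my own tracks the angle explicitly and sets the world matrix absolutely each frame, which is easier to reason about:

[code]
// sketch: accumulate an explicit angle instead of compounding matrices
float angle = 0f; // class field

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);
    angle += (float)gameTime.ElapsedGameTime.TotalSeconds;
    // Set the rotation absolutely; pass Identity so DrawModel leaves it alone
    dualTextureEffect.World = Matrix.CreateRotationY(angle);
    DrawModel(modelBall, Matrix.Identity, dualTextureEffect);
    base.Draw(gameTime);
}
[/code]

With an absolute world matrix, pausing or variable frame times cannot cause the accumulated matrix to drift numerically.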

Using environment map effects

In computer games, environment mapping is an efficient image-based lighting technique for making a reflective surface mirror the distant environment surrounding the rendered object. In Need for Speed, produced by Electronic Arts, if you turn on the special visual effects option while playing, you will see the car body reflect the scene around it: trees, clouds, mountains, or buildings. The effect is amazing and attractive. This is environment mapping, and it makes games more realistic. Methods for storing the surrounding environment include sphere mapping, cube mapping, pyramid mapping, and octahedron mapping. In XNA 4.0, the framework uses cube mapping, in which the environment is projected onto the six faces of a cube and stored as six square textures, or unfolded into six square regions of a single texture. In this recipe, you will learn how to make a cube map using the DirectX Texture Tool and apply it to a model using EnvironmentMapEffect.
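Before the steps, it may help to preview the main knobs EnvironmentMapEffect exposes. This is a sketch with illustrative values of my own choosing; the recipe sets its own values later:

[code]
//sketch: key EnvironmentMapEffect properties (illustrative values)
EnvironmentMapEffect effect = new EnvironmentMapEffect(GraphicsDevice);
effect.Texture = texture;             //the model's base texture
effect.EnvironmentMap = textureCube;  //the cube map (a TextureCube)
effect.EnvironmentMapAmount = 1.0f;   //0 = no reflection, 1 = full reflection
effect.FresnelFactor = 1.0f;          //how reflection varies with view angle
effect.EnvironmentMapSpecular = Vector3.Zero; //specular drawn from cube map alpha
[/code]

Raising FresnelFactor makes surfaces facing the camera less reflective than glancing ones, which mimics how real materials such as glass and car paint behave.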

Getting ready

Cube maps are used in real-time engines to fake reflections (and refractions). They are far faster than ray-tracing because they are only textures mapped onto a cube: six images, one for each face.

For creating the cube map for the environment map effect, you should use the DirectX Texture Tool in the DirectX SDK Utilities folder. The latest version of the Microsoft DirectX SDK can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=3021d52b-514e-41d3-ad02-438a3ba730ba.

How to do it…

The following steps lead you to create an application using the Environment Mapping effect:

  1. From this step, we will create the cube map in DirectX Texture Tool. Run this application and create a new Cube Map. Click the following menu items: File | New Texture. A window will pop-up; in this window, choose the Cubemap Texture for Texture Type; change the dimension to 512 * 512 in the Dimensions panel; set the Surface/Volume Format to Four CC 4-bit: DXT1. The final settings should look similar to the following screenshot:
    Cubemap Texture
  2. Set the texture of every face of the cube. Choose a face for setting the texture by clicking the following menu items: View | Cube Map Face | Positive X, as shown in the following screenshot:
    Cube Map Face | Positive X
  3. Then, apply the image for the Positive X face by clicking: File | Open Onto This Cubemap Face, as shown in the following screenshot:
    Open Onto This Cubemap Face
  4. When you click the item, a pop-up dialog will ask you to choose a proper image for this face. In this example, the Positive X face will look similar to the following screenshot:
    Positive X face will look similar
  5. Proceed similarly for the other five faces: Negative X, Positive Y, Negative Y, Positive Z, and Negative Z. When all six cube faces are appropriately set, save the cube map as SkyCubeMap.dds. The cube map will look similar to the following figure:
    Negative X, Positive Y, Negative Y, Positive Z, and Negative Z
  6. The remaining steps render the ball model using the XNA 4.0 built-in effect called EnvironmentMapEffect. Create a Windows Phone Game project named EnvironmentMapEffectBall in Visual Studio 2010 and change Game1.cs to EnvironmentMapEffectBallGame.cs. Then, add the ball model file ball.FBX, the ball texture file silver.jpg, and the cube map generated by the DirectX Texture Tool, SkyCubeMap.dds, to the content project.
  7. Declare the necessary variables of the EnvironmentMapEffectBallGame class. Add the following lines to the class:
    [code]
    // Ball model
    Model modelBall;
    // Environment Map Effect
    EnvironmentMapEffect environmentEffect;
    // Cube map texture
    TextureCube textureCube;
    // Ball texture
    Texture2D texture;
    // Camera
    Vector3 cameraPosition;
    Matrix view;
    Matrix projection;
    [/code]
  8. Initialize the camera. Insert the following lines to the Initialize() method:
    [code]
    // Initialize the camera
    cameraPosition = new Vector3(2, 3, 32);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio,
    1.0f, 100.0f);
    [/code]
  9. Load the ball model, ball texture, and the sky cube map. Then initialize the environment map effect and set its properties. Paste the following code in the LoadContent() method:
    [code]
    // Load the ball model
    modelBall = Content.Load<Model>("ball");
    // Load the sky cube map
    textureCube = Content.Load<TextureCube>("SkyCubeMap");
    // Load the ball texture
    texture = Content.Load<Texture2D>("silver");
    // Initialize the EnvironmentMapEffect
    environmentEffect = new EnvironmentMapEffect(GraphicsDevice);
    environmentEffect.Projection = projection;
    environmentEffect.View = view;
    // Set the initial texture
    environmentEffect.Texture = texture;
    // Set the environment map
    environmentEffect.EnvironmentMap = textureCube;
    environmentEffect.EnableDefaultLighting();
    // Set the environment effect factors
    environmentEffect.EnvironmentMapAmount = 1.0f;
    environmentEffect.FresnelFactor = 1.0f;
    environmentEffect.EnvironmentMapSpecular = Vector3.Zero;
    [/code]
  10. Define the DrawModel() of the class:
    [code]
    // Draw Model
    private void DrawModel(Model m, Matrix world,
        EnvironmentMapEffect environmentMapEffect)
    {
        foreach (ModelMesh mesh in m.Meshes)
        {
            foreach (ModelMeshPart meshPart in mesh.MeshParts)
            {
                meshPart.Effect = environmentMapEffect;
                environmentMapEffect.World = world;
            }
            mesh.Draw();
        }
    }
    [/code]
  11. Draw and rotate the ball with EnvironmentMapEffect on the Windows Phone 7 screen. Insert the following code to the Draw() method:
    [code]
    // Draw and rotate the ball model
    float time = (float)gameTime.TotalGameTime.TotalSeconds;
    DrawModel(modelBall,
        Matrix.CreateRotationY(time * 0.3f) * Matrix.CreateRotationX(time),
        environmentEffect);
    [/code]
  12. Build and run the application. It should run similar to the following screenshot:
    environment map effects

How it works…

Steps 1 to 5 use the DirectX Texture Tool to generate a sky cube map for the XNA 4.0 built-in effect EnvironmentMapEffect.

In step 7, modelBall holds the ball model, environmentEffect will be used to render the ball model with EnvironmentMapEffect, and textureCube is the cube map texture that the effect receives through its EnvironmentMap property; texture represents the ball texture; the last three variables, cameraPosition, view, and projection, initialize and control the camera.

In step 9, the first three lines load the required content: the ball model, its texture, and the sky cube map. Then, we instantiate the EnvironmentMapEffect object and set its properties. environmentEffect.Projection and environmentEffect.View configure the camera; environmentEffect.Texture maps the ball texture onto the ball model; environmentEffect.EnvironmentMap is the environment map from which the ball model gets the reflected color that is mixed with its original texture.

The EnvironmentMapAmount is a float that describes how much of the environment map shows up, which is to say, how much of the cube map texture is blended over the texture on the model. Values range from 0 to 1, and the default value is 1.

The FresnelFactor controls how the viewing angle affects the strength of the environment map. Use a higher value to make the environment map visible mainly around the edges, where the surface is viewed at a glancing angle; use a lower value to make it visible everywhere. Fresnel lighting affects only the environment map color (RGB values); alpha is not affected. The value ranges from 0.0 to 1.0; 0.0 disables Fresnel lighting entirely, making the environment map equally visible regardless of viewing angle, and 1.0 is the default value.

The EnvironmentMapSpecular implements cheap specular lighting, by encoding one or more specular highlight patterns into the environment map alpha channel, then setting the EnvironmentMapSpecular to the desired specular light color.
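Taken together, the three properties control how strongly, and where, the environment map shows up. As an illustration only (these values are hypothetical, not from the recipe), the reflection could be softened like this:

[code]
// Hypothetical alternative settings for a subtler reflection
environmentEffect.EnvironmentMapAmount = 0.5f;  // blend only half of the cube map over the base texture
environmentEffect.FresnelFactor = 0.0f;         // 0 disables Fresnel, so the reflection ignores viewing angle
// Tint any specular highlights encoded in the cube map alpha channel
environmentEffect.EnvironmentMapSpecular = new Vector3(0.8f, 0.8f, 0.6f);
[/code]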

In step 10, we replace the default effect of every mesh part of the model's meshes with the EnvironmentMapEffect, and draw each mesh with the replaced effect.

Rendering different parts of a character into textures using RenderTarget2D

Sometimes, you want to look closely at one part of a model or an image while still seeing the original view at the same time. This is where a render target helps. By the DirectX definition, a render target is a buffer into which the video card draws pixels for a scene being rendered by an effect class. Windows Phone 7 does not support a discrete video card; the device has an embedded graphics processing unit for rendering. The major application of render targets on Windows Phone 7 is to render the current 2D or 3D scene into a 2D texture. You can then manipulate that texture for special effects such as transitions, showing only part of an image, or something similar. In this recipe, you will discover how to render different parts of a model into textures and then draw them on the Windows Phone 7 screen.

Getting ready

The default render target is called the back buffer: the part of video memory that contains the next frame to be drawn. You can create other render targets with the RenderTarget2D class, reserving new regions of video memory for drawing. Most games render a lot of content to offscreen render targets besides the back buffer, then assemble the different graphical elements in stages, combining them in the back buffer to create the final product.

A render target has a width and height. The width and height of the back buffer are the final resolution of your game. An offscreen render target does not need to have the same width and height as the back buffer. Small parts of the final image can be rendered in small render targets, and copied to another render target later. To use a render target, create a RenderTarget2D object with the width, height, and other options you prefer. Then, call GraphicsDevice.SetRenderTarget to make your render target the current render target. From this point on, any Draw calls you make will draw into your render target. When you are finished with it, call GraphicsDevice.SetRenderTarget again with a new render target (or null for the back buffer). Because RenderTarget2D is a subclass of Texture2D, the finished render target can then be used like any other texture.
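The whole round trip described above can be sketched in a few lines. This is a minimal outline, not part of the recipe; DrawScene() is a hypothetical helper that stands in for your own drawing code:

[code]
// Reserve a 256 x 256 region of video memory for offscreen drawing
RenderTarget2D target = new RenderTarget2D(GraphicsDevice, 256, 256);
// Redirect subsequent Draw calls into the render target
GraphicsDevice.SetRenderTarget(target);
GraphicsDevice.Clear(Color.Transparent);
DrawScene();
// Restore the back buffer as the current render target
GraphicsDevice.SetRenderTarget(null);
// Because RenderTarget2D derives from Texture2D,
// the result can now be drawn like any other texture
spriteBatch.Begin();
spriteBatch.Draw(target, Vector2.Zero, Color.White);
spriteBatch.End();
[/code]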

How to do it…

In the following steps, you will learn how to use RenderTarget2D to render different parts of a designated model into textures and present them on the Windows Phone 7 screen:

  1. Create a Windows Phone Game project named RenderTargetCharacter in Visual Studio 2010 and change Game1.cs to RenderTargetCharacterGame.cs. Then, add the character model file character.FBX and the character texture file Blaze.tga to the content project.
  2. Declare the required variables in the RenderTargetCharacterGame class. Add the following lines of code to the class:
    [code]
    // Character model
    Model modelCharacter;
    // Character model world position
    Matrix worldCharacter = Matrix.Identity;
    // Camera
    Vector3 cameraPosition;
    Vector3 cameraTarget;
    Matrix view;
    Matrix projection;
    // RenderTarget2D objects for rendering the head,
    // left fist, and right foot of the character
    RenderTarget2D renderTarget2DHead;
    RenderTarget2D renderTarget2DLeftFist;
    RenderTarget2D renderTarget2DRightFoot;
    [/code]
  3. Initialize the camera and render targets. Insert the following code to the Initialize() method:
    [code]
    // Initialize the camera
    cameraPosition = new Vector3(0, 40, 350);
    cameraTarget = new Vector3(0, 0, 1000);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000.0f);
    // Initialize the RenderTarget2D objects with different sizes
    renderTarget2DHead = new RenderTarget2D(GraphicsDevice,
    196, 118, false, SurfaceFormat.Color,
    DepthFormat.Depth24, 0,
    RenderTargetUsage.DiscardContents);
    renderTarget2DLeftFist = new RenderTarget2D(GraphicsDevice,
    100, 60, false, SurfaceFormat.Color,
    DepthFormat.Depth24,
    0, RenderTargetUsage.DiscardContents);
    renderTarget2DRightFoot = new
    RenderTarget2D(GraphicsDevice, 100, 60, false,
    SurfaceFormat.Color, DepthFormat.Depth24, 0,
    RenderTargetUsage.DiscardContents);
    [/code]
  4. Load the character model and insert the following line of code to the LoadContent() method:
    [code]
    modelCharacter = Content.Load<Model>("character");
    [/code]
  5. Define the DrawModel() method:
    [code]
    // Draw the model on screen
    public void DrawModel(Model model, Matrix world, Matrix view,
        Matrix projection)
    {
        Matrix[] transforms = new Matrix[model.Bones.Count];
        model.CopyAbsoluteBoneTransformsTo(transforms);
        foreach (ModelMesh mesh in model.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.DiffuseColor = Color.White.ToVector3();
                effect.World =
                    transforms[mesh.ParentBone.Index] * world;
                effect.View = view;
                effect.Projection = projection;
            }
            mesh.Draw();
        }
    }
    [/code]
  6. Get the rendertargets of the right foot, left fist, and head of the character. Then draw the rendertarget textures onto the Windows Phone 7 screen. Insert the following code to the Draw() method:
    [code]
    // Get the rendertarget of character head
    GraphicsDevice.SetRenderTarget(renderTarget2DHead);
    GraphicsDevice.Clear(Color.Blue);
    cameraPosition = new Vector3(0, 110, 60);
    cameraTarget = new Vector3(0, 110, -1000);
    view = Matrix.CreateLookAt(cameraPosition, cameraTarget,
    Vector3.Up);
    DrawModel(modelCharacter, worldCharacter, view,
    projection);
    GraphicsDevice.SetRenderTarget(null);
    // Get the rendertarget of character left fist
    GraphicsDevice.SetRenderTarget(renderTarget2DLeftFist);
    GraphicsDevice.Clear(Color.Blue);
    cameraPosition = new Vector3(-35, -5, 40);
    cameraTarget = new Vector3(0, 5, -1000);
    view = Matrix.CreateLookAt(cameraPosition, cameraTarget,
    Vector3.Up);
    DrawModel(modelCharacter, worldCharacter, view,
    projection);
    GraphicsDevice.SetRenderTarget(null);
    // Get the rendertarget of character right foot
    GraphicsDevice.SetRenderTarget(renderTarget2DRightFoot);
    GraphicsDevice.Clear(Color.Blue);
    cameraPosition = new Vector3(20, -120, 40);
    cameraTarget = new Vector3(0, -120, -1000);
    view = Matrix.CreateLookAt(cameraPosition, cameraTarget,
    Vector3.Up);
    DrawModel(modelCharacter, worldCharacter, view,
    projection);
    GraphicsDevice.SetRenderTarget(null);
    // Draw the character model
    cameraPosition = new Vector3(0, 40, 350);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    GraphicsDevice.Clear(Color.CornflowerBlue);
    DrawModel(modelCharacter, worldCharacter, view,
    projection);
    // Draw the generated rendertargets of different parts of
    // character model in 2D
    spriteBatch.Begin();
    spriteBatch.Draw(renderTarget2DHead, new Vector2(500, 0),
    Color.White);
    spriteBatch.Draw(renderTarget2DLeftFist, new Vector2(200,
    220),
    Color.White);
    spriteBatch.Draw(renderTarget2DRightFoot, new Vector2(500,
    400),
    Color.White);
    spriteBatch.End();
    [/code]
  7. Build and run the application. The application will run as shown in the following screenshot:
    RenderTarget2D

How it works…

In step 2, modelCharacter holds the character's 3D model and worldCharacter represents the character's world transformation matrix. The following four variables, cameraPosition, cameraTarget, view, and projection, are used to initialize the camera. Here, cameraTarget has the same Y value as cameraPosition and a Z value far behind the center, because we want the camera's look-at direction to be parallel to the XZ plane. The last three RenderTarget2D objects, renderTarget2DHead, renderTarget2DLeftFist, and renderTarget2DRightFoot, are responsible for rendering the different parts of the character from the real-time 3D view into 2D textures.

In step 3, we initialize the camera and the three render targets. The initialization code for the camera is nothing new. RenderTarget2D has three overloaded constructors, of which the most complex is the third; if you understand it, the other two are easy. This constructor looks similar to the following code:

[code]
public RenderTarget2D(
    GraphicsDevice graphicsDevice,
    int width,
    int height,
    bool mipMap,
    SurfaceFormat preferredFormat,
    DepthFormat preferredDepthFormat,
    int preferredMultiSampleCount,
    RenderTargetUsage usage
)
[/code]

Let’s have a look at what all these parameters stand for:

  • graphicsDevice: This is the graphic device associated with the render target resource.
  • width: This is the width of the render target, in pixels. You can use graphicsDevice.PresentationParameters.BackBufferWidth to get the current screen width. Because RenderTarget2D is a subclass of Texture2D, the width and height of a RenderTarget2D object define the size of the final render target texture. Note that the maximum Texture2D size on Windows Phone 7 is 2048, so the width of a RenderTarget2D cannot exceed that limit.
  • height: This is the height of the render target, in pixels. You can use graphicsDevice.PresentationParameters.BackBufferHeight to get the current screen height. The remarks for the width parameter apply here as well.
  • mipMap: Set this to true to generate a full mipmap chain for the render target; otherwise, false.
  • preferredFormat: This is the preferred format for the surface data. This is the format preferred by the application, which may or may not be available from the hardware. In the XNA Framework, all two-dimensional (2D) images are represented by a range of memory called a surface. Within a surface, each element holds a color value representing a small section of the image, called a pixel. An image’s detail level is defined by the number of pixels needed to represent the image and the number of bits needed for the image’s color spectrum. For example, an image that is 800 pixels wide and 600 pixels high with 32 bits of color for each pixel (written as 800 x 600 x 32) is more detailed than an image that is 640 pixels wide and 480 pixels tall with 16 bits of color for each pixel (written as 640 x 480 x 16). Likewise, the more detailed image requires a larger surface to store the data. For an 800 x 600 x 32 image, the surface’s array dimensions are 800 x 600, and each element holds a 32-bit value to represent its color.

    All formats are listed from left to right, most-significant bit to least-significant bit. For example, ARGB formats are ordered from the most-significant bit channel A (alpha), to the least-significant bit channel B (blue). When traversing surface data, the data is stored in memory from least-significant bit to most-significant bit, which means that the channel order in memory is from least-significant bit (blue) to most-significant bit (alpha).

    Formats that contain undefined channels (Rg32, Alpha8, and so on) default those channels to 1; the only exception is Alpha8, whose three color channels are initialized to 0. Here, we use the SurfaceFormat.Color option: an unsigned 32-bit ARGB pixel format with alpha, using 8 bits per channel.

  • preferredDepthFormat: This selects the format of the depth buffer, which contains depth data and possibly stencil data. You can control a depth buffer using a state object. The available depth formats are Depth16, Depth24, and Depth24Stencil8.
  • usage: This is a RenderTargetUsage value. It determines what happens to a render target's data once a new target is set. This enumeration has three values: PreserveContents, PlatformContents, and DiscardContents. The default, DiscardContents, means that whenever a render target is set onto the device, the contents of the previous one are destroyed first. With the PreserveContents option, the data associated with a render target is maintained when a new render target is set. This option can impact performance greatly, because the data must be stored and copied back to the render target when you use it again. PlatformContents either clears or keeps the data, depending on the current platform: on Xbox 360 and Windows Phone 7, the render target discards its contents; on PC, it discards the contents if multisampling is enabled, and preserves them if not.
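For comparison with the DiscardContents targets created in step 3, a full-screen render target that keeps its contents across target switches could be constructed as follows. This is a hypothetical variation, not part of the recipe, trading performance for persistence:

[code]
RenderTarget2D persistentTarget = new RenderTarget2D(
    GraphicsDevice,
    GraphicsDevice.PresentationParameters.BackBufferWidth,
    GraphicsDevice.PresentationParameters.BackBufferHeight,
    false,                           // no mipmap chain
    SurfaceFormat.Color,             // 32-bit ARGB, 8 bits per channel
    DepthFormat.Depth24,             // 24-bit depth buffer, no stencil
    0,                               // no multisampling
    RenderTargetUsage.PreserveContents);
[/code]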

In step 6, the first part of the Draw() method gets the render target texture for the head of the character; GraphicsDevice.SetRenderTarget() sets a new render target on the device. Because the application runs on Windows Phone 7 and RenderTargetUsage is set to DiscardContents, every time a new render target is assigned to the device, the contents of the previous one are destroyed. According to the XNA 4.0 SDK, the method has some restrictions when called. They are as follows:

  • The multi-sample type must be the same for the render target and the depth stencil surface
  • The formats must be compatible for the render target and the depth stencil surface
  • The size of the depth stencil surface must be greater than, or equal to, the size of the render target

These restrictions are validated only when using the debug runtime, when any of the GraphicsDevice drawing methods are called. The following lines, up to GraphicsDevice.SetRenderTarget(null), adjust the camera position and look-at target for rendering the head of the character. This block of code sets up the view that renders the model from 3D into a 2D render target texture, which will later be displayed at a designated place on the Windows Phone screen. Calling GraphicsDevice.SetRenderTarget(null) resets the device's current render target so that the next one can be set. The second and third parts of the Draw() method do the same for renderTarget2DLeftFist and renderTarget2DRightFoot. The fourth part draws the actual character 3D model. After that, we present all of the generated render targets on the Windows Phone 7 screen using the 2D drawing methods.

Creating a screen transition effect using RenderTarget2D

Do you remember the scene transitions in Star Wars? A scene transition is a very common technique for smoothly changing a movie from the current scene to the next. Frequent transition patterns include swiping, rotating, fading, checkerboard scattering, and so on. With proper transition effects, the audience knows the plot is moving on when the stage changes. Besides movies, transition effects also have a relevant application in video games, especially 2D games, where every game state change can trigger a transition effect. In this recipe, you will learn how to create a typical transition effect using RenderTarget2D for your Windows Phone 7 game.

How to do it…

The following steps will draw a spinning squares transition effect using the RenderTarget2D technique:

  1. Create a Windows Phone Game named RenderTargetTransitionEffect and change Game1.cs to RenderTargetTransitionEffectGame.cs. Then, add Image1.png and Image2.png to the content project.
  2. Declare the indispensable variables. Insert the following code to the RenderTargetTransitionEffectGame class:
    [code]
    // The forefront and background images
    Texture2D textureForeFront;
    Texture2D textureBackground;
    // The width of each divided subimage
    int xfactor = 800 / 8;
    // The height of each divided subimage
    int yfactor = 480 / 8;
    // The render target for the transition effect
    RenderTarget2D transitionRenderTarget;
    // The current transparency of the forefront image
    float alpha = 1;
    // The time counter
    float timer = 0;
    // The transition speed constant
    const float TransitionSpeed = 1.5f;
    [/code]
  3. Load the forefront and background images, and initialize the render target for the jumping sprites transition effect. Add the following code to the LoadContent() method:
    [code]
    // Load the forefront and the background image
    textureForeFront = Content.Load<Texture2D>("Image1");
    textureBackground = Content.Load<Texture2D>("Image2");
    // Initialize the render target
    transitionRenderTarget = new RenderTarget2D(GraphicsDevice,
    800, 480, false, SurfaceFormat.Color,
    DepthFormat.Depth24, 0,
    RenderTargetUsage.DiscardContents);
    [/code]
  4. Define the core method DrawJumpingSpritesTransition() for the jumping sprites transition effect. Paste the following lines into the RenderTargetTransitionEffectGame class:
    [code]
    void DrawJumpingSpritesTransition(float delta, float alpha,
        RenderTarget2D renderTarget)
    {
        // Instantiate a new random object for generating random
        // values to change the rotation, scale, and position
        // of each divided subimage
        Random random = new Random();
        // Divide the image into the designated number of pieces,
        // here 8 * 8 = 64
        for (int x = 0; x < 8; x++)
        {
            for (int y = 0; y < 8; y++)
            {
                // Define the size of each piece
                Rectangle rect = new Rectangle(xfactor * x,
                    yfactor * y, xfactor, yfactor);
                // Define the origin for rotating and scaling
                // the current subimage around its center
                Vector2 origin =
                    new Vector2(rect.Width, rect.Height) / 2;
                float rotation =
                    (float)(random.NextDouble() - 0.5f) *
                    delta * 20;
                float scale = 1 +
                    (float)(random.NextDouble() - 0.5f) *
                    delta * 20;
                // Randomly change the position of the current
                // divided subimage
                Vector2 pos =
                    new Vector2(rect.Center.X, rect.Center.Y);
                pos.X += (float)(random.NextDouble());
                pos.Y += (float)(random.NextDouble());
                // Draw the current subimage
                spriteBatch.Draw(renderTarget, pos, rect,
                    Color.White * alpha, rotation, origin,
                    scale, SpriteEffects.None, 0);
            }
        }
    }
    [/code]
  5. Get the render target of the forefront image and draw the jumping sprites transition effect by calling the DrawJumpingSpritesTransition() method. Insert the following code to the Draw() method:
    [code]
    // Render the forefront image to render target texture
    GraphicsDevice.SetRenderTarget(transitionRenderTarget);
    spriteBatch.Begin();
    spriteBatch.Draw(textureForeFront, new Vector2(0, 0),
    Color.White);
    spriteBatch.End();
    GraphicsDevice.SetRenderTarget(null);
    // Get the total elapsed game time
    timer += (float)(gameTime.ElapsedGameTime.TotalSeconds);
    // Compute the delta value in every frame
    float delta = timer / TransitionSpeed * 0.01f;
    // Minus the alpha to change the image from opaque to
    //transparent using the delta value
    alpha -= delta;
    // Draw the jumping sprites transition effect
    spriteBatch.Begin();
    spriteBatch.Draw(textureBackground, Vector2.Zero,
    Color.White);
    DrawJumpingSpritesTransition(delta, alpha,
    transitionRenderTarget);
    spriteBatch.End();
    [/code]
  6. Build and run the application. It should run similar to the following screenshots:
    transition effect using RenderTarget2D

How it works…

In step 2, textureForeFront and textureBackground hold the forefront and background images prepared for the jumping sprites transition effect. xfactor and yfactor define the size of each divided subimage used in the transition. transitionRenderTarget is the RenderTarget2D object into which the forefront image is rendered for the transition effect. The alpha variable controls the transparency of each subimage, and timer accumulates the total elapsed game time. TransitionSpeed is a constant that defines the transition speed.

In step 4, we define the core method DrawJumpingSpritesTransition() for drawing the jumping sprites effect. First, we instantiate a Random object; the values it generates are used to randomly change the rotation, scale, and position of the divided subimages in the transition effect. In the nested loop, we iterate over every subimage, row by row and column by column. For each subimage, we create a Rectangle object with the predefined size. Then, we move the origin point to the image center, which makes the image rotate and scale in place. After that, we randomly change the rotation, scale, and position values. Finally, we draw the current subimage on the Windows Phone 7 screen.

In step 5, we draw the forefront image first, because we want the transition effect applied to it. Using the render target, we transform the current view into a render target texture by putting the drawing code between the GraphicsDevice.SetRenderTarget(transitionRenderTarget) and GraphicsDevice.SetRenderTarget(null) calls. Next, we use the accumulated elapsed game time to compute the delta value that is subtracted from the alpha value. The alpha is used in the SpriteBatch.Draw() method to make the subimages of the jumping sprites change from opaque to transparent. The last part of the Draw() method draws the background image first and then the transition effect. This drawing order is important: the texture with the transition effect must be drawn after the images without it. Otherwise, you will not see the effect you want.
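The fade bookkeeping in step 5 can be isolated as follows. This is a sketch using the fields from step 2; the MathHelper.Clamp call is an addition, not part of the recipe, included to keep alpha from going negative:

[code]
// Accumulate elapsed game time
timer += (float)gameTime.ElapsedGameTime.TotalSeconds;
// delta grows with the accumulated time, so the fade accelerates
float delta = timer / TransitionSpeed * 0.01f;
// Fade the forefront subimages from opaque (1) toward transparent (0)
alpha = MathHelper.Clamp(alpha - delta, 0f, 1f);
[/code]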