Using dual texture effects
Dual texture mapping is useful when you want to map two textures onto a model. The Windows Phone 7 XNA built-in DualTextureEffect samples the pixel color from two texture images, which is why it is called dual texture. Each texture used in the effect has its own set of texture coordinates, so the two textures can be mapped, tiled, and rotated independently. The two textures are combined using the pattern:
[code]
finalTexture.color = texture1.Color * texture2.Color;
finalTexture.alpha = texture1.Alpha * texture2.Alpha;
[/code]
The color and alpha of the final texture come from separate computations. The most common use of DualTextureEffect is to apply a lightmap to a model. In computer graphics, computing lighting and shadows in real time is expensive. A lightmap is a texture that stores precomputed lighting for the surfaces of a 3D model; because it is computed once and stored separately, it saves the cost of lighting computation at runtime. Sometimes you might want an effect such as ambient occlusion, which is costly to evaluate in real time. Instead, the lightmap can be baked with that effect and then mapped onto the model or scene for a realistic result. Since the lightmap is precomputed in 3D modeling software (you will learn how to do this in 3DS MAX), you can use even the most complicated lighting effects (shadows, ray-tracing, radiosity, and so on) in Windows Phone 7. The dual texture effect is a good choice when you just want the game scene to have shadows and lighting. In this recipe, you will learn how to create a lightmap and apply it to your game model using DualTextureEffect.
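To make the pattern concrete, here is a minimal sketch of the modulate computation for a single pair of samples, using XNA's Vector4 to hold RGBA values in the 0..1 range (the sample values are made up for illustration):
[code]
// Hypothetical samples from the two textures at the same pixel
Vector4 texture1Sample = new Vector4(0.8f, 0.6f, 0.4f, 1.0f);
Vector4 lightmapSample = new Vector4(0.5f, 0.5f, 0.5f, 1.0f); // dim lighting
// Component-wise multiply: color * color and alpha * alpha
Vector4 finalSample = texture1Sample * lightmapSample;
// finalSample is (0.4, 0.3, 0.2, 1.0): the lightmap darkens the base texture
[/code]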
How to do it…
The following steps show you the process for creating the lightmap in 3DS MAX and how to use the lightmap in your Windows Phone 7 game using DualTextureEffect:
- Create the Sphere lightmap in 3DS MAX 2011. Open 3DS MAX 2011, in the Create panel, click the Geometry button, then create a sphere by choosing the Sphere push button, as shown in the following screenshot:
- Add the texture to the Material Compact Editor and apply the material to the sphere. Click the following menu items of 3DS MAX 2011: Rendering | Material Editor | Compact Material Editor. Choose the first material ball and apply the texture you want to the material ball. Here, we use the tile1.png, a checker image, which you can find in the Content directory of the example bundle file. The applied material ball looks similar to the following screenshot:
- Apply the Target Direct Light to the sphere. In the Create panel—the same panel for creating sphere—click the Lights button and choose the Target Direct option. Then drag your mouse over the sphere in the Perspective viewport and adjust the Hotspot/Beam to let the light encompass the sphere, as shown in the following screenshot:
- Render the Lightmap. When the light is set as you want, the next step is to create the lightmap. After you click the sphere that you plan to build the lightmap for, click the following menu items in 3DS MAX: Rendering | Render To Texture. In the Output panel of the pop-up window, click the Add button. Another pop-up window will show up; choose the LightingMap option, and then click Add Elements, as shown in the following screenshot:
- After that, change the setting of the lightmap:
- Change the Target Map Slot to Self-Illumination in the Output panel.
- Change the Baked Material Settings to Output Into Source in the Baked Material panel.
- Change the Channel to 2 in the Mapping Coordinates panel.
- Finally, click the Render button. The generated lightmap will look similar to the following screenshot:
By default, the lightmap texture type is .tga, and the maps are placed in the images subfolder of the folder where you installed 3DS MAX. The new textures are flat. In other words, they are organized according to groups of object faces. In this example, the lightmap name is Sphere001LightingMap.tga.
- Open the Material Compact Editor again by clicking the menu items Rendering | Material Editor | Compact Material Editor. You will find that the first material ball has a mixed texture combining the original texture and the lightmap. You can also see that Self-Illumination is selected and its value is Sphere001LightingMap.tga. This means the lightmap for the sphere is applied successfully.
- Select the sphere and export to an FBX model file named DualTextureBall.FBX, which will be used in our Windows Phone 7 game.
- From this step, we will render the lightmap of the sphere in our Windows Phone 7 XNA game using the new built-in effect DualTextureEffect. Now, create a Windows Phone Game project named DualTextureEffectBall in Visual Studio 2010 and change Game1.cs to DualTextureEffectBallGame.cs. Then, add the texture file tile1.png, the lightmap file Sphere001LightingMap.tga, and the model DualTextureBall.FBX to the content project.
- Declare the indispensable variables in the DualTextureEffectBallGame class. Add the following code to the class field:
[code]
// Ball Model
Model modelBall;
// Dual Texture Effect
DualTextureEffect dualTextureEffect;
// Camera
Vector3 cameraPosition;
Matrix view;
Matrix projection;
[/code] - Initialize the camera. Insert the following code to the Initialize() method:
[code]
// Initialize the camera
cameraPosition = new Vector3(0, 50, 200);
view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
Vector3.Up);
projection = Matrix.CreatePerspectiveFieldOfView(
MathHelper.PiOver4,
GraphicsDevice.Viewport.AspectRatio,
1.0f, 1000.0f);
[/code] - Load the ball model and initialize the DualTextureEffect. Paste the following code into the LoadContent() method:
[code]
// Load the ball model
modelBall = Content.Load<Model>("DualTextureBall");
// Initialize the DualTextureEffect
dualTextureEffect = new DualTextureEffect(GraphicsDevice);
dualTextureEffect.Projection = projection;
dualTextureEffect.View = view;
// Set the diffuse color
dualTextureEffect.DiffuseColor = Color.Gray.ToVector3();
// Set the first and second texture
dualTextureEffect.Texture =
Content.Load<Texture2D>("tile1");
dualTextureEffect.Texture2 =
Content.Load<Texture2D>("Sphere001LightingMap");
[/code] - Define the DrawModel() method in the class:
[code]
// Draw model
private void DrawModel(Model m, Matrix world,
DualTextureEffect effect)
{
foreach (ModelMesh mesh in m.Meshes)
{
// Iterate every part of current mesh
foreach (ModelMeshPart meshPart in mesh.MeshParts)
{
// Change the original effect to the designated
// effect
meshPart.Effect = effect;
// Update the world matrix
effect.World *= world;
}
mesh.Draw();
}
}
[/code] - Draw the ball model using DualTextureEffect on the Windows Phone 7 screen. Add the following lines to the Draw() method:
[code]
// Rotate the ball model around axis Y.
float timer =
(float)gameTime.ElapsedGameTime.TotalSeconds;
DrawModel(modelBall, Matrix.CreateRotationY(timer),
dualTextureEffect);
[/code] - Build and run the example. It should run as shown in the following screenshot:
- If you comment out the following statement in LoadContent() to disable the lightmap texture, you will see the difference when the lightmap is on or off:
[code]
dualTextureEffect.Texture2 =
Content.Load<Texture2D>("Sphere001LightingMap");
[/code] - Run the application without the lightmap. The model will appear pure black, as shown in the following screenshot:
How it works…
Steps 1–6 are to create the sphere and its lightmap in 3DS MAX 2011.
In step 8, the modelBall is responsible for loading and holding the ball model. The dualTextureEffect is the object of XNA 4.0 built-in effect DualTextureEffect for rendering the ball model with its original texture and the lightmap. The following three variables cameraPosition, view, and projection represent the camera.
In step 10, the first line loads the ball model; the rest of the lines initialize the DualTextureEffect. Notice that we use tile1.png for the first, original texture, and Sphere001LightingMap.tga for the lightmap as the second texture.
In step 11, the DrawModel() method differs from the usual definition: here we need to replace the original effect of each mesh with the DualTextureEffect. As we iterate over the mesh parts of every mesh of the model, we assign the effect to meshPart.Effect, applying the DualTextureEffect to each mesh part. Note that effect.World is multiplied by the per-frame rotation passed in from Draw(), so the rotation accumulates over time and the ball keeps spinning.
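If your model has a bone hierarchy (the recipe's sphere does not need this), you may also want to fold the absolute bone transforms into the world matrix, as the render target recipe later in this chapter does with BasicEffect. A sketch, assuming the same dualTextureEffect field:
[code]
private void DrawModelWithBones(Model m, Matrix world,
    DualTextureEffect effect)
{
    // Compute each mesh's absolute transform from the bone hierarchy
    Matrix[] transforms = new Matrix[m.Bones.Count];
    m.CopyAbsoluteBoneTransformsTo(transforms);
    foreach (ModelMesh mesh in m.Meshes)
    {
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            meshPart.Effect = effect;
            // Combine the bone transform with the world matrix
            effect.World = transforms[mesh.ParentBone.Index] * world;
        }
        mesh.Draw();
    }
}
[/code]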
Using environment map effects
In computer games, environment mapping is an efficient image-based lighting technique that makes a reflective surface appear to reflect the distant environment surrounding the rendered object. In Need for Speed, produced by Electronic Arts, if you enable the special visual effect option while playing, you will see the car body reflect the scene around it: trees, clouds, mountains, or buildings. The results are amazing and attractive. This is environment mapping, and it makes games more realistic. Methods for storing the surrounding environment include sphere mapping, cube mapping, pyramid mapping, and octahedron mapping. In XNA 4.0, the framework uses cube mapping, in which the environment is projected onto the six faces of a cube and stored as six square textures (or unfolded into six square regions of a single texture). In this recipe, you will learn how to make a cube map using the DirectX Texture Tool, and apply the cube map to a model using EnvironmentMapEffect.
Getting ready
A cube map is used in real-time engines to fake reflections and refractions. It is much faster than ray-tracing because it is just six textures mapped onto a cube, one for each face.
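When the effect later samples the cube map with a direction vector, the face it lands on is determined by the vector's dominant axis. The following sketch (a simplification of what the hardware does, with a hypothetical helper name) shows which of the six faces a given direction selects:
[code]
// Pick the cube map face that a direction vector points toward:
// the axis with the largest absolute value wins.
static string PickCubeFace(Vector3 dir)
{
    float ax = Math.Abs(dir.X);
    float ay = Math.Abs(dir.Y);
    float az = Math.Abs(dir.Z);
    if (ax >= ay && ax >= az)
        return dir.X >= 0 ? "Positive X" : "Negative X";
    if (ay >= az)
        return dir.Y >= 0 ? "Positive Y" : "Negative Y";
    return dir.Z >= 0 ? "Positive Z" : "Negative Z";
}
// PickCubeFace(Vector3.Up) returns "Positive Y"
[/code]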
For creating the cube map for the environment map effect, you should use the DirectX Texture Tool in the DirectX SDK Utilities folder. The latest version of the Microsoft DirectX SDK can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=3021d52b-514e-41d3-ad02-438a3ba730ba.
How to do it…
The following steps lead you to create an application using the Environment Mapping effect:
- From this step, we will create the cube map in the DirectX Texture Tool. Run the application and create a new cube map by clicking the following menu items: File | New Texture. A window will pop up; in this window, choose Cubemap Texture for Texture Type, change the dimensions to 512 x 512 in the Dimensions panel, and set the Surface/Volume Format to Four CC 4-bit: DXT1. The final settings should look similar to the following screenshot:
- Set the texture of every face of the cube. Choose a face for setting the texture by clicking the following menu items: View | Cube Map Face | Positive X, as shown in the following screenshot:
- Then, apply the image for the Positive X face by clicking: File | Open Onto This Cubemap Face, as shown in the following screenshot:
- When you click the item, a pop-up dialog will ask you to choose a proper image for this face. In this example, the Positive X face will look similar to the following screenshot:
- It is similar for the other five faces: Negative X, Positive Y, Negative Y, Positive Z, and Negative Z. When all of the cube faces are set appropriately, save the cube map as SkyCubeMap.dds. The cube map will look similar to the following figure:
- From this step, we will start to render the ball model using the XNA 4.0 built-in effect called EnvironmentMapEffect. Create a Windows Phone Game project named EnvironmentMapEffectBall in Visual Studio 2010 and change Game1.cs to EnvironmentMapEffectBallGame.cs. Then, add the ball model file ball.FBX, the ball texture file silver.jpg, and the cube map generated by the DirectX Texture Tool, SkyCubeMap.dds, to the content project.
- Declare the necessary variables of the EnvironmentMapEffectBallGame class. Add the following lines to the class:
[code]
// Ball model
Model modelBall;
// Environment Map Effect
EnvironmentMapEffect environmentEffect;
// Cube map texture
TextureCube textureCube;
// Ball texture
Texture2D texture;
// Camera
Vector3 cameraPosition;
Matrix view;
Matrix projection;
[/code] - Initialize the camera. Insert the following lines to the Initialize() method:
[code]
// Initialize the camera
cameraPosition = new Vector3(2, 3, 32);
view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
Vector3.Up);
projection = Matrix.CreatePerspectiveFieldOfView(
MathHelper.PiOver4,
GraphicsDevice.Viewport.AspectRatio,
1.0f, 100.0f);
[/code] - Load the ball model, ball texture, and the sky cube map. Then initialize the environment map effect and set its properties. Paste the following code in the LoadContent() method:
[code]
// Load the ball model
modelBall = Content.Load<Model>("ball");
// Load the sky cube map
textureCube = Content.Load<TextureCube>("SkyCubeMap");
// Load the ball texture
texture = Content.Load<Texture2D>("Silver");
// Initialize the EnvironmentMapEffect
environmentEffect = new EnvironmentMapEffect(GraphicsDevice);
environmentEffect.Projection = projection;
environmentEffect.View = view;
// Set the initial texture
environmentEffect.Texture = texture;
// Set the environment map
environmentEffect.EnvironmentMap = textureCube;
environmentEffect.EnableDefaultLighting();
// Set the environment effect factors
environmentEffect.EnvironmentMapAmount = 1.0f;
environmentEffect.FresnelFactor = 1.0f;
environmentEffect.EnvironmentMapSpecular = Vector3.Zero;
[/code] - Define the DrawModel() of the class:
[code]
// Draw Model
private void DrawModel(Model m, Matrix world,
EnvironmentMapEffect environmentMapEffect)
{
foreach (ModelMesh mesh in m.Meshes)
{
foreach (ModelMeshPart meshPart in mesh.MeshParts)
{
meshPart.Effect = environmentMapEffect;
environmentMapEffect.World = world;
}
mesh.Draw();
}
}
[/code] - Draw and rotate the ball with EnvironmentMapEffect on the Windows Phone 7 screen. Insert the following code to the Draw() method:
[code]
// Draw and rotate the ball model
float time = (float)gameTime.TotalGameTime.TotalSeconds;
DrawModel(modelBall,
Matrix.CreateRotationY(time * 0.3f) *
Matrix.CreateRotationX(time),
environmentEffect);
[/code] - Build and run the application. It should run similar to the following screenshot:
How it works…
Steps 1 and 2 use the DirectX Texture Tool to generate a sky cube map for the XNA 4.0 built-in effect EnvironmentMapEffect.
In step 4, modelBall holds the ball model; environmentEffect is the EnvironmentMapEffect object used to render it; textureCube is the cube map texture that the effect receives through its EnvironmentMap property; texture represents the ball texture; and the last three variables, cameraPosition, view, and projection, are responsible for initializing and controlling the camera.
In step 6, the first three lines load the required content: the ball model, its texture, and the sky cube map. Then we instantiate the EnvironmentMapEffect object and set its properties. environmentEffect.Projection and environmentEffect.View are for the camera; environmentEffect.Texture maps the ball texture onto the ball model; environmentEffect.EnvironmentMap is the environment map from which the ball model gets the reflected color that is mixed with its original texture.
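Conceptually, the reflected color comes from sampling the cube map along the reflection of the eye vector about the surface normal. A sketch of that computation, reusing the cameraPosition field declared in step 4 (the surface position and normal are made up; this is not the effect's actual shader code):
[code]
// Direction from the camera to a point on the ball's surface
Vector3 surfacePosition = new Vector3(0, 0, 10);
Vector3 eyeToSurface =
    Vector3.Normalize(surfacePosition - cameraPosition);
// Surface normal at that point (unit length)
Vector3 normal = Vector3.UnitZ;
// Reflection vector used to sample the cube map
Vector3 reflection = Vector3.Reflect(eyeToSurface, normal);
[/code]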
The EnvironmentMapAmount is a float that controls how much of the environment map shows up, that is, how much of the cube map texture is blended over the model's base texture. Values range from 0 to 1, and the default value is 1.
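The blend that EnvironmentMapAmount performs can be pictured as a linear interpolation between the base texture color and the cube map color; a conceptual sketch with made-up colors (not the shader's exact code):
[code]
Vector3 baseColor = new Vector3(0.7f, 0.7f, 0.7f); // from Texture
Vector3 envColor = new Vector3(0.2f, 0.4f, 0.9f);  // from EnvironmentMap
float amount = 1.0f; // EnvironmentMapAmount, default 1
// amount = 0 shows only the base texture;
// amount = 1 blends the full environment map over it
Vector3 blended = Vector3.Lerp(baseColor, envColor, amount);
[/code]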
The FresnelFactor controls how the viewing angle affects the visibility of the environment map. Use a higher value to make the environment map visible mainly around the edges, where the surface is seen at a grazing angle; use a lower value to make it visible everywhere. Fresnel lighting only affects the environment map color (RGB values); alpha is not affected. The value ranges from 0.0 to 1.0: 0.0 disables Fresnel lighting, and 1.0 is the default value.
The EnvironmentMapSpecular implements cheap specular lighting, by encoding one or more specular highlight patterns into the environment map alpha channel, then setting the EnvironmentMapSpecular to the desired specular light color.
In step 7, we replace the default effect of every mesh part of the model meshes with the EnvironmentMapEffect, and draw the mesh with replaced effect.
Rendering different parts of a character into textures using RenderTarget2D
Sometimes, you want to see a specific part of a model or an image while still seeing the original view at the same time. This is where a render target helps. By the DirectX definition, a render target is a buffer where the video card draws pixels for a scene that is being rendered by an effect class. Windows Phone 7 does not support a discrete video card; the device has an embedded processing unit for graphics rendering. The major application of render targets in Windows Phone 7 is to render the current 2D or 3D scene into a 2D texture. You can then manipulate that texture for special effects such as transitions, partial reveals, and the like. In this recipe, you will discover how to render different parts of a model into textures and then draw them on the Windows Phone 7 screen.
Getting ready
The default render target is called the back buffer: the part of video memory that contains the next frame to be drawn. You can create other render targets with the RenderTarget2D class, reserving new regions of video memory for drawing. Most games render a lot of content to other (offscreen) render targets besides the back buffer, then assemble the different graphical elements in stages, combining them to create the final product in the back buffer.
A render target has a width and height. The width and height of the back buffer are the final resolution of your game. An offscreen render target does not need to have the same width and height as the back buffer: small parts of the final image can be rendered into small render targets and copied to another render target later. To use a render target, create a RenderTarget2D object with the width, height, and other options you prefer, then call GraphicsDevice.SetRenderTarget to make it the current render target. From this point on, any Draw calls you make will draw into your render target. Because RenderTarget2D is a subclass of Texture2D, the result can later be used like any other texture. When you are finished with the render target, call GraphicsDevice.SetRenderTarget with a new render target (or null for the back buffer).
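The steps above reduce to a small pattern. A minimal sketch, assuming a spriteBatch field and a hypothetical 256 x 256 offscreen target:
[code]
// Reserve a 256 x 256 region of video memory for offscreen drawing
RenderTarget2D target = new RenderTarget2D(GraphicsDevice, 256, 256);
// Redirect all Draw calls into the offscreen target
GraphicsDevice.SetRenderTarget(target);
GraphicsDevice.Clear(Color.Transparent);
// ... draw the offscreen scene here ...
// Switch back to the back buffer
GraphicsDevice.SetRenderTarget(null);
// Because RenderTarget2D is a Texture2D, draw it like any texture
spriteBatch.Begin();
spriteBatch.Draw(target, Vector2.Zero, Color.White);
spriteBatch.End();
[/code]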
How to do it…
In the following steps, you will learn how to use RenderTarget2D to render different parts of a designated model into textures and present them on the Windows Phone 7 screen:
- Create a Windows Phone Game project named RenderTargetCharacter in Visual Studio 2010 and change Game1.cs to RenderTargetCharacterGame.cs. Then, add the character model file character.FBX and the character texture file Blaze.tga to the content project.
- Declare the required variables in the RenderTargetCharacterGame class field. Add the following lines of code to the class field:
[code]
// Character model
Model modelCharacter;
// Character model world position
Matrix worldCharacter = Matrix.Identity;
// Camera
Vector3 cameraPosition;
Vector3 cameraTarget;
Matrix view;
Matrix projection;
// RenderTarget2D objects for rendering the head, left
// fist, and right foot of the character
RenderTarget2D renderTarget2DHead;
RenderTarget2D renderTarget2DLeftFist;
RenderTarget2D renderTarget2DRightFoot;
[/code] - Initialize the camera and render targets. Insert the following code to the Initialize() method:
[code]
// Initialize the camera
cameraPosition = new Vector3(0, 40, 350);
cameraTarget = new Vector3(0, 0, 1000);
view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
Vector3.Up);
projection = Matrix.CreatePerspectiveFieldOfView(
MathHelper.PiOver4,
GraphicsDevice.Viewport.AspectRatio,
0.1f, 1000.0f);
// Initialize the RenderTarget2D objects with different sizes
renderTarget2DHead = new RenderTarget2D(GraphicsDevice,
196, 118, false, SurfaceFormat.Color,
DepthFormat.Depth24, 0,
RenderTargetUsage.DiscardContents);
renderTarget2DLeftFist = new RenderTarget2D(GraphicsDevice,
100, 60, false, SurfaceFormat.Color,
DepthFormat.Depth24,
0, RenderTargetUsage.DiscardContents);
renderTarget2DRightFoot = new
RenderTarget2D(GraphicsDevice, 100, 60, false,
SurfaceFormat.Color, DepthFormat.Depth24, 0,
RenderTargetUsage.DiscardContents);
[/code] - Load the character model and insert the following line of code to the LoadContent() method:
[code]
modelCharacter = Content.Load<Model>("Character");
[/code] - Define the DrawModel() method:
[code]
// Draw the model on screen
public void DrawModel(Model model, Matrix world, Matrix view,
Matrix projection)
{
Matrix[] transforms = new Matrix[model.Bones.Count];
model.CopyAbsoluteBoneTransformsTo(transforms);
foreach (ModelMesh mesh in model.Meshes)
{
foreach (BasicEffect effect in mesh.Effects)
{
effect.EnableDefaultLighting();
effect.DiffuseColor = Color.White.ToVector3();
effect.World =
transforms[mesh.ParentBone.Index] * world;
effect.View = view;
effect.Projection = projection;
}
mesh.Draw();
}
}
[/code] - Get the rendertargets of the right foot, left fist, and head of the character. Then draw the rendertarget textures onto the Windows Phone 7 screen. Insert the following code to the Draw() method:
[code]
// Get the rendertarget of character head
GraphicsDevice.SetRenderTarget(renderTarget2DHead);
GraphicsDevice.Clear(Color.Blue);
cameraPosition = new Vector3(0, 110, 60);
cameraTarget = new Vector3(0, 110, -1000);
view = Matrix.CreateLookAt(cameraPosition, cameraTarget,
Vector3.Up);
DrawModel(modelCharacter, worldCharacter, view,
projection);
GraphicsDevice.SetRenderTarget(null);
// Get the rendertarget of character left fist
GraphicsDevice.SetRenderTarget(renderTarget2DLeftFist);
GraphicsDevice.Clear(Color.Blue);
cameraPosition = new Vector3(-35, -5, 40);
cameraTarget = new Vector3(0, 5, -1000);
view = Matrix.CreateLookAt(cameraPosition, cameraTarget,
Vector3.Up);
DrawModel(modelCharacter, worldCharacter, view,
projection);
GraphicsDevice.SetRenderTarget(null);
// Get the rendertarget of character right foot
GraphicsDevice.SetRenderTarget(renderTarget2DRightFoot);
GraphicsDevice.Clear(Color.Blue);
cameraPosition = new Vector3(20, -120, 40);
cameraTarget = new Vector3(0, -120, -1000);
view = Matrix.CreateLookAt(cameraPosition, cameraTarget,
Vector3.Up);
DrawModel(modelCharacter, worldCharacter, view,
projection);
GraphicsDevice.SetRenderTarget(null);
// Draw the character model
cameraPosition = new Vector3(0, 40, 350);
view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
Vector3.Up);
GraphicsDevice.Clear(Color.CornflowerBlue);
DrawModel(modelCharacter, worldCharacter, view,
projection);
// Draw the generated rendertargets of different parts of
// character model in 2D
spriteBatch.Begin();
spriteBatch.Draw(renderTarget2DHead, new Vector2(500, 0),
Color.White);
spriteBatch.Draw(renderTarget2DLeftFist, new Vector2(200,
220),
Color.White);
spriteBatch.Draw(renderTarget2DRightFoot, new Vector2(500,
400),
Color.White);
spriteBatch.End();
[/code] - Build and run the application. The application will run as shown in the following screenshot:
How it works…
In step 2, modelCharacter loads the character 3D model and worldCharacter represents the world transformation matrix of the character. The following four variables, cameraPosition, cameraTarget, view, and projection, are used to initialize the camera. Here, the cameraTarget has the same Y value as the cameraPosition and a Z value far away from the camera, because we want the camera's look-at direction to be parallel to the XZ plane. The last three RenderTarget2D objects, renderTarget2DHead, renderTarget2DLeftFist, and renderTarget2DRightFoot, are responsible for rendering different parts of the character from the 3D real-time view into 2D textures.
In step 3, we initialize the camera and the three render targets. The initialization code for the camera is nothing new. The RenderTarget2D class has three overloaded constructors, the most complex of which is the third; if you understand it, the other two are easy. This constructor looks similar to the following code:
[code]
public RenderTarget2D (
GraphicsDevice graphicsDevice,
int width,
int height,
bool mipMap,
SurfaceFormat preferredFormat,
DepthFormat preferredDepthFormat,
int preferredMultiSampleCount,
RenderTargetUsage usage
)
[/code]
Let’s have a look at what all these parameters stand for:
- graphicsDevice: This is the graphic device associated with the render target resource.
- width: This is the width of the render target, in pixels. You can use graphicsDevice.PresentationParameters.BackBufferWidth to get the current screen width. Because RenderTarget2D is a subclass of Texture2D, the width and height of a RenderTarget2D object define the size of the final render target texture. Notice that the maximum size for a Texture2D in Windows Phone 7 is 2048 pixels, so the width of a RenderTarget2D cannot exceed this limit.
- height: This is the height of the render target, in pixels. You can use graphicsDevice.PresentationParameters.BackBufferHeight to get the current screen height. The remarks for the width parameter apply here as well.
- mipMap: This is true to enable a full mipMap chain to be generated, otherwise false.
- preferredFormat: This is the preferred format for the surface data. This is the format preferred by the application, which may or may not be available from the hardware. In the XNA Framework, all two-dimensional (2D) images are represented by a range of memory called a surface. Within a surface, each element holds a color value representing a small section of the image, called a pixel. An image’s detail level is defined by the number of pixels needed to represent the image and the number of bits needed for the image’s color spectrum. For example, an image that is 800 pixels wide and 600 pixels high with 32 bits of color for each pixel (written as 800 x 600 x 32) is more detailed than an image that is 640 pixels wide and 480 pixels tall with 16 bits of color for each pixel (written as 640 x 480 x 16). Likewise, the more detailed image requires a larger surface to store the data. For an 800 x 600 x 32 image, the surface’s array dimensions are 800 x 600, and each element holds a 32-bit value to represent its color.
All formats are listed from left to right, most-significant bit to least-significant bit. For example, ARGB formats are ordered from the most-significant bit channel A (alpha), to the least-significant bit channel B (blue). When traversing surface data, the data is stored in memory from least-significant bit to most-significant bit, which means that the channel order in memory is from least-significant bit (blue) to most-significant bit (alpha).
The default value for formats that contain undefined channels (Rg32, Alpha8, and so on) is 1. The only exception is the Alpha8 format, which is initialized to 000 for the three color channels. Here, we use the SurfaceFormat.Color option. The SurfaceFormat.Color is an unsigned format, 32-bit ARGB pixel format with alpha, using 8 bits per channel.
- preferredDepthFormat: This is the format of the depth buffer, which contains depth data and possibly stencil data. You can control a depth buffer using a state object. The available depth formats are Depth16, Depth24, and Depth24Stencil8.
- usage: This is a RenderTargetUsage value that determines how the render target data is treated once a new target is set. This enumeration has three values: PreserveContents, PlatformContents, and DiscardContents. The default value, DiscardContents, means that whenever a render target is set onto the device, its previous contents are destroyed. On the other hand, with the PreserveContents option, the data associated with the render target is maintained when a new render target is set. This can impact performance greatly, because the data must be stored and copied back to the render target when you use it again. PlatformContents either clears or keeps the data depending on the current platform: on Xbox 360 and Windows Phone 7, the render target discards its contents; on PC, it discards the contents if multisampling is enabled and preserves them if not.
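For instance, a target whose contents must survive target switches (say, an incrementally updated minimap) could be created as follows; the size and field name here are made up for illustration:
[code]
// PreserveContents keeps the pixels when another target is set,
// at a noticeable performance cost on Windows Phone 7
RenderTarget2D minimapTarget = new RenderTarget2D(GraphicsDevice,
    256, 256, false, SurfaceFormat.Color, DepthFormat.None, 0,
    RenderTargetUsage.PreserveContents);
[/code]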
In step 6, the first part of the Draw() method gets the render target texture for the head of the character; GraphicsDevice.SetRenderTarget() sets a new render target on the device. As the application runs on Windows Phone 7 and the RenderTargetUsage is set to DiscardContents, every time a new render target is assigned to the device, the previous contents are destroyed. According to the XNA 4.0 SDK, the method has some restrictions when called. They are as follows:
- The multi-sample type must be the same for the render target and the depth stencil surface
- The formats must be compatible for the render target and the depth stencil surface
- The size of the depth stencil surface must be greater than, or equal to, the size of the render target
These restrictions are validated only when using the debug runtime, when any of the GraphicsDevice drawing methods are called. The following lines, up to GraphicsDevice.SetRenderTarget(null), adjust the camera position and look-at target for rendering the head of the character; this view transforms and renders that part of the model into a 2D render target texture, which is later displayed at a designated place on the Windows Phone screen. Calling GraphicsDevice.SetRenderTarget(null) resets the device to the back buffer so the next render target can be set. The second and third parts of the Draw() method do the same for renderTarget2DLeftFist and renderTarget2DRightFoot. The fourth part draws the actual character 3D model. After that, we present all of the generated render targets on the Windows Phone 7 screen using the 2D drawing methods.
Creating a screen transition effect using RenderTarget2D
Do you remember the scene transitions in Star Wars? A scene transition is a very common technique for smoothly changing a movie from the current scene to the next. Frequent transition patterns include swiping, rotating, fading, checkerboard scattering, and so on. With proper transition effects, the audience knows the plot is moving on when the stage changes. Besides movies, transition effects also have a relevant application in video games, especially in 2D games, where every game state change can trigger a transition effect. In this recipe, you will learn how to create a typical transition effect using RenderTarget2D for your Windows Phone 7 game.
How to do it…
The following steps will draw a spinning squares transition effect using the RenderTarget2D technique:
- Create a Windows Phone Game named RenderTargetTransitionEffect and change Game1.cs to RenderTargetTransitionEffectGame.cs. Then, add Image1.png and Image2.png to the content project.
- Declare the indispensable variables. Insert the following code to the RenderTargetTransitionEffectGame code field:
[code]
// The first forefront and background images
Texture2D textureForeFront;
Texture2D textureBackground;
// the width of each divided image
int xfactor = 800 / 8;
// the height of each divided image
int yfactor = 480 / 8;
// The render target for the transition effect
RenderTarget2D transitionRenderTarget;
float alpha = 1;
// the time counter
float timer = 0;
const float TransitionSpeed = 1.5f;
[/code] - Load the forefront and background images, and initialize the render target for the jumping sprites transition effect. Add the following code to the LoadContent() method:
[code]
// Load the forefront and the background image
textureForeFront = Content.Load<Texture2D>("Image1");
textureBackground = Content.Load<Texture2D>("Image2");
// Initialize the render target
transitionRenderTarget = new RenderTarget2D(GraphicsDevice,
800, 480, false, SurfaceFormat.Color,
DepthFormat.Depth24, 0,
RenderTargetUsage.DiscardContents);
[/code] - Define the core method DrawJumpingSpritesTransition() for the jumping sprites transition effect. Paste the following lines into the RenderTargetTransitionEffectGame class:
[code]
void DrawJumpingSpritesTransition(float delta, float alpha,
    RenderTarget2D renderTarget)
{
    // Instantiate a Random object for generating the random
    // values that vary the rotation, scale, and position of
    // each divided sub-image
    Random random = new Random();
    // Divide the image into the designated number of pieces,
    // here 8 * 8 = 64
    for (int x = 0; x < 8; x++)
    {
        for (int y = 0; y < 8; y++)
        {
            // Define the source rectangle of the current piece
            Rectangle rect = new Rectangle(xfactor * x,
                yfactor * y, xfactor, yfactor);
            // Set the origin to the center of the sub-image so
            // that it rotates and scales in place
            Vector2 origin =
                new Vector2(rect.Width, rect.Height) / 2;
            float rotation =
                (float)(random.NextDouble() - 0.5f) *
                delta * 20;
            float scale = 1 +
                (float)(random.NextDouble() - 0.5f) *
                delta * 20;
            // Randomly nudge the position of the current
            // sub-image
            Vector2 pos =
                new Vector2(rect.Center.X, rect.Center.Y);
            pos.X += (float)(random.NextDouble());
            pos.Y += (float)(random.NextDouble());
            // Draw the current sub-image
            spriteBatch.Draw(renderTarget, pos, rect,
                Color.White * alpha, rotation, origin,
                scale, SpriteEffects.None, 0);
        }
    }
}
[/code]
- Get the render target of the forefront image and draw the jumping sprites transition effect by calling the DrawJumpingSpritesTransition() method. Insert the following code into the Draw() method:
[code]
// Render the forefront image to the render target texture
GraphicsDevice.SetRenderTarget(transitionRenderTarget);
spriteBatch.Begin();
spriteBatch.Draw(textureForeFront, new Vector2(0, 0),
    Color.White);
spriteBatch.End();
GraphicsDevice.SetRenderTarget(null);
// Accumulate the total elapsed game time
timer += (float)(gameTime.ElapsedGameTime.TotalSeconds);
// Compute the delta value for the current frame
float delta = timer / TransitionSpeed * 0.01f;
// Subtract the delta value from alpha to fade the image
// from opaque to transparent
alpha -= delta;
// Draw the jumping sprites transition effect
spriteBatch.Begin();
spriteBatch.Draw(textureBackground, Vector2.Zero,
    Color.White);
DrawJumpingSpritesTransition(delta, alpha,
    transitionRenderTarget);
spriteBatch.End();
[/code]
- Build and run the application. It should run similar to the following screenshots:
How it works…
In step 2, textureForeFront and textureBackground hold the forefront and background images prepared for the jumping sprites transition effect. The xfactor and yfactor variables define the size of each subdivided image used in the transition effect. transitionRenderTarget is the RenderTarget2D object into which the forefront image is rendered as a render target texture. The alpha variable controls the transparency of each subimage, and timer accumulates the total elapsed game time. TransitionSpeed is a constant that defines how fast the transition runs.
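As a quick sanity check of the arithmetic (sketched in Python rather than C#, purely for illustration), the 8 x 8 subdivision with xfactor = 800 / 8 and yfactor = 480 / 8 tiles the 800 x 480 screen exactly:

```python
# Screen dimensions of a landscape Windows Phone 7 display
WIDTH, HEIGHT = 800, 480
GRID = 8  # the recipe divides the image into an 8 x 8 grid

xfactor = WIDTH // GRID   # width of each sub-image
yfactor = HEIGHT // GRID  # height of each sub-image

# The 64 rectangles together cover every pixel exactly once
covered = GRID * GRID * xfactor * yfactor
print(xfactor, yfactor, covered)  # 100 60 384000
assert covered == WIDTH * HEIGHT
```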
In step 4, we define the core method DrawJumpingSpritesTransition() for drawing the jumping sprites effect. First of all, we instantiate a Random object; the random values it generates are used to vary the rotation, scale, and position of the divided subimages in the transition effect. In the nested loop, we iterate over every subimage row by row and column by column. For each subimage, we create a Rectangle object with the pre-defined size. Then, we move the origin point to the image center, which makes the image rotate and scale in place. After that, we randomly change the rotation, scale, and position values. Finally, we draw the current subimage on the Windows Phone 7 screen.
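The per-tile logic of that loop can be modelled outside XNA. The following Python sketch (the function name jumping_sprite_params is mine, not from the recipe) mirrors the same jitter formulas and shows that with delta = 0 every tile is drawn unrotated at unit scale, which is why the effect starts out looking like the intact image:

```python
import random

WIDTH, HEIGHT, GRID = 800, 480, 8
xfactor, yfactor = WIDTH // GRID, HEIGHT // GRID

def jumping_sprite_params(delta, rng):
    """Mirror of the C# loop: one (source_rect, origin, rotation,
    scale, pos) tuple per sub-image, using the same formulas."""
    params = []
    for x in range(GRID):
        for y in range(GRID):
            # Source rectangle of the current piece
            rect = (xfactor * x, yfactor * y, xfactor, yfactor)
            # Rotate/scale about the centre of the piece
            origin = (xfactor / 2, yfactor / 2)
            rotation = (rng.random() - 0.5) * delta * 20
            scale = 1 + (rng.random() - 0.5) * delta * 20
            # Rectangle centre, nudged by a small random offset
            pos = (rect[0] + xfactor / 2 + rng.random(),
                   rect[1] + yfactor / 2 + rng.random())
            params.append((rect, origin, rotation, scale, pos))
    return params

tiles = jumping_sprite_params(delta=0.01, rng=random.Random(0))
print(len(tiles))  # 64 sub-images, one per grid cell
```

As delta grows each frame, the +/- 0.5 jitter multiplied by delta * 20 swings the rotation and scale further, so the pieces scatter more and more violently over time.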
In step 5, we draw the forefront image first, because we want the transition effect applied to the forefront image. Then, using the render target, we transfer the current view into the render target texture by putting the drawing code between the GraphicsDevice.SetRenderTarget(transitionRenderTarget) and GraphicsDevice.SetRenderTarget(null) calls. Next, we use the accumulated elapsed game time to compute the delta value that is subtracted from the alpha value. The alpha value is used in the SpriteBatch.Draw() method to make the subimages of the jumping sprites change from opaque to transparent. The last part of the Draw() method draws the background image first and then draws the transition effect. This drawing order is important: the texture that has the transition effect must be drawn after the images without the transition effect; otherwise, you will not see the effect you want.