Printing Text

Creating the Font Demo Project

A font in XNA is nothing more than a text file—at least, from the programmer’s point of view. When the project is compiled, XNA uses the text file to create a bitmap font on a memory texture and uses that texture to print text on the screen.

This is a time-consuming process, which is why the font is created at program startup rather than while it’s running. Let’s create a new project and add a font to it.

Creating a New XNA Project

Follow these steps to create a new XNA project in Visual C# 2010:

  1. Start up Visual Studio 2010 Express for Windows Phone (or whichever edition of Visual Studio 2010 you are using).
  2. Bring up the New Project dialog, shown in Figure 3.1, from either the Start Page or the File menu.

    FIGURE 3.1 Creating the Font Demo project.
  3. Choose Windows Phone Game (4.0) from the list of project templates.
  4. Type in a name for the new project (the example is called Font Demo).
  5. Choose the location for the project by clicking the Browse button, or by typing the folder name directly.
  6. Click OK to create the new project.

The new project is generated by Visual Studio and should look similar to the project shown in Figure 3.2.

FIGURE 3.2 The newly generated Font Demo project.

Adding a New Font to the Content Project

At this point, you can go ahead and run the project by pressing F5, but all you will see in the Windows Phone emulator is a blue screen. That is because we haven’t written any code yet to draw anything. Before we can print text on the screen, we have to create a font, which is added to the Content project.

In XNA 4.0, most game assets are added to the Content project within the Solution, where they are compiled or converted into a format that XNA uses. We might use the general term “project” when referring to a Windows Phone game developed with XNA, but there might be more than one project in the Solution. The “main project” will be the one containing source code for a game. Some assets, however, might be located just within the source code project, depending on how the code accesses those assets. Think of the Content project as a container for “managed” assets.

A Visual Studio “Solution” is the overall wrapper or container for a game project, and should not be confused with “projects” that it contains, including the Content project containing game assets (bitmap files, audio files, 3D mesh files, and so on).

In this example, both the Solution and the main project are called “Font Demo,” because Visual Studio uses the same name for both when a new Solution is generated. Now, let’s add a new font to the Content project. Remember that the Content project is where all game assets are located.

  1. Select the Content project in Solution Explorer to highlight it, as shown in Figure 3.3.

    FIGURE 3.3 Highlighting the Content project.
  2. Open the Project menu and choose Add New Item. Optionally, you can right-click the Content project in Solution Explorer (Font DemoContent (Content)) to bring up the context menu, and choose Add, New Item.
  3. The Add New Item dialog, shown in Figure 3.4, appears. Choose Sprite Font from the list. Leave the name as is (SpriteFont1.spritefont).

    FIGURE 3.4 Adding a new Sprite Font.

A new .spritefont file has been added to the Content project, as shown in Figure 3.5. Visual Studio opens the new file right away so that you can make any changes you want to the font details. The default font name is Segoe UI Mono, which is a monospaced font. This means each character of the font has the same width (takes up the same amount of horizontal space). Some fonts are proportional, which means each character has a different width (in which case, “W” and “I” are spaced quite differently, for instance).

FIGURE 3.5 A new Sprite Font has been added to the Content project.
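The difference between monospaced and proportional fonts is easy to verify in code: SpriteFont.MeasureString() returns the rendered size of a string as a Vector2. This is only a sketch, assuming the SpriteFont1 asset has already been loaded (as shown later in this hour):

```csharp
// Sketch: comparing character widths with SpriteFont.MeasureString().
// Assumes SpriteFont1 has already been loaded via Content.Load<SpriteFont>().
float wideWidth = SpriteFont1.MeasureString("W").X;
float narrowWidth = SpriteFont1.MeasureString("I").X;
// For a monospaced font such as Segoe UI Mono, the two widths are equal;
// for a proportional font, "W" measures wider than "I".
```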

The SpriteFont1.spritefont file is just a text file, like a .CS source code file, but it is formatted in the XML (Extensible Markup Language) format. You can experiment with the font options in the .spritefont descriptor file, but usually the only fields you will need to change are FontName and Size. Here is what the font file looks like with all comments removed:

[code]
<?xml version="1.0" encoding="utf-8"?>
<XnaContent xmlns:Graphics="Microsoft.Xna.Framework.Content.Pipeline.Graphics">
  <Asset Type="Graphics:FontDescription">
    <FontName>Segoe UI Mono</FontName>
    <Size>14</Size>
    <Spacing>0</Spacing>
    <UseKerning>true</UseKerning>
    <Style>Regular</Style>
    <CharacterRegions>
      <CharacterRegion>
        <Start>&#32;</Start>
        <End>&#126;</End>
      </CharacterRegion>
    </CharacterRegions>
  </Asset>
</XnaContent>
[/code]

Visual Studio Solution (.sln) and project (.csproj) files also contain XML-formatted information!

Table 3.1 shows the royalty-free fonts included with XNA 4.0. Note that some fonts come with italic and bold versions even though the SpriteFont description also allows for these modifiers.

TABLE 3.1 XNA Fonts

Learning to Use the SpriteFont Class

We can create as many fonts as we want in an XNA project and use them at any time to print text with different styles. For each font you want to use in a project, create a new .spritefont file. The name of the file is used to load the font, as you’ll see next. Even if you want to use the same font style with a different point size, you must create a separate .spritefont file (although we will learn how to scale a font as a rendering option).
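As a sketch of that idea, the following example loads two hypothetical .spritefont assets inside the Game1 class (the asset names SegoeSmall and SegoeLarge are invented here for illustration):

```csharp
// Sketch: two point sizes of the same typeface require two .spritefont files.
// The asset names "SegoeSmall" and "SegoeLarge" are hypothetical.
SpriteFont smallFont;
SpriteFont largeFont;

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    smallFont = Content.Load<SpriteFont>("SegoeSmall");
    largeFont = Content.Load<SpriteFont>("SegoeLarge");
}
```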

Loading the SpriteFont Asset

To use a SpriteFont asset, first add a variable at the top of the program. Let’s go over the steps:

  1. Add a new variable called SpriteFont1. You can give this variable a different name if you want. It is given the same name as the asset here only for illustration, to associate one thing with another.
    [code]
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        // Create new font variable
        SpriteFont SpriteFont1;
    [/code]
  2. Create (instantiate) a new object using the SpriteFont1 variable, and simultaneously load the font with the Content.Load() method. Note the class name in angle brackets, <SpriteFont>. If you aren’t familiar with template programming, this can look a bit strange. This type of coding makes the code cleaner, because the Content.Load() method has the same call no matter what type of object you tell it to load.
    [code]
    protected override void LoadContent()
    {
        // Create a new SpriteBatch, which can be used to draw textures.
        spriteBatch = new SpriteBatch(GraphicsDevice);
        // TODO: use this.Content to load your game content here
        SpriteFont1 = Content.Load<SpriteFont>("SpriteFont1");
    }
    [/code]

If the Content class did not use a templated Load() method, we would need to call a different method for every type of game asset, such as Content.LoadSpriteFont(), Content.LoadTexture2D(), or Content.LoadSoundEffect().
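In other words, one generic method covers every managed asset type. A sketch (the asset names "player" and "boom" are hypothetical):

```csharp
// The same generic Load<T>() call shape handles every managed asset type.
SpriteFont font = Content.Load<SpriteFont>("SpriteFont1");
Texture2D image = Content.Load<Texture2D>("player");   // hypothetical image asset
SoundEffect boom = Content.Load<SoundEffect>("boom");  // hypothetical audio asset
```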

There is another important reason for using a template form of Load() here: We can create our own custom content loader to load our own asset files! XNA is very extensible with this capability. Suppose you want to load a data file saved by your own custom level editor tool. Instead of manually converting the level file into text or XML, which XNA can already read, you could instead just write your own custom content loader, and then load it with code such as this: Content.Load<Level>("level1").

The ability to write code like this is powerful, and reflects a concept similar to “late binding.” This means the C# compiler might not know exactly what type of object a particular line of code is referring to at compile time, but the issue is sorted out later while the program is running. That’s not exactly what’s happening here, but it is a similar concept, and the easiest illustration of template programming I can think of.
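To make the mechanics of template (generic) programming concrete, here is a toy content manager in plain C#. It is not how XNA’s ContentManager actually works internally; every name here is hypothetical, invented only to show how a single generic Load<T>() call can serve any type:

```csharp
using System;
using System.Collections.Generic;

// A toy content manager: one generic Load<T>() method serves every asset type,
// loosely mirroring the call shape of XNA's Content.Load<T>().
// All names here are hypothetical, for illustration only.
class ToyContent
{
    readonly Dictionary<string, object> assets = new Dictionary<string, object>();

    public void Register(string name, object asset)
    {
        assets[name] = asset;
    }

    // The same call shape works no matter what T is.
    public T Load<T>(string name)
    {
        return (T)assets[name];
    }
}

class Demo
{
    static void Main()
    {
        var content = new ToyContent();
        content.Register("greeting", "hello");
        content.Register("answer", 42);

        string s = content.Load<string>("greeting");
        int n = content.Load<int>("answer");
        Console.WriteLine(s + " " + n); // prints "hello 42"
    }
}
```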

These are just possibilities. Let’s get back to the SpriteFont code at hand!

Printing Text

Now that we have loaded the .spritefont asset file, and XNA has created a bitmap font in memory after running the code in LoadContent(), the font is available for use. We can use the SpriteFont1 object to print text on the screen using SpriteBatch.DrawString(). Just be sure to always have a matching pair of SpriteBatch.Begin() and SpriteBatch.End() statements around any drawing code.

Here are the steps you may follow to print some text onto the screen using the new font we have created:

  1. Scroll down to the Draw() method in the code listing.
  2. Add the code shown in bold.
    [code]
    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        // TODO: Add your drawing code here
        string text = "This is the Segoe UI Mono font";
        Vector2 position = new Vector2(20, 20);
        spriteBatch.Begin();
        spriteBatch.DrawString(SpriteFont1, text, position, Color.White);
        spriteBatch.End();
        base.Draw(gameTime);
    }
    [/code]

Run the program by pressing F5. The WP7 emulator comes up, as shown in Figure 3.6.

FIGURE 3.6 Printing text in the Font Demo program.

The version of SpriteBatch.DrawString() used here is the simplest version of the method, but other overloaded versions of the method are available. An overloaded method is a method such as DrawString() that has two or more different sets of parameters to make it more useful to the programmer. There are actually six versions of DrawString(). Here is an example using the sixth and most complex version. When run, the changes to the text output are dramatic, as shown in Figure 3.7!

[code]
float rotation = MathHelper.ToRadians(15.0f);
Vector2 origin = Vector2.Zero;
Vector2 scale = new Vector2(1.3f, 5.0f);
spriteBatch.DrawString(SpriteFont1, text, position, Color.White,
    rotation, origin, scale, SpriteEffects.None, 0.0f);
[/code]

FIGURE 3.7 Experimenting with different DrawString() options.
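One practical use of the origin parameter deserves a mention: rotating text about its center rather than its upper-left corner. A sketch, reusing the text, position, and rotation variables from the listing above:

```csharp
// Sketch: rotate text around its center by measuring it first.
// The origin is specified in the text's own coordinate space.
Vector2 center = SpriteFont1.MeasureString(text) / 2.0f;
spriteBatch.DrawString(SpriteFont1, text, position, Color.White,
    rotation, center, Vector2.One, SpriteEffects.None, 0.0f);
```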

As you have learned in this hour, the font support in XNA takes a little time to set up, but after a font has been added, some very useful and versatile text printing capabilities are available. We can print text via the SpriteBatch.DrawString() method, with many options available such as font scaling and different colors.

Getting Started with Visual C# 2010 for Windows Phone

Visual C# 2010 Express

At the time of this writing, the current version of the development tool for Windows Phone 7 is Visual Studio 2010. To make development simple for newcomers to the Windows Phone platform, Microsoft has set up a package that will install everything you need to develop, compile, and run code in the emulator or on a physical Windows Phone device—for free. The download URL at this time is http://www.microsoft.com/express/Phone. If you are using a licensed copy of Visual Studio 2010, such as the Professional, Premium, or Ultimate edition, then you will find XNA Game Studio 4.0 and related tools at http://create.msdn.com (the App Hub website). The App Hub website, shown in Figure 2.1, also contains links to the development tools.

FIGURE 2.1 The App Hub website has download links to the development tools.

The most common Windows Phone developer will be using the free version of Visual C# 2010, called the Express edition. This continues the wonderful gift Microsoft first began giving developers with the release of Visual Studio 2005. At that time, the usual “professional” versions of Visual Studio were still available, of course, and I would be remiss if I failed to point out that a licensed copy of Visual Studio is required by any person or organization building software for business activities (including both for-profit and nonprofit). The usual freelance developer will also need one of the professional editions of Visual Studio, if it is used for profit. But any single person who is just learning, or any organization that just wants to evaluate Visual Studio for a short time, prior to buying a full license, can take advantage of the free Express editions. I speak of “editions” because each language is treated as a separate product. The professional editions include all the languages, but the free Express editions, listed here, are each installed separately:

  • Visual C# 2010 Express
  • Visual Basic 2010 Express
  • Visual C++ 2010 Express

The version of Visual Studio we will be using is called Visual Studio 2010 Express for Windows Phone. This is a “package” with the Windows Phone SDK already prepackaged with Visual C# 2010 Express. (Despite the name, “Visual Studio” here supports only the C# language.) It’s a nice package that makes it very easy to get started doing Windows Phone development. But if you are using Visual Studio 2010 Professional (or one of the other editions) along with the Windows Phone SDK, you will see a lot more project templates in the New Project dialog, shown in Figure 2.2.

FIGURE 2.2 The New Project dialog in Visual C# 2010 Express.
  • Windows Phone Application (Visual C#)
  • Windows Phone Databound Application (Visual C#)
  • Windows Phone Class Library (Visual C#)
  • Windows Phone Panorama Application (Visual C#)
  • Windows Phone Pivot Application (Visual C#)
  • Windows Phone Game (4.0) (Visual C#)
  • Windows Phone Game Library (4.0) (Visual C#)
  • Windows Game (4.0) (Visual C#)
  • Windows Game Library (4.0) (Visual C#)
  • Xbox 360 Game (4.0) (Visual C#)
  • Xbox 360 Game Library (4.0) (Visual C#)
  • Content Pipeline Extension Library (4.0)
  • Empty Content Project (4.0) (Visual C#)

As you can see, even in this limited version of Visual Studio 2010, all the XNA Game Studio 4.0 project templates are included—not just those limited to Windows Phone. The project templates with “(4.0)” in the name come from the XNA Game Studio SDK, which is what we will be primarily using to build Windows Phone games. The first five project templates come with the Silverlight SDK. That’s all we get with this version of Visual Studio 2010. It’s not even possible to build a basic Windows application here—only Windows Phone (games or apps), Windows (game only), and Xbox 360 (obviously, game only). The first five project templates are covered in the next section, “Using Silverlight for WP7.”

Did you notice that all of these project templates are based on the C# language? Unfortunately for Visual Basic fans, Visual Studio 2010 Express for Windows Phone does not include Basic for programming Windows Phone games or apps. You can install Visual Basic 2010 Express with Silverlight and then use that to make WP7 applications. XNA, however, supports only C#.

We don’t look at Xbox 360 development in this book at all. If you’re interested in the subject, see my complementary book XNA Game Studio 4.0 for Xbox 360 Developers [Cengage, 2011].

Using Silverlight for WP7

Microsoft Silverlight is a web browser plug-in “runtime.” Silverlight is not, strictly speaking, a development tool. It might be compared to DirectX, in that it is like a library, but for rich-content web apps. It’s similar to ASP.NET in that Silverlight applications run in a web browser, but it is more capable for building consumer applications (while ASP.NET is primarily for business apps). But the way Silverlight applications are built is quite different from ASP.NET—it’s more of a design tool with an editing environment called Expression Blend. The design goal of Silverlight is to produce web applications that are rich in media support, and it supports all standard web browsers (not just Internet Explorer, which is a pleasant surprise!), including Firefox and Safari on Mac.

Using Expression Blend to Build Silverlight Projects

Microsoft Expression Blend 4 is a free tool installed with the Windows Phone package that makes it easier to design Silverlight-powered web pages with rich media content support. Blend can be used to design and create engaging user experiences for Silverlight pages. Windows application support is possible with the WPF (Windows Presentation Foundation) library. A key feature of Blend is that it separates design from programming. As you can see in Figure 2.3, the New Project dialog in Blend lists the same project types found in Visual C# 2010 Express.

FIGURE 2.3 Expression Blend is a Silverlight development tool for web designers.

Let’s create a quick Expression Blend project to see how it works. While working on this quick first project, keep in mind that we’re not building a “Blend” project, but a “Silverlight” project—using Blend. Blend is a whole new Silverlight design and development tool, not affiliated with Visual Studio (but probably based on it). The Silverlight library is already installed on the Windows Phone emulator and actual phones.

Here’s how to create the project:

  1. Create a Windows Phone Application project using the New Project dialog. Click File, New Project.
  2. Blend creates a standard project for Windows Phone, complete with an application title and opening page for the app.
  3. Run the project with Project, Run Project, or by pressing F5. The running program is shown in Figure 2.4.
FIGURE 2.4 Our first project with Expression Blend.

This is a useless app, but it shows the steps needed to create a new project and run it in the Windows Phone emulator. Did you notice how large the emulator window appears? That’s full size with respect to the screen resolution of WP7. As you’ll recall from the first hour, the resolution is 480×800. That is enough pixels to support 480p DVD movies, but not the 720p or 1080p HD standards. Still, DVD quality is great for a phone! And when rotated to landscape mode, 800×480 is a lot of screen real estate for a game too.

You can make quick and easy changes to the labels at the top and experiment with the design controls in the toolbox on the left. Here you can see that the application title and page title have been renamed, and some images and shapes have been added to the page. Pressing F5 again brings it up in the emulator, shown in Figure 2.5.

Now that you’ve seen what’s possible with Expression Blend’s more designer-friendly editor, let’s take a look at the same Silverlight project in Visual Studio 2010.

Silverlight Projects

The Silverlight runtime for WP7 supports some impressive media types with many different audio and video codecs, vector graphics, bitmap graphics, and animation. That should trigger the perimeter alert of any game developer worth their salt! Silverlight brings some highly interactive input mechanisms to the Web, including accelerometer motion detection, multitouch input (for devices that support it), camera, microphone input, and various phone-type features (such as accessing an address book and dialing).

FIGURE 2.5 Making quick changes to the page is easy with Expression Blend.

To find out whether your preferred web browser supports Silverlight, visit the installer web page at http://www.microsoft.com/getsilverlight/get-started/install.

The Visual Studio 2010 project templates specific to Silverlight are the first five in the list below. These are the same project templates shown in Expression Blend!

  • Windows Phone Application (Visual C#)
  • Windows Phone Databound Application (Visual C#)
  • Windows Phone Class Library (Visual C#)
  • Windows Phone Panorama Application (Visual C#)
  • Windows Phone Pivot Application (Visual C#)
  • Windows Phone Game (4.0) (Visual C#)
  • Windows Phone Game Library (4.0) (Visual C#)
  • Windows Game (4.0) (Visual C#)
  • Windows Game Library (4.0) (Visual C#)
  • Xbox 360 Game (4.0) (Visual C#)
  • Xbox 360 Game Library (4.0) (Visual C#)
  • Content Pipeline Extension Library (4.0)
  • Empty Content Project (4.0) (Visual C#)

Let’s create a quick project in Visual Studio in order to compare it with Expression Blend. You’ll note right away that it is not the same rich design environment, but is more programmer oriented.

Comparing Visual Studio with Expression Blend

Let’s create a new project in Visual C# 2010 in order to compare it with Expression Blend. Follow these steps:

  1. Open the New Project dialog with File, New Project.
  2. Next, in the New Project dialog, choose the target folder for the project and type in a project name, as shown in Figure 2.6.

    FIGURE 2.6 Creating a new Silverlight project in Visual C# 2010 Express.
  3. Click the OK button to generate the new project shown in the figure. Not very user-friendly, is it? First of all, double-clicking a label does not make it editable, among other limitations (compared to Expression Blend). Where are the control properties? Oh, yes, in the Properties window in Visual Studio. See Figure 2.7. This is also very data-centric, which programmers love and designers loathe. The view on the left is how the page appears on the device (or emulator); the view on the right is the HTML-like source code behind the page, which can be edited.
  4. Bring up the Properties window (if not already visible) by using the View menu. Select a control on the page, such as the application title. Scroll down in the Properties to the Text property, where you can change the label’s text, as shown in Figure 2.8. Play around with the various properties to change the horizontal alignment, the color of the text, and so on. Open the Toolbox (located on the left side of Visual Studio) to gain access to new controls such as the Ellipse control shown here.
    FIGURE 2.7 The new Silverlight project has been created.

    FIGURE 2.8 Adding content to the Silverlight page.

XNA Game Studio

XNA Game Studio 4.0 was released in the fall of 2010. (From now on, let’s just shorten this to “XNA” or “XNA 4.0”, even though “Game Studio” is the name of the SDK, and “XNA” is the overall product name.) XNA 4.0 saw several new improvements to the graphics system, but due to the hardware of the Xbox 360, XNA is still based on Direct3D 9 (not the newer versions, Direct3D 10 or 11). This is actually very good news for a beginner, since Direct3D 9 is much easier to learn than 10 or 11. Although XNA abstracts the C++-based DirectX libraries into the C#-based XNA Framework, there is still much DirectX-ish code that you have to know in order to build a capable graphics engine in XNA. While XNA 4.0 added WP7 support, it simultaneously dropped support for Zune (the portable multimedia and music player).

I have a Zune HD, and it’s a nice device! It can play 720p HD movies and even export them to an HDTV via an adapter and HDMI cable. It plays music well too. But, like many consumers, I just did not have much incentive to go online and download games for the Zune. This is, of course, purely a subjective matter of opinion, but it’s disappointing for game developers who put effort into making games for Zune. Fortunately, the code base is largely the same (thanks to XNA and C#), so those Zune games can be easily ported to WP7 now.

Rendering states, enumerations, return values, and so forth are the same in XNA as they are in Direct3D, so it could be helpful to study a Direct3D book to improve your skills as an XNA programmer!

The project templates for Windows Phone might surprise you—there are only two! We can build a Windows Phone game or a game library. All the other templates are related to the other platforms supported by XNA.

  • Windows Phone Application (Visual C#)
  • Windows Phone Databound Application (Visual C#)
  • Windows Phone Class Library (Visual C#)
  • Windows Phone Panorama Application (Visual C#)
  • Windows Phone Pivot Application (Visual C#)
  • Windows Phone Game (4.0) (Visual C#)
  • Windows Phone Game Library (4.0) (Visual C#)
  • Windows Game (4.0) (Visual C#)
  • Windows Game Library (4.0) (Visual C#)
  • Xbox 360 Game (4.0) (Visual C#)
  • Xbox 360 Game Library (4.0) (Visual C#)
  • Content Pipeline Extension Library (4.0)
  • Empty Content Project (4.0) (Visual C#)

Let’s build a quick XNA project for Windows Phone to see what it looks like. We’ll definitely be doing a lot of this in upcoming chapters, since XNA is our primary focus. (The coverage of Silverlight was only for the curious—grab a full-blown Silverlight or Expression Blend book for more complete and in-depth coverage.)

Creating Your First XNA 4.0 Project

Let’s create a new XNA 4.0 project in Visual C# 2010, so we can use this as a comparison with the previous project created with Expression Blend. Follow these steps:

  1. Create a new project. We’ll be basing these tutorials around Visual Studio 2010 Express for Windows Phone. The processes will be similar to using the Professional version, but you will see many more project templates in the New Project dialog. Open the File menu and choose New Project. The New Project dialog is shown in Figure 2.9.

    FIGURE 2.9 Creating a new XNA 4.0 project.
  2. The new project has been created. Note, from Figure 2.10, the code that has been automatically generated for the XNA project. If you have ever worked with XNA before, this will be no surprise—the code looks exactly like the generated code for Windows and Xbox 360 projects!

    FIGURE 2.10 The new XNA 4.0 project has been created.
  3. Run the project with Build, Run, or by pressing F5. The emulator will come up, as shown in Figure 2.11. Doesn’t look like much—just a blue screen! That’s exactly what we want to see, because we haven’t written any game code yet.
  4. Add a SpriteFont to the Content project. Right-click the content project, called XNA ExampleContent (Content) in Solution Explorer. Choose Add, New Item, as shown in Figure 2.12.
    FIGURE 2.11 Running the XNA project in the Windows Phone emulator.

    FIGURE 2.12 Adding a new item to the content project.
  5. In the Add New Item dialog, choose the Sprite Font item from the list, as shown in Figure 2.13, and leave the filename as SpriteFont1.spritefont.

    FIGURE 2.13 Adding a new SpriteFont content item to the project.
  6. Create the font variable. The comments in the code listing have been removed to make the code easier to read. We’ll dig into the purpose of all this code in the next hour, so don’t be concerned with understanding all the code yet. Type in the two new bold lines of code shown here to add the font variable.
    [code]
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;
        // New font variable
        SpriteFont font;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
            TargetElapsedTime = TimeSpan.FromTicks(333333);
        }
    [/code]
  7. Load the font. Enter the two new lines shown in bold in the LoadContent method.
    [code]
    protected override void Initialize()
    {
        base.Initialize();
    }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        // Load the font
        font = Content.Load<SpriteFont>("SpriteFont1");
    }

    protected override void UnloadContent()
    {
    }
    [/code]
  8. Print a message on the screen. Using the SpriteBatch and SpriteFont objects, we can print any text message. This is done from the Draw method—add the code highlighted in bold.
    [code]
    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back ==
            ButtonState.Pressed)
            this.Exit();
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        // Print a message
        spriteBatch.Begin();
        string text = "HELLO FROM XNA!";
        Vector2 pos = font.MeasureString(text);
        spriteBatch.DrawString(font, text, pos, Color.White);
        spriteBatch.End();
        base.Draw(gameTime);
    }
    }
    [/code]
  9. Run the program using Debug, Start Debugging, or by pressing F5. The program will come up in the emulator, shown in Figure 2.14. Now there’s just one big problem: The font is too small, and the screen needs to be rotated to landscape mode so we can read it!
  10. Click the emulator window to cause the little control menu to appear at the upper right. There are two icons that will rotate the window left or right, allowing us to switch from portrait to landscape mode. XNA projects default to portrait mode. Landscape mode is shown in Figure 2.15.
    FIGURE 2.14 The text message is displayed in the emulator—sideways!

    FIGURE 2.15 Rotating the emulator window to landscape mode for XNA projects.
  11. Enlarge the font. We’re almost done; there’s just one final thing I want to show you how to do here. Open the font file you created, SpriteFont1.spritefont. Change the Size value from 14 to 36. Now rerun the project by pressing F5. The new, large font is shown in Figure 2.16.

    FIGURE 2.16 Enlarging the font to make it more readable.
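Rather than rotating the emulator by hand every run (step 10), you can ask XNA for landscape at startup. This is a sketch of one common approach, setting the GraphicsDeviceManager’s SupportedOrientations property and a preferred back-buffer size in the Game1 constructor:

```csharp
public Game1()
{
    graphics = new GraphicsDeviceManager(this);
    Content.RootDirectory = "Content";
    TargetElapsedTime = TimeSpan.FromTicks(333333);
    // Request landscape orientation with a matching 800x480 back buffer.
    graphics.SupportedOrientations = DisplayOrientation.LandscapeLeft |
                                     DisplayOrientation.LandscapeRight;
    graphics.PreferredBackBufferWidth = 800;
    graphics.PreferredBackBufferHeight = 480;
}
```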

XNA or Silverlight: What’s the Verdict?

We have now seen two projects developed with two different, somewhat competing tools: XNA and Silverlight. Which should we choose? This is really a matter of preference when it comes to developing a game. Although XNA is far more capable due to its rendering capabilities, Silverlight can be used to make a game as well, with form-based control programming. For portable, touchscreen applications, it’s a given: Silverlight. But for serious game development, XNA is the clear choice.

We covered quite a bit of information regarding Visual Studio 2010, the project templates available for Windows Phone, and the value-added tool Expression Blend. A sample project was presented using Expression Blend with a corresponding Silverlight project in Visual Studio, as well as an XNA project. We’re off to a good start and already writing quite a bit of code! In the next hour, you will create your first Windows Phone game.

Making Games for Windows Phone 7

Getting Started with Windows Phone 7

There are two ways we can develop games for Windows Phone 7: Silverlight and XNA Game Studio. Although Silverlight does have basic graphics capabilities, those capabilities are provided to support applications and are not ideally suited for games. XNA, on the other hand, was developed specifically for game development!

Before learning all about XNA Game Studio 4.0, Visual C# 2010, projects, configurations, Xbox Live, App Hub, and other great things that will interest a game developer, we need to first understand this new platform. Windows Phone 7, which we might call WP7 for short, is an operating system for smartphone devices.

In “the old days,” if you knew how to turn on a computer, you were called a “computer geek.” It didn’t really matter if you knew how to do anything with a computer; it was just assumed by many (especially in the older generations) that turning it on required knowledge of the black arts in electronics wizardry. That seems to be the case with most new technology, which people will tend to resist and perhaps even fear to a certain degree. When cars were first invented at the dawn of the automobile industry, people who drove around in a “horseless carriage” were considered snobbish, among the wealthy class—that is, until Henry Ford built a car that just about anyone could afford to buy. Not only did most people not have a computer in the early days, but most people at the time did not even begin to know how to go about buying one.

I’m speaking of the time period around the mid- to late 1970s, the dawn of the personal computer (PC) age. At that time, PCs were few and far between, and a kid who owned a Commodore PET, a Tandy TRS-80, or an Apple was a rare and lucky kid indeed! Most big businesses used mainframe computers for their most time-consuming tasks: accounting, payroll, and taxes. But even then, most white-collar employees who worked in an office did not have a PC. Imagine that! It’s unheard of today! Now the first thing a new employee must have is a cubicle or an office with a PC, and not just any PC, but a networked PC with Internet access.

Windows Phone 7 as a Game Platform?

There was a time not too many years ago when just having a PC was enough to do your work: programming, software engineering, computer-aided design (CAD), word processing, accounting. Even in the 1980s, it was rare for every employee to have a PC at his or her desk, and even more rare for families to have a PC in their homes. A lot of kids might have had a Nintendo Entertainment System (NES), a Sega Master System (SMS), or the older Atari 2600, all of which used cartridge-based games. A step up from these video game systems were the true PCs of the time, such as the Apple II, Commodore 64, Amiga, Atari 400/800, and Atari ST. No computer enthusiast at the time used an IBM PC at home! MS-DOS was a terrible operating system compared to the other, more user-friendly ones, so if you wanted to do programming, you naturally gravitated to the consumer PCs rather than the business-oriented IBM PC. The Apple Macintosh, which debuted in the 1980s, was quite expensive at the time, so the ordinary kid preferred an Apple II; the Mac has since been completely redesigned several times on its way to the modern OS X.

Well, today the world sure is a different place. Even setting aside how powerful computers are today, just look at all the hand-held systems; they’re everywhere! The Nintendo DS family and the Sony PlayStation Portable (PSP) family are the two leading competitors among hand-held video game systems, and they can do almost anything their big brothers (the Nintendo Wii and Sony PS3) can do, including online play. You can’t walk through a store or a mall without seeing kids carrying some sort of mobile video game system, not to mention phones. And it’s not just kids; adults have their toys too, such as the Apple iPhone, iPod, and iPad, for which some really great games are available! One of my favorites is Plants vs. Zombies by PopCap Games; you can also get it for Xbox 360, Mac, Windows, and Nintendo DS. And you know what? Popular games are starting to come out for Windows Phone 7, because it’s fairly easy to port an Xbox 360 game to Windows Phone 7.

So what is Windows Phone 7 all about? Obviously, since you’re reading this book, you are interested in programming games for the device. But what is development for this platform really like? We have to ask these questions because developing a game that you want to be taken seriously requires a pretty big investment of time, if not money. Most likely, anyone looking at Windows Phone 7 for game development is already experienced with XNA Game Studio. If you have never used this development tool, the next hour will be helpful, because we’ll be creating projects and working with Visual C# quite a bit. I’ll assume that you might not have any experience with Visual Studio, but I do not want to annoy experienced developers, so bear with me a bit while we cover these basics!

History of the Platform

Windows Phone 7 follows a long history of mobile devices from Microsoft, dating clear back to the Pocket PC in 2000. The Pocket PC competed directly with the market leader of the time, Palm. The Palm Pilot was arguably the progenitor of all hand-sized mobile computers today, including cellphones.

Interestingly enough, I would not consider Apple’s iPhone an evolutionary leap beyond the Palm Pilot (ignoring the many devices that have entered the market in the intervening decade). The iPhone does not follow the “mobile computer” lineage dating back to the Palm Pilot and Pocket PC, because it was derived from Apple’s wildly successful iPod. The iPod should have been invented by Sony, the company responsible for the “Walkman” generation of portable music players. Everyone in the 1980s and early 1990s owned a “Walkman,” regardless of the brand, in the same way that everyone has played with a “Frisbee,” despite both being brand names with competing companies making similar products. Thanks to targeted advertising, we Americans come to associate whole industries with a single product name, merely out of habit.

At any rate, you might have heard the term “podcast.” The term is rather generalized today to mean audio streamed or recorded in digital form for playback on a digital media player, but the concept was popularized by Apple with the iPod and iTunes (including iTunes University), which now handle video files as well as audio. While everyone was caught up in the Napster lawsuits, Apple was busy developing iTunes and began selling music in a revolutionary new way: per track instead of per album. Have you ever heard a catchy new song on the radio and wanted to buy it for your iPod, Microsoft Zune, Creative Zen, or similar media player? In the past, you would buy the whole CD and then rip the tracks to MP3 with software such as Windows Media Player or Winamp. The point is debatable, but I would argue that Apple iTunes proved that digital music sales can be a commercial success, highly profitable both for the recording artists and for the service provider. Amazon is probably the second case that proves this is now a commercially successful way to sell music.

The point is, iPod was so successful that it evolved into the iPhone and iPad, and competing companies have been trying to keep up with Apple in both of these markets now for years! The iPod and its relatives are insanely great, which is why everyone wants one. More than a fashion statement, Apple understood what the consumer wanted and made it for them. What did customers want? Not a do-everything badly device, but a do-the-most-important-thing great device. In contrast, many companies hire “experts” to conduct consumer studies, and then spend millions trying to convince customers that they really want and need that product. This might be one good way to break into a relatively unknown market or to adjust the feature set of a product according to consumer interest. But the situation Apple finds itself in today is enviable, and with that comes emulation.

The previous iteration of the platform was Windows Mobile 6.5, and over a dozen hardware manufacturers and networks supported it, from Acer to HP to Samsung. Prior to that, Windows Mobile 5 advanced the platform with support for a GPU (graphics processing unit) for 3D rendering.

The current Windows Phone 7 operating system traces its roots directly back to the original Pocket PC operating system released in 2000. Pocket PCs came with a stylus, much like the one used on a Nintendo DS. This allowed for precise input coordinates, necessary for apps like a spreadsheet (a portable version of Excel, called Pocket Excel, was available). However, stylus input is tedious in today’s hustle-and-bustle environment, where it is more convenient to use a thumb to do things on the device’s touchscreen. Who wants to fish out a stylus just to tap a silly pop-up button (which Microsoft developers are notoriously fond of) when a thumb or another finger will do the trick?

The online capabilities of the Sega Dreamcast video game console were made possible thanks to Windows CE. If you look at the front of the Dreamcast case, you will find a Windows CE logo.

To get technical, Windows Phone 7 is based on the Windows Mobile operating system, a new name for the classic Windows CE operating system. Windows CE goes back quite a few years. “Pocket PC” was a marketing name for Windows CE 3.1. Developers at the time used Microsoft eMbedded Visual Tools 3.0 (see Figure 1.1) to develop for Windows CE 3.1. This was a modified version of Visual Studio 6 for Windows CE that was actually a remarkable development environment! It was stable, fully featured, and free! This might be considered an early predecessor of the Express Editions now made available free by Microsoft. At the time, there were many Pocket PC models available, but the most notable ones were from Casio, HP, Dell, and Compaq.

FIGURE 1.1 Microsoft eMbedded Visual C++ 3.0.

Microsoft supported game development on the Pocket PC (Windows CE 3.1) by providing a low-level library called the Game API. It was nowhere near as powerful as DirectX for rendering, but neither was it stuck at the slower level of the Windows GDI (graphics device interface). The Game API gave access to the actual bits of video memory, making it possible to write a low-level blitter (a term derived from the “bit-block transfer” form of memory copying). Many developers built sprite renderers and game libraries on the Game API, and a book was published on the subject in 2001: Pocket PC Game Programming: Using the Windows CE Game API, from Prima-Tech. Some copies are still floating around if you’re curious about “the early days” and the predecessor of WP7. At the time, developers had their choice of eMbedded Visual Basic or eMbedded Visual C++, but today we’re developing games for WP7 using XNA and C#. That 2001 book presents a rudimentary game library wrapping WinMain() and the other core Windows code required in C++, with the Game API integrated into a series of classes.

I created one published game using the library from that early book, an indie game called Perfect Match, shown in Figure 1.2. It was sold on mobile sites such as www.Handango.com. Because I lost contact with the artist who did all the renderings, the game could not be updated or ported to any newer systems. By the way, the screen resolution was 240×320 in portrait orientation, and most games were designed to be played that way; you’ll note that many WP7 games instead require the player to tilt the device sideways (landscape orientation). That was not common back in the Pocket PC days, but it makes sense now.

FIGURE 1.2 Perfect Match, a Pocket PC 2000 game.

Another example from the time period is the final sample game in the book, a multiplayer game called Pocket Air Hockey, shown in Figure 1.3. It was a quick game, but even now, looking back on it, I think the chat keypad and networking code were quite good for a book example. I used the Windows Sockets (Winsock) library with threading. To develop the game, I had two Pocket PCs (a Casio Cassiopeia and an HP Jornada), each equipped with a Hawking CF LAN card plugged into the top expansion port, with blue CAT5 network cables going into each one. Can you imagine that? (There were also 802.11b Wi-Fi cards available for the CompactFlash adapter port.) I just don’t think anyone was really into developing multiplayer games for this platform at the time.

FIGURE 1.3 Pocket Air Hockey, a networked multiplayer game.

There was no single processor standard for the original Pocket PC 2000 devices, but three came to be used: Hitachi SH-3, NEC VR MIPS, and StrongARM. The ARM processor became the single standard with Pocket PC 2002. The reason there have been so many releases in recent years, compared to the past, without significant updates to the core operating system (Windows CE) is the need to keep up with the aggressive cellphone market’s demand for change, even when change is not entirely necessary. When a company releases some trivial new feature in one of its phones, all competitors must come up with a compelling reason for customers to choose their phone instead. The carrier networks (primarily T-Mobile, AT&T, and Verizon) also push hard for new devices and plans to retain their customers and attract new ones. So, Windows Mobile 6 might not even be recognizable between 2007 and 2009, yet the changes are primarily cosmetic, along with updates to user input and application support. This market has been chaotic, to say the least! Table 1.1 is a historical list of releases for the platform.

History of Windows Mobile
TABLE 1.1 History of Windows Mobile

Windows Phone 7 was planned for release in 2009 with a core based on Windows CE 5.0—a core dating back to 2005. The core was just too old, so development failed. At that point, a stopgap product was released (Windows Phone 6.5) while Windows Phone 7 went back to the drawing board. The Windows Mobile team ended up rebuilding the new platform from scratch around the new Windows CE 6.0 core for release the following year (2010).

Hardware Specifications

What we have today in WP7, a completely new operating system built from the ground up around the Windows CE 6.0 core, is a modern touch-enabled architecture with no resemblance to the Windows desktop operating system. It took many years, but Microsoft finally perfected the platform! No longer must mobile users tap with a stylus. A sample phone built by Samsung and connected to the AT&T network is shown in Figure 1.4. WP7 competes directly with two other smartphone platforms today: Apple iPhone and Google Android. The iPhone is a closed architecture, meaning only Apple builds iPhone devices. WP7 and Android, on the other hand, are not so much mobile devices as they are operating systems. That is why there are many devices available in the Android and WP7 formats, but only one iPhone. From a developer’s point of view, this openness makes life more difficult. Android, for instance, may be too open, with many different screen sizes and hardware specs. Developing a game for iPhone? That’s a piece of cake as far as specifications go, because there is only one (although, admittedly, adjustments are required for the iPad due to its larger screen resolution).

Table 1.2 shows the hardware specifications common to most of the models available at the time of this writing. The most notable thing about the specifications is that they now follow a basic standard across all manufacturers. Apple has proven that extreme openness and flexibility are not always desirable traits in mobile hardware. One of the difficulties facing Android developers today is the need to support many different hardware devices in a single code base. Windows Mobile developers had to deal with a similar problem in version 6.x and earlier, but as you can see, WP7 has a much simpler set of hardware specifications. This is a good thing for developers: it greatly simplifies the code, allowing them to focus on game design and gameplay rather than hardware idiosyncrasies among the different makes and models.

A Windows Phone 7 device built by Samsung.
FIGURE 1.4 A Windows Phone 7 device built by Samsung.
Windows Phone 7 Hardware Specifications
TABLE 1.2 Windows Phone 7 Hardware Specifications

WP7 is an awesome integration of many technologies that have evolved over the years, beginning with the early Windows CE and Pocket PC devices, to the modern, powerful smartphone of today with advanced 3D rendering capabilities that truly bring cutting-edge gaming into the palm of your hand.

Windows Phone Performance Optimization, Fast! to Faster!

Optimizing your game’s performance

Games belong to a class of real-time software: they are expected not only to produce the correct result, but also to complete it within a fixed time window. In general, game developers shoot for displaying a minimum of 30 frames per second to produce smooth, glitch-free animation, and most prefer 60 frames per second. At 60 frames per second, all of the game’s calculations (reading player input, running enemy AI, moving objects, detecting and handling collisions, and drawing the frame) must be completed within 16.7 milliseconds! When you consider that most modern video games have hundreds, or even thousands, of objects to update and draw within that time period, it is no wonder that programmers feel they have to optimize every line of code.
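
To put the frame budget in concrete terms, the following minimal sketch (my own illustration, not code from this recipe; the FrameBudget class name is mine) computes the time available per frame at a target frame rate:

```csharp
using System;

static class FrameBudget
{
    // Time available per frame, in milliseconds, at a target frame rate.
    public static double MillisecondsPerFrame(int framesPerSecond)
    {
        return 1000.0 / framesPerSecond;
    }

    static void Main()
    {
        Console.WriteLine(MillisecondsPerFrame(30)); // roughly 33.3 ms per frame
        Console.WriteLine(MillisecondsPerFrame(60)); // roughly 16.7 ms per frame
    }
}
```

Every millisecond spent on AI or collision detection comes out of this fixed budget, which is why the rest of this recipe focuses on measuring where the time actually goes.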

However, many XNA programmers are not familiar with the tools and methods for determining when, where, how, or even if, they should optimize their code. The point of this recipe is to help you answer these questions.

Getting ready

The following sections will help you optimize your game’s performance.

Design versus implementation

A common response from those who question, or even outright disagree with, the idea that optimizing code early is a bad idea is to point out that it is far easier to change software early in its lifecycle than after it has been written. That is, of course, very true. That is why it is important to understand the difference between design optimization and implementation optimization.

While designing a game (or any software), you must take into account the size and complexity of your game, and select the correct data structures and algorithms that can support it. A simple 2D shooter or a platformer with no more than a hundred objects interacting at any given time can probably get away with a brute force approach for handling movements and collisions. Maintaining a simple list or an array of objects and iterating through it each frame will most likely work fine, and will be very simple to implement and debug.

However, a more complex game world, with perhaps thousands of active objects, will need an efficient method of partitioning the game space to minimize the number of object interaction tests in each frame. Similarly, games requiring detailed enemy AI will need to rely on algorithms that can produce “intelligent” actions as quickly as possible.

There are many resources available that discuss game programming algorithms. Some of them are as follows:

  • The use of quadtrees and octrees for partitioning the game world to minimize collision detection tests
  • The minimax algorithm with alpha-beta pruning for efficiently finding the “best” move in two-player strategy games (see http://en.wikipedia.org/wiki/Alpha-beta_pruning for more information)
  • The A* algorithm for efficient pathfinding (for more detail, see http://en.wikipedia.org/wiki/A*_search_algorithm)
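
As a small illustration of the space-partitioning idea behind structures like quadtrees, the sketch below is my own simplified example, not code from this recipe: a uniform grid buckets objects by cell so that only objects sharing a cell become collision candidates. (A real implementation would also check neighboring cells for objects near a cell border; that detail is omitted here.)

```csharp
using System;
using System.Collections.Generic;

// Minimal uniform-grid broad phase: a simpler cousin of the quadtree.
// Objects are bucketed by cell; only objects in the same cell become
// candidate pairs for the narrow-phase collision test.
class UniformGrid
{
    private readonly float cellSize;
    private readonly Dictionary<long, List<int>> cells =
        new Dictionary<long, List<int>>();

    public UniformGrid(float cellSize) { this.cellSize = cellSize; }

    private long Key(float x, float y)
    {
        // Pack the 2D cell coordinates into a single dictionary key.
        long cx = (long)Math.Floor(x / cellSize);
        long cy = (long)Math.Floor(y / cellSize);
        return (cx << 32) ^ (cy & 0xFFFFFFFFL);
    }

    public void Insert(int id, float x, float y)
    {
        long key = Key(x, y);
        List<int> bucket;
        if (!cells.TryGetValue(key, out bucket))
        {
            bucket = new List<int>();
            cells[key] = bucket;
        }
        bucket.Add(id);
    }

    // Enumerate candidate pairs, each unordered pair exactly once.
    public IEnumerable<Tuple<int, int>> CandidatePairs()
    {
        foreach (List<int> bucket in cells.Values)
            for (int i = 0; i < bucket.Count; i++)
                for (int j = i + 1; j < bucket.Count; j++)
                    yield return Tuple.Create(bucket[i], bucket[j]);
    }
}
```

With sprites spread across many cells, the number of candidate pairs is far smaller than the full pairwise sweep a brute force approach would perform.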

The selection of appropriate data structures and algorithms during the design phase has a far greater impact on the eventual performance of your game than any implementation optimization you will make, as your algorithms determine the maximum number of operations your game will have to perform during each frame.

In order to demonstrate this point, imagine that for your first game you write a simple 2D shooter that relies on a brute force approach to collision detection. In every frame, you simply test every active object against every other active object to see if they intersect. As you decide to have only a limited number of enemies active at a time, it works well and easily runs at 60 frames per second.

With that experience under your belt, you now want to write a second game that is far more ambitious. This time you decide to write a Zelda-like adventure game with a large scrolling game board and hundreds of objects moving around it simultaneously. (The Legend of Zelda was an NES game from Nintendo; you can find out more about it at http://en.wikipedia.org/wiki/The_Legend_of_Zelda.) Using your existing code as a starting point, you get well into the game’s implementation before you discover that the brute force approach that worked very well in your simple game does not work so well in this new game. In fact, you may be measuring screen draws in seconds per frame instead of frames per second!

The reason is that comparing every object against every other object is what is known as an O(n²) algorithm (for more information on estimating algorithmic time complexity, see the classic book Introduction to Algorithms, Second Edition, http://www.amazon.com/Introduction-Algorithms-Thomas-H-Cormen/dp/0262033844). That is, the number of operations that have to be performed is related to the square of the number of objects on which you are operating. If you have 10 objects in your game, you only have to perform a hundred tests to see whether there are any collisions. If you have a hundred objects, you have to perform ten thousand tests, which may still be possible on a modern PC if each test can be done quickly enough. However, if you have five hundred (just five times as many as the last example), you will have to perform 250,000 collision tests. Even if each test took only 67 nanoseconds, you would still be using the entire 16.7-millisecond frame time (at 60 frames per second) just for collision detection. The point is that it does not matter how efficiently you implement that algorithm in code; its performance will still degrade quadratically with the number of objects in your game, and it will therefore be the single greatest limiting factor to the size of your game.
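
For a concrete sense of the growth, note that the n² figures above count roughly twice the number of distinct pairs; a loop that tests each unordered pair exactly once (as the HandleCollisions() code later in this recipe does) performs n(n-1)/2 tests, half as many but still quadratic. A quick sketch (my own illustration, not book code):

```csharp
using System;

static class CollisionCost
{
    // Tests performed when each unordered pair is tested exactly once.
    public static long PairTests(long n)
    {
        return n * (n - 1) / 2;
    }

    static void Main()
    {
        Console.WriteLine(PairTests(10));   // 45
        Console.WriteLine(PairTests(100));  // 4950
        Console.WriteLine(PairTests(500));  // 124750
    }
}
```

Doubling the object count roughly quadruples the work, so no constant-factor tuning of the individual test can rescue the brute force approach at scale.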

Game runs slow?

OK, so your game is playable, with most of the features you want implemented. However, when you test the application, you find that the animation runs like a robot: the character should run, but it crawls. What is wrong? You might blame compiler features, such as the foreach keyword, or ask whether you need to pass matrices by reference instead of by value.

You have two choices: stop there and take a step back, or dive in and start going into each method, trying to work around the problem on a case-by-case basis. Maybe you will even succeed and get the game back into the runnable state it was in hours earlier. Maybe you are even lucky enough not to have introduced yet more bugs in the process. In all likelihood, however, you have not fixed the problem, and now you have code that runs no better than when you started but is harder to understand, harder to debug, and full of kludges added to get around problems you introduced while trying to fix the wrong problem. Your time will be much better spent finding out where your problems are before you try to fix them.

Measuring the running time

A prototype is just a simplified version of software (in this case, your game) that focuses on one particular aspect of it. Prototypes are often used as proofs of concept to show that the software will be able to work as expected. As prototypes don’t have to deal with all of the details that the final software will, they can be written quickly so that, if necessary, different approaches can be evaluated.

Prototypes are frequently used to evaluate user interfaces, so that customers can provide early feedback. This can be useful for game programming too: if you can implement a working display and control scheme, you may be able to find out what works and what doesn’t before you get too far along in the actual implementation of the game. However, the use of prototypes that we are concerned with here is to determine whether an algorithm is fast enough for the game we want to write. To do that, we will want to benchmark it. Benchmarking is just the process of timing how long an algorithm takes to run.

How to do it…

Fortunately, the .NET Framework makes benchmarking very easy by providing the System.Diagnostics.Stopwatch class. The Stopwatch class provides a Start and a Stop method. It keeps track of the total number of clock ticks that occur between calls to Start and Stop. Even better, like a real stopwatch, it keeps a running count of ticks across successive calls to Start and Stop. You can find out how much time has passed by querying its ElapsedTicks or ElapsedMilliseconds properties, and a Reset() method sets the Stopwatch back to zero.
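
A minimal standalone sketch of this accumulate-and-reset behavior (my own example; Thread.Sleep stands in for the work you would actually measure):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class StopwatchDemo
{
    static void Main()
    {
        Stopwatch timer = new Stopwatch();

        // Elapsed time accumulates across successive Start/Stop calls.
        for (int i = 0; i < 3; i++)
        {
            timer.Start();
            Thread.Sleep(10);   // stand-in for the code being timed
            timer.Stop();
        }
        Console.WriteLine(timer.ElapsedMilliseconds); // roughly 30 ms in total

        // Reset sets the stopwatch back to zero.
        timer.Reset();
        Console.WriteLine(timer.ElapsedTicks); // 0
    }
}
```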

Now, follow the steps to take advantage of the Stopwatch class:

  1. As a showcase, the following code gives you a general picture on how to use the Stopwatch class for time measuring:
    [code]
    public abstract class Sprite
    {
        public Vector2 Position { get; set; }
        public Color Color { get; set; }

        // Sprite's collision rectangle in screen coordinates.
        public BoundingRectangle BoundingBox { get; }

        public Sprite(
            string imageName,
            BoundingRectangle boundingBox);

        public virtual void Initialize();
        public virtual void LoadGraphicsContent(
            ContentManager content);
        public virtual void Update(GameTime time);
        public virtual void Draw(SpriteBatch spriteBatch);

        // Tests for collision with another Sprite. If a
        // collision occurs, it calls the Collide method for
        // both Sprites. Returns true if the sprites collide.
        public bool TestCollision(Sprite item);

        // Called when the TestCollision method detects a
        // collision with another Sprite.
        protected virtual void Collide(
            BoundingRectangle overlap,
            Sprite item);
    }
    [/code]
  2. As Sprite is abstract, it is intended to be used as a parent for other sprite classes that implement its behavior, so we will create our own TestSprite class. TestSprite will generate a random starting position, directional movement vector, and speed (in pixels per second), as shown here:
    [code]
    public override void Initialize()
    {
        // Set a random starting position.
        Position = new Vector2(
            random.Next(screenWidth),
            random.Next(screenHeight));

        // Create a random movement vector.
        direction.X = (float)random.NextDouble() * 2 - 1;
        direction.Y = (float)random.NextDouble() * 2 - 1;
        direction.Normalize();

        // Determine a random speed in pixels per second.
        speed = (float)random.NextDouble() * 300 + 150;
    }
    [/code]
  3. In each frame, the following code will update its position based on its movement direction, speed, and the amount of time that has elapsed. It will also test to see if it has hit the edge of the screen, and deflect away from it:
    [code]
    public override void Update(GameTime time)
    {
        // Reset color back to white.
        Color = Microsoft.Xna.Framework.Graphics.Color.White;

        // Calculate the movement vector.
        Vector2 move =
            (float)time.ElapsedGameTime.TotalSeconds *
            speed * direction;

        // Determine the new position.
        UpdatePosition(move);
    }

    private void UpdatePosition(Vector2 move)
    {
        Position += move;
        if ((BoundingBox.Left < 0) ||
            (BoundingBox.Right > screenWidth))
        {
            direction.X = -direction.X;
            Position -= new Vector2(move.X, 0);
        }
        if ((BoundingBox.Top < 0) ||
            (BoundingBox.Bottom > screenHeight))
        {
            direction.Y = -direction.Y;
            Position -= new Vector2(0, move.Y);
        }
    }
    [/code]
  4. We will talk more about collision testing shortly. For now, let’s see what it takes to time just moving our TestSprite around the screen. Inside our game, we will create a TestSprite object and call its Initialize() and LoadGraphicsContent() methods in the appropriate places, and we will create a SpriteBatch for our game and pass it to Draw(). All that remains is to use a Stopwatch to time the Update() method. To do this, we will create a couple of helper methods that start and stop the Stopwatch and print the amount of time each update takes:
    [code]
    private Stopwatch updateTimer = new Stopwatch();
    private int updates = 0;
    private int framesPerSecond = 60;

    private void StartTimer()
    {
        updateTimer.Start();
    }

    private void StopTimer()
    {
        updateTimer.Stop();
        updates++;

        // Show the results every five seconds.
        if (updates == 5 * framesPerSecond)
        {
            Debug.WriteLine(
                updates + " updates took " +
                updateTimer.ElapsedTicks + " ticks (" +
                updateTimer.ElapsedMilliseconds +
                " milliseconds).");
            float msPerUpdate =
                (float)updateTimer.ElapsedMilliseconds / updates;
            Debug.WriteLine(
                "Each update took " +
                msPerUpdate + " milliseconds.");

            // Reset the stopwatch.
            updates = 0;
            updateTimer.Reset();
        }
    }
    [/code]
  5. By putting calls to StartTimer and StopTimer around the calls to our sprite’s Update() method, we will get a report of the average time each call takes:
    [code]
    300 updates took 34931 ticks (9 milliseconds).
    Each update took 0.03 milliseconds.
    300 updates took 24445 ticks (6 milliseconds).
    Each update took 0.02 milliseconds.
    300 updates took 23541 ticks (6 milliseconds).
    Each update took 0.02 milliseconds.
    300 updates took 23583 ticks (6 milliseconds).
    Each update took 0.02 milliseconds.
    300 updates took 23963 ticks (6 milliseconds).
    Each update took 0.02 milliseconds.
    [/code]

How it works…

In step 1, the Initialize(), LoadGraphicsContent(), Update(), and Draw() methods are the standard methods used in Windows Phone 7 XNA game programming. Additionally, the class provides properties for getting and setting the position and color. For collision detection, TestCollision() checks whether the two sprites’ BoundingBox values intersect and, if they do, calls the Collide() method on both sprites.

In step 3, an actual game might want to determine the exact point of intersection so that the sprite could deflect away from that point more realistically. If you need that level of realism, you would probably want to implement your strategy here so that you could time it. However, all we are trying to prototype here is a basic update time, so this version is fine for our needs.

Note that the Update() method does not test for collisions. We don’t want individual sprite testing for collisions because to do so, our Sprite class would have to know about other game objects and we would be severely limiting our design options for collision testing. Any change to our collision-testing algorithm could, and likely would, affect our Sprite class. We want to avoid anything that limits future design changes, so we will give our Sprite class the ability to test for collisions, but require another part of our code to determine what objects should be tested.

In step 5, each call took on average 20 microseconds (on my development laptop; your results will vary). However, notice that the very first set of updates took almost one and a half times as long to run as the others. That is because the first time these methods are called, the JIT compiler compiles the code, and our Stopwatch times that as well. It is also possible, since this is a fairly small amount of code being called repeatedly, that some or all of it fits in the cache, which increases the speed of later calls.

These results show some of the problems with benchmarking code. Another problem is that using the Stopwatch itself adds some time. Thus, benchmark times for prototype code can be used as a general guide, but cannot be relied upon for exact values. In fact, the exact time it takes a function to run is very hard to determine. Although intended only to describe quantum phenomena, a variation of the Heisenberg Uncertainty Principle is at play here: the act of measuring something affects the thing being measured.
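
One simple mitigation for the JIT effect when benchmarking prototype code is to run the code once before starting the timer, so compilation is not included in the measurement. A sketch (my own, with a trivial stand-in workload):

```csharp
using System;
using System.Diagnostics;

static class WarmupBenchmark
{
    // Trivial stand-in for the method being benchmarked.
    public static long Work()
    {
        long sum = 0;
        for (int i = 0; i < 100000; i++) sum += i;
        return sum;
    }

    static void Main()
    {
        // Warm-up call: the JIT compiles Work() here, outside the timing.
        Work();

        Stopwatch timer = Stopwatch.StartNew();
        for (int i = 0; i < 100; i++) Work();
        timer.Stop();

        Console.WriteLine("Average ticks per call: " +
            timer.ElapsedTicks / 100);
    }
}
```

The Stopwatch calls themselves still add a little overhead, so treat these numbers as a general guide rather than exact values.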

There’s more…

Now let’s expand our prototype to help us determine whether we can get away with a brute force approach to collision detection.

First, let’s look at the collision handling code that I have already placed in the Collide method. Remember that this gets called, for both sprites, whenever the TestCollision() method determines a collision between two sprites. All it does is set the Sprite’s color to red:

[code]
protected override void Collide(BoundingRectangle overlap, Sprite item)
{
    // Turn the sprite red to indicate collision.
    Color = Color.Red;
}
[/code]

Let’s give this a test by replacing our single TestSprite with an array of TestSprites. Everywhere we referenced TestSprite in the original code, we now have to loop through the array to handle all of our TestSprites. To make this a little easier to manage, we will refactor the original sprite.Update() call in the Update() method into a new UpdateSprites() method that updates every sprite. We will add a new HandleCollisions() method to our game to test for collisions. Finally, we will change the Update() method so that StartTimer and StopTimer bracket only the call to HandleCollisions(). The relevant sections look like the following code:
[code]
private TestSprite[] sprites = new TestSprite[10];

protected override void Update(GameTime gameTime)
{
    if (Keyboard.GetState().IsKeyDown(Keys.Escape))
    {
        this.Exit();
    }

    UpdateSprites(gameTime);

    StartTimer();
    HandleCollisions();
    StopTimer();

    base.Update(gameTime);
}

private void UpdateSprites(GameTime gameTime)
{
    foreach (Sprite sprite in sprites)
    {
        sprite.Update(gameTime);
    }
}

private void HandleCollisions()
{
    // This is the brute force approach.
    for (int i = 0; i < sprites.Length; i++)
    {
        for (int j = i + 1; j < sprites.Length; j++)
        {
            sprites[i].TestCollision(sprites[j]);
        }
    }
}
[/code]

Looking at that, you may wonder why I am not using foreach in the HandleCollisions() method. It is simply because, with foreach, we have no way of knowing which sprites we have already tested. This algorithm tests every sprite against every other sprite exactly once.

What are the results? On my machine, with 10 sprites, I get the following:
[code]
300 updates took 48827 ticks (13 milliseconds).
Each update took 0.04333333 milliseconds.
300 updates took 42466 ticks (11 milliseconds).
Each update took 0.03666667 milliseconds.
300 updates took 42371 ticks (11 milliseconds).
Each update took 0.03666667 milliseconds.
300 updates took 43086 ticks (12 milliseconds).
Each update took 0.04 milliseconds.
300 updates took 43449 ticks (12 milliseconds).
Each update took 0.04 milliseconds.
[/code]

Wow! Handling collisions for 10 sprites takes only twice as long as it did just to move one sprite. How could that be? It is partly due to the overhead of using the Stopwatch class and making method calls, and partly due to the fact that we are measuring very fast operations. Obviously, the closer you get to the resolution of the underlying timer, the more error you get in trying to time things.

Before we go on, notice also that the impact of the JIT compiler on our first set of updates is significantly smaller here. This shows how effective JIT compilation is and why we don’t need to worry about it affecting the performance of our game. We may take a performance hit the first time a section of code runs, but it is minuscule relative to our overall performance.

Now let’s see what happens when we increase the number of sprites to 100:

[code]
300 updates took 2079460 ticks (580 milliseconds).
Each update took 1.933333 milliseconds.
300 updates took 2156954 ticks (602 milliseconds).
Each update took 2.006667 milliseconds.
300 updates took 2138909 ticks (597 milliseconds).
Each update took 1.99 milliseconds.
300 updates took 2150696 ticks (600 milliseconds).
Each update took 2 milliseconds.
300 updates took 2169919 ticks (606 milliseconds).
Each update took 2.02 milliseconds.
[/code]

Whether you should be impressed or dismayed depends on how you want to use this collision-handling algorithm. On one hand, averaging 2 milliseconds per frame is still a minuscule part of our 16.7-millisecond frame budget. If you are not planning to have more than a hundred sprites or so, this algorithm will suit your needs perfectly. However, looking at the relative time per sprite gives a completely different perspective: it takes us 50 times as long to handle 10 times the number of sprites.
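The growth is easy to predict from the shape of the nested loop: n sprites require n(n-1)/2 collision tests. A quick sketch (in Python, purely for illustration; the book's code is C#) makes the quadratic scaling concrete:

```python
def pair_tests(n):
    """Number of tests the brute-force nested loop performs for n sprites."""
    return n * (n - 1) // 2

for n in (10, 100, 500):
    print(n, "sprites:", pair_tests(n), "tests")
# 10 sprites: 45 tests, 100 sprites: 4950 tests, 500 sprites: 124750 tests
```

Going from 10 to 100 sprites multiplies the number of tests by 110, which is why the measured times grow so much faster than the sprite count does.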

How about when the number is increased to 500? I urge you to run this code, so that you can see the results for yourself!

[code]
300 updates took 28266113 ticks (7896 milliseconds).
Each update took 26.32 milliseconds.
300 updates took 28179606 ticks (7872 milliseconds).
Each update took 26.24 milliseconds.
300 updates took 28291296 ticks (7903 milliseconds).
Each update took 26.34333 milliseconds.
300 updates took 28199114 ticks (7877 milliseconds).
Each update took 26.25667 milliseconds.
300 updates took 28182787 ticks (7873 milliseconds).
Each update took 26.24333 milliseconds.
[/code]

This time there is no hiding the dismay. We are clearly getting far less than our desired 60 frames per second! In fact, the HandleCollisions() call alone is taking almost twice our allotted 16.7 milliseconds per frame. Multiplying the number of objects by 5 increased our time by a factor of 13! The times are not increasing exactly quadratically, due to overhead, but the rate of increase is clear.

Does this mean we should never consider this algorithm? Hopefully, at this point the answer is obvious. Many games can easily get away with having on the order of a hundred or so objects active at a time, which we have clearly shown can be handled easily. The fact that the algorithm is trivial to implement and maintain makes it a no-brainer for a large number of games.

On the other hand, if you know you will need to have hundreds of objects, you will need another solution. You have two options: optimize this algorithm, or find a new one. Anyone who is experienced with code optimization will see several obvious ways to make both the algorithm and its implementation more efficient.

For starters, most games don’t actually need to test every object against every other object. Taking the Space Invasion game as an example, I don’t need to test invaders for collision with other invaders. In fact, it is almost crazy to do so.
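That observation can be encoded as a simple filter on which pairs are worth testing. The sketch below (Python for illustration; the group names and the `can_collide` helper are hypothetical, not part of the book's Sprite class) shows the idea:

```python
# Hypothetical collision groups for a Space Invasion-style game.
PAIRS_TO_TEST = {("player", "invader"), ("player", "bomb"), ("shot", "invader")}

def can_collide(a_group, b_group):
    """True only for pairs of groups the game actually cares about."""
    return (a_group, b_group) in PAIRS_TO_TEST or (b_group, a_group) in PAIRS_TO_TEST

assert can_collide("shot", "invader")
assert can_collide("invader", "shot")          # order does not matter
assert not can_collide("invader", "invader")   # never test invaders against each other
```

A HandleCollisions() loop that consults such a filter before calling TestCollision() skips the invader-versus-invader pairs entirely, which for a game like Space Invasion eliminates the bulk of the work.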

Another obvious optimization is that the Sprite class’s BoundingBox property adds the sprite’s current screen position to its internal BoundingRectangle every time TestCollision() is called, despite the fact that the position changes only once or twice per frame. TestCollision(), on the other hand, is called once for every other sprite in the game.

In addition, the Sprite’s TestCollision code is computing the actual intersection rectangle even though we are not using it here. We could easily save some time by not computing it. However, we give ourselves more flexibility by going ahead and doing it. Remember that this is supposed to be a generic Sprite class that can be used for many games.

These suggestions don’t even get into implementation optimizations, such as always passing our BoundingBoxes by reference instead of by value, and providing direct access to member variables instead of accessing them through properties. These are exactly the types of optimizations suggested by many efficiency proponents in the XNA forums. However, they also make the code less readable, harder to debug, and harder to maintain.

Because Space Invasion never has more than around 60 objects on the screen at a time, the unoptimized brute force approach works just fine. The same is undoubtedly true for many other games as well. However, what if your game does need more than 100 collidable objects? Should you not make those optimizations so you can handle them?

The answer is… maybe. By making some of these optimizations, we can get this same brute force algorithm to handle 500 objects at a far more reasonable 6.4 milliseconds per frame.

[code]
300 updates took 6682522 ticks (1866 milliseconds).
Each update took 6.22 milliseconds.
300 updates took 7038462 ticks (1966 milliseconds).
Each update took 6.553333 milliseconds.
300 updates took 7023610 ticks (1962 milliseconds).
Each update took 6.54 milliseconds.
300 updates took 6718281 ticks (1876 milliseconds).
Each update took 6.253334 milliseconds.
300 updates took 7136208 ticks (1993 milliseconds).
Each update took 6.643333 milliseconds.
[/code]

That is an impressive improvement and shows how significantly performance can be improved through these techniques. However, the disadvantages mentioned earlier (less maintainable and less flexible code) should not be ignored. In addition, even if you do make these sorts of implementation optimizations, keep in mind that this algorithm still degrades quadratically as you add more objects. You may be able to move up from 100 to 500 objects, but it won’t get you to 1,000. At some point, you need to recognize that you need a different algorithm to handle more objects efficiently, such as one that partitions your game space, like a quad tree.
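To give a taste of what space partitioning buys you, here is a minimal uniform-grid broad phase (Python, illustrative only; a real quad tree adapts its cells to the data, but the principle is the same): only sprites that share a grid cell become candidate pairs, so far-apart sprites are never tested at all.

```python
from collections import defaultdict

def candidate_pairs(boxes, cell=64):
    """boxes: list of (x, y, w, h). Returns index pairs that share a grid cell."""
    grid = defaultdict(list)
    for i, (x, y, w, h) in enumerate(boxes):
        # Register the box in every cell its bounds overlap.
        for cx in range(int(x) // cell, int(x + w) // cell + 1):
            for cy in range(int(y) // cell, int(y + h) // cell + 1):
                grid[(cx, cy)].append(i)
    pairs = set()
    for members in grid.values():
        for a in range(len(members)):
            for b in range(a + 1, len(members)):
                pairs.add((members[a], members[b]))
    return pairs  # run the narrow-phase TestCollision only on these

boxes = [(0, 0, 10, 10), (5, 5, 10, 10), (500, 500, 10, 10)]
assert candidate_pairs(boxes) == {(0, 1)}  # the far-away third box is never tested
```

With sprites spread across the screen, the candidate set is a small fraction of the n(n-1)/2 brute-force pairs, and the expensive narrow-phase test runs only on those.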

Finally, remember that 6.4 milliseconds is still 40 percent of your entire frame time. If you are maintaining on the order of a thousand or more objects at a time, other parts of your code are almost certainly going to struggle to maintain a reasonable frame rate as well. Is optimizing your collision detection the best use of your time? How do you know in advance which parts to optimize? Optimizing everything as you go will surely take you longer to write, not to mention make your code more difficult to debug and maintain.

If benchmarking shows your algorithm has problems without implementation optimizations, you are probably better off with a different algorithm.

Using the EQATEC Profiler to profile your game’s running time

Profiling your game's performance is a significant part of the whole game development process. No matter how efficient your algorithms are, or how powerful the hardware is, you still need sufficiently accurate CPU timing data for the functions called under different hardware conditions. Choosing a good profiling tool will help you find the hot spots that consume most of the CPU time, and lead you to the most effective optimizations. For Windows Phone, EQATEC is a good choice, and in this recipe you will learn how to use the EQATEC Profiler to profile your Windows Phone game.

Getting ready

You can download the EQATEC Profiler from the official company website located at the following URL:

[code]
http://www.eqatec.com/Profiler/
[/code]

The following screenshot shows what the website looks like:


After clicking on Download EQATEC Profiler, a new page will let you choose the profiler version; the free version is fine for our needs. After filling out some basic information, the website will send a URL for downloading the profiler to your e-mail address. Once you have installed the downloaded profiler, you are ready to profile your Windows Phone 7 game.

How to do it…

Carry out the following steps:

  1. Run the EQATEC Profiler through the Start menu or through the root directory where the profiler binary application is located. If the profiler runs correctly, you should get the following screen:
  2. The Browse button lets you locate the root directory of your Windows Phone 7 XAP file for profiling. When the directory is set, choose the XAP file to profile from the list box under the App path textbox. (The testing XAP file and source code can be found in the bundle file of this chapter.) After that, click on the Build button to build the profile description file, which records the number of methods in the designated application and some application metadata.
  3. Then, after selecting the application you want to profile, click on the Run button to start profiling. When the Run button is clicked, a prompt will pop up asking which device to use for profiling: a Windows Phone 7 Device or the Windows Phone 7 Emulator. In the example, we chose the emulator, as shown in the following screenshot:
  4. Under the Run tab, if you are sure the Windows Phone 7 application is ready, it is time for profiling. The window should look similar to the following screenshot:
  5. Now, click on the yellow Run app button. The profiler will automatically start a Windows Phone 7 emulator, connect to it, and install the profiled Windows Phone 7 XAP file on the emulator. When this step is done, the profiler will start to track and profile the designated application. If you want to know how much time every method in your application costs, click on the Take snapshot button (the one with a timer symbol) under the information list box; a new snapshot report, which includes the running time of every method, will be generated. Then, choose the report you want to review and click on the yellow View button.
  6. In the snapshot view window, you will see how many milliseconds every method takes. The window will look similar to the following screenshot:

How it works…

The running time of every method is listed in the list box:

  • Initialize() method: 693 ms
  • LoadContent() method: 671 ms
  • Draw() method: 122 ms
  • DrawModel() method: 50 ms
  • Update() method: 43 ms

You can find more details in the Details of Individual methods panel, which tells you what percentage of the caller's time each called method accounts for. In this example, the LoadContent() method consumes 671 ms, which is 97 percent of the Initialize() method's total time.

Reducing the game contents’ loading time

As you know, most of the time, before playing a game, a screen for loading the game content shows up with a running progress bar. Without it, you might think the game is stuck and not responding. If you know the game is loading and can see its progress, you know that all you have to do is wait. Usually, however, no one wants to wait long to play a game; waiting wastes time and frustrates the user. For a better user experience, the following sections will show you how to reduce the loading time of game content.

Making good loading decisions

Often, the first step in reducing loading times is to understand where the current greatest expenses are. Measuring the frequency and timing of content loading is an effective way to evaluate and adjust loading times, as well as to validate that the right content (no more and no less) is being loaded in a given scenario. Consider instrumenting the following:

  • The time required to load an asset; the System.Diagnostics.Stopwatch object can be used for this
  • The frequency with which each asset has been loaded over multiple game levels, across sessions, and so on
  • The frequency with which each asset is freed
  • The average lifetime of each asset
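A sketch of that instrumentation (Python stands in for the C# Stopwatch code; the asset name and loader here are made up):

```python
import time
from collections import Counter

load_time = Counter()   # total seconds spent loading each asset
load_count = Counter()  # how many times each asset has been loaded

def timed_load(name, loader):
    """Wrap any asset load to record its frequency and duration."""
    start = time.perf_counter()
    asset = loader()
    load_time[name] += time.perf_counter() - start
    load_count[name] += 1
    return asset

# Hypothetical usage: the same texture loaded twice across two levels.
timed_load("ship_texture", lambda: b"...fake texture bytes...")
timed_load("ship_texture", lambda: b"...fake texture bytes...")
assert load_count["ship_texture"] == 2
```

Sorting load_time at the end of a session immediately shows which assets dominate the loading screen, and load_count reveals which assets are loaded more often than necessary.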

Using the XNA content pipeline to reduce file size

Compressing game content into XNB files at build time can greatly reduce file sizes.

In the XNA framework, assets are shared between the PC, Xbox, and Windows Phone 7 platforms, and you can reuse textures, models, and so on. If a texture is consistently scaled down for display on a Windows Phone 7 device, consider performing that scaling offline rather than paying the processing penalty, bandwidth, and memory overhead when loading the content.

Developers may also want to exploit other texture formats, such as PNG, where doing so would not mean recompressing already compressed assets. For sparse textures, PNG on Windows Phone will typically demonstrate superior compression to DXT-compressed content brought through the XNA content pipeline. To use other texture formats, the source files must be copied to the output directory rather than compiled in the content pipeline.

Note that, while DXT-compressed assets can be used natively by Windows Phone 7 GPUs, many formats including PNG need to be expanded at runtime to a raw format of 32 bits per pixel. This expansion can lead to increased memory overhead compared to DXT compression.

In order to balance the runtime memory footprint of DXT with the loading time footprint of more aggressive compression formats, developers may choose to apply custom compression and runtime decompression to the DXT content (as built by the XNA pipeline into .xnb files), which can lead to a significant reduction in loading times. Developers should balance the loading time considerations with CPU requirements to decode their custom-encoded content, as well as with memory requirements to handle and manipulate the decompressed data. The offline custom compression and runtime title-managed decompression of the DXT content can offer a good balance of reduced size (and thus, reduced loading time) without large runtime memory costs.

Developers can also pack multiple images into a single texture, as demonstrated by the spritesheet content processor. We have already discussed in Chapter 4, Heads Up Display (HUD)—Your Phone Game User Interface, that spritesheets avoid the DXT power-of-two restrictions imposed by the XNA content processor and optimize file loading (replacing many small files with one larger one).

In the realm of sound, if native audio assets from a console title are sampled at 48 kHz, consider downsampling them to 44.1 kHz (prior to applying the XNA pipeline's own compression) for use on the phone. This realizes an immediate savings of approximately 8 percent on storage and reading bandwidth, as well as mild CPU savings from running at the native sampling rate of the Windows Phone device (44.1 kHz).

Beyond compression, decreasing loading times can focus on organizing data so that you load only the content needed to reach an initial interactive state, rather than preparing all possible data up front. This is particularly important in avoiding the watchdog timer; a title that loads data for too long before drawing to the screen risks being terminated by the system. Developers should give similar attention to in-game content loading. Remember that returning to gameplay from interruptions (SMS, phone, app purchase, and so on) invalidates all previously loaded content.

Evaluating the asynchronous background loading

Even if the game requires substantial set-up time, there are numerous techniques for getting the user into some kind of interactive state sooner. Anything from a simplified arcade-style loading screen to cut-scenes, trivia, “did you know” facts, and other low-CPU-impact techniques can be leveraged to smooth the setup and the transition from loading to gameplay.

Loading to an initial menu state or a cut-scene, and then continuing to load additional assets in the background, would seem to be an appropriate strategy for masking loading times from the consumer. However, LoadContent() performs byte copies of each texture asset loaded through the XNA content pipeline, generating garbage; moreover, LoadContent() will trigger a garbage collection for each megabyte of loaded data. Depending on the actual interactivity of the foreground scenes, the potential CPU cost of garbage collection may be acceptable; playback of pre-rendered video cut-scenes takes advantage of purpose-built hardware, so its CPU utilization is typically negligible. Similarly, static or intermittently animated menu systems will likely have more success here than attempting to render CPU-intensive content in-engine during background loading.

Considering the custom serialization

Microsoft’s .NET framework provides an easy-to-use method for serializing data to disk, using the types in the System.Xml.Serialization namespace. Simplicity always comes with tradeoffs, however; in this case, the tradeoff is file size. The default serialization schema is verbose. Fortunately, the behavior of the XmlSerializer is trivially easy to change, and a few changes can result in significant savings in file size.

As an example, let’s consider the following class definition:

[code]
public class TestClass
{
    public int count;
    public float size;
    public bool enabled;
    public string LongNameOfAMinorFieldThatDoesntNeedALongNameInTheFile = "test";
}
[/code]

The preceding class definition, when serialized with the default XmlSerializer, produces the following XML:

[code]
<?xml version="1.0"?>
<TestClass xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <count>0</count>
  <size>0</size>
  <enabled>false</enabled>
  <LongNameOfAMinorFieldThatDoesntNeedALongNameInTheFile>test</LongNameOfAMinorFieldThatDoesntNeedALongNameInTheFile>
</TestClass>
[/code]

The default behavior of XmlSerializer is to treat each public field or property as an XML element. This generates quite a bit of extra data in the file; this XML file uses 332 bytes on the disk to serialize four fields. With a few simple changes, we can get significantly smaller files from XmlSerializer. Consider the following class declaration:

[code]
public class TestClass2
{
    [XmlAttribute(AttributeName = "count")]
    public int count;
    [XmlAttribute(AttributeName = "size")]
    public float size;
    [XmlAttribute(AttributeName = "enable")]
    public bool enabled;
    [XmlAttribute(AttributeName = "longName")]
    public string LongNameOfAMinorFieldThatDoesntNeedALongNameInTheFile = "test";
}
[/code]

With XmlAttribute added to the fields, XmlSerializer treats each field as an attribute rather than an element, and gives the attributes alternative names. The resulting XML is the following:
[code]
<?xml version="1.0"?>
<TestClass2 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            count="0" size="0" enable="false" longName="test" />
[/code]

The serialized file has significantly less wasted text, and its size shrank to 167 bytes. This is a saving of roughly 50 percent, and a more reasonable size for serializing four fields. Modifying your serialization code to prefer XML attributes over XML elements will often result in similar savings. Even if you don't perform renaming, as we did in this example, you will generally get close to a 50 percent reduction, because every XML element has to have a closing tag, while attributes don't.
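You can verify the direction of the saving with a quick sketch (Python's ElementTree stands in for XmlSerializer here; the long string field and the xmlns declarations are omitted for brevity, and the exact byte counts differ from .NET's output, but the element form is consistently larger because of its closing tags):

```python
import xml.etree.ElementTree as ET

# Element-per-field form (XmlSerializer's default behavior).
as_elements = ET.Element("TestClass")
for name, value in [("count", "0"), ("size", "0"), ("enabled", "false")]:
    ET.SubElement(as_elements, name).text = value

# Attribute form (the [XmlAttribute] version).
as_attributes = ET.Element("TestClass2",
                           {"count": "0", "size": "0", "enable": "false"})

element_bytes = len(ET.tostring(as_elements))
attribute_bytes = len(ET.tostring(as_attributes))
assert attribute_bytes < element_bytes  # attributes skip the closing tags
```

The attribute form pays for each field once (`name="value"`), while the element form pays for the field name twice, once in the opening tag and once in the closing tag.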

Avoid using XmlAttribute for complex types, or for collections of types. The space savings are minimal in these cases, and the resulting file is considerably more difficult to read. For larger amounts of data, consider writing a custom binary serialization code. In all cases, ensure that you time any new code to confirm any realized performance gains over the default Serializer settings.

Improving game performance with garbage collection

Discussing the garbage collector (GC) that runs on Windows Phone 7 devices is helpful for the Windows Phone 7 game developer. Anyone who has programmed in XNA for Windows or Xbox 360 before knows the GC well.

Value types versus reference types

One of the first things you must understand is the difference between value types and reference types. Value types such as int, float, Vector3, Matrix, and structs (this includes nullable types; a nullable type such as bool? is just a special struct) live on the stack. The GC does not care about the stack. Well, technically, it cares slightly, but only to the extent that the system begins to run low on memory, and you would have to be trying very hard to get enough items on the stack to cause that. So don’t worry about calling “new Vector3()” or “Matrix.CreateTranslation()” in methods that run regularly (such as Update and Draw); it is just a stack allocation, and it won’t anger the GC.

Classes are an entirely different matter. Classes, arrays (including arrays of value types, for example, int[]), collections (List<>, Dictionary<>, and so on), and strings (yes, strings) are all reference types, and they live on the heap. The heap is the GC’s domain. It pays attention to everything that shows up on the heap, and to everything that no longer has any business there but is still hanging around.

Defining a true value checking method

Take a look at the following code listing:

[code]
void CheckForTrue(bool value)
{
    string trueText = "The value is true.";
    string falseText = "The value is false.";

    if (value == true)
    {
        Console.WriteLine(trueText);
    }
    else
    {
        Console.WriteLine(falseText);
    }

    return;
}
[/code]

Every time this method runs, trueText and falseText will both be allocated on the heap and will “go out of scope” when the method exits. Having “gone out of scope” simply means that there are no more references to an object. A string declared with const never goes out of scope, and thus does not matter to the GC for all practical purposes. The same is true for any object declared as static readonly, as once it is created it exists forever. However, the same is not true for a normal static, though many might mistakenly assume so. A static object without the readonly keyword will generally exist for the life of the program; however, if it is ever set to null, then unless there is some other reference to it, it goes out of scope and becomes subject to garbage collection.

Technically, the GC runs after every 1 MB of heap allocation. Whenever the GC runs, it takes time to comb through the heap and destroy any objects that are no longer in scope. Depending on how many references you have and how complex the nesting of objects is, this can take a bit of time. In XNA, the clock is on a fixed time-step by default, and on Windows Phone 7 the default frame rate is 30 FPS. This means that there are 33.3 milliseconds available for the Update() and Draw() methods to finish their CPU-side tasks. Draw() prepares things on the CPU side, then hands the actual drawing over to the GPU, which, being a separate processor, does not usually affect the Update/Draw side of things (except for stalls, but those are beyond the scope of this book, and most people will never run into them anyway). If the methods finish ahead of time, the CPU waits until it is time to run Update() again. If not, the system takes notice that it is running behind and will skip as many draws as necessary to catch back up.

This is where the GC comes in. Normally, your code will complete just fine within 33.3 milliseconds, thereby maintaining a nice even 30 FPS. (If your code does not normally complete within that time, you will see serious, constant performance problems that may even cause your game to crash after a little while, if XNA gets so far behind that it throws up its hands and surrenders.) However, when the GC runs, it eats into that time. If you have kept the heap nice and simple, the GC will run nice and fast, and this likely won't matter. However, keeping a simple heap that the GC can run through quickly is a difficult programming task that requires a lot of planning and/or rewriting, and even then it is not foolproof (sometimes, you just have a lot of stuff on the heap in a complex game with many assets).

A much simpler option, assuming you can do it, is to limit or even eliminate all allocations during gameplay. You will obviously be allocating heap memory when you first start the game (for example, when loading assets in the LoadContent() method), and you will be allocating memory when loading levels, if you have a game with levels and decide to load each one behind an interstitial screen. You will also be allocating memory when changing game screens. However, a small stutter from a couple of dropped frames between levels or while switching screens is not a big concern; the player is not going to accidentally fall off a cliff or get hit by an enemy projectile when those things are happening.

In fact, sometimes it makes sense to intentionally trigger the GC right before the game is going to (re)start. Triggering the GC resets the 1 MB counter and prevents situations where the counter is at 0.94 MB when the level begins, such that even a small number of minimal allocations that would otherwise be perfectly acceptable can cause problems.

Therefore, the goal is to minimize heap allocations. How do we do that? Well, the biggest contributors are needlessly creating new objects in your Update or Draw cycle, and boxing value types. First, a quick note on boxing: the simplest example of boxing is casting a value type, such as an int or an enum, to object in order to pass it as a state object. Boxing is a great feature of .NET, but it is not recommended for game programming because of the heap allocations that can trigger the GC. So keep an eye out for it, and try not to do it.

Another big contributor is creating new reference types. Every new instance of an object causes a heap allocation and increases that counter ever so slightly. There are several coding practices that will help you to eliminate needless heap allocation and increase performance for your game.

Using StringBuilder for string operations

Make any strings that never change into const strings.

Where you need strings that change, consider using System.Text.StringBuilder (visit http://msdn.microsoft.com/en-us/library/system.text.stringbuilder.aspx for more information on StringBuilder). All XNA methods that take a string (for example, SpriteBatch.DrawString) will also take a StringBuilder object. Make sure to use one of the constructors that takes an initial capacity, and set it to a value high enough to hold as many characters as you plan to use, plus a few extra for good measure. If the internal array never has to resize itself, the StringBuilder will never generate any heap allocations after it is created!

Drawing integer in string without garbage

If you need to draw an int value, such as the score or the number of lives a player has, consider using the following block of code (thanks to Stephen Styrchak):

[code]
public static class SpriteBatchExtensions
{
    private static string[] digits = { "0", "1", "2", "3", "4",
                                       "5", "6", "7", "8", "9" };
    private static string[] charBuffer = new string[10];
    private static float[] xposBuffer = new float[10];
    private static readonly string minValue =
        Int32.MinValue.ToString(CultureInfo.InvariantCulture);

    // Extension method for SpriteBatch that draws an integer without
    // allocating any memory. This function avoids the garbage collections
    // that are normally caused by calling Int32.ToString or String.Format.
    // The returned value is the position advanced by the equivalent of
    // calling spriteFont.MeasureString on
    // value.ToString(CultureInfo.InvariantCulture).
    public static Vector2 DrawInt32(this SpriteBatch spriteBatch,
                                    SpriteFont spriteFont, int value,
                                    Vector2 position, Color color)
    {
        Vector2 nextPosition = position;

        if (value == Int32.MinValue)
        {
            nextPosition.X = nextPosition.X +
                spriteFont.MeasureString(minValue).X;
            spriteBatch.DrawString(spriteFont, minValue, position, color);
            position = nextPosition;
        }
        else
        {
            if (value < 0)
            {
                nextPosition.X = nextPosition.X +
                    spriteFont.MeasureString("-").X;
                spriteBatch.DrawString(spriteFont, "-", position, color);
                value = -value;
                position = nextPosition;
            }

            int index = 0;
            do
            {
                int modulus = value % 10;
                value = value / 10;
                charBuffer[index] = digits[modulus];
                xposBuffer[index] = spriteFont.MeasureString(digits[modulus]).X;
                index += 1;
            }
            while (value > 0);

            for (int i = index - 1; i >= 0; --i)
            {
                nextPosition.X = nextPosition.X + xposBuffer[i];
                spriteBatch.DrawString(spriteFont, charBuffer[i],
                                       position, color);
                position = nextPosition;
            }
        }

        return position;
    }
}
[/code]

Taking advantage of the list for sprites

If you have, for example, a Sprites class, create an object pool to reuse instances rather than letting each one fall out of scope when it ceases to exist in the game and creating a new one each time you need one. As an example, create a generic List<> of your Sprites class (refer to http://msdn.microsoft.com/en-us/library/6sh2ey19.aspx for more information on lists). Use the List<> constructor overload that takes a default capacity, and make sure to set it to a value high enough to contain all the objects of that sort that will exist at one time in your game (for example, 300). Then, use a for loop to create all of the objects in the list up to the capacity.

Add a public bool IsAlive { get; set; } property to your class to keep track of which instances are in use at any particular time. When you need a new one, loop through the list until you find one where IsAlive is false. Take that one, set IsAlive to true, set the other properties (such as its position, direction, and so on) to their appropriate values, and continue. When doing collision detection, loop through the list using a for or a foreach loop and process only the objects for which IsAlive is true. The same approach should be followed for updating and drawing them. Whenever one is no longer needed (for example, when it collides with something or goes off screen), simply set its IsAlive to false, and it becomes available for reuse without any memory allocation.

If you want to be creative, you can expand on this in several ways. You could keep a count of the number of live objects, so that once you have processed that many in your update and draw methods, you can use the break keyword to exit the loop early rather than go all the way to the end. Alternatively, you could keep two lists, one for live objects and one for dead objects, and move objects between the two lists as appropriate.
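The pool described above looks roughly like this (Python for illustration; in the game this would be a pre-filled List&lt;Sprite&gt; in C#, and the Sprite class here is a bare stand-in):

```python
class Sprite:
    def __init__(self):
        self.is_alive = False   # the IsAlive flag from the text
        self.position = (0, 0)

class SpritePool:
    def __init__(self, capacity=300):
        # Allocate everything up front, so gameplay never touches the heap.
        self.items = [Sprite() for _ in range(capacity)]

    def acquire(self, position):
        """Reuse the first dead sprite instead of allocating a new one."""
        for sprite in self.items:
            if not sprite.is_alive:
                sprite.is_alive = True
                sprite.position = position
                return sprite
        return None  # pool exhausted; size the capacity to avoid this

pool = SpritePool(capacity=2)
a = pool.acquire((10, 20))
b = pool.acquire((30, 40))
assert pool.acquire((0, 0)) is None  # full: nothing dead to reuse
a.is_alive = False                   # "kill" the sprite...
assert pool.acquire((5, 5)) is a     # ...and the same object is reused
```

Because every Sprite is created up front, acquiring and releasing objects during gameplay never allocates, so the GC's 1 MB trigger counter stays put.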

Preferring struct rather than class when just an instance is needed

If you do need to create something new in each Update() or Draw() call, try making it a struct instead of a class. Structures can do most of the things classes can (the major limitation being that they cannot inherit from another structure, a class, or anything else, though they can implement interfaces). Moreover, structures live on the stack, not on the heap, so unless the structure has a reference type, such as a string or a class, as a field or property, using it generates no trash. Remember, though, that an array of structures is a reference type (as are all arrays), and thus lives on the heap and counts toward the GC trigger limit whenever it is created.
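As a minimal illustration of the difference, consider a made-up Particle type (not part of XNA):

```csharp
// A small value type: it lives on the stack (or inline in its container),
// so creating one inside Update()/Draw() generates no garbage.
public struct Particle
{
    public float X, Y;
    public float VelocityX, VelocityY;

    public Particle(float x, float y, float vx, float vy)
    {
        X = x;
        Y = y;
        VelocityX = vx;
        VelocityY = vy;
    }
}

// Created fresh each frame without touching the heap:
//     Particle p = new Particle(0f, 0f, 1f, -2f);
// Caveat: a Particle[] array is itself a reference type and lives on the heap.
```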

Avoiding the use of LINQ in game development

Don't use LINQ. It looks cool, it makes your code shorter and simpler, and perhaps even easier to read. However, LINQ queries can easily become a big source of trash. They are fine in your startup code, since you are going to generate trash there anyway just by loading assets and preparing game resources. However, don't use LINQ in Update(), Draw(), or any other method that gets called during gameplay.
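For example, a query you might be tempted to run every frame can be replaced by a plain loop that allocates nothing; the Enemy type here is a hypothetical stand-in:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical enemy type for illustration.
public class Enemy
{
    public bool IsAlive;
}

public static class EnemyCounting
{
    // Allocates garbage on every call: the LINQ iterator object
    // created by Where() ends up on the heap.
    public static int CountAliveLinq(List<Enemy> enemies)
    {
        return enemies.Where(e => e.IsAlive).Count();
    }

    // Allocation-free equivalent, safe to call from Update()/Draw().
    public static int CountAlive(List<Enemy> enemies)
    {
        int count = 0;
        for (int i = 0; i < enemies.Count; i++)
        {
            if (enemies[i].IsAlive)
                count++;
        }
        return count;
    }
}
```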

Minimizing the use of ToString()

Minimize the use of ToString(). At a minimum, it creates a string, which lives on the heap (refer to the Drawing integer in string without garbage section earlier in this chapter). If you do need to use ToString(), try to limit how often it is called. If the string only changes every level, generate it once at the beginning of the level. If it only changes when a certain value changes, generate it only when that value changes. Any limits you can set are worth it. The time it takes to check a Boolean condition is so small as to be almost non-existent; you could probably fit tens or even hundreds of thousands of true/false checks into the time it takes the GC to run on a complex heap.
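One simple way to apply this advice is to cache the generated string and rebuild it only when the underlying value changes; the ScoreDisplay class and its field names are illustrative, not from the XNA framework:

```csharp
// Caches the score text so ToString() runs only when the value changes.
public class ScoreDisplay
{
    private int lastScore = -1;   // Forces a rebuild on the first call.
    private string scoreText = "";

    // Call once per frame; the int comparison is nearly free,
    // and the heap allocation happens only on an actual change.
    public string GetText(int score)
    {
        if (score != lastScore)
        {
            scoreText = score.ToString();
            lastScore = score;
        }
        return scoreText;
    }
}
```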

Windows Phone Special Effects

Using dual texture effects

Dual texturing is useful when you want to map two textures onto one model. The Windows Phone 7 XNA built-in DualTextureEffect samples the pixel color from two texture images, which is why it is called dual texture. Each texture used in the effect has its own texture coordinates and can be mapped, tiled, and rotated individually. The two textures are combined using the following pattern:

[code]
finalTexture.color = texture1.Color * texture2.Color;
finalTexture.alpha = texture1.Alpha * texture2.Alpha;
[/code]

The color and alpha of the final texture each come from a separate computation. The classic use of DualTextureEffect is applying a lightmap to a model. In computer graphics, computing lighting and shadows in real time is expensive. A lightmap, a texture that stores precomputed lighting for the surfaces of a 3D model, is generated ahead of time and stored separately, saving the runtime cost of the lighting computation. Sometimes you might want an effect such as ambient occlusion, which is costly to evaluate at runtime; a lightmap can hold that result as a texture and then be mapped onto the relevant model or scene for a realistic look. Because the lightmap is precomputed in 3D modeling software (you will learn how to do this in 3DS MAX), even the most complicated lighting effects (shadows, ray tracing, radiosity, and so on) become cheap to use on Windows Phone 7. If you just want the game scene to have shadows and lighting, the dual texture effect is a good fit. In this recipe, you will learn how to create a lightmap and apply it to your game model using DualTextureEffect.

How to do it…

The following steps show you the process for creating the lightmap in 3DS MAX and how to use the lightmap in your Windows Phone 7 game using DualTextureEffect:

  1. Create the Sphere lightmap in 3DS MAX 2011. Open 3DS MAX 2011, in the Create panel, click the Geometry button, then create a sphere by choosing the Sphere push button, as shown in the following screenshot:
    the Geometry button
  2. Add the texture to the Material Compact Editor and apply the material to the sphere. Click the following menu items of 3DS MAX 2011: Rendering | Material Editor | Compact Material Editor. Choose the first material ball and apply the texture you want to the material ball. Here, we use the tile1.png, a checker image, which you can find in the Content directory of the example bundle file. The applied material ball looks similar to the following screenshot:
    Material Compact Editor
  3. Apply the Target Direct Light to the sphere. In the Create panel—the same panel for creating sphere—click the Lights button and choose the Target Direct option. Then drag your mouse over the sphere in the Perspective viewport and adjust the Hotspot/Beam to let the light encompass the sphere, as shown in the following screenshot:
    the Perspective viewport
  4. Render the Lightmap. When the light is set as you want, the next step is to create the lightmap. After you click the sphere that you plan to build the lightmap for, click the following menu items in 3DS MAX: Rendering | Render To Texture. In the Output panel of the pop-up window, click the Add button. Another pop-up window will show up; choose the LightingMap option, and then click Add Elements, as shown in the following screenshot:
    Rendering | Render To Texture
  5. After that, change the setting of the lightmap:
    • Change the Target Map Slot to Self-Illumination in the Output panel.
    • Change the Baked Material Settings to Output Into Source in the Baked Material panel.
    • Change the Channel to 2 in the Mapping Coordinates panel.
    • Finally, click the Render button. The generated lightmap will look similar to the following screenshot:
      the Render button

      By default, the lightmap texture type is .tga, and the maps are placed in the images subfolder of the folder where you installed 3DS MAX. The new textures are flat; in other words, they are organized according to groups of object faces. In this example, the lightmap name is Sphere001LightingMap.tga.
  6. Open the Material Compact Editor again by clicking the menu items Rendering | Material Editor | Compact Material Editor. You will find that the first material ball has a mixed texture combining the original texture and the lightmap. You can also see that Self-Illumination is selected and its value is Sphere001LightingMap.tga. This means the lightmap has been applied to the sphere successfully.
  7. Select the sphere and export to an FBX model file named DualTextureBall.FBX, which will be used in our Windows Phone 7 game.
  8. From this step, we will render the lightmap of the sphere in our Windows Phone 7 XNA game using the new built-in effect DualTextureEffect. Now, create a Windows Phone Game project named DualTextureEffectBall in Visual Studio 2010 and change Game1.cs to DualTextureEffectBallGame.cs. Then, add the texture file tile1.png, the lightmap file Sphere001LightingMap.tga, and the model DualTextureBall.FBX to the content project.
  9. Declare the indispensable variables in the DualTextureEffectBallGame class. Add the following code to the class field:
    [code]
    // Ball Model
    Model modelBall;
    // Dual Texture Effect
    DualTextureEffect dualTextureEffect;
    // Camera
    Vector3 cameraPosition;
    Matrix view;
    Matrix projection;
    [/code]
  10. Initialize the camera. Insert the following code to the Initialize() method:
    [code]
    // Initialize the camera
    cameraPosition = new Vector3(0, 50, 200);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.PiOver4,
        GraphicsDevice.Viewport.AspectRatio,
        1.0f, 1000.0f);
    [/code]
  11. Load the ball model and initialize the DualTextureEffect. Paste the following code to the LoadContent() method:
    [code]
    // Load the ball model
    modelBall = Content.Load<Model>("DualTextureBall");
    // Initialize the DualTextureEffect
    dualTextureEffect = new DualTextureEffect(GraphicsDevice);
    dualTextureEffect.Projection = projection;
    dualTextureEffect.View = view;
    // Set the diffuse color
    dualTextureEffect.DiffuseColor = Color.Gray.ToVector3();
    // Set the first and second texture
    dualTextureEffect.Texture = Content.Load<Texture2D>("tile1");
    dualTextureEffect.Texture2 = Content.Load<Texture2D>("Sphere001LightingMap");
    [/code]

    Define the DrawModel() method in the class:

    [code]
    // Draw model
    private void DrawModel(Model m, Matrix world, DualTextureEffect effect)
    {
        foreach (ModelMesh mesh in m.Meshes)
        {
            // Iterate every part of the current mesh
            foreach (ModelMeshPart meshPart in mesh.MeshParts)
            {
                // Replace the original effect with the designated effect
                meshPart.Effect = effect;
                // Update the world matrix
                effect.World *= world;
            }
            mesh.Draw();
        }
    }
    [/code]
  12. Draw the ball model using DualTextureEffect on the Windows Phone 7 screen. Add the following lines to the Draw() method:
    [code]
    // Rotate the ball model around the Y axis
    float timer = (float)gameTime.ElapsedGameTime.TotalSeconds;
    DrawModel(modelBall, Matrix.CreateRotationY(timer), dualTextureEffect);
    [/code]
  13. Build and run the example. It should run as shown in the following screenshot:
    DualTextureEffect
  14. If you comment out the following line in LoadContent() to disable the lightmap texture, you can see the difference between having the lightmap on and off:
    [code]
    dualTextureEffect.Texture2 = Content.Load<Texture2D>("Sphere001LightingMap");
    [/code]
  15. Run the application without the lightmap. The model renders pure black, as shown in the following screenshot:
    dual texture effects

How it works…

Steps 1 to 6 create the sphere and its lightmap in 3DS MAX 2011.

In step 9, modelBall is declared to load and hold the ball model; dualTextureEffect is an instance of the XNA 4.0 built-in effect DualTextureEffect, used to render the ball model with its original texture and the lightmap; and the three variables cameraPosition, view, and projection represent the camera.

In step 11, the first line loads the ball model and the remaining lines initialize the DualTextureEffect. Notice that we use tile1.png as the first, original texture and Sphere001LightingMap.tga, the lightmap, as the second texture.

In step 11, the DrawModel() method differs from the usual definition: here, we need to replace the original effect of each mesh with the DualTextureEffect. As we iterate over the mesh parts of every mesh in the model, we assign the effect to meshPart.Effect so that the DualTextureEffect is applied to each mesh part.

Using environment map effects

In computer games, environment mapping is an efficient image-based lighting technique that lets a reflective surface mirror the distant environment surrounding the rendered object. In Need for Speed, produced by Electronic Arts, if you enable the special visual effects option while playing, you will see the car body reflect its surroundings: trees, clouds, mountains, or buildings. The result is striking, and it makes the game more realistic. This is environment mapping. Methods for storing the surrounding environment include sphere mapping, cube mapping, pyramid mapping, and octahedron mapping. XNA 4.0 uses cube mapping, in which the environment is projected onto the six faces of a cube and stored as six square textures (which can also be unfolded into six square regions of a single texture). In this recipe, you will learn how to make a cube map using the DirectX Texture Tool and apply it to a model using EnvironmentMapEffect.

Getting ready

A cube map is used in real-time engines to fake reflections. It is much faster than ray tracing because it is just six textures mapped onto the faces of a cube (one image per face).

For creating the cube map for the environment map effect, you should use the DirectX Texture Tool in the DirectX SDK Utilities folder. The latest version of the Microsoft DirectX SDK can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=3021d52b-514e-41d3-ad02-438a3ba730ba.

How to do it…

The following steps lead you to create an application using the Environment Mapping effect:

  1. First, we will create the cube map in the DirectX Texture Tool. Run the tool and create a new cube map by clicking the menu items File | New Texture. In the window that pops up, choose Cubemap Texture for the Texture Type, change the dimensions to 512 x 512 in the Dimensions panel, and set the Surface/Volume Format to FourCC 4-bit: DXT1. The final settings should look similar to the following screenshot:
    Cubemap Texture
  2. Set the texture of every face of the cube. Choose a face for setting the texture by clicking the following menu items: View | Cube Map Face | Positive X, as shown in the following screenshot:
    Cube Map Face | Positive X
  3. Then, apply the image for the Positive X face by clicking: File | Open Onto This Cubemap Face, as shown in the following screenshot:
    Open Onto This Cubemap Face
  4. When you click the item, a pop-up dialog will ask you to choose a proper image for this face. In this example, the Positive X face will look similar to the following screenshot:
    Positive X face will look similar
  5. Do the same for the other five faces: Negative X, Positive Y, Negative Y, Positive Z, and Negative Z. When all of the cube faces are set appropriately, save the cube map as SkyCubeMap.dds. The cube map will look similar to the following figure:
    Negative X, Positive Y, Negative Y, Positive Z, and Negative Z
  6. From this step on, we will render the ball model using the XNA 4.0 built-in effect EnvironmentMapEffect. Create a Windows Phone Game project named EnvironmentMapEffectBall in Visual Studio 2010 and change Game1.cs to EnvironmentMapEffectBallGame.cs. Then, add the ball model file ball.FBX, the ball texture file silver.jpg, and the cube map SkyCubeMap.dds generated with the DirectX Texture Tool to the content project.
  7. Declare the necessary variables of the EnvironmentMapEffectBallGame class. Add the following lines to the class:
    [code]
    // Ball model
    Model modelBall;
    // Environment Map Effect
    EnvironmentMapEffect environmentEffect;
    // Cube map texture
    TextureCube textureCube;
    // Ball texture
    Texture2D texture;
    // Camera
    Vector3 cameraPosition;
    Matrix view;
    Matrix projection;
    [/code]
  8. Initialize the camera. Insert the following lines to the Initialize() method:
    [code]
    // Initialize the camera
    cameraPosition = new Vector3(2, 3, 32);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.PiOver4,
        GraphicsDevice.Viewport.AspectRatio,
        1.0f, 100.0f);
    [/code]
  9. Load the ball model, ball texture, and the sky cube map. Then initialize the environment map effect and set its properties. Paste the following code in the LoadContent() method:
    [code]
    // Load the ball model
    modelBall = Content.Load<Model>("ball");
    // Load the sky cube map
    textureCube = Content.Load<TextureCube>("SkyCubeMap");
    // Load the ball texture
    texture = Content.Load<Texture2D>("Silver");
    // Initialize the EnvironmentMapEffect
    environmentEffect = new EnvironmentMapEffect(GraphicsDevice);
    environmentEffect.Projection = projection;
    environmentEffect.View = view;
    // Set the initial texture
    environmentEffect.Texture = texture;
    // Set the environment map
    environmentEffect.EnvironmentMap = textureCube;
    environmentEffect.EnableDefaultLighting();
    // Set the environment effect factors
    environmentEffect.EnvironmentMapAmount = 1.0f;
    environmentEffect.FresnelFactor = 1.0f;
    environmentEffect.EnvironmentMapSpecular = Vector3.Zero;
    [/code]
  10. Define the DrawModel() of the class:
    [code]
    // Draw Model
    private void DrawModel(Model m, Matrix world,
        EnvironmentMapEffect environmentMapEffect)
    {
        foreach (ModelMesh mesh in m.Meshes)
        {
            foreach (ModelMeshPart meshPart in mesh.MeshParts)
            {
                meshPart.Effect = environmentMapEffect;
                environmentMapEffect.World = world;
            }
            mesh.Draw();
        }
    }
    [/code]
  11. Draw and rotate the ball with EnvironmentMapEffect on the Windows Phone 7 screen. Insert the following code to the Draw() method:
    [code]
    // Draw and rotate the ball model
    float time = (float)gameTime.TotalGameTime.TotalSeconds;
    DrawModel(modelBall,
        Matrix.CreateRotationY(time * 0.3f) * Matrix.CreateRotationX(time),
        environmentEffect);
    [/code]
  12. Build and run the application. It should run similar to the following screenshot:
    environment map effects

How it works…

Steps 1 to 5 use the DirectX Texture Tool to generate a sky cube map for the XNA 4.0 built-in effect EnvironmentMapEffect.

In step 7, modelBall holds the ball model; environmentEffect renders the ball model with the EnvironmentMapEffect; and textureCube is the cube map texture, which the EnvironmentMapEffect receives through its EnvironmentMap property. The texture variable represents the ball texture, and the last three variables, cameraPosition, view, and projection, initialize and control the camera.

In step 9, the first three lines load the required content: the ball model, its texture, and the sky cube map. Then we instantiate the EnvironmentMapEffect object and set its properties. environmentEffect.Projection and environmentEffect.View are for the camera; environmentEffect.Texture maps the ball texture onto the ball model; environmentEffect.EnvironmentMap is the environment map from which the ball model gets the reflected color that is mixed with its original texture.

The EnvironmentMapAmount is a float that describes how much of the environment map could show up, which also means how much of the cube map texture will blend over the texture on the model. The values range from 0 to 1 and the default value is 1.

The FresnelFactor controls how the viewing angle affects the visibility of the environment map. Use a higher value to make the environment map visible mainly around the edges; use a lower value to make it visible everywhere. Fresnel lighting affects only the environment map color (the RGB values); alpha is not affected. The value ranges from 0.0, which disables Fresnel lighting entirely, to 1.0, which is the default value.

The EnvironmentMapSpecular property implements cheap specular lighting: encode one or more specular highlight patterns into the environment map's alpha channel, then set EnvironmentMapSpecular to the desired specular light color.

In step 10, we replace the default effect of every mesh part of the model's meshes with the EnvironmentMapEffect and draw each mesh with the replaced effect.

Rendering different parts of a character into textures using RenderTarget2D

Sometimes you want to view a particular part of a model or an image while still seeing the original view at the same time. This is where a render target helps. By DirectX's definition, a render target is a buffer where the video card draws the pixels of a scene being rendered by an effect class. Windows Phone 7 does not support a discrete video card; the device has an embedded graphics processing unit instead. The main use of a render target on Windows Phone 7 is to render the current 2D or 3D scene into a 2D texture, which you can then manipulate for special effects such as transitions, partial reveals, and the like. In this recipe, you will discover how to render different parts of a model into textures and then draw them on the Windows Phone 7 screen.

Getting ready

The default render target is called the back buffer: the part of video memory that contains the next frame to be drawn. You can create additional render targets with the RenderTarget2D class, reserving new regions of video memory for drawing. Most games render a lot of content to offscreen render targets besides the back buffer, then assemble the different graphical elements in stages, combining them into the final image in the back buffer.

A render target has a width and height. The width and height of the back buffer are the final resolution of your game, but an offscreen render target does not need to match them; small parts of the final image can be rendered into small render targets and copied into another render target later. To use a render target, create a RenderTarget2D object with the width, height, and any other options you prefer, then call GraphicsDevice.SetRenderTarget to make it the current render target. From that point on, any Draw calls you make will draw into your render target; because RenderTarget2D is a subclass of Texture2D, the result can later be used like any other texture. When you are finished with the render target, call GraphicsDevice.SetRenderTarget with a new render target (or null for the back buffer).
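The basic pattern reads like the following sketch, assuming it runs inside an XNA Game class with the usual GraphicsDevice and spriteBatch members; the 256 x 256 size is an arbitrary example:

```csharp
// Reserve a 256x256 offscreen surface (the minimal constructor overload).
RenderTarget2D target = new RenderTarget2D(GraphicsDevice, 256, 256);

// Redirect all drawing into the offscreen target.
GraphicsDevice.SetRenderTarget(target);
GraphicsDevice.Clear(Color.Transparent);
// ... Draw calls here land in "target" ...

// Restore the back buffer; "target" can now be used as a Texture2D.
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin();
spriteBatch.Draw(target, Vector2.Zero, Color.White);
spriteBatch.End();
```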

How to do it…

In the following steps, you will learn how to use RenderTarget2D to render different parts of a designated model into textures and present them on the Windows Phone 7 screen:

  1. Create a Windows Phone Game project named RenderTargetCharacter in Visual Studio 2010 and change Game1.cs to RenderTargetCharacterGame.cs. Then, add the character model file character.FBX and the character texture file Blaze.tga to the content project.
  2. Declare the required variables in the RenderTargetCharacterGame class field. Add the following lines of code to the class field:
    [code]
    // Character model
    Model modelCharacter;
    // Character model world position
    Matrix worldCharacter = Matrix.Identity;
    // Camera
    Vector3 cameraPosition;
    Vector3 cameraTarget;
    Matrix view;
    Matrix projection;
    // RenderTarget2D objects for rendering the head, left fist,
    // and right foot of the character
    RenderTarget2D renderTarget2DHead;
    RenderTarget2D renderTarget2DLeftFist;
    RenderTarget2D renderTarget2DRightFoot;
    [/code]
  3. Initialize the camera and render targets. Insert the following code to the Initialize() method:
    [code]
    // Initialize the camera
    cameraPosition = new Vector3(0, 40, 350);
    cameraTarget = new Vector3(0, 0, 1000);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.PiOver4,
        GraphicsDevice.Viewport.AspectRatio,
        0.1f, 1000.0f);
    // Initialize the RenderTarget2D objects with different sizes
    renderTarget2DHead = new RenderTarget2D(GraphicsDevice,
        196, 118, false, SurfaceFormat.Color, DepthFormat.Depth24,
        0, RenderTargetUsage.DiscardContents);
    renderTarget2DLeftFist = new RenderTarget2D(GraphicsDevice,
        100, 60, false, SurfaceFormat.Color, DepthFormat.Depth24,
        0, RenderTargetUsage.DiscardContents);
    renderTarget2DRightFoot = new RenderTarget2D(GraphicsDevice,
        100, 60, false, SurfaceFormat.Color, DepthFormat.Depth24,
        0, RenderTargetUsage.DiscardContents);
    [/code]
  4. Load the character model and insert the following line of code to the LoadContent() method:
    [code]
    modelCharacter = Content.Load<Model>("Character");
    [/code]
  5. Define the DrawModel() method:
    [code]
    // Draw the model on screen
    public void DrawModel(Model model, Matrix world, Matrix view,
        Matrix projection)
    {
        Matrix[] transforms = new Matrix[model.Bones.Count];
        model.CopyAbsoluteBoneTransformsTo(transforms);
        foreach (ModelMesh mesh in model.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.DiffuseColor = Color.White.ToVector3();
                effect.World = transforms[mesh.ParentBone.Index] * world;
                effect.View = view;
                effect.Projection = projection;
            }
            mesh.Draw();
        }
    }
    [/code]
  6. Get the rendertargets of the right foot, left fist, and head of the character. Then draw the rendertarget textures onto the Windows Phone 7 screen. Insert the following code to the Draw() method:
    [code]
    // Get the rendertarget of the character head
    GraphicsDevice.SetRenderTarget(renderTarget2DHead);
    GraphicsDevice.Clear(Color.Blue);
    cameraPosition = new Vector3(0, 110, 60);
    cameraTarget = new Vector3(0, 110, -1000);
    view = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
    DrawModel(modelCharacter, worldCharacter, view, projection);
    GraphicsDevice.SetRenderTarget(null);
    // Get the rendertarget of the character left fist
    GraphicsDevice.SetRenderTarget(renderTarget2DLeftFist);
    GraphicsDevice.Clear(Color.Blue);
    cameraPosition = new Vector3(-35, -5, 40);
    cameraTarget = new Vector3(0, 5, -1000);
    view = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
    DrawModel(modelCharacter, worldCharacter, view, projection);
    GraphicsDevice.SetRenderTarget(null);
    // Get the rendertarget of the character right foot
    GraphicsDevice.SetRenderTarget(renderTarget2DRightFoot);
    GraphicsDevice.Clear(Color.Blue);
    cameraPosition = new Vector3(20, -120, 40);
    cameraTarget = new Vector3(0, -120, -1000);
    view = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
    DrawModel(modelCharacter, worldCharacter, view, projection);
    GraphicsDevice.SetRenderTarget(null);
    // Draw the character model
    cameraPosition = new Vector3(0, 40, 350);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
    GraphicsDevice.Clear(Color.CornflowerBlue);
    DrawModel(modelCharacter, worldCharacter, view, projection);
    // Draw the generated rendertargets of the different parts of the
    // character model in 2D
    spriteBatch.Begin();
    spriteBatch.Draw(renderTarget2DHead, new Vector2(500, 0), Color.White);
    spriteBatch.Draw(renderTarget2DLeftFist, new Vector2(200, 220), Color.White);
    spriteBatch.Draw(renderTarget2DRightFoot, new Vector2(500, 400), Color.White);
    spriteBatch.End();
    [/code]
  7. Build and run the application. The application will run as shown in the following screenshot:
    RenderTarget2D

How it works…

In step 2, modelCharacter holds the character's 3D model and worldCharacter represents the character's world transformation matrix. The next four variables, cameraPosition, cameraTarget, view, and projection, are used to initialize the camera. Here, cameraTarget has the same Y value as cameraPosition and a Z value far behind the scene's center, because we want the camera's look-at direction to be parallel to the XZ plane. The last three variables, the RenderTarget2D objects renderTarget2DHead, renderTarget2DLeftFist, and renderTarget2DRightFoot, are responsible for rendering the different parts of the character from the real-time 3D view into 2D textures.

In step 3, we initialize the camera and the three render targets. The camera initialization code is nothing new. RenderTarget2D has three overloaded constructors, of which the most complex is the third; if you understand it, the other two are easy. That constructor looks similar to the following code:

[code]
public RenderTarget2D (
GraphicsDevice graphicsDevice,
int width,
int height,
bool mipMap,
SurfaceFormat preferredFormat,
DepthFormat preferredDepthFormat,
int preferredMultiSampleCount,
RenderTargetUsage usage
)
[/code]

Let’s have a look at what all these parameters stand for:

  • graphicsDevice: This is the graphics device associated with the render target resource.
  • width: This is the width of the render target, in pixels. You can use graphicsDevice.PresentationParameters.BackBufferWidth to get the current screen width. Because RenderTarget2D is a subclass of Texture2D, the width and height of a RenderTarget2D object define the size of the final render target texture. Note that the maximum Texture2D size on Windows Phone 7 is 2048, so the width of a RenderTarget2D cannot exceed this limit.
  • height: This is the height of the render target, in pixels. You can use graphicsDevice.PresentationParameters.BackBufferHeight to get the current screen height. The same considerations as for the width parameter apply.
  • mipMap: This is true to enable a full mipMap chain to be generated, otherwise false.
  • preferredFormat: This is the preferred format for the surface data. This is the format preferred by the application, which may or may not be available from the hardware. In the XNA Framework, all two-dimensional (2D) images are represented by a range of memory called a surface. Within a surface, each element holds a color value representing a small section of the image, called a pixel. An image’s detail level is defined by the number of pixels needed to represent the image and the number of bits needed for the image’s color spectrum. For example, an image that is 800 pixels wide and 600 pixels high with 32 bits of color for each pixel (written as 800 x 600 x 32) is more detailed than an image that is 640 pixels wide and 480 pixels tall with 16 bits of color for each pixel (written as 640 x 480 x 16). Likewise, the more detailed image requires a larger surface to store the data. For an 800 x 600 x 32 image, the surface’s array dimensions are 800 x 600, and each element holds a 32-bit value to represent its color.

    All formats are listed from left to right, most-significant bit to least-significant bit. For example, ARGB formats are ordered from the most-significant bit channel A (alpha), to the least-significant bit channel B (blue). When traversing surface data, the data is stored in memory from least-significant bit to most-significant bit, which means that the channel order in memory is from least-significant bit (blue) to most-significant bit (alpha).

    Formats that contain undefined channels (Rg32, Alpha8, and so on) default those channels to 1, with one exception: in the Alpha8 format, the three color channels are initialized to 0. Here, we use the SurfaceFormat.Color option. SurfaceFormat.Color is an unsigned 32-bit ARGB pixel format with alpha, using 8 bits per channel.

  • preferredDepthFormat: This is the format of the depth buffer, which contains depth data and possibly stencil data. You can control a depth buffer using a state object. The available depth formats include Depth16, Depth24, and Depth24Stencil8.
  • usage: This is a RenderTargetUsage value. It determines what happens to a render target's data once a new target is set. The enumeration has three values: PreserveContents, PlatformContents, and DiscardContents. The default, DiscardContents, means that whenever a render target is set onto the device, the contents of the previous one are destroyed. With PreserveContents, on the other hand, the data associated with the render target is maintained when a new render target is set; this hurts performance considerably, because the data must be stored and copied back to the render target when you use it again. PlatformContents either clears or keeps the data depending on the current platform: on Xbox 360 and Windows Phone 7, the render target discards its contents; on PC, it discards the contents if multisampling is enabled and preserves them if not.

In step 6, the first part of the Draw() method obtains the render target texture for the character's head. GraphicsDevice.SetRenderTarget() sets a new render target on the device. Because the application runs on Windows Phone 7 and RenderTargetUsage is set to DiscardContents, every time a new render target is assigned to the device, the contents of the previous one are destroyed. According to the XNA 4.0 SDK, the method has the following restrictions when called:

  • The multi-sample type must be the same for the render target and the depth stencil surface
  • The formats must be compatible for the render target and the depth stencil surface
  • The size of the depth stencil surface must be greater than, or equal to, the size of the render target

These restrictions are validated only when using the debug runtime, at the point any of the GraphicsDevice drawing methods is called. The lines that follow, up to GraphicsDevice.SetRenderTarget(null), adjust the camera position and the look-at target for rendering the head of the character. This block of code sets up the view for rendering the 3D model into a 2D render target texture, which will be displayed at the designated place on the Windows Phone screen. Calling GraphicsDevice.SetRenderTarget(null) resets the render target currently on the graphics device, freeing it for the next render target. The second and third parts of the Draw() method do the same for renderTarget2DRightFoot and renderTarget2DLeftFist. The fourth part draws the actual 3D character model. After that, we present all of the generated render targets on the Windows Phone 7 screen using the 2D drawing methods.

Creating a screen transition effect using RenderTarget2D

Do you remember the scene transitions in Star Wars? A scene transition is a very common technique for smoothly changing a movie from the current scene to the next. Frequently used transition patterns include swiping, rotating, fading, checkerboard scattering, and so on. With the proper transition effect, the audience knows the plot is moving on when the scene changes. Besides movies, transition effects also have a natural application in video games, especially 2D games, where every game state change can trigger a transition effect. In this recipe, you will learn how to create a typical transition effect using RenderTarget2D for your Windows Phone 7 game.

How to do it…

The following steps will draw a jumping sprites transition effect using the RenderTarget2D technique:

  1. Create a Windows Phone Game named RenderTargetTransitionEffect and change Game1.cs to RenderTargetTransitionEffectGame.cs. Then, add Image1.png and Image2.png to the content project.
  2. Declare the indispensable variables. Insert the following code to the RenderTargetTransitionEffectGame code field:
    [code]
    // The first forefront and background images
    Texture2D textureForeFront;
    Texture2D textureBackground;
    // the width of each divided image
    int xfactor = 800 / 8;
    // the height of each divided image
    int yfactor = 480 / 8;
    // The render target for the transition effect
    RenderTarget2D transitionRenderTarget;
    float alpha = 1;
    // the time counter
    float timer = 0;
    const float TransitionSpeed = 1.5f;
    [/code]
  3. Load the forefront and background images, and initialize the render target for the jumping sprites transition effect. Add the following code to the LoadContent() method:
    [code]
    // Load the forefront and the background image
    textureForeFront = Content.Load<Texture2D>("Image1");
    textureBackground = Content.Load<Texture2D>("Image2");
    // Initialize the render target
    transitionRenderTarget = new RenderTarget2D(GraphicsDevice,
        800, 480, false, SurfaceFormat.Color,
        DepthFormat.Depth24, 0,
        RenderTargetUsage.DiscardContents);
    [/code]
  4. Define the core method DrawJumpingSpritesTransition() for the jumping sprites transition effect. Paste the following lines into the RenderTargetTransitionEffectGame class:
    [code]
    void DrawJumpingSpritesTransition(float delta, float alpha,
        RenderTarget2D renderTarget)
    {
        // Instantiate a new Random object for generating random
        // values to change the rotation, scale, and position
        // of each subdivided image
        Random random = new Random();
        // Divide the image into the designated number of pieces,
        // here 8 * 8 = 64
        for (int x = 0; x < 8; x++)
        {
            for (int y = 0; y < 8; y++)
            {
                // Define the size of each piece
                Rectangle rect = new Rectangle(xfactor * x,
                    yfactor * y, xfactor, yfactor);
                // Define the origin center for rotation and
                // scale of the current subimage
                Vector2 origin =
                    new Vector2(rect.Width, rect.Height) / 2;
                float rotation =
                    (float)(random.NextDouble() - 0.5f) *
                    delta * 20;
                float scale = 1 +
                    (float)(random.NextDouble() - 0.5f) *
                    delta * 20;
                // Randomly change the position of the current
                // subdivided image
                Vector2 pos =
                    new Vector2(rect.Center.X, rect.Center.Y);
                pos.X += (float)(random.NextDouble());
                pos.Y += (float)(random.NextDouble());
                // Draw the current subimage
                spriteBatch.Draw(renderTarget, pos, rect,
                    Color.White * alpha, rotation, origin,
                    scale, SpriteEffects.None, 0);
            }
        }
    }
    [/code]
  5. Get the render target of the forefront image and draw the jumping sprites transition effect by calling the DrawJumpingSpritesTransition() method. Insert the following code to the Draw() method:
    [code]
    // Render the forefront image to a render target texture
    GraphicsDevice.SetRenderTarget(transitionRenderTarget);
    spriteBatch.Begin();
    spriteBatch.Draw(textureForeFront, new Vector2(0, 0),
        Color.White);
    spriteBatch.End();
    GraphicsDevice.SetRenderTarget(null);
    // Accumulate the total elapsed game time
    timer += (float)(gameTime.ElapsedGameTime.TotalSeconds);
    // Compute the delta value for this frame
    float delta = timer / TransitionSpeed * 0.01f;
    // Subtract the delta from alpha to change the image from
    // opaque to transparent
    alpha -= delta;
    // Draw the jumping sprites transition effect
    spriteBatch.Begin();
    spriteBatch.Draw(textureBackground, Vector2.Zero,
        Color.White);
    DrawJumpingSpritesTransition(delta, alpha,
        transitionRenderTarget);
    spriteBatch.End();
    [/code]
  6. Build and run the application. It should run similar to the following screenshots:
    transition effect using RenderTarget2D

How it works…

In step 2, textureForeFront and textureBackground hold the forefront and background images prepared for the jumping sprites transition effect. The xfactor and yfactor variables define the size of each subdivided image used in the transition effect. transitionRenderTarget is the RenderTarget2D object into which the foreground image will be rendered as a render target texture. The alpha variable controls the transparency of each subimage, and timer accumulates the total elapsed game time. TransitionSpeed is a constant that defines the transition speed.

In step 4, we define the core method DrawJumpingSpritesTransition() for drawing the jumping sprites effect. First of all, we instantiate a Random object; the random values it generates will be used to change the rotation, scale, and position of the divided subimages in the transition effect. In the nested loop, we iterate over every subimage row by row and column by column. For each subimage, we create a Rectangle object with the predefined size. Then, we set the origin point to the image center; this makes the image rotate and scale in place. After that, we randomly change the rotation, scale, and position values. Finally, we draw the current subimage on the Windows Phone 7 screen.

In step 5, we draw the forefront image first, because we want the transition effect applied to the forefront image. Using the render target, we capture the current view to the render target texture by putting the drawing code between the GraphicsDevice.SetRenderTarget(transitionRenderTarget) and GraphicsDevice.SetRenderTarget(null) calls. Next, we use the accumulated elapsed game time to compute the delta value that is subtracted from the alpha value. The alpha value is used in the SpriteBatch.Draw() method to make the subimages of the jumping sprites change from opaque to transparent. The last part of the Draw() method draws the background image first, and then draws the transition effect. This drawing order is important: the texture that has the transition effect must be drawn after the images without the transition effect. Otherwise, you will not see the effect you want.

Using IronPython for Administration Tasks

Understanding the Command Line

The command line is a text-based environment that some users never even see. You type a command and the computer follows it — nothing could be simpler. In fact, early PCs relied on the command line exclusively (even earlier systems didn’t even have a console and instead relied on punched tape, magnetic tape, punched cards, or other means for input, but let’s not go that far back). Some people are amazed at the number of commands that they can enter at the command line and the usefulness of those commands even today. A few administrators still live at the command line because they’re used to working with it. The following sections give you a better understanding of the command line and how it functions.

Newer versions of Windows (such as Vista and Windows 7) display a command prompt with reduced privileges as a security precaution. Many command line utilities require administrator privileges to work properly. To open an administrator command prompt when working with a newer version of Windows, right-click the Command Prompt icon in the Start menu and choose Run As Administrator from the context menu. You may have to provide a password to complete the command. When the command prompt opens, you have full administrator privileges, which let you execute any of the command line applications.

Understanding the Need for Command Line Applications

Many administrators today work with graphical tools. However, the graphical tools sometimes have problems — perhaps they’re slow or they don’t offer a flexible means of accomplishing a task. For this reason, good administrators also know how to work at the command line. A command line application can accomplish with one well-constructed command what a graphical application may require hundreds of mouse clicks to do — for example, the FindStr utility lets you find any string in any file. Using FindStr is significantly faster than any Windows graphical search application and always provides completely accurate results. In addition, there’s that option of searching any file — many search applications skip executables and other binary files. Give it a try right now. Open a command prompt, change directories to the root directory (CD \), type FindStr /M /S "Your Name", and press Enter. You’ll find every file on the hard drive that contains your name.

In some cases, the administrator must work at the command line. If you’ve taken a look at Windows Server 2008 Server Core edition, you know that it doesn’t include much in the way of a graphical interface. In fact, this version of Windows immediately opens a command processor when you start it. There’s no desktop, no icons, nothing that looks even remotely like a graphical interface. In fact, many graphical applications simply don’t work in Server Core because it lacks the required DLLs. When faced with this environment, you must know how to use command line applications.

Don’t get the idea that command line applications are a panacea for every application ailment or every administrator need. Command line applications share some common issues that prompted the development of graphical applications in the first place. Here are the issues you should consider when creating a command line application of your own:

  • Isn’t intuitive or easy to learn.
  • Requires the user to learn arcane input arguments.
  • Relies on the user to open a separate command prompt.
  • Is error prone.
  • Output results can simply disappear when starting the application without opening a separate command prompt.

Of course, you wouldn’t even be reading this chapter if command line applications didn’t also provide some benefits. In fact, command line applications are the only answer for certain application needs. Here are the benefits of using a command line application.

  • Fast, no GUI to slow things down
  • Efficient, single command versus multiple mouse clicks
  • Usable in automation, such as batch files
  • Less development time, no GUI code to write
  • Invisible when executed in the background

Command line applications can have other benefits. For example, a properly written, general command line application can execute just fine on more than one platform. Even if you use .NET-specific functionality, there’s a very good chance that you can use an alternative, such as Mono (http://www.mono-project.com/Main_Page), to run your application on other platforms. Adding a GUI always complicates matters and makes your application harder to port.

Reading Data from the Command Line

You have a multitude of options when working with data from the command line. Precisely which method you use depends on what you’re trying to achieve. If you merely want to see what the command line contains, you should use the Python approach because it’s fast and easy. However, Python doesn’t provide the widest range of command line processing features — it tends to focus on Unix methodologies. If you want additional flexibility in working with the command line options, you might use the .NET approach instead. The following sections describe both techniques.

Using the Python Method

Most programming languages provide some means of reading input from the command line and Python is no exception. As an IronPython developer, you also have full access to the Python method of working with the command line. While you’re experimenting, you may simply want to read the command line arguments. Listing 10-1 shows how to perform this task.

Listing 10-1: Displaying the command line arguments

[code]
# Perform the required imports.
import sys
# Obtain the number of command line arguments.
print 'The command line has', len(sys.argv), 'arguments.\n'
# List the command line arguments.
for arg in sys.argv:
    print arg
# Pause after the debug session.
raw_input('\nPress any key to continue...')
[/code]

Developers who have worked with C or C++ know that the main() function can include the argc (argument count) and argv (argument vector — a type of array) arguments. Python includes the argv argument as part of the sys module. To obtain the argc argument, you use the len(sys.argv) function call. The example relies on a simple for loop to display each of the arguments, as shown in Figure 10-1.
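
If you want to experiment with this logic away from a live command prompt, the counting step of Listing 10-1 can be wrapped in a small helper function (the helper name is our own, not part of the listing):

```python
import sys

def describe_args(argv):
    # Report the argument count first, then each argument,
    # mirroring Listing 10-1.
    lines = ['The command line has %d arguments.' % len(argv)]
    lines.extend(argv)
    return lines

if __name__ == '__main__':
    # sys.argv[0] is always the script name.
    for line in describe_args(sys.argv):
        print(line)
```

Passing a fabricated argument list such as ['CmdLine1.py', '-D', '-s'] reports three arguments, the first being the script name.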

Python makes it easy to list the command line arguments.

Of course, you’ll want to expand beyond simply listing the command line arguments into doing something with them. Listing 10-2 shows an example of how you could parse command line arguments for the typical Windows user.

Listing 10-2: Using the Python approach to parse command line arguments

[code]
# Perform the required imports.
import sys
import getopt
# Obtain the command line arguments.
def main(argv):
    try:
        # Obtain the options and arguments.
        opts, args = getopt.getopt(argv, 'Dh?g:s',
                                   ['help', 'Greet=', 'Hello'])
        # Parse the command line options.
        for opt, arg in opts:
            # Display help when requested.
            if opt in ('-h', '-?', '--help'):
                usage()
                sys.exit()
            # Tell the user we're in Debug mode.
            if opt == '-D':
                print 'Application in Debug mode.'
            # Display a user greeting.
            if opt in ('-g', '--Greet'):
                print 'Good to see you', arg.strip(':')
            # Say hello to the user.
            if opt in ('-s', '--Hello'):
                print 'Hello!'
        # Parse the command line arguments.
        for arg in args:
            # Display help when requested.
            if arg.upper() in ('/?', '/HELP'):
                usage()
                sys.exit()
            # Tell the user we're in Debug mode.
            elif arg == '/D':
                print 'Application in Debug mode.'
            # Display a user greeting.
            elif '/GREET' in arg.upper() or '/G' in arg.upper():
                print 'Good to see you', arg.split(':')[1]
            # Say hello to the user.
            elif arg.upper() in ('/S', '/HELLO'):
                print 'Hello!'
            # User has provided bad input.
            else:
                raise getopt.GetoptError('Error in input.', arg)
    # The user-supplied command line contains illegal arguments.
    except getopt.GetoptError:
        # Display the usage information.
        usage()
        # Exit with an error code.
        sys.exit(2)
# Call main() with only the relevant arguments.
if __name__ == '__main__':
    main(sys.argv[1:])
# Pause after the debug session.
raw_input('\nPress any key to continue...')
[/code]

This example actually begins at the bottom of the listing with an if statement:

[code]
if __name__ == '__main__':
    main(sys.argv[1:])

Many of your IronPython applications will use this technique to pass just the command line arguments to the main() function. As shown in Figure 10-1, the first command line argument is the name of the script and you don’t want to attempt processing it.

Python assumes that everyone works with Linux or some other form of Unix. Consequently, it directly supports only dash-prefixed (-) command line options. An option is an input that you can parse without too much trouble because Python does most of the work for you. Options use a single dash for a single letter (a short option) or a double dash for phrases (a long option). Anything that doesn’t begin with a dash, such as something that begins with a slash (/), is an argument. Unfortunately, most of your Windows users will be familiar with arguments, not options, so your application should process both.
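
As a sketch of that rule (the helper below is our own, purely illustrative), a classifier only has to look at the first character of each token:

```python
def split_options_and_arguments(tokens):
    # Dash-prefixed tokens are options; everything else, including
    # slash-prefixed Windows-style switches, counts as an argument.
    options = [t for t in tokens if t.startswith('-')]
    arguments = [t for t in tokens if not t.startswith('-')]
    return options, arguments
```

For a mixed command line such as -D /Hello report.txt, only -D lands in the options list; the slash switch and the filename are both treated as arguments.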

The code begins by separating options and arguments that you’ve defined. The getopt.getopt() method requires three arguments:

  • The list of options and arguments to process
  • A list of short options
  • A list of long options

In this example, argv contains the list of options and arguments contained in the command line, except for the script name. Each option and argument is separated by a space in the original string.

The list of short options is 'Dh?g:s'. Notice that you don’t include a dash between each of the options — Python includes them for you automatically. Each of the entries is a different command line switch, except for the colon. So, this application accepts -D, -h, -?, -g:, and -s as command line switches. The command line switches are case sensitive. The colon after -g signifies that the user must also provide a value as part of the command line switch.

The list of long options includes ['help', 'Greet=', 'Hello']. Notice that you don’t include the double dash at the beginning of each long option. As with the short versions of the command line switches, these command line switches are case sensitive. The command line switches for this example are:

  • -D: Debug mode
  • -h, -?, and --help: Help
  • -g:Username and --Greet:Username: Greeting that includes the user’s name
  • -s and --Hello: Says hello to the user without using a name

At this point, the code can begin processing opts and args. In both cases, the code relies on a for loop to perform the task. However, notice that the opts loop uses two variables, opt and arg, while the args loop uses a single variable, arg. That’s because opts and args are stored differently. The opts version of -g:John appears as [('-g', ':John')], while the args version appears as ['/g:John']. Notice that opts automatically separates the command line switch from the value for you.
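
You can confirm this storage difference with a quick, standalone getopt call using hypothetical input values:

```python
import getopt

# '-g:John' is parsed as the -g option carrying the value ':John';
# '/g:John' does not start with a dash, so it lands in args untouched.
opts, args = getopt.getopt(['-g:John', '/g:John'], 'Dh?g:s',
                           ['help', 'Greet=', 'Hello'])
print(opts)  # [('-g', ':John')]
print(args)  # ['/g:John']
```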

Processing opts takes the same course in every case. The code uses an if statement such as if opt in ('-h', '-?', '--help') to determine whether the string appears in opt. In most cases, the code simply prints out a value for this example. The help routine calls on usage(), which is explained in the “Providing Command Line Help” section of the chapter. Calling sys.exit() automatically ends the application. If the application detects any command line options that don’t appear in your list of command line options to process, it raises the getopt.GetoptError() exception. Standard practice for Python applications is to display usage information using usage() and then exit with an error code (of 2 in this case, by calling sys.exit(2)).

Now look at the args processing and you see something different. Python doesn’t provide nearly as much automation in this case. In addition, your user will likely expect / command line switches to behave like those for most Windows applications (case insensitive). The example handles this issue by using a different if statement, such as if arg.upper() in ('/?', '/HELP'). Notice that these switches use a slash, not a dash.

Argument processing relies on a single if statement, rather than individual if statements. Consequently, the second through the last command line switches actually rely on an elif clause. Python won’t automatically detect errors in / command line switches. Therefore, your code also requires an else clause that raises the getopt.GetoptError() event manually.

Remember that arguments are single strings, not command line switch and value pairs. You need some method to split the command line switch from the value. The code handles this case using elif '/GREET' in arg.upper() or '/G' in arg.upper(), where it compares each command line switch individually. In addition, it relies on arg.split(':')[1] to display the value. The argument processing routine shows that you can accommodate both Linux and Windows users quite easily with your application.
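
A small helper (our own, not from the listing) isolates that split-and-compare step for slash-style switches:

```python
def parse_slash_switch(arg):
    # Split a Windows-style '/Switch:Value' pair at the first colon;
    # uppercase the switch so comparisons are case insensitive, as
    # Windows users expect.
    parts = arg.split(':', 1)
    switch = parts[0].upper()
    value = parts[1] if len(parts) > 1 else None
    return switch, value
```

For example, parse_slash_switch('/Greet:John') yields ('/GREET', 'John'), while a bare '/Hello' yields ('/HELLO', None).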

It’s time to test the example. Figure 10-2 shows the output of running IPY CmdLine2.py -D -s -g:John /Hello /g:John.

An IronPython application can accommodate both - and / command line switches.

Using the .NET Method

The .NET method of working with command line arguments is similar to the Python method, but there are distinct differences. When you design your application, you should use one technique of parsing the command line or the other because mixing the two will almost certainly result in application errors. Listing 10-3 shows a simple example of the .NET method.

Listing 10-3: Using the .NET approach to list command line arguments

[code]
# Perform the required imports.
import System
# Obtain the number of command line arguments.
print 'The command line has',
print len(System.Environment.GetCommandLineArgs()),
print 'arguments.\n'
# List the command line arguments.
for arg in System.Environment.GetCommandLineArgs():
    print arg
# Pause after the debug session.
raw_input('\nPress any key to continue...')
[/code]

This example also relies on len() to obtain the number of command line arguments contained in System.Environment.GetCommandLineArgs(). As before, the code relies on a for loop to process the command line arguments. You might expect that the results would also be the same, but look at Figure 10-3 and compare it to Figure 10-1. Notice that the .NET method outputs not only the script name, but also the name of the script processor and its location on the hard drive. Using the .NET method can have benefits if you need to verify the location of IPY.EXE on the user’s system.

It’s time to see how you might parse a command line using the .NET method. Many of the techniques are similar, but there are significant differences because .NET lacks any concept of options versus arguments. In short, you use a single technique to process both in .NET. Listing 10-4 shows how to parse a command line using the .NET method.

The .NET method produces different results than the Python method.

Listing 10-4: Using the .NET approach to parse command line arguments

[code]
# Perform the required imports.
from System import ArgumentException, Array, String
from System.Environment import GetCommandLineArgs
import sys
print '.NET Version Output\n'
try:
    # Obtain the number of command line arguments.
    Size = GetCommandLineArgs().Count
    # Check the number of arguments.
    if Size < 3:
        # Raise an exception if there aren't any arguments.
        raise ArgumentException('Invalid Argument')
    else:
        # Create an array that has just command line arguments in it.
        Arguments = Array.CreateInstance(String, Size - 2)
        Array.Copy(GetCommandLineArgs(), 2, Arguments, 0, Size - 2)
        # Parse the command line options.
        for arg in Arguments:
            # Display help when requested.
            if arg in ('-h', '-?', '/?', '--help') or \
               arg.upper() in ('/H', '/HELP'):
                usage()
                sys.exit()
            # Tell the user we're in Debug mode.
            elif arg in ('-D', '/D'):
                print 'Application in Debug mode.'
            # Display a user greeting.
            elif '-g' in arg or '--Greet' in arg or \
                 '/G' in arg.upper() or '/GREET' in arg.upper():
                print 'Good to see you', arg.split(':')[1]
            # Say hello to the user.
            elif arg in ('-s', '--Hello') or \
                 arg.upper() in ('/S', '/HELLO'):
                print 'Hello!'
            else:
                raise ArgumentException('Invalid Argument', arg)
except ArgumentException:
    usage()
    sys.exit(2)
# Pause after the debug session.
raw_input('\nPress any key to continue...')
[/code]

The .NET implementation is a little simpler than the Python implementation — at least if you want to use both kinds of command line switches. This example begins by importing the required .NET assemblies. The example also relies on sys to provide the exit() function.

The code begins by checking the number of arguments. When using .NET parsing, you must have at least three command line arguments to receive any input. The example uses the ArgumentException() method to raise an exception should the user not provide any inputs.

In the Python method example, the code uses a special technique to get rid of the script name. The .NET method must also get rid of the application name in addition to the script name. In this case, the code creates a new array, Arguments, to hold just the command line arguments. You must make Arguments large enough to hold all of the command line arguments, so the code uses the Array.CreateInstance() method to create an Array object with two fewer elements than the original array provided by GetCommandLineArgs(). The Array.CreateInstance() method requires two inputs: the array data type and the array length. The Array.Copy() method moves just the command line arguments to Arguments. The Array.Copy() method requires five inputs: source array, source array starting element, destination array, destination array starting element, and the number of elements to copy.
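
In plain Python terms (a rough analogue of the .NET calls, not the .NET code itself), the CreateInstance/Copy pair amounts to slicing off the first two elements:

```python
def strip_runtime_entries(all_args):
    # GetCommandLineArgs()-style input: [interpreter, script, arg1, ...].
    # Dropping the first two entries leaves only the user's arguments.
    return list(all_args)[2:]
```

Given a hypothetical array such as ['ipy.exe', 'CmdLine4.py', '-D', '/Hello'], only the last two entries survive.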

At this point, the code can begin parsing the input arguments. Notice that unlike the Python method, you can parse all the permutations in a single line of code using the .NET method. The example provides the same processing as the Python method example, so that you can compare the two techniques. As with the Python method, the .NET method raises an exception when the user doesn’t provide correct input. The result is that the example displays usage instructions for the application. Figure 10-4 shows the output from this example.

Parsing arguments produces the same results in .NET as it does with Python.

Providing Command Line Help

Your command line application won’t have a user interface — just a command. While some people can figure out graphical applications by pointing here and clicking there, figuring out a command line application without help is nearly impossible. The methods used to understand an undocumented command line application are exotic and usually require advanced debugging techniques, time spent in the registry, lots of research online, and more than a little luck. If you seriously expect someone to use your command line application, you must provide help.

Unlike a graphical application, you won’t need tons of text and screenshots to document most command line applications. All you really need is a little text that’s organized in a certain manner. Most command line applications use the same help format, which makes them easier to understand and use. However, not all command line applications provide all the help they really need. In order to provide your command line application with superior help, you need to consider the five following elements:

  • Application description
  • Application calling syntax
  • Command line switch summary and description
  • Usage examples
  • (Optional) Other elements

The following sections describe all these elements and help you understand why they’re important. Of course, every command line application is different, so you’ll want to customize the suggestions in the following sections to meet your particular needs. The point is, you must provide the user with help of some kind.

Creating an Application Description

Many of the command line applications you see lack this essential feature. You ask for help and the application provides you with syntax and a quick overview of the command line switches. At the outset, you have little idea of what the application actually does and how the developer envisioned your using it. After a little experimentation, you might still be confused and have a damaged system as well.

An application description doesn’t have to be long. In fact, you can make it a single sentence. If you can’t describe your command line application in a single sentence, it might actually be too big — characteristically, command line applications are small and agile. Of course, there are exceptions and you may very well need an entire paragraph to describe your application. The big mistake is writing a huge tome. Most people using your application have worked with computers for a long time, so a shorter description normally works fine.

As a minimum, your application description should include the application name so the user can look for additional information online. The description should tell the user what the application does and why you created it.

Describing the Application Calling Syntax

Applications have a calling syntax — a protocol used to interact with the application. Unfortunately, you won’t have access to any formatting when writing your application help screen. Developers have come up with some methods to show certain elements over the years and you should use these methods for your command line syntax. Consider the following command line:

[code]
MyApp <Filename> [-S] [-s] [-D [-U[:<Name>]]] [-X | -Y | -Z | <Delta>] [-?]
[/code]

Believe it or not, all these strange-looking symbols do have a meaning, and you need to consider them for your application. Any item that appears in square brackets ([]), such as [-S], is optional. The user doesn’t need to provide it to use the application.

Anything that appears in angle brackets (<>), such as <Filename>, is a variable. The user replaces this value with some other value. Normally, you provide a descriptive name for the variable. For example, when you see <Filename>, you know that you need to provide the name of a file. In this case, <Filename> isn’t optional — the user must provide it unless asking for help. It’s understood that requesting help, normally -? or /?, doesn’t require any other input.

Command line switches within other command line switches are dependent on that outer command line switch. For example, you can use -D alone. However, if you want to use -U, you must also provide -D. In this case, -U is dependent on -D. Notice that you can use -U alone or you can include a <Name> variable with it. When you use the <Name> variable, the command line switch sequence must appear as -D -U:<Name>.

Sometimes a command line switch is mutually exclusive with other command line switches or even variables. For example, the [-X | -Y | -Z | <Delta>] sequence says that you can provide -X or -Y or -Z or <Delta>, but you can’t provide more than one of them.
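
When your parser enforces such a syntax, the mutual-exclusion rule reduces to a simple count; the sketch below (our own helper, using the switches from the example) flags a violation:

```python
def violates_exclusion(args, group=('-X', '-Y', '-Z')):
    # More than one switch from a mutually exclusive group is an error.
    return sum(1 for a in args if a in group) > 1
```

violates_exclusion(['-X', '-Y']) returns True, so the application would display its usage text and exit, while '-X' combined with unrelated switches passes the check.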

Most Windows command line applications are case insensitive. However, there are notable exceptions to this rule. If you find that you must make your application case sensitive, be sure to use the correct case in the command line syntax. For example, -S isn’t the same as -s for this application, and the command line syntax shows that. You should also note in other areas of your help screen that the application is case sensitive, because some users won’t notice the difference in case.

Some developers will simply use [Options] for the command line syntax if you can use any of the command line switches at any time, or simply ignore them completely. There isn’t anything wrong with this approach, especially when your application defaults to showing the help screen when the user doesn’t provide any command line switches. However, make absolutely certain that your application truly doesn’t have a unique calling syntax before you use this approach.

Documenting the Command Line Switches

No matter how simple or complex the application, you need to document every command line switch. Most application writers use anywhere from one to three sentences to document the command line switch unless it’s truly complex. The command line switch documentation should focus on the purpose of the command line switch. Save any examples you want to provide for the usage examples portion of the help screen.

You must document every command line switch or the user won’t know it exists. Placing alternative command line switches together is a good idea because it reduces the complexity of the help screen. The order in which you place the command line switches depends on the purpose and complexity of your application. However, most developers use one of the following ordering techniques:

  • Alphabetical: Useful for longer lists of command line switches because alphabetical order can make it easier to find a particular command line switch in the list.
  • Syntactical: Developers especially like to see the command line switches in syntactical order. After viewing the syntax, the developer can find the associated command line switch description quickly.
  • Order of potential usage: Placing the command line switches in order of popularity means that the user doesn’t have to search the entire list to find a particular command line switch description. This approach is less useful on long or complex lists because you really don’t know how the user will work with the application.
  • Order of required use: In some cases, an application requires that a user place the command line switches in a particular order. For example, when creating a storyboard effect with a command line application, you want the user to know which command line switch to use first.

Some command line switch lists become quite long. In this case, you might want to group like command line switches together and place them in groups on the help screen. For example, you might have a set of command line switches that affects input and another that affects output. You could create two groups, one for each task, on your help screen to make finding a particular command line switch easier.

Showing Usage Examples

Most users won’t really understand your command line application unless you provide some usage examples. A usage example should show the command line and its result — if you do this, then you get that as output. Precisely how you put the examples together depends on your application and its intended audience. An application designed for advanced users can probably get by with fewer examples, while a complex application requires more examples. The usage examples should be non-trivial. You should try to show common ways in which you expect the user to work with your application.

Putting Everything Together

Now that you have a basic understanding of the required help screen elements, it’s time to look at an example. Listing 10-5 shows a typical usage() function. It displays help information to users who need it, using simple print statements.

Listing 10-5: Creating a help screen for your application

[code]
# Create a usage() function.
def usage():
    print 'Welcome to the command line example.'
    print 'This application shows basic command line argument reading.'
    print '\nUsage:'
    print '\tIPY CmdLine2.py [Options]'
    print '\nOptions:'
    print '\t-D: Places application in debug mode.'
    print '\t-h or -? or --help: Displays this help message.'
    print '\t-g:<Name> or --Greet:<Name>: Displays a simple greeting.'
    print '\t-s or --Hello: Displays a simple hello message.'
    print '\nExamples:'
    print '\tIPY CmdLine2.py -s outputs Hello!'
    print '\tIPY CmdLine2.py -g:John outputs Good to see you John'
    print '\tYou can use either the - or / as command line switches.'
    print '\tFor example, IPY CmdLine2.py /s outputs Hello!'
[/code]

Notice the use of formatting in the code. The code places section titles at the left and an extra space below the previous section. Section content is indented so it appears as part of the section. Figure 10-5 shows the output from this code. Even though this help screen is quite simple, it provides everything needed for someone to use the example application to test command line switches.

Including Other Elements

Some command line application help screens become enormous and hard to use. In fact, some of Microsoft’s own utilities have help that’s several layers deep. Just try drilling into the Net utility sometime and you’ll discover just how cumbersome the help can become. Of course, you do want to document everything for the user. As an alternative, some command line application developers will provide an overview as part of the application, and then include a URL for detailed material online. It’s not a perfect solution because you can’t always count on the user having an Internet connection, but it does work most of the time.

You don’t have to stop with simple information redirection as part of your help. Some utilities include a phone number (just in case the user really is lacking that Internet connection). E‑mail addresses aren’t unusual, and some developers get creative in providing other helpful tips. It’s also important to take ownership of your application by including a company or developer name. If copyright is important, then you should provide a copyright notice as well. The thing is to make it easy for someone to identify your command line application without cluttering up the help screens too much.

Figure 10-5: The application help screen is simple, but helpful.

To break the help screens up, you might want to include layered help. Typing MyApp /? might display an overview, while MyApp /MySwitch /? provides detailed information. Microsoft uses this approach with several of its utilities. If you use layered help, make sure you mention it on the overview help screen, or most users will think that the overview is all they get in the way of useful information.
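The layered dispatch described above amounts to checking whether a known switch is followed by the help switch. Here is a minimal sketch in current CPython syntax; the /copy and /move switches and their descriptions are hypothetical:

```python
# Detailed help text for each switch; the switch names are illustrative.
DETAILED_HELP = {
    '/copy': 'Copy: duplicates the input file before processing.',
    '/move': 'Move: relocates the input file after processing.',
}

def show_help(argv):
    # Layered form: a known switch followed by /? gives switch detail.
    if len(argv) >= 2 and argv[1] == '/?' and argv[0] in DETAILED_HELP:
        return DETAILED_HELP[argv[0]]
    # Overview form: mention the layered help so users know it exists.
    return ('MyApp [/copy | /move] <Filename>\n'
            'Type MyApp <switch> /? for details on a single switch.')

print(show_help(['/?']))          # overview, which advertises the layering
print(show_help(['/copy', '/?'])) # detail for one switch
```

Notice that the overview text itself tells the user the layered help exists, per the advice above.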

Special settings require a section as well. For example, IPY.EXE provides access to some application features through environment variables. These environment variables appear in a separate section of the help screen.

Applications that could damage application data or the system as a whole in some way require warnings. Too few command line applications provide warnings, so command line applications have gotten a reputation for being dangerous — only experts need apply. The fact is that many of these applications would be quite easy to use with the proper warning information. However, don’t go too far in protecting the user by providing messages that request the user confirm a particular task. Using confirmations would reduce the ability of developers to use the command line applications for batch processing and automation needs.

Given that your application might inadvertently damage something when the user misuses it, you might also want to include fixes and workarounds as part of your help. Unfortunately, it’s the nature of command line utilities that the actions they perform are one-way — once done, you can’t undo them.

Interacting with the Environment

The application environment consists of a number of elements. Of course, you need to consider whether the application uses a character mode interface or a graphical interface. The platform on which the application runs is also a consideration. Depending on the application’s purpose, you may need to consider background task management as part of the picture. Most developers understand that these elements, and more, affect the operation of the application. However, some developers miss out on a special environmental feature, the environment variable. Using environment variables makes it possible to communicate settings to your application at a number of different levels in a way that command line switches can’t. In fact, you may not even realize it, but there are several different levels of environment variables with which you can control an application, making the variables quite flexible. The following sections describe environment variables and their use in IronPython.

Understanding Environment Variables

Environment variables are simply a kind of storage location managed by the operating system. When you open a command prompt, you can see a list of environment variables by typing Set and pressing Enter. Figure 10-6 shows the environment variables on my system. The environment variables (or at least their values) will differ on your machine, so you should take a look at them. If you want to see the value of a particular environment variable, type Set VariableName (such as Set USERNAME) and press Enter. To remove an environment variable, simply type Set VariableName= (with no value) and press Enter. (Never remove environment variables you didn’t create because some of your applications could, or more likely will, stop working.)

Figure 10-6: Most computers have a wealth of environment variables.

As you can see from Figure 10-6, environment variables appear as a name/value pair. An environment variable with a specific name has a certain value. Some environment variables in this list are common to all Windows machines. For example, the system wouldn’t be able to find applications without the Path environment variable. Environment variables such as COMPUTERNAME and USERNAME can prove helpful for your applications. You can also discover facts such as the processor type and system drive using environment variables.

It’s possible to create environment variables using a number of techniques. However, the method used to create the environment variable determines its scope (personal or global), visibility (command prompt only or command prompt and Windows application), and longevity (session or permanent). For example, if you type Set MyVar=Hello (notice that there are no quotes for the value) and press Enter, you create a personal environment variable that lasts for the current session and is visible only in the command prompt window. You can see any environment variable by typing Echo %VarName% and pressing Enter. Try it out with MyVar. Type Echo %MyVar% and press Enter to see the output shown in Figure 10-7.
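From Python, the variable expansion that Echo performs at the prompt can be approximated with os.path.expandvars(). The sketch below (in current CPython syntax) creates a session-local MYVAR, much as Set MyVar=Hello does, and uses the $-style reference because it works on every platform:

```python
import os

# Create a session-local variable, as Set MyVar=Hello does at the prompt.
os.environ['MYVAR'] = 'Hello'

# expandvars() substitutes variable references inside a string,
# much like Echo %MyVar% does in a command prompt window.
print(os.path.expandvars('Value of MYVAR: $MYVAR'))  # Value of MYVAR: Hello
```

As with the Set command, this variable is visible only to the current process and any child processes it starts.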

Use the Echo command to see environment variable content.

The most common way to set a permanent environment variable is to click Environment Variables on the Advanced tab of the System Properties dialog box. You see the Environment Variables dialog box shown in Figure 10-8. This dialog box has two environment variable settings areas. The upper area manages personal settings that affect just one person — the current user. The lower area manages environment variables that affect everyone who uses the system.

Figure 10-8: Personal environment variables affect just one person; system environment variables affect everyone.

To create a new environment variable, simply click New. You see the New User Variable (shown in Figure 10-9) or the New System Variable dialog box. In both cases, you type an environment variable name in the Variable Name field and an environment variable value in the Variable Value field. Click OK and you see the environment variable added to the appropriate list. Editing an environment variable is just as easy. Simply highlight the environment variable you want to change in the list and click Edit. You’ll see a dialog box similar to the one shown in Figure 10-9 where you can change the environment variable value. To remove an environment variable, simply highlight its entry in the list and click Delete.

Any changes you make to environment variables won’t show up until you close and reopen any command prompt windows. Windows provides the current set of environment variables to every command prompt window when it opens the window, but it doesn’t perform updates.

The interesting thing about environment variables you set using the Environment Variables dialog box is that they are also available to Windows applications. You can read these environment variables just as easily in a graphical application as you can in a character mode application.

Figure 10-9: Create an environment variable by supplying a name/value pair.

You may find that you want to create environment variables for just the command prompt. Of course, you can always use the Set command approach described earlier in this section. However, most developers will want something a little more automated. If you need to set command line–only environment variables for the entire machine, then you need to modify the AutoExec.NT file found in the WINDOWS\system32 folder of your system. Figure 10-10 shows a typical view of this file.

Figure 10-10: Some people forget that AutoExec.NT contains environment variables.

Simply open the file using a text editor, such as Notepad (don’t use WordPad), and add a Set command to it. Every time someone opens a command prompt, Windows reads this file and uses the settings in it to configure the command prompt window. Many people forget that the AutoExec.NT file even exists, but it’s a valuable way to add Set commands in certain cases.

It’s also possible to set individualized command prompt environment variables for a specific application. In this case, create a batch (.BAT) file using a text editor. Add Set commands to it for the application, and then add a line to start the application, such as IPY MyApp.py. In short, you can make environment variables appear whenever and wherever you want by simply using the correct method to create them.
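Such a launcher batch file can even be generated from Python itself. The sketch below (current CPython syntax) writes one to the system temporary folder; the file name, variable name, and IPY command line are illustrative assumptions:

```python
import os
import tempfile

# Compose a launcher: Set commands first, then the line that starts
# the application, just as described above.
lines = [
    '@Echo Off',
    'Set MYAPP_MODE=Debug',
    'IPY MyApp.py',
]
path = os.path.join(tempfile.gettempdir(), 'RunMyApp.bat')
with open(path, 'w') as launcher:
    launcher.write('\r\n'.join(lines) + '\r\n')

print('Wrote', path)
```

Running the resulting .BAT file gives the application its private environment variables without touching the machine-wide settings.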

Using the Python Method

Python provides operating system–generic methods of reading and writing variables. As with many things in IronPython, the Python techniques work great across platforms, but probably won’t provide the greatest flexibility. The following sections describe the techniques you use to read and set environment variables using the Python method.

Reading the Environment Variables Using Python

This example looks at a new Python module, os, which contains a number of interesting classes. In this case, you use the environ class, which provides access to the environment variables and lets you manipulate them in various ways, as shown in Listing 10-6.

Listing 10-6: Displaying the environment variables using the Python method

[code]
# Import the required Python modules.
import os
# Obtain the environment variable keys.
Variables = os.environ.keys()
# Sort the keys in alphabetic order.
Variables.sort()
# Display the keys and their associated values.
for Var in Variables:
    print '%30s %s' % (Var, os.environ[Var])
# Pause after the debug session.
raw_input('\nPress any key to continue...')
[/code]

The code begins by importing the required modules, as normal. It then places the list of environment variable keys, the names, in Variables using os.environ.keys(). In most cases, you want to view the environment variables in sorted order because there are too many of them to simply peruse a list, so the code sorts the list using Variables.sort().

At this point, the code is ready to display the list. It uses a simple for loop to perform the task. Notice the use of formatting to make the output more readable. Remember that the values don’t appear in the Variables list, so you must obtain them using os.environ[Var]. Figure 10-11 shows typical output from this example.

Figure 10-11: The environment variables are displayed in alphabetical order.

Setting the Environment Variables Using Python

Python makes it relatively easy to set environment variables. However, the environment variables you create using IronPython affect only the current command prompt session and the current user. Consequently, if you start another application in the current session (see the section “Starting Other Command Line Applications” later in the chapter for details), it can see the environment variable, but if you start an application in a different session or start a graphical application, the environment variable isn’t defined. In addition, changes you make to existing environment variables affect only the current session. Nothing is permanent. Listing 10-7 shows how to modify environment variables using the Python method.

Listing 10-7: Setting an environment variable using the Python method

[code]
# Import the required Python modules.
import os
# Create a new environment variable.
os.environ.__setitem__('MyVar', 'Hello')
# Display its value on screen.
print 'MyVar =', os.environ['MyVar']
# Change the environment variable and show the results.
os.environ.__setitem__('MyVar', 'Goodbye')
print 'MyVar =', os.environ['MyVar']
# Delete the variable, and then try to show it.
try:
    os.environ.__delitem__('MyVar')
    print 'MyVar =', os.environ['MyVar']
except KeyError as KeyName:
    print "Can't display", KeyName
# Pause after the debug session.
raw_input('\nPress any key to continue...')
[/code]

Setting and changing an environment variable use the same method, os.environ.__setitem__(). In both cases, you supply a name/value pair (MyVar/Hello). When you want to see the value of the environment variable, you request the value by supplying the name, such as os.environ['MyVar'] for this example.

Deleting an environment variable requires use of os.environ.__delitem__(). In this case, you supply only the name of the environment variable you want to remove.
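Because os.environ behaves like an ordinary dictionary, the dunder calls in Listing 10-7 also have idiomatic equivalents, shown here as a sketch in current CPython syntax:

```python
import os

# os.environ.__setitem__('MyVar', 'Hello') is the same as:
os.environ['MyVar'] = 'Hello'
print(os.environ['MyVar'])  # Hello

# os.environ.__delitem__('MyVar') is the same as:
del os.environ['MyVar']

# Reading a deleted variable raises KeyError, as in the listing;
# get() offers a default-returning alternative.
print(os.environ.get('MyVar', 'not set'))  # not set
```

Most Python code uses the indexing and del forms; the listing spells out the dunder methods only to make the underlying calls explicit.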

If you try to display an environment variable that doesn’t exist, the interpreter raises a KeyError exception. The example shows the result of trying to print MyVar after you remove it using os.environ.__delitem__(). Figure 10-12 shows the output from this example.

Figure 10-12: IronPython makes it easy to set, modify, and delete environment variables for the current session.

Using the .NET Method

Working with environment variables using the .NET method isn’t nearly as easy as working with them using the Python method. Then again, you can make permanent environment variable changes using .NET. In fact, .NET provides support for three levels of environment variables.

  • Process: Affects only the current process and any processes that the current process starts
  • User: Affects only the current user
  • Machine: Affects all users of the host system

An important difference between the Python and .NET methods is that any change you make using the .NET method affects both command line and graphical applications. You have significant control over precisely how and where an environment variable change appears because you specify precisely what level the environment variable should affect. The following sections provide more information on reading and setting environment variables using the .NET method.

Reading the Environment Variables Using .NET

As previously mentioned, the .NET method is more flexible than the Python method, but also requires a little extra work on your part. Some of the extra work comes in the form of flexibility. The .NET method provides several ways to obtain environment variable data.

  • Use one of the Environment class properties to obtain a standard environment variable value. You can find a list of these properties at http://msdn.microsoft.com/library/system.environment_properties.aspx.
  • Check a specific environment variable using GetEnvironmentVariable().
  • Obtain all the environment variables for a particular level using GetEnvironmentVariables() with an EnvironmentVariableTarget enumeration value.
  • Obtain all the environment variables regardless of level using GetEnvironmentVariables().

It’s important to note that these techniques let you answer questions such as whether a particular environment variable is a standard or custom setting. You can also determine whether the environment variable affects the process, user, or machine as a whole. In short, you obtain more information using the .NET method, but at the cost of additional complexity. Listing 10-8 shows how to read environment variables using each of the .NET methods.

Listing 10-8: Displaying the environment variables using the .NET method

[code]
# Obtain access to Environment class properties.
from System import Environment
# Obtain all of the Environment class methods.
from System.Environment import *
# Import the EnvironmentVariableTarget enumeration.
from System import EnvironmentVariableTarget
# Display specific, standard environment variables.
print 'Standard Environment Variables:'
print '\tCurrent Directory:', Environment.CurrentDirectory
print '\tOS Version:', Environment.OSVersion
print '\tUser Name:', Environment.UserName
# Display any single environment variable.
print '\nSpecific Environment Variables:'
print '\tIronPython Path:', GetEnvironmentVariable('IronPythonPath')
print '\tSession Name:', GetEnvironmentVariable('SessionName')
# Display a particular kind of environment variable.
print '\nUser Level Environment Variables:'
for Var in GetEnvironmentVariables(EnvironmentVariableTarget.User):
    print '\t%s: %s' % (Var.Key, Var.Value)
# Display all of the environment variables in alphabetical order.
print '\nAll of the environment variables.'
# Create a list to hold the variable names.
Keys = GetEnvironmentVariables().Keys
Variables = []
for Item in Keys:
    Variables.append(Item)
# Sort the resulting list.
Variables.sort()
# Display the result.
for Var in Variables:
    print '\t%s: %s' % (Var, GetEnvironmentVariable(Var))
# Pause after the debug session.
raw_input('\nPress any key to continue...')
[/code]

The code begins by importing some .NET assemblies. Notice that the example reduces clutter by importing only what the code actually needs.

As mentioned earlier, you can obtain standard environment variable values by using the correct property value from the System.Environment class. In this case, the code retrieves the current directory, operating system version, and the user name, as shown in Figure 10-13.

The next code segment in Listing 10-8 shows how to obtain a single environment variable. All you need is GetEnvironmentVariable() with a variable name, such as IronPythonPath.

If you want to work with the environment variables found at a particular level, you use GetEnvironmentVariables() with an EnvironmentVariableTarget enumeration value, as shown in the next code segment in Listing 10-8. Unless you create a custom environment variable, you won’t see any output at the EnvironmentVariableTarget.Process level.

You might remember from Listing 10-6 the ease of sorting the environment variables when using the Python method. Sorting the environment variables when using the .NET method isn’t nearly as easy because the .NET method relies on a System.Collections.Hashtable object for the output of the GetEnvironmentVariables() method call. The easiest method to sort the environment variables is to obtain a list of the keys using GetEnvironmentVariables().Keys, the Keys object; place them in a list object, Variables; and then sort as normal using Variables.sort().

Figure 10-13: The .NET method provides multiple ways to obtain environment variables.

Now that the code has a sorted list, it uses a for loop to enumerate each environment variable using GetEnvironmentVariable(). Figure 10-13 doesn’t show the entire list, but when you try the example, you’ll see that the list is indeed sorted. There are definitely times when .NET objects will cause problems for your IronPython application, and this is one of them.

Setting the Environment Variables Using .NET

The .NET method provides some additional setting capabilities when compared to the Python method. For one thing, you can make the environment variable settings permanent. The reason for this difference is that the .NET method lets you write the settings directly to the registry. You won’t manipulate the registry directly, but the writing does take place in the background, just as it would if you used the Environment Variables dialog box.

You do have some limitations. For example, you can’t change an Environment class property value. This restriction makes sense because you don’t want to change an environment variable that a number of applications might need. Listing 10-9 shows how to set environment variables as needed.

Listing 10-9: Setting an environment variable using the .NET method

[code]
# Obtain access to Environment class properties.
from System import Environment
# Obtain all of the Environment class methods.
from System.Environment import *
# Import the EnvironmentVariableTarget enumeration.
from System import EnvironmentVariableTarget
# Create a temporary process environment variable.
SetEnvironmentVariable('MyVar', 'Hello')
print 'MyVar =', GetEnvironmentVariable('MyVar')
# Create a permanent user environment variable.
SetEnvironmentVariable('Var2', 'Goodbye', EnvironmentVariableTarget.User)
print 'Var2 =', GetEnvironmentVariable('Var2')
print 'Var2 =', GetEnvironmentVariable('Var2', EnvironmentVariableTarget.User)
raw_input('\nOpen the Environment Variables dialog box...')
# Delete the temporary and permanent variables.
print '\nDeleting the variables...'
SetEnvironmentVariable('MyVar', None)
SetEnvironmentVariable('Var2', None, EnvironmentVariableTarget.User)
print 'MyVar =', GetEnvironmentVariable('MyVar')
print 'Var2 =', GetEnvironmentVariable('Var2', EnvironmentVariableTarget.User)
# Pause after the debug session.
raw_input('\nPress any key to continue...')
[/code]

The example begins with the usual assembly imports. It then creates a new environment variable using the SetEnvironmentVariable() method. If you call SetEnvironmentVariable() without specifying a particular level, then the .NET Framework creates a temporary process environment variable that only lasts for the current session.

The next step creates a permanent user environment variable. In this case, you must supply an EnvironmentVariableTarget enumeration value as the third argument. This portion of the example also demonstrates something interesting. If you create a new permanent environment variable in a process, the .NET Framework won’t update that process (or any other process for that matter). Consequently, the first call to GetEnvironmentVariable() fails, as shown in Figure 10-14.

To see the environment variable, you must either restart the process or you must call GetEnvironmentVariable() with an EnvironmentVariableTarget enumeration value. As a result, the second call succeeds. At this point, the example pauses so you can open the Environment Variables dialog box and see for yourself that the environment variable actually does exist as a permanent value.

Deleting an environment variable is as simple as setting it to None using the SetEnvironmentVariable() method. However, you need to delete permanent environment variables by including the EnvironmentVariableTarget enumeration value, or the .NET Framework won’t delete it. Unlike the Python method, you won’t get an error when checking for environment variables that don’t exist using the .NET method. Instead, you’ll get a value of None, as shown in Figure 10-14.

Figure 10-14: You can create permanent environment variables using the .NET method.

Environment Variable Considerations

Some developers don’t think too hard about how the changes they make to the environment will affect other applications. One application, which will remain nameless, actually changed the path environment variable and caused other applications to stop working. Users won’t tolerate such behavior because it impedes their ability to perform useful work. In addition, companies lose a lot of money when administrators have to devote time to fixing such problems.

The standard rule for using environment variables is that you should only read environment variables created by others. You may find a situation where you need to change a non-standard environment variable, but proceed with extreme caution. Never change a standard environment variable, such as USERNAME, created by the operating system, because doing so can cause a host of problems.

If you want to have an environment variable you can change, create a custom environment variable specifically for your application. Even if you have to copy the value of another environment variable into this custom environment variable, you can be sure you won’t cause problems for other applications if you always use custom environment variables for your application.
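For example, rather than altering PATH itself, the application can copy PATH into its own variable and extend only the copy. This is a sketch in current CPython syntax; the MYAPP_SEARCHPATH name and the added folder are illustrative assumptions:

```python
import os

# Copy the value of an existing variable into a custom, application-
# specific variable, then extend only the copy.
os.environ['MYAPP_SEARCHPATH'] = os.environ.get('PATH', '')
os.environ['MYAPP_SEARCHPATH'] += os.pathsep + os.path.join('C:', 'MyApp')

# PATH itself is untouched, so other applications are unaffected.
print('MYAPP_SEARCHPATH set without touching PATH')
```

The os.pathsep constant supplies the platform's list separator (a semicolon on Windows), so the custom variable stays in the same format as the original.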

Starting Other Command Line Applications

You can start other applications using IronPython. In fact, Python provides a number of techniques for performing this task. If you’ve worked with a .NET language for a while, you know that the .NET Framework also provides several methods of starting applications. However, most developers want to do something simple with the applications they start as subprocesses. For example, you might want to get the operating system to perform a task that IronPython won’t perform for you directly.

IronPython sports a plethora of methods to execute external applications. However, the simplest of these methods is os.popen(). Using this method, you can quickly open an external application, obtain any output it provides, and work with that output in your application. These three steps are all that many developers need. Listing 10-10 shows how to use os.popen() to execute an external application.

Listing 10-10: Starting applications directly in IronPython

[code]
# Import the required module.
import os
# Open a copy of Notepad.
os.popen('Notepad C:/Test.TXT')
# Use the Dir command to get a directory listing and display it.
Listing = os.popen('Dir C:\\ /OG /ON')
for File in Listing.readlines():
    print File,
# Pause after the debug session.
raw_input('\nPress any key to continue...')
[/code]

This example begins by opening a copy of Notepad with C:/Test.TXT. Notice that the command uses a slash, not a backslash. In many cases, you can use a standard slash to avoid having to use a double backslash (\\) in your command. When this command executes, you see a copy of Notepad open with the file loaded. Of course, you need to create C:/Test.TXT before you execute the example to actually see the file loaded into Notepad.

In some cases, you need to read the output from a command after it executes. For example, you might want to obtain a directory listing using particular command line switches. The second part of the example shows how to perform this task. When the Dir command returns, the Listing variable contains a directory listing similar to the one shown in Figure 10-15. In this case, you must provide the double backslash because, for some reason, Dir won’t work with the / when called from IronPython.

Figure 10-15: Use the results of executing a command to display results in IronPython.

If you really need high-powered application management when working with IronPython, then you want to use the subprocess module, which centers on a single class, Popen(). This approach is for those few who really need extreme control over the applications they execute. You can read about this module at http://docs.python.org/library/subprocess.html. The os module also has a number of popen() versions, ranging from popen() to popen4(). Generally, if popen() won’t meet your needs, it’s probably a good idea to use the subprocess.Popen() method because it provides better support for advanced functionality.
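As a point of comparison, the capture-the-output task that os.popen() performs in Listing 10-10 looks like this with the subprocess module. This sketch uses current CPython syntax, and the child command simply runs the interpreter itself so it works on any platform:

```python
import subprocess
import sys

# Run a child process and capture everything it writes to stdout,
# the same job os.popen() performed in Listing 10-10.
output = subprocess.check_output(
    [sys.executable, '-c', 'print("Hello from a child process")'])

# check_output() returns bytes; decode before working with the text.
for line in output.decode().splitlines():
    print(line)
```

Passing the command as a list of arguments, rather than one string, sidesteps the quoting and backslash issues the Dir example runs into.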

Providing Status Information

Your administrative application often performs tasks without much user interaction, which means that the user might not even be aware of errors that occur. Consequently, you need to provide some means of reporting status information. The following sections provide a quick overview of some techniques you can use to report status information to the user.

Reporting Directly to the User

The time-honored method of reporting status information to the user is to display it directly onscreen. In fact, most of the applications in this book use this approach. If you know that the user will be watching the display, or at least checking it from time to time, it's probably a good idea to provide direct information. Make sure you provide all the details, including error numbers and strings as appropriate. Tailor the messages to the skill of your users so they are both friendly and easy to understand. Otherwise, less-skilled users are apt to do something rash because they don't understand what the message is telling them.

If you know that less-skilled users will rely on your application, you should provide a secondary method of reporting status information, such as an event log. Log files are also helpful, but they can prove troublesome for the administrator to access from a remote location. The Microsoft Management Console (MMC) provides easy methods for administrators to gain access to remote event logs as necessary.

You can also provide a remote paging system or a similar contact technique for the administrator. However, such methods are somewhat complex and not directly supported by IronPython through the Python libraries, so their implementation is outside the scope of this book. You'll probably want to use a .NET Framework methodology, such as the one described at http://code.msdn.microsoft.com/sendemail, to perform this task.

Creating Log Files

At one time, administrators relied on plain-text log files to store information from applications. However, most applications today output complex information that's hard to read in a flat text file. If you plan to create log files for your application, you probably want to store them in XML format to make them both easy to read and easy to import into a database.
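To make the idea concrete, here is a minimal sketch in plain Python (the element and attribute names are invented for illustration, not a standard schema; IronPython can use the same xml.etree module):

```python
import xml.etree.ElementTree as ET

# Build one XML log entry; tag and attribute names are illustrative only.
def make_log_entry(severity, message):
    entry = ET.Element("LogEntry", Severity=severity)
    ET.SubElement(entry, "Message").text = message
    return entry

entry = make_log_entry("Error", "Disk quota exceeded")
print(ET.tostring(entry, encoding="unicode"))
```

Because each entry is well-formed XML, the log can later be parsed and bulk-loaded into a database instead of being scraped with regular expressions.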

Using the Event Log

Many applications rely on the event log as a means to output data to the administrator. Of all of the methods that Microsoft has created for outputting error and status information, the event log has been around the longest and is the most successful. Fortunately, for the IronPython developer, using the event log is extremely easy and it’s the method that you should use most often. Listing 10-11 shows just how easy it is to write an event log entry.

Listing 10-11: Writing an event log entry

[code]
# Import the required assemblies.
from System.Diagnostics import EventLog, EventLogEntryType
# Create the event log entry.
ThisEntry = EventLog('Application', 'Main', 'SampleApp')
# Write data to the entry.
ThisEntry.WriteEntry('This is a test!', EventLogEntryType.Information)
# Pause after the debug session.
raw_input('Event log entry written...')
[/code]

The EventLog() constructor accepts a number of different inputs. The form shown in the example defines the log name, machine name, and the application name. In most cases, this is all the information you need to start writing event log entries.

After you create ThisEntry, you can use it to write event log entries as needed using the WriteEntry() method. WriteEntry() is overloaded to accept a number of information formats; the example shows the form you'll commonly use for simple entries. You can see the other forms of the WriteEntry() method at http://msdn.microsoft.com/library/system.diagnostics.eventlog.writeentry.aspx.

In this case, the WriteEntry() call provides a message and defines the kind of event log entry to create. You can also create warning, error, success audit, and failure audit messages. Figure 10-16 shows the results of running this example.

Figure 10-16: The example outputs data to the event log.

Windows Phone: Embedding Audio in Your Game

Controlling an audio file

Music has the power to shape human emotion, and different styles of music produce different feelings: fast-tempo music makes people feel nervous or excited, while a slow tempo makes them feel relaxed and safe. In modern video games, music often plays an important role in creating the atmosphere, and game designers and composers work together to tailor the music to the plot. When a player enters a beautiful scene, a harmonious song accompanies it; when the player is immersed in a dangerous situation, the music sounds oppressive. In this recipe, you will learn how to use the media APIs to control music in your Windows Phone 7 game.

Getting ready

In Windows Phone 7 XNA, a song is encapsulated by the Song class. This class provides a song's metadata, including its name, artist, album, and duration; Name and Duration are the properties you will use most directly, and we use them in this example. As a design decision, the Song class does not have a Play() method. Instead, you use the MediaPlayer class, which provides methods and properties for playing songs. To control song playback, we can use the following methods:

  • Play(): The Play() method kicks off the song.
  • Stop(): The Stop() method stops the song playing and sets the playing position to the song’s beginning.
  • Pause(): The Pause() method also stops the song playing. Unlike Stop(), however, Pause() does not reset the playing position to the start; it keeps the playing position at the place where the song was paused.
  • Resume(): The Resume() method resumes playback from the paused position.

Besides these essential methods, if you have more than one song in your playing queue, the MoveNext() and MovePrevious() methods let you cycle through it. The two methods move to the next or previous song in the queue and operate as if the queue were circular: when the last song is playing, MoveNext() moves to the first song, and when the first song is playing, MovePrevious() moves to the last song. If the IsShuffled property is set, MoveNext() and MovePrevious() instead choose a random song from the playing queue. There is no need to instantiate the MediaPlayer class, which is actually a static class; you call its methods directly using the following pattern:

[code]
MediaPlayer.MethodName()
[/code]
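The wrap-around behavior of MoveNext() and MovePrevious() can be sketched in a language-neutral way (Python here, since MediaPlayer itself is a .NET static class; the Playlist class is purely illustrative):

```python
# A toy playlist mimicking MediaPlayer's circular queue navigation.
class Playlist:
    def __init__(self, songs):
        self.songs = list(songs)
        self.index = 0  # index of the currently playing song

    def move_next(self):
        # Wraps from the last song back to the first.
        self.index = (self.index + 1) % len(self.songs)
        return self.songs[self.index]

    def move_previous(self):
        # Wraps from the first song back to the last.
        self.index = (self.index - 1) % len(self.songs)
        return self.songs[self.index]

queue = Playlist(["intro", "battle", "credits"])
print(queue.move_next())      # -> battle
print(queue.move_previous())  # -> intro
print(queue.move_previous())  # -> credits (wrapped to the end)
```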

In this example, we will play, pause, resume, and stop a song using the MediaPlayer and Song classes. Besides this, the song’s name, playing state, and position will be displayed on the Windows Phone 7 screen.

How to do it…

The following steps show you how to load, play, pause, and resume a song in a Windows Phone 7 XNA application:

  1. Create a Windows Phone Game named PlayMusic, and change Game1.cs to PlayMusicGame.cs. Add the audio file music.wma and the sprite font file gameFont.spritefont to the content project.
  2. Declare the indispensable variable. Add the following lines to the field of the PlayMusicGame class:
    [code]
    // SpriteFont object for showing song information
    SpriteFont font;
    // Text presenting the song's playing position
    string textPlayingPosition = "";
    // Song's name
    string textSongName;
    // Playing state
    MediaState PlayingState;
    // Song object stores a song
    Song song;
    [/code]
  3. Load the game font and the audio file. Then get the song’s name and play it. Add the following code to the LoadContent() method:
    [code]
    // Load the game font
    font = Content.Load<SpriteFont>("gameFont");
    // Load the song
    song = Content.Load<Song>("music");
    // Get the song's name
    textSongName = song.Name;
    // Play the song
    MediaPlayer.Play(song);
    [/code]
  4. Use the MediaPlayer class to play, pause, resume, and stop the song. Meanwhile, update the playing state and position. Insert the following code to the Update() method:
    [code]
    // Get the tapped position
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 &&
        touches[0].State == TouchLocationState.Pressed)
    {
        Point point = new Point((int)touches[0].Position.X,
                                (int)touches[0].Position.Y);
        // If the tap gesture is valid
        if (GraphicsDevice.Viewport.Bounds.Contains(point))
        {
            // If the media player is playing, pause the song
            if (MediaPlayer.State == MediaState.Playing)
            {
                MediaPlayer.Pause();
                PlayingState = MediaState.Paused;
            }
            // If the song is paused (not stopped), resume it
            else if (MediaPlayer.State == MediaState.Paused)
            {
                MediaPlayer.Resume();
                PlayingState = MediaState.Playing;
            }
            // If the song is stopped, replay it
            else if (MediaPlayer.State == MediaState.Stopped)
            {
                MediaPlayer.Play(song);
                PlayingState = MediaState.Playing;
            }
        }
    }
    if (MediaPlayer.State == MediaState.Playing)
    {
        // Stop playing when the playing position reaches the end.
        // Use >= rather than == because TimeSpan values rarely
        // match exactly.
        if (MediaPlayer.PlayPosition >= song.Duration)
        {
            MediaPlayer.Stop();
            PlayingState = MediaState.Stopped;
        }
        // Show the song's playing position and the total duration
        textPlayingPosition = MediaPlayer.PlayPosition.ToString()
            + " / " + song.Duration.ToString();
    }
    [/code]
  5. Draw the song’s name, playing state, and playing position on the Windows Phone 7 screen. Add the following code to the Draw() method:
    [code]
    spriteBatch.Begin();
    // Draw the instruction text
    spriteBatch.DrawString(font, "Tap Screen to Play, Pause and "
        + "Resume the Song", new Vector2(0, 0), Color.White);
    // Draw the song's name
    spriteBatch.DrawString(font, "Song's Name: " + textSongName,
        new Vector2(0, 200), Color.White);
    // Draw the song's playing state
    spriteBatch.DrawString(font, "State: " + PlayingState.ToString(),
        new Vector2(0, 240), Color.White);
    // Draw the song's playing position
    spriteBatch.DrawString(font, textPlayingPosition,
        new Vector2(0, 280), Color.White);
    spriteBatch.End();
    [/code]
  6. Now, build and run the application; it should run as shown in the following screenshot:

Controlling an audio file

How it works…

In step 2, font will be used to show the instructions, the song's name, the playing state, and the position; textPlayingPosition stores the song's current playing position; textSongName saves the song's name; PlayingState tracks the song's playing state (Playing, Paused, or Stopped); and song is an object of the Song class used to load and play the audio file.

In step 4, the first part of the Update() method checks whether the player taps on the Windows Phone 7 screen; if so, there are three MediaState cases for controlling the song. When the song is Playing, the tap gesture pauses it; if the song is Paused, the gesture resumes it from the paused position using the MediaPlayer.Resume() method; and once the song's current MediaState is Stopped, a valid tap gesture replays the song from the beginning. The second part updates the song's current playing position along with its total duration, using the MediaPlayer.PlayPosition and Song.Duration properties. When the current playing position reaches the song's duration, the song has ended and playback stops; in this example, a subsequent tap simply replays the song, but you could instead move on to the next song in a queue.

Adding sound effects to your game

"Sound effects are artificially created or enhanced sounds, or sound processes, used to emphasize artistic or other content of films, television shows, live performances, animation, video games, music, or other media." – Wikipedia

When you play games such as Counter-Strike or Quake, the sound you hear while firing is a sound effect, and every weapon has its own. In the early days, sound effects came from sound synthesis, a kind of MIDI rhythm; today, a game studio can sample sounds from real sources. In a racing game, the engine sound of every car is different, so the studio might record the sound of a real car, maybe a Lamborghini or a Panamera Turbo, to make the game more realistic. The Windows Phone 7 XNA framework simplifies the work needed to control sound effects; it is up to you to put them to use. In this recipe, you will learn how to make your game more interesting by applying sound effects.

Getting ready

In XNA, a SoundEffect contains the audio data and metadata (such as wave data and loop information) loaded from a sound file. You can create multiple SoundEffectInstance objects from a single SoundEffect and play them; these objects share the resources of that SoundEffect. The only limit on the number of loaded SoundEffect objects is memory, and a loaded SoundEffect holds its memory resources throughout its lifetime. When a SoundEffect is destroyed, all SoundEffectInstance objects previously created from it stop playing and become invalid. Unlike the Song class, SoundEffect has a Play() method; a sound effect is usually fast and short, playing once and then stopping. If you do not need to loop a sound, the Play() method is enough; otherwise, you should create an instance of the sound effect using the SoundEffect.CreateInstance() method. As basic metadata, the SoundEffect class also has Name and Duration properties, so it is easy to get the name of any sound effect and its duration. The DistanceScale and DopplerScale properties help you simulate a realistic 3D sound effect, especially when you use the SoundEffectInstance.Apply3D() method.

For DistanceScale, if sounds are attenuating too fast, which means that the sounds get quiet too quickly as they move away from the listener, you need to increase the DistanceScale. If sounds are not attenuating fast enough, decrease the DistanceScale. This property will also affect Doppler sound.

The DopplerScale changes the relative velocities of emitters and listeners. If sounds are shifting (pitch) too much for the given relative velocity of the emitter and listener, decrease the DopplerScale. If sounds are not shifting enough for the given relative velocity of the emitter and listener, increase the DopplerScale.

In this example, we will use the SoundEffect class to play two different weapons’ sounds.

How to do it…

The following steps present a complete guide for controlling a sound effect in a Windows Phone 7 XNA game using a .wav file:

  1. Create a Windows Phone Game named PlaySoundEffect, and change Game1.cs to PlaySoundEffectGame.cs. Add the audio files Laser.wav and MachineGun.wav and the sprite font file gameFont.spritefont to the content project.
  2. Add the following code as the required variables to the field of the PlaySoundEffectGame class:
    [code]
    // Sprite font for showing the name of the current sound effect
    SpriteFont font;
    // Current weapon's name
    string CurrentWeapon;
    // Sound effect variables
    SoundEffect soundEffectLaser;
    SoundEffect soundEffectMachineGun;
    // Currently selected sound effect
    SoundEffect soundEffectPlaying;
    [/code]
  3. Enable the Hold gesture for Windows Phone 7 TouchPanel. Add the following line to the Initialize() method:
    [code]
    // Enable the hold gesture
    TouchPanel.EnabledGestures = GestureType.Hold;
    [/code]
  4. Load the game font, sound effects of weapons, and set the current sound effect of a weapon for playing. Paste the following code into the LoadContent() method:
    [code]
    // Load the font
    font = Content.Load<SpriteFont>("gameFont");
    // Load the sound effect of the laser gun
    soundEffectLaser = Content.Load<SoundEffect>("Laser");
    // Load the sound effect of the machine gun
    soundEffectMachineGun = Content.Load<SoundEffect>("MachineGun");
    // Set the laser sound effect as the current one for playing
    soundEffectPlaying = soundEffectLaser;
    // Set the name of the current sound effect
    CurrentWeapon = "Laser";
    [/code]
  5. Play the sound effect and use the Hold gesture to switch the sound effects between different weapons. Add the following code to the Update() method:
    [code]
    // Play the current sound effect when tapping on the screen
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 &&
        touches[0].State == TouchLocationState.Pressed)
    {
        Point point = new Point((int)touches[0].Position.X,
                                (int)touches[0].Position.Y);
        if (GraphicsDevice.Viewport.Bounds.Contains(point))
        {
            if (soundEffectPlaying != null)
            {
                soundEffectPlaying.Play();
            }
        }
    }
    // Use the Hold gesture to change the weapon's sound effect
    while (TouchPanel.IsGestureAvailable)
    {
        // Read the gesture
        GestureSample gestures = TouchPanel.ReadGesture();
        if (gestures.GestureType == GestureType.Hold)
        {
            // If the Hold gesture is taking place, change the
            // sound effect.
            if (soundEffectPlaying.Equals(soundEffectLaser))
            {
                soundEffectPlaying = soundEffectMachineGun;
                CurrentWeapon = "Machine Gun";
            }
            else if (soundEffectPlaying.Equals(soundEffectMachineGun))
            {
                soundEffectPlaying = soundEffectLaser;
                CurrentWeapon = "Laser";
            }
        }
    }
    [/code]
  6. Draw the instructions and the name of the current sound effect. Insert the following code to the Draw() method.
    [code]
    spriteBatch.Begin();
    // Draw the instructions
    spriteBatch.DrawString(font, "Tap and hold on for changing your "
        + "weapon.\nTap for firing", new Vector2(0, 0), Color.White);
    // Draw the current weapon's name
    spriteBatch.DrawString(font, "Current Weapon: " + CurrentWeapon,
        new Vector2(0, 70), Color.White);
    spriteBatch.End();
    [/code]
  7. Build and run the application. It should run as shown in the screenshot to the left. When you tap the screen and hold it for a few seconds, the sound effect will be something similar to the screenshot on the right:
    Adding sound effects to your game

How it works…

In step 2, font will be used to draw the name of the current sound effect and the control instructions; CurrentWeapon holds the name of the current weapon; soundEffectLaser and soundEffectMachineGun, both SoundEffect objects, represent the laser and machine gun sounds respectively; and soundEffectPlaying is the currently selected sound effect.

In step 3, we use the Hold gesture to switch the playing sound effect. It is required to enable the gesture type in TouchPanel.

In step 5, the first part checks whether the user taps on the Windows Phone 7 screen; if so, it plays the current sound effect, provided it is not null. The second part switches the playing sound effect using the Hold gesture: when the ongoing gesture is Hold, we alternate between the laser and the machine gun sound effects.

Adding stereo sounds to your game

Sometimes music and simple sound effects are not enough if you are pursuing a realistic feeling, because you cannot tell where a sound comes from in your game world. If you have played Counter-Strike, you know that when you stop moving you can tell how many enemies are near you just by listening. This technique is called stereo sound: it uses two or more independent audio channels, through a symmetrical configuration of loudspeakers, to create the impression of sound heard from different directions, similar to natural hearing. For stereo sound, Windows Phone 7 XNA simulates a sound emitter and a listener, so that when the position of the emitter changes, the listener hears a sound effect processed according to the distance between them. In this recipe, you will learn how to use the XNA framework to implement stereo sound.

Getting ready

In this example, we will use the SoundEffectInstance class and its methods to simulate a 3D sound effect. SoundEffectInstance provides Play(), Pause(), and Stop() methods to control a single instance of a sound effect. You create a SoundEffectInstance by calling the SoundEffect.CreateInstance() method. Initially, the SoundEffectInstance is created in the stopped state, but you can play it by calling the SoundEffectInstance.Play() method. The volume, pitch, and panning of a SoundEffectInstance can be modified by setting the Volume, Pitch, and Pan properties. On Windows Phone 7, a game can have a maximum of 16 playing SoundEffectInstance objects at one time, combined across all loaded SoundEffect objects; attempts to play a SoundEffectInstance beyond this limit will fail.

The SoundEffectInstance.Apply3D() method simulates the 3D sound effect. It receives two parameters, an AudioListener object and an AudioEmitter object, calculates the 3D audio values between them, and applies the resulting values to the SoundEffectInstance. If you want to apply the 3D effect to a SoundEffectInstance, you must call this method before you call SoundEffectInstance.Play(). Calling this method automatically sets the Windows Phone 7 speaker mix for any sound played by this SoundEffectInstance to a value calculated from the difference between the listener's and the emitter's Position properties. In preparation for the mix, the sound is converted to mono; any stereo information in the sound is discarded.

How to do it…

The following steps give you a complete guide to implementing a stereo sound effect:

  1. Create a Windows Phone Game named PlayStereoSound and change Game1.cs to PlayStereoSoundGame.cs. Add the audio file drums.wma and the model file BallLowPoly.fbx to the content project.
  2. Declare the essential variables to the field of the PlayStereoSoundGame class. Add the following code to the class:
    [code]
    // Sound effect object that loads the sound effect file
    SoundEffect soundEffect;
    // Instance of a SoundEffect sound
    SoundEffectInstance soundEffectInstance;
    // AudioEmitter and AudioListener simulate 3D audio effects
    AudioEmitter emitter;
    AudioListener listener;
    // The world position of the AudioEmitter
    Vector3 objectPos;
    // A ball for visually presenting the varying AudioEmitter
    // world position.
    Model modelBall;
    Matrix worldBall = Matrix.Identity;
    // Camera
    Vector3 cameraPosition;
    Matrix view;
    Matrix projection;
    [/code]
  3. Initialize the camera, the audio emitter, and the audio listener. Add the following code to the Initialize() method:
    [code]
    // Initialize the camera
    cameraPosition = new Vector3(0, 30, 50);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
        Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
        1.0f, 1000.0f);
    // Initialize the AudioEmitter and AudioListener
    emitter = new AudioEmitter();
    listener = new AudioListener();
    [/code]
  4. Load the ball model and the drum sound effect. Then, create the instance of the drum sound effect and apply the audio emitter to the audio listener of the instance. Finally, play the sound effect. Add the following code to the LoadContent() method:
    [code]
    // Load the ball model
    modelBall = Content.Load<Model>("BallLowPoly");
    // Load the sound effect
    soundEffect = Content.Load<SoundEffect>("drums");
    // Create an instance of the sound effect
    soundEffectInstance = soundEffect.CreateInstance();
    // Apply the 3D positions to the sound effect instance
    soundEffectInstance.Apply3D(listener, emitter);
    soundEffectInstance.IsLooped = true;
    // Play the sound
    soundEffectInstance.Play();
    [/code]
  5. Rotate the audio emitter around the Y axis. Add the following lines to the Update() method:
    [code]
    // Rotate around the Y axis
    objectPos = new Vector3(
        (float)Math.Cos(gameTime.TotalGameTime.TotalSeconds) / 2,
        0,
        (float)Math.Sin(gameTime.TotalGameTime.TotalSeconds) / 2);
    // Update the position of the audio emitter
    emitter.Position = objectPos;
    // Apply the audio emitter's new position relative to the
    // audio listener
    soundEffectInstance.Apply3D(listener, emitter);
    [/code]
  6. Define the DrawModel() method. Add the following code to the PlayStereoSoundGame class:
    [code]
    // Draw the 3D model
    public void DrawModel(Model model, Matrix world, Matrix view,
        Matrix projection)
    {
        Matrix[] transforms = new Matrix[model.Bones.Count];
        model.CopyAbsoluteBoneTransformsTo(transforms);
        foreach (ModelMesh mesh in model.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.EnableDefaultLighting();
                effect.World = transforms[mesh.ParentBone.Index]
                    * world;
                effect.View = view;
                effect.Projection = projection;
            }
            mesh.Draw();
        }
    }
    [/code]
  7. Draw the ball model and rotate it around the Y axis to coincide with the position of the audio emitter. Add the following code to the Draw() method:
    [code]
    // Draw the rotating ball
    DrawModel(modelBall,
        worldBall * Matrix.CreateTranslation(objectPos * 30),
        view, projection);
    [/code]
  8. Build and run the application, and it should run similar to the following screenshots:
    Adding stereo sounds to your game

How it works…

In step 2, soundEffect will be used to load the sound effect file; soundEffectInstance plays and applies the 3D sound effect; emitter and listener combine to simulate the 3D audio effect; objectPos represents the position changing around the Y axis, and its latest value is used to update the position of the AudioEmitter object; modelBall loads the ball model; worldBall stores the world position of the ball model in 3D; and the last three variables, cameraPosition, view, and projection, describe the camera.

In step 4, after loading the ball model and the sound effect, we create a SoundEffectInstance object using the soundEffect.CreateInstance() method. Note that you must call the SoundEffectInstance.Apply3D() method, with AudioListener and AudioEmitter objects, before the Play() method; if you do not, a later call to Apply3D() will throw an exception.

In step 5, we compute objectPos for rotating around the Y axis. The X value comes from the Math.Cos() method and the Z value from Math.Sin(); together these trace a circle in the XZ plane. After that, we use the newly computed objectPos to update the position of the emitter, and then call the SoundEffectInstance.Apply3D() method to recalculate the playing 3D sound effect. In step 7, for the world parameter of the DrawModel() method, we use the latest objectPos to update the translation of the ball model in the 3D world. This makes the ball rotate around the Y axis along with the position of the sound effect emitter.
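The emitter path in step 5 is ordinary parametric circle math; this quick Python sketch (with t standing in for gameTime.TotalGameTime.TotalSeconds) confirms that the emitter always sits on a circle of radius 0.5 in the XZ plane:

```python
import math

def emitter_position(t):
    # Mirrors the Update() code: X from cos, Z from sin, Y fixed at 0.
    return (math.cos(t) / 2, 0.0, math.sin(t) / 2)

x, y, z = emitter_position(1.25)
# The distance from the Y axis is always sqrt(x^2 + z^2) = 0.5.
print(round(math.hypot(x, z), 6))
```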

 

Windows Phone Collision Detection, Part 2

Mapping a tapped location to 3D

Picking an object in a 3D game is especially relevant to real-time strategy games. In StarCraft II, for example, you can choose a structure from a 2D panel, and a semi-transparent 3D model then shows up in the game view, letting you choose the best location for the building. Or you can select your army by clicking the left mouse button and dragging a rectangle over the units you want to control. All of this happens between 2D and 3D, and it looks like magic; in fact, the technique maps a clicked position in screen coordinates from 2D into the 3D world. In this recipe, you will learn how this important mapping method works in a Windows Phone 7 game.
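Picking ultimately reduces to intersection tests against a ray cast into the scene. As a taste of the math this recipe relies on (a ray-sphere test is the usual cheap broad phase before exact ray-triangle checks), here is a sketch in plain Python; the function name and example vectors are illustrative only:

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return True if the ray origin + t*direction (t >= 0) hits the sphere.

    Solves |origin + t*direction - center|^2 = radius^2 for t with the
    quadratic formula; a hit needs a real, non-negative root.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return False  # the ray's line misses the sphere entirely
    t_far = (-b + math.sqrt(disc)) / (2 * a)
    # A hit exists if the farther intersection lies in front of the origin.
    return t_far >= 0

# A ray fired from (0, 0, 5) toward the origin hits the unit sphere there.
print(ray_hits_sphere((0, 0, 5), (0, 0, -1), (0, 0, 0), 1))  # -> True
print(ray_hits_sphere((0, 0, 5), (0, 1, 0), (0, 0, 0), 1))   # -> False
```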

How to do it…

The following steps will lead you to make your own version of picking an object in a 3D game:

  1. Create a Windows Phone Game project named Pick3DModel, change Game1.cs to Pick3DModelGame.cs, and add a new Marker.cs to the project. Then create a Content Pipeline Extension Library called ModelVerticesPipeline and rename ContentProcessor1.cs to ModelVerticesProcessor.cs. After that, add the model file BallLowPoly.FBX and the image Marker.png to the content project.
  2. Create the ModelVerticesProcessor in ModelVerticesProcessor.cs of the ModelVerticesPipeline project. Because ray-triangle collision detection between the ray and the model needs triangle information, this extended model processor extracts all of the model's vertices along with the model's global bounding sphere. The returned bounding sphere serves a ray-sphere collision test that, for performance, runs before the ray-triangle test; the extracted vertices are used to generate the triangles for the ray-triangle test once the ray hits the model's bounding sphere. Add the definition of the ModelVerticesProcessor class to ModelVerticesProcessor.cs.
  3. ModelVerticesProcessor inherits from the ModelProcessor for extracting extra model vertices. The beginning of the class should be:
    [code]
    public class ModelVerticesProcessor : ModelProcessor { . . . }
    [/code]
  4. Add the variable vertices to store the model vertices in the ModelVerticesProcessor class field.
    [code]
    List<Vector3> vertices = new List<Vector3>();
    [/code]
  5. Override the Process() method of the ModelVerticesProcessor class. This is the main method in charge of processing the content. In it, we extract the model vertices and BoundingSphere, and store them in the ModelContent.Tag property as a Dictionary object.
    [code]
    // Chain to the base ModelProcessor class.
    ModelContent model = base.Process(input, context);
    // Look up the input vertex positions.
    FindVertices(input);
    // Create a dictionary object to store the model vertices and
    // BoundingSphere
    Dictionary<string, object> tagData =
        new Dictionary<string, object>();
    model.Tag = tagData;
    // Store vertex information in the tag data, as an array of
    // Vector3.
    tagData.Add("Vertices", vertices.ToArray());
    // Also store a custom bounding sphere.
    tagData.Add("BoundingSphere",
        BoundingSphere.CreateFromPoints(vertices));
    return model;
    [/code]
  6. Define the FindVertices() method of the ModelVerticesProcessor class:
    [code]
    // Helper for extracting a list of all the vertex positions in
    // a model.
    void FindVertices(NodeContent node)
    {
        // Convert the current NodeContent to MeshContent if it is
        // a mesh
        MeshContent mesh = node as MeshContent;
        if (mesh != null)
        {
            // Get the absolute transform of the mesh
            Matrix absoluteTransform = mesh.AbsoluteTransform;
            // Iterate over every geometry in the mesh
            foreach (GeometryContent geometry in mesh.Geometry)
            {
                // Loop over all the indices in the geometry.
                // Every group of three indices represents one
                // triangle.
                foreach (int index in geometry.Indices)
                {
                    // Get the vertex position
                    Vector3 vertex =
                        geometry.Vertices.Positions[index];
                    // Transform from local into world space.
                    vertex = Vector3.Transform(vertex,
                        absoluteTransform);
                    // Store this vertex.
                    vertices.Add(vertex);
                }
            }
        }
        // Recursively scan over the children of this node.
        foreach (NodeContent child in node.Children)
        {
            FindVertices(child);
        }
    }
    [/code]
  7. Now, build the ModelVerticesPipeline project. You will get the runtime library ModelVerticesPipeline.dll, which contains the ModelVerticesProcessor.
  8. In the next few steps, we will define the Marker class in Marker.cs of the Pick3DModel project.
  9. The Marker class inherits from DrawableGameComponent. Add the variables to the Marker class field:
    [code]
    // SpriteBatch for drawing the marker texture
    SpriteBatch spriteBatch;
    // ContentManager for loading the marker texture
    ContentManager content;
    // Marker texture
    Texture2D texMarker;
    // Texture origin position for moving or rotation
    Vector2 centerTexture;
    // Texture position on screen
    public Vector2 position;
    [/code]
  10. Add the constructor of the Marker class.
    [code]
    public Marker(Game game, ContentManager content)
    : base(game)
    {
    this.content = content;
    }
    [/code]
  11. Implement the LoadContent() method, which will load the marker texture and define the texture origin position.
    [code]
    protected override void LoadContent()
    {
    spriteBatch = new SpriteBatch(GraphicsDevice);
    texMarker = content.Load<Texture2D>("Marker");
    centerTexture = new Vector2(texMarker.Width / 2,
    texMarker.Height / 2);
    base.LoadContent();
    }
    [/code]
  12. Keep the marker inside the Windows Phone 7 screen. We define the Update() method to achieve this.
    [code]
    // Calculate where the marker's position is on the screen. The
    // position is clamped to the viewport so that the marker
    // can't go off the screen.
    public override void Update(GameTime gameTime)
    {
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    position.X = touches[0].Position.X;
    position.Y = touches[0].Position.Y;
    }
    base.Update(gameTime);
    }
    [/code]
  13. Define the CalculateMarkerRay() method that calculates a world space ray starting at the camera’s eye and pointing in the direction of the cursor. The Viewport.Unproject() method is used to accomplish this.
    [code]
    public Ray CalculateMarkerRay(Matrix projectionMatrix,
    Matrix viewMatrix)
    {
    // Create 2 positions in screenspace using the tapped
    // position. 0 is as close as possible to the camera, 1 is
    // as far away as possible.
    Vector3 nearSource = new Vector3(position, 0f);
    Vector3 farSource = new Vector3(position, 1f);
    // Use Viewport.Unproject to tell what those two screen
    // space positions would be in world space.
    Vector3 nearPoint =
    GraphicsDevice.Viewport.Unproject(nearSource,
    projectionMatrix, viewMatrix, Matrix.Identity);
    Vector3 farPoint =
    GraphicsDevice.Viewport.Unproject(farSource,
    projectionMatrix, viewMatrix, Matrix.Identity);
    // Find the direction vector that goes from the nearPoint
    // to the farPoint and normalize it
    Vector3 direction = farPoint - nearPoint;
    direction.Normalize();
    // Return a new ray using nearPoint as the source.
    return new Ray(nearPoint, direction);
    }
    [/code]
  14. From this step on, we compute the ray-model collision and draw the collided triangle and the model mesh on Windows Phone 7 in the main game class Pick3DModelGame. Now, insert the following lines as data members in the class field:
    [code]
    // Marker Ray
    Ray markerRay;
    // Marker
    Marker marker;
    // Model object and model world position
    Model modelObject;
    Matrix worldModel = Matrix.Identity;
    // Camera view and projection matrices
    Matrix viewMatrix;
    Matrix projectionMatrix;
    // Define the picked triangle vertex array
    VertexPositionColor[] pickedTriangle =
    {
    new VertexPositionColor(Vector3.Zero, Color.Black),
    new VertexPositionColor(Vector3.Zero, Color.Black),
    new VertexPositionColor(Vector3.Zero, Color.Black),
    };
    // Vertex array to represent the selected model
    VertexPositionColor[] verticesModel;
    VertexBuffer vertexBufferModel;
    // The flag indicates whether the ray collides with the model
    float? intersection;
    // The BasicEffect used to draw the picked triangle
    BasicEffect wireFrameEffect;
    // The wire frame render state
    static RasterizerState WireFrame = new RasterizerState
    {
    FillMode = FillMode.WireFrame,
    CullMode = CullMode.None
    };
    [/code]
  15. Initialize the camera, marker, and wireFrameEffect. Add the code to the Initialize() method:
    [code]
    // Initialize the camera
    viewMatrix = Matrix.CreateLookAt(new Vector3(0, 5, 15),
    Vector3.Zero, Vector3.Up);
    projectionMatrix = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.ToRadians(45.0f),
    GraphicsDevice.Viewport.AspectRatio, .01f, 1000);
    // Initialize the marker
    marker = new Marker(this, Content);
    Components.Add(marker);
    wireFrameEffect = new BasicEffect(graphics.GraphicsDevice);
    [/code]
  16. Load the ball model and read the ball vertices. Then initialize the vertex array of the ball model for drawing the ball mesh on the Windows Phone 7 screen. Insert the following code into the LoadContent() method:
    [code]
    // Load the ball object
    modelObject = Content.Load<Model>("BallLowPoly");
    // Read the vertices
    Dictionary<string, object> tagData =
    (Dictionary<string, object>)modelObject.Tag;
    Vector3[] vertices = (Vector3[])tagData["Vertices"];
    // Initialize the model vertex array for drawing on screen
    verticesModel = new VertexPositionColor[vertices.Length];
    for (int i = 0; i < vertices.Length; i++)
    {
    verticesModel[i] =
    new VertexPositionColor(vertices[i], Color.Red);
    }
    vertexBufferModel = new VertexBuffer(
    GraphicsDevice, VertexPositionColor.VertexDeclaration,
    vertices.Length, BufferUsage.WriteOnly);
    vertexBufferModel.SetData(verticesModel);
    [/code]
  17. Define the ray-model collision detection method UpdatePicking() in the Pick3DModelGame class.
    [code]
    void UpdatePicking()
    {
    // Look up a collision ray based on the current marker
    // position.
    markerRay = marker.CalculateMarkerRay(projectionMatrix,
    viewMatrix);
    // Keep track of the closest object we have seen so far,
    // so we can choose the closest one if there are several
    // models under the cursor.
    float closestIntersection = float.MaxValue;
    Vector3 vertex1, vertex2, vertex3;
    // Perform the ray to model intersection test.
    intersection = RayIntersectsModel(markerRay, modelObject,
    worldModel, out vertex1, out vertex2, out vertex3);
    // Check whether the ray-model collision happens
    if (intersection != null)
    {
    // If so, is it closer than any other model we might
    // have previously intersected?
    if (intersection < closestIntersection)
    {
    // Store information about this model.
    closestIntersection = intersection.Value;
    // Store vertex positions so we can display the
    // picked triangle.
    pickedTriangle[0].Position = vertex1;
    pickedTriangle[1].Position = vertex2;
    pickedTriangle[2].Position = vertex3;
    }
    }
    }
    [/code]
  18. Define the RayIntersectsModel() method in the Pick3DModelGame class:
    [code]
    float? RayIntersectsModel(Ray ray, Model model, Matrix
    modelTransform, out Vector3 vertex1, out Vector3 vertex2,
    out Vector3 vertex3)
    {
    bool insideBoundingSphere;
    vertex1 = vertex2 = vertex3 = Vector3.Zero;
    Matrix inverseTransform = Matrix.Invert(modelTransform);
    ray.Position = Vector3.Transform(ray.Position,
    inverseTransform);
    ray.Direction = Vector3.TransformNormal(ray.Direction,
    inverseTransform);
    // Look up our custom collision data from the Tag property
    // of the model.
    Dictionary<string, object> tagData =
    (Dictionary<string, object>)model.Tag;
    BoundingSphere boundingSphere =
    (BoundingSphere)tagData["BoundingSphere"];
    if (boundingSphere.Intersects(ray) == null)
    {
    // If the ray does not intersect the bounding sphere,
    // there is no need to do the ray-triangle
    // collision detection
    insideBoundingSphere = false;
    return null;
    }
    else
    {
    // The bounding sphere test passed, do the ray-
    // triangle test
    insideBoundingSphere = true;
    // Keep track of the closest triangle we found so far,
    // so we can always return the closest one.
    float? closestIntersection = null;
    // Loop over the vertex data, 3 at a time for a
    // triangle
    Vector3[] vertices = (Vector3[])tagData["Vertices"];
    for (int i = 0; i < vertices.Length; i += 3)
    {
    // Perform a ray to triangle intersection test.
    float? intersection;
    RayIntersectsTriangle(ref ray,
    ref vertices[i],
    ref vertices[i + 1],
    ref vertices[i + 2],
    out intersection);
    // Does the ray intersect this triangle?
    if (intersection != null)
    {
    // If so, find the closest one
    if ((closestIntersection == null) ||
    (intersection < closestIntersection))
    {
    // Store the distance to this triangle.
    closestIntersection = intersection;
    // Transform the three vertex positions
    // into world space, and store them into
    // the output vertex parameters.
    Vector3.Transform(ref vertices[i],
    ref modelTransform, out vertex1);
    Vector3.Transform(ref vertices[i + 1],
    ref modelTransform, out vertex2);
    Vector3.Transform(ref vertices[i + 2],
    ref modelTransform, out vertex3);
    }
    }
    }
    return closestIntersection;
    }
    }
    [/code]
  19. Draw the ball on the Windows Phone 7 screen with the model highlighted in wireframe and the picked triangle. Add the following code to the Draw() method:
    [code]
    GraphicsDevice.BlendState = BlendState.Opaque;
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    // Draw model
    DrawModel(modelObject, worldModel);
    // Draw the model wire frame
    DrawPickedWireFrameModel();
    // Draw the outline of the triangle under the cursor.
    DrawPickedTriangle();
    [/code]
  20. Now we should give the definitions of the called methods: DrawPickedWireFrameModel(), DrawPickedTriangle(), and DrawModel().
  21. Define the DrawPickedWireFrameModel() method in the Pick3DModelGame class:
    [code]
    void DrawPickedWireFrameModel()
    {
    if (intersection != null)
    {
    GraphicsDevice device = graphics.GraphicsDevice;
    device.RasterizerState = WireFrame;
    device.DepthStencilState = DepthStencilState.None;
    // Activate the line drawing BasicEffect.
    wireFrameEffect.Projection = projectionMatrix;
    wireFrameEffect.View = viewMatrix;
    wireFrameEffect.CurrentTechnique.Passes[0].Apply();
    // Draw the model as a wireframe.
    device.DrawUserPrimitives(PrimitiveType.TriangleList,
    verticesModel, 0, verticesModel.Length / 3);
    // Reset renderstates to their default values.
    device.RasterizerState =
    RasterizerState.CullCounterClockwise;
    device.DepthStencilState = DepthStencilState.Default;
    }
    }
    [/code]
  22. Implement the DrawPickedTriangle() method in the Pick3DModelGame class:
    [code]
    void DrawPickedTriangle()
    {
    if (intersection != null)
    {
    GraphicsDevice device = graphics.GraphicsDevice;
    // Set line drawing renderstates. We disable backface
    // culling and turn off the depth buffer because we
    // want to be able to see the picked triangle outline
    // regardless of which way it is facing, and even if
    // there is other geometry in front of it.
    device.RasterizerState = WireFrame;
    device.DepthStencilState = DepthStencilState.None;
    // Activate the line drawing BasicEffect.
    wireFrameEffect.Projection = projectionMatrix;
    wireFrameEffect.View = viewMatrix;
    wireFrameEffect.VertexColorEnabled = true;
    wireFrameEffect.CurrentTechnique.Passes[0].Apply();
    // Draw the triangle.
    device.DrawUserPrimitives(PrimitiveType.TriangleList,
    pickedTriangle, 0, 1);
    // Reset renderstates to their default values.
    device.RasterizerState =
    RasterizerState.CullCounterClockwise;
    device.DepthStencilState = DepthStencilState.Default;
    }
    }
    [/code]
  23. Give the definition of the DrawModel() method in the Pick3DModelGame class:
    [code]
    private void DrawModel(Model model, Matrix worldTransform)
    {
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);
    foreach (ModelMesh mesh in model.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.EnableDefaultLighting();
    effect.PreferPerPixelLighting = true;
    effect.View = viewMatrix;
    effect.Projection = projectionMatrix;
    effect.World = transforms[mesh.ParentBone.Index] *
    worldTransform;
    }
    mesh.Draw();
    }
    }
    [/code]
  24. Now, build and run the application. It should run as shown in the following screenshots:
    Mapping a tapped location

How it works…

In step 5, after processing the basic model information as usual in the base class, we call the FindVertices() method to get all of the model vertices. After that, the dictionary object tagData receives the vertex information and a BoundingSphere generated from the model vertices. The tagData is then assigned to ModelContent.Tag so that the game application can read the model vertices and BoundingSphere from the model XNB file.
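To make the processor's output concrete, here is a minimal Python sketch (not XNA code; the key names mirror the recipe, but the sphere fit is a simple centroid/max-distance approximation, whereas BoundingSphere.CreateFromPoints may compute a tighter sphere) of building the tag dictionary:

```python
import math

def bounding_sphere_from_points(points):
    """Simple bounding sphere: centroid as center, farthest point as
    radius. (XNA's BoundingSphere.CreateFromPoints may fit tighter.)"""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    radius = max(math.dist((cx, cy, cz), p) for p in points)
    return (cx, cy, cz), radius

# Mimic the processor: stash vertices and sphere in a tag dictionary.
vertices = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]
tag_data = {
    "Vertices": vertices,
    "BoundingSphere": bounding_sphere_from_points(vertices),
}
```

The game then reads both entries back out of Model.Tag at load time, exactly as step 16 does.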

In step 6, the first line converts the current NodeContent to MeshContent if the current node holds a model mesh rather than a bone or another type. If mesh, an object of MeshContent, is not null, we begin to extract its vertices. First, the code reads mesh.AbsoluteTransform for transforming the model vertices from object coordinates to world coordinates. Then, we iterate over the geometry of the current mesh to get the vertices. In the loop over the indices, we use Vector3.Transform() with the absoluteTransform matrix to transform each vertex from object coordinates to world coordinates. After that, the transformed vertex is saved to the vertices collection. When all of the vertices of the current mesh are processed, the code recursively processes the node's children to retrieve their vertices.
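The recursive traversal can be sketched language-neutrally. The following Python sketch uses our own simplified node structure (plain dictionaries, with the absolute transform reduced to a translation for brevity) to mirror what FindVertices() does:

```python
def find_vertices(node, out):
    """Recursively collect every vertex, moved into world space by the
    node's absolute transform (just a translation in this sketch)."""
    tx, ty, tz = node.get("absolute_translation", (0, 0, 0))
    for (x, y, z) in node.get("local_vertices", []):
        out.append((x + tx, y + ty, z + tz))
    # Recurse into the children, as the real method does with
    # node.Children.
    for child in node.get("children", []):
        find_vertices(child, out)

root = {
    "children": [
        {"absolute_translation": (10, 0, 0),
         "local_vertices": [(0, 0, 0), (1, 0, 0), (0, 1, 0)]},
        {"absolute_translation": (0, 5, 0),
         "local_vertices": [(2, 2, 2)]},
    ]
}
world = []
find_vertices(root, world)
```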

In step 9, the spriteBatch is the main object in charge of rendering the texture on the Windows Phone 7 screen; the content object of ContentManager manages the game contents; texMarker represents the marker texture; the centerTexture specifies the origin point of the texture for rotating and moving; the variable position holds the texture position on screen.

In step 10, the constructor receives the Game and ContentManager objects. The game object provides GraphicsDevice and the content offers access to the texture file.

In step 13, CalculateMarkerRay() is the key method that generates a ray in world coordinates from the tapped screen coordinates. The nearSource represents the screen-space point closest to the camera; farSource is the point that is farthest away. We call the Viewport.Unproject() method to convert nearSource and farSource from screen space into nearPoint and farPoint in world space. Next, we use the unprojected points farPoint and nearPoint to compute the ray direction. Finally, we return a new ray object with nearPoint as the origin and the normalized direction.
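The unprojection math can be illustrated with a simplified sketch. The Python code below (our own function names; it assumes an axis-aligned camera looking straight down -Z, whereas Viewport.Unproject() handles an arbitrary view matrix) builds the same kind of picking ray from a tapped screen point:

```python
import math

def marker_ray(screen_x, screen_y, width, height, fov_y, aspect, cam_pos):
    """Build a picking ray for a camera at cam_pos looking down -Z.
    Equivalent in spirit to unprojecting a near and a far point and
    normalizing their difference, as CalculateMarkerRay() does."""
    # Screen coordinates -> normalized device coordinates in [-1, 1]
    # (Y is flipped: screen Y grows downward).
    x_ndc = (2.0 * screen_x / width) - 1.0
    y_ndc = 1.0 - (2.0 * screen_y / height)
    # NDC -> a view-space direction through that pixel.
    half_h = math.tan(fov_y / 2.0)
    dx = x_ndc * half_h * aspect
    dy = y_ndc * half_h
    dz = -1.0
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return cam_pos, (dx / length, dy / length, dz / length)

# Tap dead center of an 800x480 screen: the ray points straight ahead.
origin, direction = marker_ray(400, 240, 800, 480,
                               math.radians(45), 800 / 480, (0, 5, 15))
```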

In step 14, the markerRay specifies the ray from the tapped position to world space; marker is the visual sign on screen that indicates the start point of markerRay; modelObject will load the model; worldModel stands for the transformation matrix of modelObject; the view and projection will be used to initialize the camera and help generate the markerRay; pickedTriangle is the triangle vertex array which will be used to draw the triangle on the model where it collides with the markerRay; the verticesModel reads and stores all of the model vertices and will serve the picked model draw in wireframe; intersection indicates the collision state. If not null, the value is the distance between the intersection point and the markerRay start point. The final WireFrame defines the device render state.

Implementing sphere-triangle collision detection

In an FPS game, when the character moves forward into a building or a wall and contacts the object, it stops and stands there. Yet there is no visible body around you, because the camera is your eye in the FPS game. If you wonder how game developers achieve this, you will find the answer in this recipe.

How to do it…

The following steps will show you the best practice for applying sphere-triangle collision detection to a first-person perspective camera:

  1. Create a Windows Phone Game project named CameraModelCollision, change Game1.cs to CameraModelCollisionGame.cs. Meanwhile, add Triangle.cs and TriangleSphereCollisionDetection.cs to the project. Then, create a Content Pipeline Extension Library project named MeshVerticesProcessor and replace ContentProcessor1.cs with MeshVerticesProcessor.cs. After that, insert the 3D model file BigBox.fbx and the sprite font file gameFont.spriteFont into the content project.
  2. Define the MeshVerticesProcessor class in MeshVerticesProcessor.cs of MeshVerticesProcessor project. The class is the same as the processor defined in the Implementing BoundingSphere collision detection in a 3D game recipe.
  3. Implement the Triangle class in Triangle.cs in the CameraModelCollision project.
    Declare the necessary data members of the Triangle class. Add the following lines to the class field:
    [code]
    // The triangle corners
    public Vector3 A;
    public Vector3 B;
    public Vector3 C;
    [/code]
  4. Define the constructors for the class:
    [code]
    // Constructor
    public Triangle()
    {
    A = Vector3.Zero;
    B = Vector3.Zero;
    C = Vector3.Zero;
    }
    // Constructor
    public Triangle(Vector3 v0, Vector3 v1, Vector3 v2)
    {
    A = v0;
    B = v1;
    C = v2;
    }
    [/code]
  5. Implement the Normal() method of the Triangle class. This method returns a unit length normal vector perpendicular to the plane of the triangle.
    [code]
    public void Normal(out Vector3 normal)
    {
    normal = Vector3.Zero;
    Vector3 side1 = B - A;
    Vector3 side2 = C - A;
    normal = Vector3.Normalize(Vector3.Cross(side1, side2));
    }
    [/code]
  6. Define the InverseNormal() method of the Triangle class. This method gets a normal that faces away from the point specified (faces in).
    [code]
    // Get a normal that faces away from the point specified
    // (faces in)
    public void InverseNormal(ref Vector3 point, out Vector3 inverseNormal)
    {
    Normal(out inverseNormal);
    // The direction from any corner of the triangle to the
    // point
    Vector3 inverseDirection = point - A;
    // Roughly facing the same way
    if (Vector3.Dot(inverseNormal, inverseDirection) > 0)
    {
    // Same direction therefore invert the normal to face
    // away from the direction to face the point
    Vector3.Multiply(ref inverseNormal, -1.0f,
    out inverseNormal);
    }
    }
    [/code]
  7. Create the TriangleSphereCollisionDetection class. This class contains the methods that perform the sphere-triangle collision detection.
    Define the IsSphereCollideWithTriangles() method. This method is the root method that kicks off the sphere-triangle collision detection:
    [code]
    public static bool IsSphereCollideWithTriangles(
    List<Vector3> vertices,
    BoundingSphere boundingSphere, out Triangle triangle)
    {
    bool result = false;
    triangle = null;
    for (int i = 0; i < vertices.Count; i += 3)
    {
    // Create a triangle from the three vertices
    Triangle t = new Triangle(vertices[i], vertices[i + 1],
    vertices[i + 2]);
    // Check if the sphere collides with the triangle
    result = SphereTriangleCollision(ref boundingSphere,
    ref t);
    if (result)
    {
    triangle = t;
    return result;
    }
    }
    return result;
    }
    [/code]
  8. Implement the SphereTriangleCollision() method. This method will generate a ray from the center of the sphere and perform the ray-triangle collision check:
    [code]
    private static bool SphereTriangleCollision(
    ref BoundingSphere sphere, ref Triangle triangle)
    {
    Ray ray = new Ray();
    ray.Position = sphere.Center;
    // Create a vector facing towards the triangle from the
    // ray starting point.
    Vector3 inverseNormal;
    triangle.InverseNormal(
    ref ray.Position, out inverseNormal);
    ray.Direction = inverseNormal;
    // Check if the ray hits the triangle
    float? distance = RayTriangleIntersects(ref ray,
    ref triangle);
    if (distance != null && distance > 0 &&
    distance <= sphere.Radius)
    {
    // Hit the surface of the triangle
    return true;
    }
    return false;
    }
    [/code]
  9. Give the definition of RayTriangleIntersects() to the TriangleSphereCollisionDetection class. This is the method that performs the ray-triangle collision detection and returns a distance value if the collision takes place:
    [code]
    public static float? RayTriangleIntersects(ref Ray ray, ref
    Triangle triangle)
    {
    float? result;
    RayIntersectsTriangle(ref ray, ref triangle.A,
    ref triangle.B, ref triangle.C, out result);
    return result;
    }
    [/code]
  10. Add MeshVerticesProcessor.dll to the content project reference list, and change the processor of BigBox.FBX to MeshVerticesProcessor, as shown in the following screenshot:
    the processor of BigBox.FBX to MeshVerticesProcessor
  11. From this step on, we perform the real-time collision detection between the camera bounding sphere and the model in the main game project CameraModelCollision. Add the code to the CameraModelCollision class field:
    [code]
    // SpriteFont for showing instructions
    SpriteFont font;
    // Box model
    Model modelBox;
    // Box model world transformation
    Matrix worldBox = Matrix.Identity;
    // Camera position and look at target
    Vector3 cameraPosition;
    Vector3 targetOffset;
    // Camera view and projection matrices
    public Matrix view;
    public Matrix projection;
    // Camera BoundingSphere
    BoundingSphere boundingSphereCamera;
    // Vertices of Box Model
    List<Vector3> verticesBox;
    // Collided triangle
    Triangle triangleCollided;
    // Normal of collided triangle
    Vector3 normalTriangle;
    // The moving forward flag
    bool ForwardCollide;
    bool BackwardCollide;
    // The top and bottom hit regions on screen
    Rectangle TopHitRegion;
    Rectangle BottomHitRegion;
    [/code]
  12. Initialize the camera and hit regions. Add the code to the Initialize() method in the CameraModelCollision class:
    [code]
    // Initialize camera
    cameraPosition = new Vector3(0, 5, 50);
    targetOffset = new Vector3(0, 0, -1000);
    view = Matrix.CreateLookAt(cameraPosition, targetOffset,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000.0f);
    // Initialize the top and bottom hit regions
    Viewport viewport = GraphicsDevice.Viewport;
    TopHitRegion = new Rectangle(0, 0, viewport.Width,
    viewport.Height / 2);
    BottomHitRegion = new Rectangle(0, viewport.Height / 2,
    viewport.Width, viewport.Height / 2);
    [/code]
  13. Load the box model and initialize the camera bounding sphere. Insert the following code into the LoadContent() method in the CameraModelCollision class:
    [code]
    // Load the game font
    font = Content.Load<SpriteFont>("gameFont");
    // Load the box model
    modelBox = Content.Load<Model>("BigBox");
    // Get the vertex collection of box model
    verticesBox = ((Dictionary<string,
    List<Vector3>>)modelBox.Tag)["Box001"];
    // Create the BoundingSphere of camera
    boundingSphereCamera = new BoundingSphere(cameraPosition, 5);
    [/code]
  14. Move the camera and perform the camera-sphere collision detection. Insert the code into the Update() method in the CameraModelCollision class.
    [code]
    // Check whether the tapped position is inside the TopHitRegion
    // or BottomHitRegion
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    Point point = new Point((int)touches[0].Position.X,
    (int)touches[0].Position.Y);
    if (TopHitRegion.Contains(point))
    {
    if (!ForwardCollide)
    {
    // If the tapped position is within the
    // TopHitRegion and the camera has not collided
    // with the model, move the camera forward
    view.Translation += new Vector3(0, 0, 1);
    }
    }
    if (BottomHitRegion.Contains(point))
    {
    // If the tapped position is within the
    // BottomHitRegion and the camera has not
    // collided with the model, move the camera
    // backward
    if (!BackwardCollide)
    {
    view.Translation -= new Vector3(0, 0, 1);
    }
    }
    }
    // Update the center position of camera bounding sphere
    boundingSphereCamera.Center = view.Translation;
    // Detect the collision between camera bounding sphere and
    // model triangles
    TriangleSphereCollisionDetection.IsSphereCollideWithTriangles(
    verticesBox, boundingSphereCamera,
    out triangleCollided);
    // If the collision happens, the collided triangle
    // is not null
    if (triangleCollided != null)
    {
    // Get the normal of the collided triangle
    triangleCollided.Normal(out normalTriangle);
    // Get the direction from the center of camera
    // BoundingSphere to the collided triangle
    Vector3 Direction = view.Translation - triangleCollided.A;
    // If the camera faces the model, the dot
    // product between the triangle normal
    // and direction is less than 0
    float directionChecker =
    Vector3.Dot(normalTriangle, Direction);
    if (directionChecker < 0)
    {
    ForwardCollide = true;
    }
    }
    else
    {
    ForwardCollide = false;
    }
    [/code]
  15. Define the DrawModel() method in the CameraModelCollision class to draw the 3D model.
    [code]
    // Draw model
    public void DrawModel(Model model, Matrix world, Matrix view,
    Matrix projection)
    {
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);
    foreach (ModelMesh mesh in model.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.PreferPerPixelLighting = true;
    effect.EnableDefaultLighting();
    effect.DiffuseColor = Color.White.ToVector3();
    effect.World = transforms[mesh.ParentBone.Index] *
    world;
    effect.View = view;
    effect.Projection = projection;
    }
    mesh.Draw();
    }
    }
    [/code]
  16. Draw the model and instructions on the Windows Phone 7 screen. Add the code to the Draw() method:
    [code]
    // Draw the box model
    DrawModel(modelBox, worldBox, view, projection);
    // Draw the instructions
    spriteBatch.Begin();
    spriteBatch.DrawString(font, "1.Tap the top half of screen "
    + "for moving the camera forward\n2.Tap the bottom half "
    + "of screen for moving the camera backward.",
    new Vector2(0, 0), Color.White);
    spriteBatch.End();
    [/code]
  17. Now, build and run the application. It should run as shown in the following screenshots:
    sphere-triangle

How it works…

In step 3, A, B, and C represent the three corners of a triangle; you can use them to calculate the triangle edges.

In step 5, we first calculate two edges of the triangle. Then we use the Vector3.Cross() method to get the normal vector and the Vector3.Normalize() method to normalize it to a unit length vector.
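The normal computation is just a few lines of vector math. A minimal Python sketch (our own helper names) of the same cross-product construction:

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    length = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / length, v[1] / length, v[2] / length)

def triangle_normal(a, b, c):
    """Unit normal of triangle ABC: cross the two edges sharing A."""
    side1 = tuple(b[i] - a[i] for i in range(3))
    side2 = tuple(c[i] - a[i] for i in range(3))
    return normalize(cross(side1, side2))

# A triangle lying in the XY plane has a +Z (or -Z) normal.
n = triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
```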

In step 6, we first get the normal of the triangle. Then, we calculate the direction from the triangle corner A to the point outside the triangle. After that, we examine the return value of the Vector3.Dot() method applied to the triangle normal vector and this direction. If the dot product is greater than 0, the two vectors point in roughly the same direction, meaning they are on the same side, so the normal is inverted to face away from the point.
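A small Python sketch of the same sign test (our own function names) shows how the normal is flipped to face away from the point:

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def inverse_normal(normal, corner_a, point):
    """Flip the triangle normal so it faces away from `point`,
    mirroring Triangle.InverseNormal()."""
    to_point = tuple(point[i] - corner_a[i] for i in range(3))
    if dot(normal, to_point) > 0:          # point is on the normal's side
        return tuple(-c for c in normal)   # flip to face away from it
    return normal

# Triangle in the XY plane with normal +Z; a point above it (z > 0)
# gets the flipped normal, a point below it keeps the original.
flipped = inverse_normal((0.0, 0.0, 1.0), (0, 0, 0), (0.5, 0.5, 2.0))
kept = inverse_normal((0.0, 0.0, 1.0), (0, 0, 0), (0.5, 0.5, -2.0))
```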

In step 7, this method goes through all of the vertices of a model and creates a triangle from every three vertices. With the triangle t and the given boundingSphere, it calls the SphereTriangleCollision() method to perform the sphere-triangle collision detection. If the result is true, a sphere-triangle collision has happened and the collided triangle is returned. Otherwise, the method returns false and the output triangle is null.

In step 8, the first line initializes a ray object with default values. Then, we assign the sphere center to ray.Position. After that, we use the Triangle.InverseNormal() method to get the direction from the sphere center toward the current triangle. Now that the ray is ready, the next part performs the core ray-triangle collision detection using the RayTriangleIntersects() method. If the returned distance is not null, greater than zero, and less than or equal to the radius of the given bounding sphere, a ray-triangle collision happens within the sphere, and the method returns true to the caller.
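The whole sphere-triangle test can be sketched in a few dozen lines of Python. The ray-triangle step below uses the standard Moller-Trumbore algorithm, which may differ from the RayIntersectsTriangle() implementation the recipe references, but the surrounding logic mirrors SphereTriangleCollision():

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def normalize(v):
    length = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / length, v[1] / length, v[2] / length)

def ray_triangle(origin, direction, a, b, c):
    """Moller-Trumbore ray-triangle test; returns hit distance or None."""
    eps = 1e-9
    e1, e2 = sub(b, a), sub(c, a)
    h = cross(direction, e2)
    det = dot(e1, h)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return None
    f = 1.0 / det
    s = sub(origin, a)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if t > eps else None

def sphere_hits_triangle(center, radius, a, b, c):
    """Cast a ray from the sphere center toward the triangle along the
    inverted normal; a hit within the radius means a collision."""
    normal = normalize(cross(sub(b, a), sub(c, a)))
    if dot(normal, sub(center, a)) > 0:     # make the normal face away
        normal = tuple(-n for n in normal)  # from the sphere center
    t = ray_triangle(center, normal, a, b, c)
    return t is not None and 0 < t <= radius
```

For a sphere hovering two units above a triangle in the XY plane, a radius of three reaches the surface while a radius of one does not.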

In step 9, insert the definition of the inner RayIntersectsTriangle() method to the class, which we had discussed in the Implementing ray-triangle collision detection recipe in this chapter. Refer to the recipe for a detailed explanation.

In step 11, the font is responsible for showing the instructions; modelBox loads the box model; worldBox stands for the transformation of the box model; the following four variables cameraPosition, targetOffset, view, and projection are used to initialize the camera; boundingSphereCamera is the bounding sphere around the camera; verticesBox holds the vertices of the box model; triangleCollided specifies the triangle hit when a sphere-triangle collision happens; normalTriangle stores the normal vector of the collided triangle; ForwardCollide and BackwardCollide indicate whether the camera has collided while moving forward or backward; TopHitRegion and BottomHitRegion are the hit regions for moving the camera forward or backward.

In step 12, the camera target is -1000 at the Z-axis for realistic viewing when you move the camera. TopHitRegion occupies the top half of the screen; BottomHitRegion takes up the bottom half of the screen.

In step 13, after loading the box model and getting its vertices, we initialize the boundingSphereCamera with a radius of five units at the camera position.

In step 14, the first part checks whether the tapped position is inside the TopHitRegion or BottomHitRegion to move the camera forward or backward. After that, we update the position of the camera bounding sphere, which is essential for performing collision detection between the camera bounding sphere and the model triangles. In the next line, we call the TriangleSphereCollisionDetection.IsSphereCollideWithTriangles() method to detect a collision. If the returned triangle is not null, we calculate the dot product between the direction from the triangle to the camera and the normal of the collided triangle. If it is less than zero, the camera is facing the model, and ForwardCollide is set to true; otherwise, ForwardCollide is reset to false.
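The sign test that decides ForwardCollide can be sketched in isolation (Python, our own function name):

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def hits_while_moving_forward(camera_pos, tri_corner_a, tri_normal):
    """Step 14's sign test: the collision counts as a forward hit when
    the triangle normal and the direction from the triangle toward the
    camera point opposite ways (negative dot product)."""
    direction = (camera_pos[0] - tri_corner_a[0],
                 camera_pos[1] - tri_corner_a[1],
                 camera_pos[2] - tri_corner_a[2])
    return dot(tri_normal, direction) < 0

# A triangle with normal +Z: a camera behind it (negative Z) registers
# a forward hit; a camera in front of it does not.
forward = hits_while_moving_forward((0, 0, -5), (0, 0, 0), (0, 0, 1))
not_forward = hits_while_moving_forward((0, 0, 5), (0, 0, 0), (0, 0, 1))
```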

Making a 3D ball move along a curved surface

No doubt, real modern 3D games are much more complex; they are not a simple ball or a box with a few triangles. Thousands of polygons per model is common in games, and millions is not unheard of. Even so, the technique of collision detection does not differ much between objects of different shapes, and you already know the core concepts behind it. In this recipe, you will learn how to deal with collisions between models of different shapes.

How to do it…

The following steps will show you how to perform collision detection between a ball and a curved surface:

  1. Create a Windows Phone Game project named BallCollideWithCurve, change Game1.cs to BallCollideWithCurveGame.cs. Then, add Triangle.cs and TriangleSphereCollisionDetection.cs to the project. Next, create a Content Pipeline Extension Library project named MeshVerticesProcessor and replace ContentProcessor1.cs with MeshVerticesProcessor.cs. After that, insert the model files ball.FBX and CurveSurface.FBX into the content project.
  2. Define the MeshVerticesProcessor in MeshVerticesProcessor.cs of the MeshVerticesProcessor project. The class definition is the same as the class in the Implementing BoundingBox collision detection in a 3D game recipe in this chapter. For the full explanation, please refer to that recipe.
  3. Define Triangle in Triangle.cs and TriangleSphereCollisionDetection in TriangleSphereCollisionDetection.cs of the BallCollideWithCurve project. The two class definitions are the same as the classes implemented in the Implementing sphere-triangle collision detection recipe earlier in this chapter. For a full explanation, please take a look at that recipe.
  4. Change the processor of ball.FBX and CurveSurface.FBX in the content project, as shown in the following screenshot:
    ball.FBX and CurveSurface.FBX
  5. Now it is time to draw the ball and curve surface models on screen and perform the collision detection in the main game project BallCollideWithCurve. First, add the following lines to the class field:
    [code]
    // Ball model and the world transformation matrix
    Model modelBall;
    Matrix worldBall = Matrix.Identity;
    // Curve model and the world transformation matrix
    Model modelSurface;
    Matrix worldSurface = Matrix.Identity;
    // Camera
    Vector3 cameraPosition;
    public Matrix view;
    public Matrix projection;
    // The bounding sphere of the ball model
    BoundingSphere boundingSphereBall;
    // The vertices of the curve model
    List<Vector3> verticesCurveSurface;
    // Collided triangle
    Triangle CollidedTriangle;
    // The velocity of the ball model
    Vector3 Velocity = Vector3.Zero;
    // The acceleration factor
    Vector3 Acceleration = new Vector3(0, 0.0098f, 0);
    [/code]
  6. Initialize the camera and the collided triangle. Insert the following code into the Initialize() method:
    [code]
    // Initialize the camera
    cameraPosition = new Vector3(0, 0, 20);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
        Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
        0.1f, 1000.0f);
    // Initialize the collided triangle
    CollidedTriangle = new Triangle();
    [/code]
  7. Load the ball and curve surface models and extract their vertices. Then, create the bounding sphere of the ball model from the extracted vertices.
    [code]
    modelBall = Content.Load<Model>("Ball");
    modelSurface = Content.Load<Model>("CurveSurface");
    worldBall = Matrix.CreateTranslation(new Vector3(-2, 5, 0));
    Dictionary<string, List<Vector3>> o =
        (Dictionary<string, List<Vector3>>)modelBall.Tag;
    boundingSphereBall =
        BoundingSphere.CreateFromPoints(o["Sphere001"]);
    boundingSphereBall.Center = worldBall.Translation;
    verticesCurveSurface = ((Dictionary<string, List<Vector3>>)
        modelSurface.Tag)["Tube001"];
    [/code]
  8. Perform the sphere-triangle collision detection and update the position of the ball model and its bounding sphere. Insert the following code into the Update() method:
    [code]
    // Perform the sphere-triangle collision detection
    TriangleSphereCollisionDetection.IsSphereCollideWithTriangles(
        verticesCurveSurface, boundingSphereBall,
        out CollidedTriangle);
    float elapsed;
    // If no collision happens, keep the ball moving
    if (CollidedTriangle == null)
    {
        elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
        Velocity += -Acceleration * elapsed;
        worldBall.Translation += Velocity;
    }
    // Update the translation of the ball bounding sphere
    boundingSphereBall.Center = worldBall.Translation;
    [/code]
  9. Define the DrawModel() method to draw the model:
    [code]
    public void DrawModel(Model model, Matrix world, Matrix view,
        Matrix projection)
    {
        Matrix[] transforms = new Matrix[model.Bones.Count];
        model.CopyAbsoluteBoneTransformsTo(transforms);
        foreach (ModelMesh mesh in model.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.PreferPerPixelLighting = true;
                effect.EnableDefaultLighting();
                effect.DiffuseColor = Color.White.ToVector3();
                effect.World = transforms[mesh.ParentBone.Index] *
                    world;
                effect.View = view;
                effect.Projection = projection;
            }
            mesh.Draw();
        }
    }
    [/code]
  10. Draw the ball and curve surface models on the Windows Phone 7 screen. Paste the following code into the Draw() method:
    [code]
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    // Draw the ball model and surface model
    DrawModel(modelBall, worldBall, view, projection);
    DrawModel(modelSurface, worldSurface, view, projection);
    [/code]
  11. Now, build and run the application. The application runs as shown in the following screenshots:
    3D ball move along a curved surface

How it works…

In step 5, modelBall holds the ball model object and worldBall specifies the world transformation of the ball model; similarly, modelSurface holds the curve surface model and worldSurface its world matrix. The next three variables, cameraPosition, view, and projection, serve the camera; boundingSphereBall is the bounding sphere around the ball model; verticesCurveSurface is the vertex collection of the curve surface model; CollidedTriangle stores the collided triangle when the ball bounding sphere collides with the curve surface model triangles; Velocity specifies the ball's moving velocity; Acceleration defines how the velocity changes over time.

In step 7, we use the BoundingSphere.CreateFromPoints() method to create the bounding sphere of the ball model using the vertices extracted from the Tag property of its model file. For verticesCurveSurface, we just read the vertex collection from its model file.

In step 8, the first line detects collision between the ball bounding sphere and the triangles of the curve surface model. If a collision happens, CollidedTriangle is not null and the ball stops moving; otherwise, we keep integrating the ball's velocity and position. In every frame, the center of the ball bounding sphere must be updated along with the ball model.
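The movement update in step 8 is a simple Euler integration of velocity and position. The following minimal sketch (Python, purely illustrative; the names are hypothetical, not the recipe's C# API) shows the same idea:

```python
def update_ball(position, velocity, acceleration, elapsed, collided):
    """Euler integration mirroring step 8: integrate only while
    no collision has been detected."""
    if not collided:
        # Velocity += -Acceleration * elapsed
        velocity = tuple(v - a * elapsed
                         for v, a in zip(velocity, acceleration))
        # worldBall.Translation += Velocity
        position = tuple(p + v for p, v in zip(position, velocity))
    return position, velocity

# One frame of free fall from the ball's start position (-2, 5, 0)
pos, vel = update_ball((-2.0, 5.0, 0.0), (0.0, 0.0, 0.0),
                       (0.0, 0.0098, 0.0), 1.0, collided=False)
```

After this step, the bounding sphere's center must follow the new translation, exactly as the last line of the C# snippet does.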


Detecting the intersection of line segments

Basically, line segment intersection is a mathematical concept: to detect the intersection of two line segments, you find their intersection point. In 2D games this is very useful, for example, to play an explosion animation at the position where two laser shots collide. Line segment intersection can also serve as an aiming guideline in pool games, especially for beginners. In this recipe, you will learn how to detect the intersection of line segments.

Getting ready

To define the two line segments, we need the following two parametric equations:

Pa = P1 + Ua * (P2 - P1)
Pb = P3 + Ub * (P4 - P3)

If you put in 0 for U, you'll get the start point; if you put in 1, you'll get the end point. With the two equations, if an intersection happens between the two line segments, then:

Pa = Pb

The equation could be rewritten as follows:

P1 + Ua * (P2 - P1) = P3 + Ub * (P4 - P3)

In order to get the Ua and Ub values, we need two equations. The previous equation could also be written using the x and y factors of the points:

x1 + Ua * (x2 - x1) = x3 + Ub * (x4 - x3)
y1 + Ua * (y2 - y1) = y3 + Ub * (y4 - y3)

You can use the two equations to solve for Ua and Ub:

Ua = ((x4 - x3) * (y1 - y3) - (y4 - y3) * (x1 - x3)) / ((y4 - y3) * (x2 - x1) - (x4 - x3) * (y2 - y1))
Ub = ((x2 - x1) * (y1 - y3) - (y2 - y1) * (x1 - x3)) / ((y4 - y3) * (x2 - x1) - (x4 - x3) * (y2 - y1))

The denominator of both of the equations is the same. Solve it first. If it is zero, the lines are parallel. If both numerators are also zero, then the two line segments are coincident.

Since these equations treat the lines as infinitely long lines instead of line segments, there is a guarantee of having an intersection point if the lines aren't parallel. To determine if it happens with the segments we've specified, we need to see if U is between zero and one. Verify that both of the following are true:

0 <= Ua <= 1    and    0 <= Ub <= 1

If we've gotten this far, then our line segments intersect, and we just need to find the point at which they do and then we're done:

x = x1 + Ua * (x2 - x1)
y = y1 + Ua * (y2 - y1)

The following pseudo code describes the line segment intersection algorithm derived from the previous description:

[code]
ua = (p4.x - p3.x)*(p1.y - p3.y) - (p4.y - p3.y)*(p1.x - p3.x)
ub = (p2.x - p1.x)*(p1.y - p3.y) - (p2.y - p1.y)*(p1.x - p3.x)
denominator = (p4.y - p3.y)*(p2.x - p1.x) - (p4.x - p3.x)*(p2.y - p1.y)
if( |denominator| < epsilon )
{
    // Now, the two line segments are parallel
    if( |ua| <= epsilon && |ub| <= epsilon )
    {
        // Now, the two line segments are coincident
    }
}
else
{
    ua /= denominator
    ub /= denominator
    if( 0 <= ua && ua <= 1 && 0 <= ub && ub <= 1 )
    {
        // Intersected
        intersectionPoint.x = p1.x + ua * (p2.x - p1.x)
        intersectionPoint.y = p1.y + ua * (p2.y - p1.y)
    }
}
[/code]

Translate the pseudo code to an XNA version, as follows:

[code]
// Line segments' intersection detection
private void DetectLineSegmentsIntersection(
    ref bool intersected, ref bool coincidence,
    ref Vector2 intersectedPoint,
    ref Vector2 point1, ref Vector2 point2,
    ref Vector2 point3, ref Vector2 point4)
{
    // Compute the ua, ub numerators of the two line segments
    float ua = (point4.X - point3.X) * (point1.Y - point3.Y) -
        (point4.Y - point3.Y) * (point1.X - point3.X);
    float ub = (point2.X - point1.X) * (point1.Y - point3.Y) -
        (point2.Y - point1.Y) * (point1.X - point3.X);
    // Calculate the denominator
    float denominator = (point4.Y - point3.Y) * (point2.X -
        point1.X) - (point4.X - point3.X) * (point2.Y - point1.Y);
    // If the denominator is very close to zero, it means
    // the two line segments are parallel
    if (Math.Abs(denominator) <= float.Epsilon)
    {
        // If ua and ub are also very close to zero, it means
        // the two line segments are coincident
        if (Math.Abs(ua) <= float.Epsilon && Math.Abs(ub) <=
            float.Epsilon)
        {
            intersected = coincidence = true;
            intersectedPoint = (point1 + point2) / 2;
        }
    }
    else
    {
        // The denominator is not zero, so the two lines on
        // which the line segments lie do intersect
        ua /= denominator;
        ub /= denominator;
        // Check that ua and ub are both between 0 and 1 to
        // confirm the line segments themselves intersect
        if (ua >= 0 && ua <= 1 && ub >= 0 && ub <= 1)
        {
            intersected = true;
            // Compute the position of the intersection point
            intersectedPoint.X = point1.X +
                ua * (point2.X - point1.X);
            intersectedPoint.Y = point1.Y +
                ua * (point2.Y - point1.Y);
        }
        else
        {
            intersected = false;
        }
    }
}
[/code]

The method receives the four points of the two line segments and two flags indicating the intersected and coincident states. The first two lines compute the numerators of ua and ub, and the following line calculates the denominator. After that, we check the value of the denominator to see whether the two line segments intersect. If the denominator is almost zero, the two line segments are parallel; if, in addition, the numerators of ua and ub are both almost zero, the two line segments are coincident. Otherwise, the denominator is not zero, which means the two infinite lines on which the line segments lie intersect. To make sure the line segments themselves intersect, we check that ua and ub are both between 0 and 1. If true, the intersection happens, and we can use ua to calculate the intersectionPoint.
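As a quick numeric sanity check of these equations, here is a minimal self-contained sketch of the same computation (Python, for illustration only; the recipe's actual implementation is the C# method above):

```python
def segment_intersection(p1, p2, p3, p4):
    """Return the intersection point of segments p1-p2 and p3-p4,
    or None when they are parallel or do not cross."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    # Shared denominator; zero means the lines are parallel
    denom = (y4 - y3) * (x2 - x1) - (x4 - x3) * (y2 - y1)
    if abs(denom) < 1e-9:
        return None
    ua = ((x4 - x3) * (y1 - y3) - (y4 - y3) * (x1 - x3)) / denom
    ub = ((x2 - x1) * (y1 - y3) - (y2 - y1) * (x1 - x3)) / denom
    # Both parameters must lie in [0, 1] for the segments to cross
    if 0 <= ua <= 1 and 0 <= ub <= 1:
        return (x1 + ua * (x2 - x1), y1 + ua * (y2 - y1))
    return None

# The two diagonals of the unit square cross at (0.5, 0.5)
result = segment_intersection((0, 0), (1, 1), (0, 1), (1, 0))
```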

How to do it…

The following steps will show you how to master the practical method of using the line segments’ intersection:

  1. Create a Windows Phone Game project named LineSegmentIntersection and change Game1.cs to LineSegmentsIntersectionGame.cs. Then, add Line.cs to the project.
  2. Next, we need to define the Line class in Line.cs. The class will draw a line between two points on the Windows Phone 7 screen.
    Declare the indispensable variables of the class, and then add the following code to the class field:
    [code]
    // Line Texture
    private Texture2D lineTexture;
    // Origin point of line texture for translation, scale and
    // rotation
    private Vector2 origin;
    // Scale factor
    private Vector2 scale;
    // Rotation factor
    private float rotation;
    // Axis X
    Vector2 AxisX = new Vector2(1, 0);
    // Distance vector
    Vector2 distanceVector;
    // Line direction
    Vector2 Direction = Vector2.Zero;
    // The angle between the line and axis X
    float theta = 0;
    // Line thickness
    private int Thickness = 2;
    // Line color
    private Color color;
    [/code]
  3. Define the Load() method of the Line class, which initializes the line texture:
    [code]
    public void Load(GraphicsDevice graphicsDevice)
    {
        // Initialize the line texture and its origin point
        lineTexture = CreateLineUnitTexture(graphicsDevice,
            Thickness, color);
        origin = new Vector2(0, Thickness / 2f);
    }
    [/code]
  4. Define the CalculateRotation() method in the Line class. This method calculates the angle between the line and the X-axis:
    [code]
    private void CalculateRotation(Vector2 distanceVector)
    {
        // Normalize the distance vector for the line direction
        Vector2.Normalize(ref distanceVector, out Direction);
        // Compute the angle between axis X and the line
        Vector2.Dot(ref AxisX, ref Direction, out theta);
        theta = (float)Math.Acos(theta);
        // If the Y factor of distanceVector is less than 0,
        // the start point is lower than the end point, so the
        // rotation should be in the opposite direction
        if (distanceVector.Y < 0)
        {
            theta = -theta;
        }
        // Return the angle value for rotation
        rotation = theta;
    }
    [/code]
  5. Implement the CalculateScale() method in the Line class. The method will calculate a scale represented as a Vector2 object. The X factor stores the number of textures while the Y factor stores the scale degree.
    [code]
    private void CalculateScale(Vector2 distanceVector)
    {
        // The Vector2 object scale determines how many textures
        // will be drawn based on the input rotation and start
        // point, X for the number, Y for the scale factor
        float desiredLength = distanceVector.Length();
        scale.X = desiredLength / lineTexture.Width;
        scale.Y = 1f;
    }
    [/code]
    [/code]
  6. Define the CreateLineUnitTexture() method, in the Line class, which creates the line unit texture according to the input line thickness.
    [code]
    // Create a unit texture for the line; the texture will be
    // used to generate a line of the desired length
    public static Texture2D CreateLineUnitTexture(
        GraphicsDevice graphicsDevice,
        int lineThickness, Color color)
    {
        // Initialize the line unit texture according to the line
        // thickness
        Texture2D texture2D = new Texture2D(graphicsDevice,
            lineThickness, lineThickness, false,
            SurfaceFormat.Color);
        // Set the color of every pixel of the line texture
        int count = lineThickness * lineThickness;
        Color[] colorArray = new Color[count];
        for (int i = 0; i < count; i++)
        {
            colorArray[i] = color;
        }
        texture2D.SetData<Color>(colorArray);
        return texture2D;
    }
    [/code]
  7. Define the Draw() method in the Line class that draws the line segment on Windows Phone 7.
    [code]
    // Draw the line
    public void Draw(SpriteBatch spriteBatch, Vector2 startPoint,
        Vector2 endPoint)
    {
        // Compute the distance vector between the line start
        // point and end point
        Vector2.Subtract(ref endPoint, ref startPoint,
            out distanceVector);
        // Calculate the rotation angle
        CalculateRotation(distanceVector);
        // Calculate the scale factor
        CalculateScale(distanceVector);
        // Draw the line texture on screen
        spriteBatch.Draw(lineTexture, startPoint, null, color,
            rotation, origin, scale, SpriteEffects.None, 0);
    }
    [/code]
  8. From this step on, we will interact with the tap gesture and draw the line segments, and the intersection point if an intersection takes place, on the Windows Phone 7 screen. First, add the following lines to the LineSegmentsIntersectionGame class field:
    [code]
    // Line Object
    Line line;
    // Circle Texture
    Texture2D circleTexture;
    // Points of two lines for intersection testing
    Vector2 point1, point2, point3, point4, intersectionPoint;
    // The flag for intersection
    bool Intersection;
    // The flag for coincidence
    bool Coincidence;
    [/code]
  9. Initialize the four points of the two line segments. Insert the following lines into the Initialize() method:
    [code]
    // Initialize the points of two lines
    point1 = Vector2.Zero;
    point2 = new Vector2(600, 300);
    point3 = new Vector2(0, 200);
    point4 = new Vector2(800, 200);
    [/code]
  10. Initialize the line and circleTexture objects. Add the following code to the LoadContent() method:
    [code]
    // Initialize the two line objects with white color
    line = new Line(Color.White);
    line.Load(GraphicsDevice);
    // Initialize the texture of circle
    circleTexture = CreateCircleTexture(GraphicsDevice, 5,
        Color.White);
    [/code]
  11. This step is the key to detecting the line segment intersection. Add the following code to the Update() method:
    [code]
    // Do the line segments' intersection testing
    DetectLineSegmentsIntersection(ref Intersection,
        ref Coincidence, ref intersectionPoint,
        ref point1, ref point2, ref point3, ref point4);
    // Check whether the tapped position is inside the viewport
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
        TouchLocationState.Pressed)
    {
        Point point = new Point((int)touches[0].Position.X,
            (int)touches[0].Position.Y);
        if (GraphicsDevice.Viewport.Bounds.Contains(point))
        {
            point1.X = point.X;
            point1.Y = point.Y;
        }
    }
    [/code]
  12. Insert the DetectLineSegmentsIntersection() method into the LineSegmentsIntersectionGame class. This method is the same as the definition we gave at the beginning of this recipe.
  13. Define the CreateCircleTexture() method. This method creates the circle textures for showing the end points of the line segments:
    [code]
    // Create the circle texture
    public static Texture2D CreateCircleTexture(GraphicsDevice
        graphicsDevice, int radius, Color color)
    {
        int x = 0;
        // Start at -1 so that the first row (i == 0) maps to y == 0
        int y = -1;
        // Compute the diameter of the circle
        int diameter = radius * 2;
        // Calculate the center of the circle
        Vector2 center = new Vector2(radius, radius);
        // Initialize the circle texture
        Texture2D circle = new Texture2D(graphicsDevice, diameter,
            diameter, false, SurfaceFormat.Color);
        // Initialize the color array of the circle texture
        Color[] colors = new Color[diameter * diameter];
        // Set the color of the circle texture
        for (int i = 0; i < colors.Length; i++)
        {
            // For the row
            if (i % diameter == 0)
            {
                y += 1;
            }
            // For the column
            x = i % diameter;
            // Calculate the distance from the current position to
            // the circle center
            Vector2 diff = new Vector2(x, y) - center;
            float distance = diff.Length();
            // Check whether the position is inside the circle
            if (distance > radius)
            {
                // If not, set the pixel color to transparent
                colors[i] = Color.Transparent;
            }
            else
            {
                // If yes, set the pixel color to the desired color
                colors[i] = color;
            }
        }
        // Assign the processed color array to the circle
        // texture
        circle.SetData<Color>(colors);
        return circle;
    }
    [/code]
  14. Draw the line segments on screen with end points and intersection point on the Windows Phone 7 screen. Add the following code to the Draw() method:
    [code]
    spriteBatch.Begin();
    // Draw the two line segments
    line.Draw(spriteBatch, point1, point2);
    line.Draw(spriteBatch, point3, point4);
    // Draw the circles to indicate the end points of the two lines
    // with red color
    Vector2 circleOrigin = new Vector2(
        circleTexture.Width / 2, circleTexture.Height / 2);
    spriteBatch.Draw(circleTexture, point1, null, Color.Red, 0,
        circleOrigin, 1, SpriteEffects.None, 0);
    spriteBatch.Draw(circleTexture, point2, null, Color.Red, 0,
        circleOrigin, 1, SpriteEffects.None, 0);
    spriteBatch.Draw(circleTexture, point3, null, Color.Red, 0,
        circleOrigin, 1, SpriteEffects.None, 0);
    spriteBatch.Draw(circleTexture, point4, null, Color.Red, 0,
        circleOrigin, 1, SpriteEffects.None, 0);
    // If the intersection takes place, draw the intersection
    // point
    if (Intersection)
    {
        // If the two lines are coincident, draw the intersection
        // point in green, else in yellow
        spriteBatch.Draw(circleTexture, intersectionPoint, null,
            Coincidence ? Color.Green : Color.Yellow, 0,
            circleOrigin, 1, SpriteEffects.None, 0);
    }
    spriteBatch.End();
    [/code]
  15. Now, build and run the application. When you tap on the screen, it should run as shown in the following screenshots:
    intersection of line segments

How it works…

We need to draw the line from a texture, since there is no built-in line-drawing method for sprites in XNA 4.0.

In step 2, lineTexture holds the texture for drawing the line; origin is the center point for rotation, translation, and scale; scale is a Vector2 object whose X factor tells the SpriteBatch.Draw() method how many texture units to draw, while the Y factor stands for the scaling degree; rotation specifies the rotation angle; AxisX is the vector representing the X-axis; distanceVector holds the vector between the two designated points; Direction indicates the line segment direction; theta is the angle between the X-axis and the line segment; Thickness defines how thick the line segment should be; color defines the color of the line segment.

In step 4, the first line normalizes distanceVector, which stores the vector between the two points, into Direction, a unit vector. Then, we use the Vector2.Dot() and Math.Acos() methods to calculate the angle between AxisX and Direction. Because Math.Acos() always returns a non-negative angle, theta is greater than or equal to 0 even when the start point is lower than the end point (distanceVector.Y < 0 in screen coordinates, where Y grows downward). In that case the actual rotation should be negative, so we negate the theta value. Finally, we assign the theta value to the rotation variable.
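The acos-plus-sign-flip logic is equivalent to a single atan2 call, which many implementations prefer. A small sketch of the equivalence (Python, illustrative only):

```python
import math

def rotation_from_distance(dx, dy):
    """Angle between the +X axis and (dx, dy), computed the recipe's
    way: acos of the dot product with (1, 0), negated when dy < 0."""
    length = math.hypot(dx, dy)
    theta = math.acos(dx / length)  # dot((1, 0), normalized direction)
    return -theta if dy < 0 else theta

# math.atan2(dy, dx) returns the same angle in a single call
theta = rotation_from_distance(1.0, -1.0)
```

In screen coordinates (Y down) the sign convention is unchanged, since the flip depends only on the sign of dy.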

In step 5, first distanceVector.Length() returns the length between the two end points of the given line segment. Then, we calculate the number of line unit textures based on the texture width and assign the value to scale.X. After that, we save the scale degree to scale.Y.

In step 6, this method first initializes the line unit texture, of which the size depends on the line thickness. Then, we set the input color to every pixel of the texture. Finally, we return the generated line unit texture.

In step 8, line will be used to draw the line segments; circleTexture is responsible for drawing the end points and the intersection point of the lines; Intersection is the flag indicating whether the line segments intersect; Coincidence shows whether the line segments are coincident.

In step 10, the line is drawn in white; the radius of the circle texture is 5.

In step 11, the first part detects the line segments' intersection. The DetectLineSegmentsIntersection() method uses point1, point2, point3, and point4 to compute the intersection equations. If there is an intersection, the Intersection variable will be true and intersectionPoint will hold the intersection point. A more detailed explanation of this method was given at the beginning of the recipe. The second part controls the position of the first point so that one of the line segments can be manipulated interactively: if the tapped position is valid, the first point is moved to the current tapped position on screen.

In step 13, the method first computes diameter for the width and height of the circle texture. center specifies the center point of the circle. After that, we initialize the circle texture and its color array, whose length is diameter*diameter, and then iterate over each pixel in the array. If the position is outside the region of the circle, the pixel color is set to transparent; otherwise, it is set to the desired color.
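The pixel loop in CreateCircleTexture() boils down to one distance test per pixel. A compact sketch of the same idea (Python, illustrative; it builds a boolean mask instead of a Color array):

```python
def circle_mask(radius):
    """(2r x 2r) grid of booleans: True where the pixel lies inside
    the circle centered at (radius, radius), mirroring the distance
    test in CreateCircleTexture()."""
    diameter = radius * 2
    return [[((x - radius) ** 2 + (y - radius) ** 2) ** 0.5 <= radius
             for x in range(diameter)]
            for y in range(diameter)]

mask = circle_mask(5)
```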

Implementing per pixel collision detection in a 2D game

In a 2D game, the usual method for detecting collisions is the bounding box. It is good enough for many situations where precision is not the most important factor. However, if your game cares whether two irregular objects really collide or overlap, the bounding box will not be accurate enough. That is where per pixel collision detection helps. In this recipe, you will learn how to use this technique in your game.

How to do it…

  1. Create a Windows Phone Game project named PixelCollision2D and change Game1.cs to PixelCollision2DGame.cs. Then, add the PixelBall.png and PixelScene.png files to the content project.
  2. Add the indispensable data members to the field of PixelCollision2DGame.
    [code]
    // SpriteFont object
    SpriteFont font;
    // The images we will draw
    Texture2D texScene;
    Texture2D texBall;
    // The color data for the images; used for per pixel collision
    Color[] textureDataScene;
    Color[] textureDataBall;
    // Ball position and bound rectangle
    Vector2 positionBall;
    Rectangle boundBall;
    // Scene position and bound rectangle
    Vector2 positionScene;
    Rectangle boundScene;
    // Collision flag
    bool Collided;
    // Ball selected flag
    bool Selected;
    [/code]
  3. Initialize the positions of the ball and the scene and enable the FreeDrag gesture. Insert the following code into the Initialize() method:
    [code]
    // Initialize the position of ball
    positionBall = new Vector2(600, 10);
    // Initialize the position of scene
    positionScene = new Vector2(400, 240);
    TouchPanel.EnabledGestures = GestureType.FreeDrag;
    [/code]
  4. Load the textures of the ball and the scene. Then, extract the color data of these textures and create their bounding rectangles based on the initial positions.
    [code]
    // Load the font
    font = Content.Load<SpriteFont>("gameFont");
    // Load the textures
    texScene = Content.Load<Texture2D>("PixelScene");
    texBall = Content.Load<Texture2D>("PixelBall");
    // Extract the scene texture color array
    textureDataScene =
        new Color[texScene.Width * texScene.Height];
    texScene.GetData(textureDataScene);
    // Extract the ball texture color array
    textureDataBall =
        new Color[texBall.Width * texBall.Height];
    texBall.GetData(textureDataBall);
    // Create the ball and scene bounds
    boundBall = new Rectangle((int)positionBall.X,
        (int)positionBall.Y,
        texBall.Width, texBall.Height);
    boundScene = new Rectangle((int)positionScene.X,
        (int)positionScene.Y, texScene.Width, texScene.Height);
    [/code]
  5. Define the IntersectPixels() method. This method determines if there is overlap of the non-transparent pixels between two textures.
    [code]
    static bool IntersectPixels(
        Rectangle rectangleA, Color[] dataA,
        Rectangle rectangleB, Color[] dataB)
    {
        // Find the bounds of the rectangle intersection
        int top = Math.Max(rectangleA.Top, rectangleB.Top);
        int bottom = Math.Min(rectangleA.Bottom,
            rectangleB.Bottom);
        int left = Math.Max(rectangleA.Left, rectangleB.Left);
        int right = Math.Min(rectangleA.Right, rectangleB.Right);
        // Check every point within the intersection bounds
        for (int y = top; y < bottom; y++)
        {
            for (int x = left; x < right; x++)
            {
                // Get the color of both pixels at this point
                Color colorA = dataA[(x - rectangleA.Left) +
                    (y - rectangleA.Top) * rectangleA.Width];
                Color colorB = dataB[(x - rectangleB.Left) +
                    (y - rectangleB.Top) * rectangleB.Width];
                // If both pixels are not completely transparent,
                if (colorA.A != 0 && colorB.A != 0)
                {
                    // then an intersection has been found
                    return true;
                }
            }
        }
        // No intersection found
        return false;
    }
    [/code]
  6. Call the IntersectPixels() method within the Update() method for examining the per pixel collision. Add the following code to the Update() method:
    [code]
    // Move the ball
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
        TouchLocationState.Pressed)
    {
        Point point = new Point((int)touches[0].Position.X,
            (int)touches[0].Position.Y);
        if (boundBall.Contains(point))
        {
            Selected = true;
        }
        else
        {
            Selected = false;
        }
    }
    // Check whether a gesture is available
    while (TouchPanel.IsGestureAvailable)
    {
        // Read the gesture that is taking place
        GestureSample gestures = TouchPanel.ReadGesture();
        switch (gestures.GestureType)
        {
            // If the on-going gesture is FreeDrag
            case GestureType.FreeDrag:
                if (Selected)
                {
                    // If the ball is selected, update the
                    // position of the ball texture and the
                    // ball bound
                    positionBall += gestures.Delta;
                    boundBall.X += (int)gestures.Delta.X;
                    boundBall.Y += (int)gestures.Delta.Y;
                }
                break;
        }
    }
    // Check the collision with the scene
    if (IntersectPixels(boundBall, textureDataBall,
        boundScene, textureDataScene))
    {
        Collided = true;
    }
    else
    {
        Collided = false;
    }
    [/code]
  7. Draw the ball, scene, and collision state on the Windows Phone 7 screen.
    [code]
    spriteBatch.Begin();
    // Draw the scene
    spriteBatch.Draw(texScene, boundScene, Color.White);
    // Draw the ball
    spriteBatch.Draw(texBall, positionBall, Color.White);
    spriteBatch.DrawString(font, "Collided: " +
        Collided.ToString(), new Vector2(0, 0), Color.White);
    spriteBatch.End();
    [/code]
  8. Now, build and run the application. It will run as shown in the following screenshots:
    Implementing per pixel collision

How it works…

In step 2, the font is used to draw the collision state; texScene loads the scene image; texBall holds the ball texture; textureDataScene and textureDataBall store their texture color array data; positionBall and positionScene specify the position of ball and scene textures; boundBall and boundScene define the bound around the ball and scene texture; Collided is the flag that shows the collision state; Selected indicates whether the ball is tapped.

In step 5, the IntersectPixels() method is the key method that detects the per pixel collision. The first four variables, top, bottom, left, and right, individually represent the top, bottom, left, and right sides of the intersection rectangle of the two bounding boxes around the two textures. Then, in the for loop, we check the alpha value of every pixel of both textures within the intersection rectangle. If both pixels are not completely transparent at the same point, a collision occurs and the method returns true; otherwise, it returns false.
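The same overlap test is easy to express outside XNA. The sketch below (Python, illustrative; rectangles are (left, top, width, height) tuples and the data arrays hold alpha values only) mirrors the logic of IntersectPixels():

```python
def intersect_pixels(rect_a, data_a, rect_b, data_b):
    """True when two non-transparent pixels of the sprites overlap."""
    ax, ay, aw, ah = rect_a
    bx, by, bw, bh = rect_b
    # Bounds of the rectangle intersection
    top, bottom = max(ay, by), min(ay + ah, by + bh)
    left, right = max(ax, bx), min(ax + aw, bx + bw)
    # Check every point within the intersection bounds
    for y in range(top, bottom):
        for x in range(left, right):
            alpha_a = data_a[(x - ax) + (y - ay) * aw]
            alpha_b = data_b[(x - bx) + (y - by) * bw]
            if alpha_a != 0 and alpha_b != 0:
                return True
    return False
```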

In step 6, the first part is to check whether the ball is selected. If yes, then Selected will be true. The second part is about reading the on-going gesture; if the gesture type is FreeDrag, we will update the position of the ball and its bounding box. The third part calls the IntersectPixels() method to detect the pixel-by-pixel collision.

Implementing BoundingBox collision detection in a 3D game

Regardless of whether you are programming 2D or 3D games, collision detection based on bounding boxes is straightforward, simple, and easy to understand. You can imagine that every object is individually covered by a box; the boxes move along with the corresponding objects, and when the boxes collide, the objects collide too. These boxes are called BoundingBoxes. To compose a BoundingBox, you only need to go through all the points or vertices and find the minimum and maximum ones. After that, BoundingBox collision detection uses the min and max information of every BoundingBox to check whether the boxes overlap and make the collision decision. Even in a more accurate collision detection system, bounding box collision detection is performed first, before using a more precise but costly method. In this recipe, you will learn how to apply the technique to a simple game.
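The min/max construction and the overlap test described above can be sketched in a few lines (Python, illustrative only; the recipe itself uses XNA's BoundingBox type):

```python
def bounding_box(points):
    """Axis-aligned bounding box of a vertex list: the component-wise
    minimum and maximum."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def boxes_intersect(a, b):
    """Two AABBs collide exactly when their ranges overlap on
    every axis."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i]
               for i in range(3))

box = bounding_box([(0, 0, 0), (1, 2, 3), (0.5, 1, 1)])
```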

How to do it…

The following steps will help you build your own BoundingBox information content processor and use the BoundingBox in your game:

  1. Create a Windows Phone Game project named BoundingBoxCollision and change Game1.cs to BoundingBoxCollisionGame.cs. Then, create a Content Pipeline Extension Library named MeshVerticesProcessor and replace the ContentProcessor1.cs with MeshVerticesProcessor.cs. We create the content pipeline processor for processing and extracting the BoundingBox information from the model objects before running the game. This will accelerate the game loading speed because your application won’t need to do this work again and again. After that, add the model file BigBox.FBX to the content project.
  2. Next, we need to define the MeshVerticesProcessor class in MeshVerticesProcessor.cs of the MeshVerticesProcessor project. Extend the MeshVerticesProcessor class from ModelProcessor, because we need the model vertices information based on the original model.
    [code]
    [ContentProcessor]
public class MeshVerticesProcessor : ModelProcessor
    [/code]
  3. Add the Dictionary object in the class field.
    [code]
    Dictionary<string, List<Vector3>> tagData =
    new Dictionary<string, List<Vector3>>();
    [/code]
  4. Define the Process() method in the MeshVerticesProcessor class:
    [code]
    // The main method in charge of processing the content.
    public override ModelContent Process(NodeContent input,
    ContentProcessorContext context)
    {
    FindVertices(input);
    ModelContent model = base.Process(input, context);
    model.Tag = tagData;
    return model;
    }
    [/code]
  5. Define the FindVertices() method in the MeshVerticesProcessor class:
    [code]
    // Extracting a list of all the vertex positions in
    // a model.
    void FindVertices(NodeContent node)
    {
    // Transform the current NodeContent to MeshContent
    MeshContent mesh = node as MeshContent;
    if (mesh != null)
    {
    string meshName = mesh.Name;
    List<Vector3> meshVertices = new List<Vector3>();
    // Look up the absolute transform of the mesh.
    Matrix absoluteTransform = mesh.AbsoluteTransform;
    // Loop over all the pieces of geometry in the mesh.
    foreach (GeometryContent geometry in mesh.Geometry)
    {
    // Loop over all the indices in this piece of
    // geometry. Every group of three indices
    // represents one triangle.
    foreach (int index in geometry.Indices)
    {
    // Look up the position of this vertex.
    Vector3 vertex =
    geometry.Vertices.Positions[index];
    // Transform from local into world space.
    vertex = Vector3.Transform(vertex,
    absoluteTransform);
    // Store this vertex.
    meshVertices.Add(vertex);
    }
    }
    tagData.Add(meshName, meshVertices);
    }
    // Recursively scan over the children of this node.
    foreach (NodeContent child in node.Children)
    {
    FindVertices(child);
    }
    }
    [/code]
  6. Build the MeshVerticesProcessor project. Add a reference to MeshVerticesProcessor.dll in the content project and change the Content Processor of BigBox.FBX to MeshVerticesProcessor, as shown in the following screenshot:
    Content Processor of BigBox
  7. From this step, we will begin to draw the two boxes on screen and detect the bounding box collision between them in the BoundingBoxCollisionGame class in BoundingBoxCollisionGame.cs of the BoundingBoxCollision project. First, add the following lines to the class field:
    [code]
    // The sprite font for drawing collision state
    SpriteFont font;
    // Model box A and B
    Model modelBoxA;
    Model modelBoxB;
    // The world transformation of box A and B
    Matrix worldBoxA;
    Matrix worldBoxB;
    // BoundingBox of model A and B
    BoundingBox boundingBoxA;
    BoundingBox boundingBoxB;
    // The bounding box stores the transformed boundingBox
    BoundingBox boundingBox;
    // Camera
    Vector3 cameraPosition;
    Matrix view;
    Matrix projection;
    // Hit regions
    Rectangle LeftHitRegion;
    Rectangle RightHitRegion;
    // Collided state
    bool Collided;
    [/code]
  8. Initialize the world matrix of box A and B, the camera, and the left and right hit regions. Paste the following code into the Initialize() method in the BoundingBoxCollisionGame class:
    [code]
    // Translate the model box A and B
    worldBoxA = Matrix.CreateTranslation(new Vector3(-10, 0, 0));
    worldBoxB = Matrix.CreateTranslation(new Vector3(10, 0, 0));
    // Initialize the camera
    cameraPosition = new Vector3(0, 10, 40);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000.0f);
    // Initialize the left and right hit regions
    Viewport viewport = GraphicsDevice.Viewport;
    LeftHitRegion = new Rectangle(0, 0, viewport.Width / 2,
    viewport.Height);
    RightHitRegion = new Rectangle(viewport.Width / 2, 0,
    viewport.Width / 2, viewport.Height);
    [/code]
  9. Load the box model and sprite font. Then extract the box model vertices. With the extracted vertices, create the bounding box for box A and B. Insert the following code into the LoadContent() method in the BoundingBoxCollisionGame class:
    [code]
    // Create a new SpriteBatch, which can be used to draw
    // textures.
    spriteBatch = new SpriteBatch(GraphicsDevice);
    // Load the sprite font
font = Content.Load<SpriteFont>("gameFont");
    // Load the box model
modelBoxA = Content.Load<Model>("BigBox");
modelBoxB = Content.Load<Model>("BigBox");
    // Get the vertices of box A and B
List<Vector3> boxVerticesA =
((Dictionary<string, List<Vector3>>)modelBoxA.Tag)
["Box001"];
List<Vector3> boxVerticesB =
((Dictionary<string, List<Vector3>>)modelBoxB.Tag)
["Box001"];
    // Create the bounding box for box A and B
    boundingBoxA = BoundingBox.CreateFromPoints(boxVerticesA);
    boundingBoxB = BoundingBox.CreateFromPoints(boxVerticesB);
    // Translate the bounding box of box B to designated position
    boundingBoxB.Min = Vector3.Transform(boundingBoxB.Min,
    worldBoxB);
    boundingBoxB.Max = Vector3.Transform(boundingBoxB.Max,
    worldBoxB);
    [/code]
  10. Move the box A and the corresponding bounding box and detect the bounding box collision between box A and box B. Add the following code to the Update() method in the BoundingBoxCollisionGame class:
    [code]
    // Interact with tapping
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    Point point = new Point(
    (int)touches[0].Position.X,(int)touches[0].Position.Y);
    // If the tapped position is inside the left hit region,
    // move the box A left
    if (LeftHitRegion.Contains(point))
    {
    worldBoxA.Translation -= new Vector3(1, 0, 0);
    }
    // If the tapped position is inside the right hit region,
    //move the box A right
    if (RightHitRegion.Contains(point))
    {
    worldBoxA.Translation += new Vector3(1, 0, 0);
    }
    }
    // Create a bounding box for the transformed bounding box A
    boundingBox = new BoundingBox(
    Vector3.Transform(boundingBoxA.Min, worldBoxA),
    Vector3.Transform(boundingBoxA.Max, worldBoxA));
    // Take the collision detection between the transformed
    // bounding box A and bounding box B
    if (boundingBox.Intersects(boundingBoxB))
    {
    Collided = true;
    }
    else
    {
    Collided = false;
    }
    [/code]
  11. Define the DrawModel() method.
    [code]
    // Draw model
    public void DrawModel(Model model, Matrix world, Matrix view,
    Matrix projection)
    {
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);
    foreach (ModelMesh mesh in model.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.PreferPerPixelLighting = true;
    effect.EnableDefaultLighting();
    effect.DiffuseColor = Color.White.ToVector3();
    effect.World = transforms[mesh.ParentBone.Index] *
    world;
    effect.View = view;
    effect.Projection = projection;
    }
    mesh.Draw();
    }
    }
    [/code]
  12. Draw the boxes on the Windows Phone 7 screen. Add the code to the Draw() method.
    [code]
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    // Draw the box model A and B
    DrawModel(modelBoxA, worldBoxA, view, projection);
    DrawModel(modelBoxB, worldBoxB, view, projection);
    // Draw the collision state
    spriteBatch.Begin();
spriteBatch.DrawString(font, "Collided: " + Collided.
ToString(), new Vector2(0, 0), Color.White);
    spriteBatch.End();
    [/code]
  13. Now, build and run the application. The application should run as shown in the following screenshots:
    BoundingBox collision detection

How it works…

In step 2, the [ContentProcessor] attribute is required. It turns the MeshVerticesProcessor class into a content processor, which will show up in the content project when you change the model's processor.

In step 3, the tagData receives the mesh name as the key and the corresponding mesh vertices as the value.

In step 4, the input—a NodeContent object—represents the root NodeContent of the input model. The key method called here is FindVertices(), which iterates over the meshes in the input model and stores each mesh's vertices in tagData under the mesh name.

In step 5, the first line casts the current NodeContent to MeshContent so that we can get the mesh vertices. If the current NodeContent is a MeshContent, we declare the meshName variable to hold the current mesh name, use meshVertices to save the mesh vertices, and store the absolute world transformation matrix in the absoluteTransform matrix using MeshContent.AbsoluteTransform. The following foreach loop iterates over every vertex of the model geometries, transforms it from object coordinates to world coordinates, and then stores the current vertex in meshVertices. When all the vertices of the current mesh are processed, we add meshVertices to the tagData dictionary with meshName as the key. The last part recursively processes the vertices of the child NodeContent objects of the current node.
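The traversal pattern in FindVertices() is independent of the content pipeline types. As a hedged sketch in Python (the dict-based node layout here is a hypothetical stand-in for NodeContent/MeshContent, not an XNA API), it looks like this:

```python
def find_vertices(node, tag_data):
    """Recursively collect world-space vertices per mesh, keyed by mesh name.

    node: a dict with an optional 'vertices' list of local-space points,
    a 'name', a 'transform' callable (local -> world), and 'children'.
    Mirrors the recipe's FindVertices(): transform each vertex, store the
    list under the mesh name, then recurse into the children.
    """
    if 'vertices' in node:
        transform = node.get('transform', lambda p: p)  # identity if absent
        tag_data[node['name']] = [transform(v) for v in node['vertices']]
    for child in node.get('children', []):
        find_vertices(child, tag_data)
    return tag_data
```

Running it over a small tree shows why the result can be dropped straight into a name-to-vertices dictionary, just like tagData in the processor.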

In step 7, the font is responsible for drawing the collision state on screen; modelBoxA and modelBoxB hold the two box models; worldBoxA and worldBoxB represent the world transformations of the two boxes; boundingBoxA and boundingBoxB store the bounding boxes around the two boxes; boundingBox will save the transformed bounding box A for collision detection; cameraPosition, view, and projection will be used to initialize the camera; LeftHitRegion and RightHitRegion define the left and right hit regions on the Windows Phone 7 screen.

In step 9, we read the vertices of box A and B from the Model.Tag property. Then, we use BoundingBox.CreateFromPoints() to create a bounding box from the extracted vertices of each box model. Notice that, so far, the generated bounding boxes are in the same place; we need to translate them to the places where the corresponding box models are located. Since box A will be the moving object, its bounding box is recomputed in real time in Update(); here, we just translate the bounding box of box B.
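Translating the box by transforming its two corners, as the recipe does with Vector3.Transform(), is exact only because worldBoxB is a pure translation. A small sketch (Python, illustrative names) of that step:

```python
def translate_box(box, offset):
    """Translate an AABB by moving both corners by the same offset vector.

    This mirrors transforming Min and Max by a pure translation matrix.
    For rotations or scales, the box would instead have to be rebuilt
    from all eight transformed corners to stay axis-aligned.
    """
    mn, mx = box
    return (tuple(mn[i] + offset[i] for i in range(3)),
            tuple(mx[i] + offset[i] for i in range(3)))
```

This is why the recipe can get away with transforming just Min and Max for box B, whose world matrix is Matrix.CreateTranslation(10, 0, 0).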

In step 10, the first part checks whether the tapped position is in the left or right hit region and moves box A accordingly. After that, we create a new bounding box representing the transformed bounding box A. Then, we perform bounding box collision detection between the transformed bounding box and boundingBoxB using the BoundingBox.Intersects() method. If a collision happens, the method returns true; otherwise, it returns false.

Implementing BoundingSphere collision detection in a 3D game

Compared with the bounding box, bounding sphere based collision detection is even faster. The technique only needs to check whether the distance between two sphere centers is less than or equal to the sum of their radii. In modern games, bounding sphere based collision detection is often preferred over the bounding box as a first-pass test. In this recipe, you will learn how to use the technique in an XNA application.
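The whole test reduces to one comparison. A minimal Python sketch (illustrative; XNA's BoundingSphere.Intersects() performs the equivalent check) that compares squared distances to avoid the square root:

```python
def spheres_intersect(center_a, radius_a, center_b, radius_b):
    """Two spheres collide when the distance between their centers does
    not exceed the sum of their radii. Comparing squared values avoids
    computing a square root."""
    dist_sq = sum((a - b) ** 2 for a, b in zip(center_a, center_b))
    return dist_sq <= (radius_a + radius_b) ** 2
```

Since only the center moves each frame, updating the test for a moving object is as cheap as the recipe's one-line boundingSphereA.Center assignment.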

How to do it…

Follow the steps below to master the technique of using BoundingSphere in your game:

  1. Create a Windows Phone Game project named BoundingSphereCollision and change Game1.cs to BoundingSphereCollisionGame.cs. Then, create a Content Pipeline Extension Library named MeshVerticesProcessor and replace the ContentProcessor1.cs with MeshVerticesProcessor.cs. After that, add the model file BallLowPoly.FBX to the content project.
  2. Define the MeshVerticesProcessor class in MeshVerticesProcessor.cs of the MeshVerticesProcessor project. The class is the same as the one mentioned in the last recipe Implementing BoundingBox collision detection in a 3D game. For a full explanation, please refer back to it.
  3. Build the MeshVerticesProcessor project. Add a reference to MeshVerticesProcessor.dll in the content project and change the Content Processor of BallLowPoly.FBX to MeshVerticesProcessor, as shown in the following screenshot:
    Content Processor of BallLowPoly.FBX
  4. From this step, we will begin to draw the two balls on screen and detect the bounding sphere collision between them in the BoundingSphereCollisionGame class in BoundingSphereCollisionGame.cs of the BoundingSphereCollision project. First, add the following lines to the class field:
    [code]
    // The sprite font for drawing collision state
    SpriteFont font;
    // Model ball A and B
    Model modelBallA;
    Model modelBallB;
    // The world transformation of ball A and B
    Matrix worldBallA;
    Matrix worldBallB;
    // BoundingSphere of model A and B
    BoundingSphere boundingSphereA;
    BoundingSphere boundingSphereB;
    // Camera
    Vector3 cameraPosition;
    Matrix view;
    Matrix projection;
    // Hit regions
    Rectangle LeftHitRegion;
    Rectangle RightHitRegion;
    // Collided state
    bool Collided;
    [/code]
  5. Initialize the world matrices of ball A and B, the camera, and the left and right hit regions. Paste the following code into the Initialize() method:
    [code]
    // Translate the model ball A and B
    worldBallA = Matrix.CreateTranslation(new Vector3(-10, 0, 0));
    worldBallB = Matrix.CreateTranslation(new Vector3(10, 0, 0));
    // Initialize the camera
    cameraPosition = new Vector3(0, 10, 40);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000.0f);
    // Initialize the left and right hit regions
    Viewport viewport = GraphicsDevice.Viewport;
    LeftHitRegion = new Rectangle(0, 0, viewport.Width / 2,
    viewport.Height);
    RightHitRegion = new Rectangle(viewport.Width / 2, 0,
    viewport.Width / 2, viewport.Height);
    [/code]
  6. Move the ball A and the corresponding bounding sphere. Then, detect the bounding sphere collision between ball A and B. Add the following code to the Update() method:
    [code]
    // Check the tapped position
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    Point point = new Point((int)touches[0].Position.X,
    (int)touches[0].Position.Y);
    // If the tapped position is inside the left hit region,
    // move ball A to left
    if (LeftHitRegion.Contains(point))
    {
    worldBallA.Translation -= new Vector3(1, 0, 0);
    }
// If the tapped position is inside the right hit region,
// move the ball A right
    if (RightHitRegion.Contains(point))
    {
    worldBallA.Translation += new Vector3(1, 0, 0);
    }
    }
    // Update the position of bounding sphere A
    boundingSphereA.Center = worldBallA.Translation;
    // Detect collision between bounding sphere A and B
    if (boundingSphereA.Intersects(boundingSphereB))
    {
    Collided = true;
    }
    else
    {
    Collided = false;
    }
    [/code]
  7. Define the DrawModel() method.
    [code]
    // Draw model
    public void DrawModel(Model model, Matrix world, Matrix view,
    Matrix projection)
    {
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);
    foreach (ModelMesh mesh in model.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.PreferPerPixelLighting = true;
    effect.EnableDefaultLighting();
    effect.DiffuseColor = Color.White.ToVector3();
    effect.World = transforms[mesh.ParentBone.Index] *
    world;
    effect.View = view;
    effect.Projection = projection;
    }
    mesh.Draw();
    }
    }
    [/code]
  8. Draw the spheres on screen. Add the following code to the Draw() method.
    [code]
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    // Draw the ball model A and B
    DrawModel(modelBallA, worldBallA, view, projection);
    DrawModel(modelBallB, worldBallB, view, projection);
    // Draw the collision state
    spriteBatch.Begin();
spriteBatch.DrawString(font, "Collided:" + Collided.ToString(),
    new Vector2(0, 0), Color.White);
    spriteBatch.End();
    [/code]
  9. Now, build and run the application. The application should run as shown in the following screenshots:
    BoundingSphere collision detection in a 3D game

How it works…

In step 4, the font is responsible for drawing the collision state on screen; modelBallA and modelBallB hold the two ball models; worldBallA and worldBallB represent the world transformations of the two balls; boundingSphereA and boundingSphereB store the bounding spheres around the two balls; cameraPosition, view, and projection will be used to initialize the camera; LeftHitRegion and RightHitRegion define the left and right hit regions on the Windows Phone 7 screen.

In step 6, the first part checks whether the tapped position is in the left or the right hit region and moves ball A. After that, we update the center position of bounding sphere A with the newest position of ball A. Then, we perform bounding sphere collision detection between boundingSphereA and boundingSphereB using the BoundingSphere.Intersects() method. If a collision happens, the method returns true; otherwise, it returns false.

Implementing ray-triangle collision detection

Ray-triangle collision gives very accurate collision detection in games. Based on the returned distance from the ray's start position to the triangle, it is easy to decide whether a collision occurs. As you might know, all the models in 3D games, whether static or dynamic, are made of triangles. The ray is like a bullet fired from a gun, trailing a straight, thin line behind it; once the bullet hits a triangle of an object, a collision happens. Many ray-triangle intersection methods are available; in this recipe, you will learn how to implement the method with the best time and space complexity, to make your game run faster with less memory usage.

Getting ready…

The ray-triangle collision detection method provides more accurate results than methods using a BoundingBox or BoundingSphere. Before the efficient ray-triangle intersection method invented by Möller and Trumbore, most existing methods first computed the intersection point between the ray and the triangle's plane, and then projected the intersection point onto an axis-aligned plane to determine whether it lies inside the 2D projected triangle. These kinds of methods need the plane equation of each triangle, based on a normal computed every frame; for a triangle mesh, this costs considerable memory and CPU resources. However, the method from Möller and Trumbore requires only two cross products and also gives us the intersection point.

As a detailed explanation, a point T in a triangle is represented by Barycentric coordinates rather than Cartesian coordinates. Since Barycentric coordinates are the most suitable coordinate system to describe a point's position in a triangle, the point can be represented by the following formula:

T(u, v) = (1 − u − v)·P0 + u·P1 + v·P2, where u ≥ 0, v ≥ 0, and u + v ≤ 1

The u and v coordinates—two of the Barycentric coordinates—are also used in texture mapping, normal interpolation like the Phong lighting algorithm, and color interpolation.

For a ray with origin O and normalized direction D, a point on the ray is given by:

R(t) = O + t·D

The intersection point between the ray and the triangle is a point that lies both on the ray and in the triangle. To find it, we equate the two expressions:

O + t·D = (1 − u − v)·P0 + u·P1 + v·P2

We rearrange the previous equation into matrix notation:

[−D, P1 − P0, P2 − P0] · [t, u, v]^T = O − P0

The previous equation means that the distance t from the ray origin to the intersection point and the Barycentric coordinates (u, v) can be found by solving the system. If

[−D, P1 − P0, P2 − P0]

is a matrix M, our job is to find M^-1. The equation will be:

[t, u, v]^T = M^-1 · (O − P0)

Now, let:

E1 = P1 − P0, E2 = P2 − P0, S = O − P0

With Cramer's rule, we find the following solution:

[t, u, v]^T = (1 / det(−D, E1, E2)) · [det(S, E1, E2), det(−D, S, E2), det(−D, E1, S)]^T

From linear algebra, the determinants are computed using the triple product:

det(A, B, C) = A · (B × C) = (A × B) · C

The solution can be rewritten as follows:

[t, u, v]^T = (1 / (P · E1)) · [Q · E2, P · S, Q · D]^T, where P = D × E2 and Q = S × E1

The following pseudo code describing the algorithm comes from the solution of the previous equation:

[code]
E1 = P1 - P0;
E2 = P2 - P0;
P = D x E2;
determinant = P . E1;
if (determinant > -epsilon && determinant < epsilon)
    return null;
inverse = 1 / determinant;
S = O - P0;
u = (P . S) * inverse;
if (u < 0 || u > 1) return null;
Q = S x E1;
v = (Q . D) * inverse;
if (v < 0 || u + v > 1) return null;
t = (Q . E2) * inverse;
if (t < 0) return null;
return (t, u, v);
[/code]

The XNA code to implement the pseudo code should be:

[code]
public void RayIntersectsTriangle(ref Ray ray,
ref Vector3 vertex1,
ref Vector3 vertex2,
ref Vector3 vertex3, out float? result)
{
// Compute vectors along two edges of the triangle.
Vector3 edge1, edge2;
Vector3.Subtract(ref vertex2, ref vertex1, out edge1);
Vector3.Subtract(ref vertex3, ref vertex1, out edge2);
// Compute the determinant.
Vector3 directionCrossEdge2;
Vector3.Cross(ref ray.Direction, ref edge2,
out directionCrossEdge2);
float determinant;
Vector3.Dot(ref edge1, ref directionCrossEdge2,
out determinant);
// If the ray is parallel to the triangle plane, there is
// no collision.
if (determinant > -float.Epsilon &&
determinant < float.Epsilon)
{
result = null;
return;
}
float inverseDeterminant = 1.0f / determinant;
// Calculate the U parameter of the intersection point.
Vector3 distanceVector;
Vector3.Subtract(ref ray.Position, ref vertex1,
out distanceVector);
float triangleU;
Vector3.Dot(ref distanceVector, ref directionCrossEdge2,
out triangleU);
triangleU *= inverseDeterminant;
// Make sure it is inside the triangle.
if (triangleU < 0 || triangleU > 1)
{
result = null;
return;
}
// Calculate the V parameter of the intersection point.
Vector3 distanceCrossEdge1;
Vector3.Cross(ref distanceVector, ref edge1,
out distanceCrossEdge1);
float triangleV;
Vector3.Dot(ref ray.Direction, ref distanceCrossEdge1,
out triangleV);
triangleV *= inverseDeterminant;
// Make sure it is inside the triangle.
if (triangleV < 0 || triangleU + triangleV > 1)
{
result = null;
return;
}
// Compute the distance along the ray to the triangle.
float rayDistance;
Vector3.Dot(ref edge2, ref distanceCrossEdge1,
out rayDistance);
rayDistance *= inverseDeterminant;
// Is the triangle behind the ray origin?
if (rayDistance < 0)
{
result = null;
return;
}
result = rayDistance;
}
[/code]

As the parameters of the RayIntersectsTriangle() method, vertex1, vertex2, and vertex3 are the three points of a triangle; ray is an object of the XNA built-in Ray type, which specifies the origin point and the ray direction; result returns the distance between the ray's start point and the intersection point. In the body of the method, the first few lines compute the two triangle edges; then the code uses ray.Direction and edge2 to compute the cross product directionCrossEdge2, which represents P in the equation P = D × E2. Next, we take the dot product of directionCrossEdge2 with edge1 to compute the determinant, following the equation determinant = P · E1. The following if statement validates the determinant: if the value is close to 0, the ray is parallel to the triangle plane and the method returns null. Then, we use inverseDeterminant to represent the fraction 1 / determinant.

Now you have the denominator of the fractions in Cramer's rule. With this value, u, v, and t can be solved as in the solution equation. Following the pseudo code, the next step is to calculate S with the equation S = O − P0; here, ray.Position is O, vertex1 is P0, and distanceVector is S. Based on the S value, you can get the u value from the equation u = (P · S) × inverse: the code calls the Vector3.Dot() method on directionCrossEdge2 and distanceVector for the intermediate triangleU, and then multiplies the returned value by inverseDeterminant for the final triangleU. The v value, triangleV, comes from the equation v = (Q · D) × inverse, in which Q = S × E1. Similarly, you can obtain the t value, rayDistance, from the equation t = (Q · E2) × inverse.
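To check the algebra end to end, the same algorithm can be sketched standalone in Python (an illustrative port of the C# method above, with inlined vector helpers rather than XNA's Vector3):

```python
def ray_intersects_triangle(origin, direction, v0, v1, v2, epsilon=1e-9):
    """Moller-Trumbore test: returns distance t along the ray, or None."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)          # P = D x E2
    det = dot(p, e1)                  # determinant = P . E1
    if -epsilon < det < epsilon:
        return None                   # ray parallel to the triangle plane
    inv = 1.0 / det
    s = sub(origin, v0)               # S = O - P0
    u = dot(p, s) * inv
    if u < 0 or u > 1:
        return None
    q = cross(s, e1)                  # Q = S x E1
    v = dot(q, direction) * inv
    if v < 0 or u + v > 1:
        return None
    t = dot(q, e2) * inv              # t = (Q . E2) * inverse
    return t if t >= 0 else None      # triangle behind the origin: no hit
```

For a ray at (0.25, 0.25, 1) pointing down the negative Z-axis toward the triangle (0,0,0)-(1,0,0)-(0,1,0), the function returns t = 1.0, matching the geometric distance to the plane z = 0.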

How to do it…

Now, let’s look into an example for a direct experience:

  1. Create a Windows Phone Game project named RayTriangleCollisionGame, change Game1.cs to RayTriangleCollisionGame.cs. Then, add the gameFont.spritefont file to the content project.
  2. Declare the necessary variables of the RayTriangleCollisionGame class. Add the following lines to the class field:
    [code]
    // SpriteFont draw the instructions
    SpriteFont font;
    // Triangle vertex array
    VertexPositionColor[] verticesTriangle;
    VertexBuffer vertexBufferTriangle;
    // Line vertex array
    VertexPositionColor[] verticesLine;
    VertexBuffer vertexBufferLine;
    Matrix worldRay = Matrix.CreateTranslation(-10, 0, 0);
    // Camera view matrix
    Matrix view;
    // Camera projection matrix
    Matrix projection;
    // Ray object
    Ray ray;
    // Distance
    float? distance;
    // Left region on screen
    Rectangle LeftRectangle;
    // Right region on screen
    Rectangle RightRectangle;
    // Render state
    RasterizerState Solid = new RasterizerState()
    {
    FillMode = FillMode.Solid,
    CullMode = CullMode.None
    };
    [/code]
  3. Initialize the camera and the hit regions. Insert the code into the Initialize()method:
    [code]
    view = Matrix.CreateLookAt(new Vector3(20, 5, 20),
    Vector3.Zero, Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio, 0.1f, 1000.0f);
    LeftRectangle = new Rectangle(0, 0,
    GraphicsDevice.Viewport.Bounds.Width / 2,
    GraphicsDevice.Viewport.Bounds.Height);
    RightRectangle = new Rectangle(
    GraphicsDevice.Viewport.Bounds.Width / 2, 0,
    GraphicsDevice.Viewport.Bounds.Width / 2,
    GraphicsDevice.Viewport.Bounds.Height);
    [/code]
  4. Initialize the vertices and vertex buffer of the triangle and the line. Then, instance the ray object. Add the following code to the LoadContent() method:
    [code]
    // Load the font
font = Content.Load<SpriteFont>("gameFont");
    // Create a triangle
    verticesTriangle = new VertexPositionColor[3];
    verticesTriangle[0] = new VertexPositionColor(
    new Vector3(0, 0, 0), Color.Green);
    verticesTriangle[1] = new VertexPositionColor(
    new Vector3(10, 0, 0), Color.Green);
    verticesTriangle[2] = new VertexPositionColor(
    new Vector3(5, 5, 0), Color.Green);
    // Allocate the vertex buffer for triangle vertices
    vertexBufferTriangle = new VertexBuffer(
    GraphicsDevice, VertexPositionColor.VertexDeclaration, 3,
    BufferUsage.WriteOnly);
    // Set the triangle vertices to the vertex buffer of triangle
    vertexBufferTriangle.SetData(verticesTriangle);
    // Create the line
    verticesLine = new VertexPositionColor[2];
    verticesLine[0] = new VertexPositionColor(
    new Vector3(5, 2.5f, 10), Color.Red);
    verticesLine[1] = new VertexPositionColor(
    new Vector3(5, 2.5f, -10), Color.Red);
    // Allocate the vertex buffer for line points
    vertexBufferLine = new VertexBuffer(GraphicsDevice,
    VertexPositionColor.VertexDeclaration, 2,
    BufferUsage.WriteOnly);
    // Set the line points to the vertex buffer of line
    vertexBufferLine.SetData(verticesLine);
    // Compute the ray direction
Vector3 rayDirection = verticesLine[1].Position -
verticesLine[0].Position;
    rayDirection.Normalize();
    // Initialize the ray with position and direction
    ray = new Ray(verticesLine[0].Position, rayDirection);
    // Transform the ray
    ray.Position = Vector3.Transform(ray.Position, worldRay);
    [/code]
  5. Perform the ray-triangle collision detection. Paste the following code into the Update() method:
    [code]
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    Point point = new Point((int)touches[0].Position.X,
    (int)touches[0].Position.Y);
    if (LeftRectangle.Contains(point))
    {
    worldRay *= Matrix.CreateTranslation(
    new Vector3(-1, 0, 0));
    ray.Position.X -= 1;
    }
    if (RightRectangle.Contains(point))
    {
    worldRay *= Matrix.CreateTranslation(
    new Vector3(1, 0, 0));
    ray.Position.X += 1;
    }
    }
    RayIntersectsTriangle(
    ref ray,
    ref verticesTriangle[0].Position,
    ref verticesTriangle[1].Position,
    ref verticesTriangle[2].Position,
    out distance);
    if (distance != null)
    {
    verticesTriangle[0].Color = Color.Yellow;
    verticesTriangle[1].Color = Color.Yellow;
    verticesTriangle[2].Color = Color.Yellow;
    }
    else
    {
    verticesTriangle[0].Color = Color.Green;
    verticesTriangle[1].Color = Color.Green;
    verticesTriangle[2].Color = Color.Green;
    }
    vertexBufferTriangle.SetData(verticesTriangle);
    [/code]
  6. Define the DrawColoredPrimitives() method in the RayTriangleCollisionGame class, for drawing the line and triangle on the Windows Phone 7 screen.
    [code]
    public void DrawColoredPrimitives(VertexBuffer buffer,
    PrimitiveType primitiveType, int primitiveCount,
    Matrix world)
    {
    BasicEffect effect = new BasicEffect(GraphicsDevice);
    effect.VertexColorEnabled = true;
    effect.World = world;
    effect.View = view;
    effect.Projection = projection;
    effect.CurrentTechnique.Passes[0].Apply();
    GraphicsDevice.SetVertexBuffers(buffer);
    GraphicsDevice.DrawPrimitives(primitiveType, 0,
    primitiveCount);
    }
    [/code]
  7. Draw the ray and triangle on the Windows Phone 7 screen. Insert the following code into the Draw()method:
    [code]
    GraphicsDevice.RasterizerState = Solid;
    // Draw the triangle
    DrawColoredPrimitives(vertexBufferTriangle,
    PrimitiveType.TriangleList, 1, Matrix.Identity);
    // Draw the line which visualizes the ray
    DrawColoredPrimitives(vertexBufferLine, PrimitiveType.LineList,
    1, worldRay);
    spriteBatch.Begin();
spriteBatch.DrawString(font,
"Tap the Left or Right Part of\nScreen to Move the ray",
new Vector2(0, 0), Color.White);
    spriteBatch.End();
    [/code]
  8. Now, build and run the application. It runs as shown in the following screenshots:
    ray-triangle

How it works…

In step 2, the font object draws the instructions for the example on screen; verticesTriangle is the vertex array of the test triangle; vertexBufferTriangle is the vertex buffer that stores the triangle vertices; verticesLine holds the two points of the line that visually represents the test ray; the matrix worldRay stands for the ray's world transformation. The two matrices view and projection define the camera; the ray object is the actual ray being tested; distance holds the distance from the ray's origin to the ray-triangle intersection point; LeftRectangle and RightRectangle are the hit regions for moving the ray to the left or to the right. The Solid variable specifies the rasterizer state of the graphics device.

In step 3, the LeftRectangle occupies the left half of the screen; the RightRectangle takes up the right half of the screen.

In step 4, the first part initializes the three triangle vertices with position and color. The original color is green; when the ray collides with the triangle, the color changes to yellow. We then copy the triangle vertices into the triangle vertex buffer. The second part initializes the line that visualizes the ray and puts its data into the line vertex buffer. The final part defines the ray object with a position and a direction.

In step 5, the code before the RayIntersectsTriangle() call checks whether the tapped position lies in the LeftRectangle or the RightRectangle. When a valid tap takes place, the ray moves along the X-axis by one unit, and we then call RayIntersectsTriangle() to determine whether the ray and the triangle collide. If the returned distance is not null, a collision happened, and we change the color of the triangle vertices to Color.Yellow; otherwise, the color is restored to Color.Green. The RayIntersectsTriangle() method itself was discussed at the beginning of this recipe; its definition is inserted into the RayTriangleCollisionGame class.

In step 6, inside the DrawColoredPrimitives() method, the effect receives the view and projection matrices for the camera and the world matrix for the world position and transformation. effect.VertexColorEnabled is set to true so that the vertices carry color. We then apply the first pass of the current technique of the BasicEffect, and GraphicsDevice.DrawPrimitives() draws the primitives from the beginning of the vertex array in the vertex buffer.

In step 7, the DrawColoredPrimitives() method draws the triangle; it receives PrimitiveType.TriangleList as the primitive type, and the 1 is the number of primitives to draw. When drawing the line, the PrimitiveType is LineList.

IronPython Interacting with COM Objects

An Overview of COM Access Differences with Python

COM access is an area where IronPython and Python take completely different approaches. In fact, it’s safe to say that any Python code you want to use definitely won’t work in IronPython. Python developers normally rely on a library such as Python for Windows Extensions (http://sourceforge.net/projects/pywin32/). This is a library originally created by Mark Hammond (http://starship.python.net/crew/mhammond/win32/) that includes not only COM support but also a really nice Python editor. You can see a basic example of using this library to access COM at http://www.boddie.org.uk/python/COM.html. Even if you download the required library and try to follow the tutorial, you won’t get past step 1. The tutorial works fine with standard Python, but doesn’t work at all with IronPython.

It’s important to remember that IronPython is a constantly moving target. The developers who support IronPython constantly come out with new features and functionality, as do the third parties that support it. You may find at some point that there’s a COM interoperability solution that does work for both Python and IronPython. The solution doesn’t exist today, but there’s always hope for tomorrow. If you do encounter such a solution, please be sure to contact me at [email protected]

Fortunately, IronPython developers aren’t left out in the cold. COM support is built right into IronPython in the form of the .NET Framework. An IronPython developer uses the same techniques as a C# or a Visual Basic.NET developer uses to access COM — at least at a code level.

When you work with COM in Visual Studio in either a C# or Visual Basic.NET project, the IDE does a lot of the work for you. If you want to use a COM component in your application, you right-click References in Solution Explorer and choose Add Reference from the context menu. At this point, you see the Add Reference dialog box where you choose the COM tab shown in Figure 9-1.

When you highlight an item, such as the Windows Media Player, and click OK, the IDE adds the COM component to the References folder of Solution Explorer, as shown in Figure 9-2. The IDE writes code for you in the background that adds the COM component and makes it accessible. You’ll find this code in the .CSProj file and it looks something like this:

[code]
<COMReference Include="MediaPlayer">
<Guid>{22D6F304-B0F6-11D0-94AB-0080C74C7E95}</Guid>
<VersionMajor>1</VersionMajor>
<VersionMinor>0</VersionMinor>
<Lcid>0</Lcid>
<WrapperTool>tlbimp</WrapperTool>
<Isolated>False</Isolated>
<EmbedInteropTypes>True</EmbedInteropTypes>
</COMReference>
[/code]

Figure 9-1: The Add Reference dialog box provides you with a list of COM components you can use.

In addition, the IDE creates Interop.MediaPlayer.DLL, which resides in the project’s obj\x86\Debug or obj\x86\Release folder. This interoperability (interop for short) assembly makes it easy for you to access the COM component features.

Figure 9-2: Any reference you add appears in the References folder of Solution Explorer.

Of course, if the COM component you want to use is actually a control, you right-click the Toolbox instead and select Choose Items from the context menu. The COM Components tab looks much like the one shown in Figure 9-3.

In this case, check the controls you want to use and click OK. Again, the IDE does some work for you in the background to make the control accessible and usable. For example, it creates the same interop assembly as it would for a reference. You’ll see the control in the Toolbox, as shown in Figure 9-4.

The tasks that the IDE performs for you as part of adding a reference or Toolbox item when working with C# or Visual Basic.NET are manual tasks when working with IronPython. As you might imagine, all of this manual labor makes IronPython harder to use with COM than when you work with Python. While a Python developer simply imports a module and then writes a little specialized code, you’re saddled with creating interop assemblies and jumping through coding hoops.

Figure 9-3: COM components and controls can also appear in the Choose Toolbox Items dialog box.

Figure 9-4: The control or controls you selected appear in the Toolbox.

You do get something for the extra work, though. IronPython provides considerably more flexibility than Python does and you can use IronPython in more places. For example, you might find it hard to access Word directly in Python. The bottom line is that IronPython and Python are incompatible when it comes to COM support, so you can’t use all the online Python sources of information you normally rely on when performing a new task.

Choosing a Binding Technique

Before you can use a COM component, you must bind to it (create a connection to it). The act of binding gives you access to an instance of the component. You use binding to work with COM because, in actuality, you’re taking over another application. For example, you can use COM to create a copy of Word, do some work with it, save the resulting file, and then close Word — all without user interaction. A mistake that many developers make is thinking of COM as just another sort of class, but it works differently and you need to think about it differently. For the purposes of working with COM in IronPython, the act of binding properly is one of the more important issues. The following sections describe binding in further detail.

Understanding Early and Late Binding

When you work with a class, you create an instance of the class, set the resulting object’s properties, and then use methods to perform a particular task. COM lets you perform essentially the same set of steps in a process called early binding. When you work with early binding, you define how to access the COM object during design time. In order to do this, you instantiate an object based on the COM class.

These sections provide an extremely simplified view of COM. You can easily become mired in all kinds of details when working with COM because COM has been around for so long. For example, COM supports multiple interface types, which in turn determine the kind of binding you can perform. This chapter looks at just the information you need to work with COM from IronPython. If you want a better overview of COM, check the site at http://msdn.microsoft.com/library/ms809980.aspx. In fact, you can find an entire list of COM topics at http://msdn.microsoft.com/library/ms877981.aspx.

The COM approach relies on a technique called a virtual table (vtable) — essentially a list of interfaces that you can access, with IUnknown as the interface that’s common to all COM components. Your application gains access to the IUnknown interface and then calls the QueryInterface() method to obtain a list of other interfaces that the component supports (you can read more about this method at http://msdn.microsoft.com/library/ms682521.aspx). Using this approach means that your application can understand a component without really knowing anything about it at the outset.

It’s also possible to tell COM to create an instance of an object after the application is already running. This kind of access is called late binding because you bind after the application starts. In order to support late binding, a COM component must support the IDispatch interface. This interface lets you create the object using CreateObject(). Visual Basic was the first language product to rely on late binding. You can read more about IDispatch at http://msdn.microsoft.com/library/ms221608.aspx.

Late binding also offers the opportunity to gain access to a running copy of a COM component. For example, if the system currently has a copy of Excel running, you can access that copy, rather than create a new Excel object. In this case, you use GetObject() instead of CreateObject() to work with the object. If you call GetObject() where there isn’t any copy of the component already executing, you get an error message — Windows doesn’t automatically start a new copy of the application for you.
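In IronPython, the Visual Basic-style CreateObject() and GetObject() calls map onto .NET equivalents. The following is a hypothetical sketch (it runs only under IronPython on Windows and assumes Excel is installed; the variable names are ours):

```python
# Hypothetical IronPython sketch (Windows only): late binding through .NET.
# Activator.CreateInstance plays the role of CreateObject();
# Marshal.GetActiveObject plays the role of GetObject().
from System import Activator, Type
from System.Runtime.InteropServices import Marshal, COMException

excel_type = Type.GetTypeFromProgID('Excel.Application')
excel = Activator.CreateInstance(excel_type)   # start a new instance

try:
    # Attach to a copy of Excel that is already running.
    running = Marshal.GetActiveObject('Excel.Application')
except COMException:
    # GetActiveObject raises when no copy is executing --
    # Windows doesn't start a new one for you.
    running = None
```

This sketch will not run under standard Python or outside Windows; it only illustrates the shape of the late-binding calls described above.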

If a COM component supports both the vtable and IDispatch technologies, then it has a dual interface that works with any current application language. Most COM components today are dual interface because adding both technologies is relatively easy and developers want to provide the greatest exposure for their components. However, it’s always a good idea to consider the kind of binding that your component supports. You can read more about dual interfaces at http://msdn.microsoft.com/library/ekfyh289.aspx.

Using Early Binding

As previously mentioned, using early binding means creating a reference to the COM component and then using that reference to interact with the component. IronPython doesn’t support the standard methods of early binding that you might have used in other languages. What you do instead is create an interoperability DLL and then import that DLL into your application. The “Defining an Interop DLL” section of the chapter describes this process in considerably more detail. Early binding provides the following benefits:

  • Faster execution: Generally, your application will execute faster if you use early binding because you rely on compiled code for the interop assembly. However, you won’t get the large benefits in speed that you see when working with C# or Visual Basic.NET because IronPython itself is interpreted.
  • Easier debugging: In most cases, using early binding reduces the complexity of your application, making it easier to debug. In addition, because much of the access code for the COM component resides in the interop assembly, you won’t have to worry about debugging it.
  • Fuller component access: Even though both early and late binding provide access to the component interfaces, trying to work through those interfaces in IronPython is hard. Using early binding provides you with tools that you can use to explore the interop assembly, and therefore discover more about the component before you use it.
  • Better access to enumerations and constants: Using early binding provides you with access to features that you might not be able to access when using late binding. In some cases, IronPython will actually hide features such as enumerations or constants when using late binding.
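The early-bound pattern just described can be sketched in a few lines. This is a hypothetical example, not the chapter’s own code: it assumes you have already built the WMPLib.dll interop assembly (as described in “Defining an Interop DLL”), that the coclass wrapper is named WindowsMediaPlayerClass, and that you run it under IronPython on Windows:

```python
# Hypothetical IronPython sketch: early binding against a prebuilt interop DLL.
import clr
clr.AddReferenceToFile('WMPLib.dll')   # load the TLbImp/AxImp output
import WMPLib

# WindowsMediaPlayerClass is assumed to be the coclass wrapper
# inside the interop assembly.
player = WMPLib.WindowsMediaPlayerClass()
print(player.versionInfo)              # typed, early-bound member access
```

Because the interop assembly is a compiled .NET wrapper, members such as versionInfo resolve at design time, which is what makes the debugging and discovery benefits above possible.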

Using Late Binding

When using late binding, you create a connection to the COM component at run time by creating a new object or reusing a running object. Some developers prefer this kind of access because it’s less error prone than early binding where you might not know about runtime issues during design time. Here are some other reasons that you might use late binding.

  • More connectivity options: You can use late binding to create a connection to a new instance of a COM component (see the “Performing Late Binding Using Activator.CreateInstance()” section of this chapter) or a running instance of the COM component.
  • Fewer modules: When you use late binding, you don’t need an interop assembly for each of the COM components you want to use, which decreases the size and complexity of your application.
  • Better version independence: Late binding relies on registry entries to make the connection. Consequently, when Windows looks up the string you use to specify the application, it looks for any application that satisfies that string. If you specify the Microsoft Excel 9.0 Object Library COM component (Office 2000 specific), Windows will substitute any newer version of Office on the system for the component you requested.
  • Fewer potential compatibility issues: Some environments don’t work well with interop assemblies. For example, you might be using IronPython within a Web-based application. In this case, the client machine would already have to have the interop assembly, too, and it probably doesn’t. In this case, using late binding allows your application to continue working when early binding would fail.

Defining an Interop DLL

Before you can do much with COM, you need to provide some means for .NET (managed code) and the component (native code) to talk. The wrapper code that marshals data from one environment to another, and that translates calls from one language to the other, is an interoperability (interop) assembly, which always appears as a DLL. Fortunately, you don’t have to write this code by hand because the task is somewhat mundane. Microsoft was able to automate the process required to create an interop DLL.

Of course, Microsoft couldn’t make the decision straightforward or simple. You use different utilities for controls and components. The Type Library Import (TLbImp) utility produces a DLL suitable for component work, while the ActiveX Import (AxImp) utility produces a pair of DLLs suitable for control work. In many cases, the decision is easy — a COM component that supports a visual interface should use AxImp. However, some COM components, such as Windows Media Player (WMP.DLL), are useful as either controls or components. The example in this chapter uses the control form because that’s the way you’ll use Windows Media Player most often, but it’s important to make the decision deliberately. The following sections describe how to use both the TLbImp and AxImp utilities.

Accessing the Visual Studio .NET Utilities

You want to create an interop assembly in the folder that you’ll use for your sample application. However, you also need access to the .NET utilities. The best way to gain this access is to open a Visual Studio command prompt by choosing Start ➪ Programs ➪ Microsoft Visual Studio 2010 ➪ Visual Studio Tools ➪ Visual Studio Command Prompt (2010). If you’re working with Vista or Windows 7, right-click the Visual Studio Command Prompt (2010) entry and choose Run As Administrator from the context menu to ensure you have the rights required to use the utilities. Windows will open a command prompt that provides the required access to the .NET utilities.

Understanding the Type Library Import Utility

Remember that you always use Type Library Import (TLbImp) for components, not for controls. Before you can use TLbImp, you need to know a bit more about it. Here’s the command line syntax for the tool:

[code]
TlbImp TypeLibName [Options]
[/code]

The TypeLibName argument is simply the filename of the COM component that you want to use to create an interop assembly. A COM component can have a number of file extensions, but the most common extensions are .DLL, .EXE, and .OCX.

The TypeLibName argument can specify a resource identifier when the library contains more than one resource. Simply follow the filename with a backslash and the resource number. For example, the command line TLbImp MyModule.DLL\1 would create an output assembly that contains only resource 1 in the MyModule.DLL file.

You can also include one or more options that modify the behavior of TLbImp. The following list describes these options.

  • /out:FileName: Provides the name of the file you want to produce as output. If you don’t provide this argument, the default is to add Lib to the end of the filename for the type library. For example, WMP.DLL becomes WMPLib.DLL.
  • /namespace:Namespace: Defines the namespace of the classes within the interop assembly. The default is to add Lib to the filename of the type library. For example, if the file has a name of WMP.DLL, the namespace is WMPLib.
  • /asmversion:Version: Specifies the file version number of the output assembly. This information appears on the Version tab of the file Properties dialog box shown in Figure 9-5. The default version number is 1.0.0.0.

You must specify a version number using dotted syntax. The four version number elements are: major version, minor version, build number, and revision number. For example, 1.2.3.4 would specify a major version number of 1, minor version number of 2, a build number of 3, and a revision number of 4.
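The dotted syntax is easy to pull apart programmatically. This small helper is plain Python and the function name is ours, not part of TLbImp:

```python
def parse_version(version):
    """Split a dotted assembly version such as '1.2.3.4' into its four parts."""
    major, minor, build, revision = (int(part) for part in version.split('.'))
    return {'major': major, 'minor': minor, 'build': build, 'revision': revision}

print(parse_version('1.2.3.4'))
# {'major': 1, 'minor': 2, 'build': 3, 'revision': 4}
```

The unpacking raises a ValueError if the string doesn’t contain exactly four dotted numbers, which matches the four-element form TLbImp expects.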

  • /reference:FileName: Determines the name of the assembly that TLbImp uses to resolve references. There’s no default value. You may use this command line switch as many times as needed to provide a complete list of assemblies.
  • /tlbreference:FileName: Determines the name of the type library that TLbImp uses to resolve references. There’s no default value. You may use this command line switch as many times as needed to provide a complete list of assemblies.
  • /publickey:FileName: Specifies the name of a file containing a strong name public key used to sign the assembly. There’s no default value.
  • /keyfile:FileName: Specifies the name of a file containing a strong name key pair used to sign the assembly. There’s no default value.
  • /keycontainer:FileName: Specifies the name of a key container containing a strong name key pair used to sign the assembly. There’s no default value.
  • /delaysign: Sets the assembly to force a delay in signing. Use this option when you want to use the assembly for experimentation only.
Figure 9-5: Include version information for the assembly so others know about it.
  • /product:Product: Defines the name of the product that contains this assembly. This information appears on the Version tab of the file Properties dialog box shown in Figure 9-5. The default is to say that the assembly is imported from a specific type library.
  • /productversion:Version: Defines the product version number of the output assembly. This information appears on the Version tab of the file Properties dialog box shown in Figure 9-5. The default version number is 1.0.0.0.
  • /company:Company: Defines the name of the company that produced the output assembly. This information appears on the Version tab of the file Properties dialog box shown in Figure 9-5. There’s no default value.
  • /copyright:Copyright: Defines the copyright information that applies to the output assembly. This information appears on the Version tab of the file Properties dialog box shown in Figure 9-5. There’s no default value.
  • /trademark:Trademark: Defines the trademark and registered trademark information that applies to the output assembly. This information appears on the Version tab of the file Properties dialog box shown in Figure 9-5. There’s no default value.
  • /unsafe: Creates an output assembly that lacks runtime security checks. Using this option will make the assembly execute faster and reduce its size. However, you shouldn’t use this option for production systems because it does reduce the security features that the assembly would normally possess.
  • /noclassmembers: Creates an output assembly that has classes, but the classes have no members.
  • /nologo: Prevents the TLbImp utility from displaying a logo when it starts execution. This option is useful when performing batch processing.
  • /silent: Prevents the TLbImp utility from displaying any output, except error information. This option is useful when performing batch processing.
  • /silence:WarningNumber: Prevents the TLbImp utility from displaying output for the specified warning number. This option is useful when an assembly contains a number of warnings that you already know about and you want to see only the warnings that you don’t know about. You can’t use this option with the /silent command line switch.
  • /verbose: Tells the TLbImp utility to display every available piece of information about the process used to create the output assembly. This option is useful when you need to verify the assembly before placing it in a production environment or when you suspect a subtle error is causing application problems (or you’re simply curious).
  • /primary: Creates a Primary Interop Assembly (PIA). A COM component may use only one PIA and you must sign the PIA (use the /publickey, /keyfile, or /keycontainer switches to sign the assembly). See http://msdn.microsoft.com/library/aax7sdch.aspx for additional information.
  • /sysarray: Specifies that the assembly should use SAFEARRAY in place of the standard System.Array.
  • /machine:MachineType: Creates an assembly for the specified machine type. The valid inputs for this command line switch are:
    • X86
    • X64
    • Itanium
    • Agnostic
  • /transform:TransformName: Performs the specified transformations on the assembly. You may use any of these values as a transformation.
    • SerializableValueClasses: Forces TLbImp to mark all of the classes as serializable.
    • DispRet: Applies the [out, retval] attribute to methods that have a dispatch-only interface.
  • /strictref: Forces TLbImp to use only the assemblies that you specify using the /reference command line switch, along with PIAs, to produce the output assembly, even if the source file contains other references. The output assembly might not work properly when you use this option.
  • /strictref:nopia: Forces TLbImp to use only the assemblies that you specify using the /reference command line switch to produce the output assembly, even if the source file contains other references. This command line switch ignores PIAs. The output assembly might not work properly when you use this option.
  • /VariantBoolFieldToBool: Converts all VARIANT_BOOL fields in structures to bool.
  • /? or /help: Displays a help message containing a list of command line options for the version of TLbImp that you’re using.

Understanding the ActiveX Import Utility

The example in this chapter relies on the ActiveX Import (AxImp) utility because it produces the files you need to create a control (with a visual interface) rather than a component. When you use this utility, you obtain two files as output. The first contains the same information you receive when using the TLbImp utility. The second, the one with the Ax prefix, contains the code for a control. Before you can use AxImp, you need to know a bit more about it. Here’s the command line syntax for the tool:

[code]
AxImp OcxName [Options]
[/code]

The OcxName argument is simply the filename of the COM component that you want to use to create a control version of an interop assembly. A COM component can have a number of file extensions, but the most common extensions are .DLL, .EXE, and .OCX. It’s uncommon for an OLE Control eXtension (OCX), a COM component with a visual interface, to have a .EXE file extension.

You can also include one or more options that modify the behavior of AxImp. The following list describes these options.

  • /out:FileName: Provides the name of the ActiveX library file you want to produce as output. If you don’t provide this argument, the default is to add Lib to the end of the filename for the type library. For example, WMP.DLL becomes WMPLib.DLL and AxWMPLib.DLL. Using this command line switch changes the name of the AxWMPLib.DLL file. For example, if you type AxImp WMP.DLL /out:WMPOut.DLL and press Enter, the utility now outputs WMPLib.DLL and WMPOut.DLL.
  • /publickey:FileName: Specifies the name of a file containing a strong name public key used to sign the assembly. There’s no default value.
  • /keyfile:FileName: Specifies the name of a file containing a strong name key pair used to sign the assembly. There’s no default value.
  • /keycontainer:FileName: Specifies the name of a key container containing a strong name key pair used to sign the assembly. There’s no default value.
  • /delaysign: Sets the assembly to force a delay in signing. Use this option when you want to use the assembly for experimentation only.
  • /source: Generates the C# source code for a Windows Forms wrapper. You don’t need to use this option when working in IronPython because the code doesn’t show how to use the wrapper — it simply shows the wrapper code itself.
  • /rcw:FileName: Specifies an assembly to use for Runtime Callable Wrapper (RCW) rather than generating a new one. In most cases, you want to generate a new RCW when working with IronPython.
  • /nologo: Prevents the AxImp utility from displaying a logo when it starts execution. This option is useful when performing batch processing.
  • /silent: Prevents the AxImp utility from displaying any output, except error information. This option is useful when performing batch processing.
  • /verbose: Tells the AxImp utility to display every available piece of information about the process used to create the output assembly. This option is useful when you need to verify the assembly before placing it in a production environment or when you suspect a subtle error is causing application problems (or you’re simply curious).
  • /? or /help: Displays a help message containing a list of command line options for the version of AxImp that you’re using.

Creating the Windows Media Player Interop DLL

Now that you have an idea of how to use the AxImp utility, it’s time to see the utility in action. The following command line creates an interop assembly for the Windows Media Player.

[code]
AxImp %SystemRoot%\System32\WMP.DLL
[/code]

This command line doesn’t specify any options. It does include %SystemRoot%, which points to the Windows directory on your machine (making it possible to use the command line on more than one system, even if those systems have slightly different configurations). When you execute this command line, you see the AxImp utility logo. After a few minutes’ work, you’ll see one or more warning or error messages if the AxImp utility encounters problems. Eventually, you see a success message, as shown in Figure 9-6.

Figure 9-6: The AxImp utility tells you that it has generated the two DLLs needed for a control.

Exploring the Windows Media Player Interop DLL

When working with imported Python modules, you use the dir() function to see what those modules contain. In fact, you often use dir() when working with .NET assemblies, even though you have the MSDN documentation at hand. Theoretically, you can also use dir() with imported COM components, but things turn quite messy when you do. The “Using the Windows Media Player Interop DLL” section of this chapter describes how to import and use an interop assembly, but for now, let’s just look at WMPLib.DLL using dir(). Figure 9-7 shows typical results.

Figure 9-7: Using dir() won’t work well with interop assemblies in many cases.

The list goes on and on. Unfortunately, this is only the top level. You still need to drill down into the interop assembly, so things can become confusing and complex. Figuring out what you want to use is nearly impossible. Making things worse is the fact that any documentation you obtain for the interop assembly probably won’t work because the documentation will take the COM perspective of working with the classes and you need the IronPython perspective. Using dir() won’t be very helpful in this situation.
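One small mitigation, which works for any module rather than just interop assemblies, is to filter the dir() output down to the names you care about. This helper is plain Python and the function name is ours; it is demonstrated here on a standard library module as a stand-in for an interop namespace:

```python
def find_members(obj, keyword):
    """Return the dir() entries of obj whose names contain keyword."""
    keyword = keyword.lower()
    return [name for name in dir(obj) if keyword in name.lower()]

import math
print(find_members(math, 'log'))   # ['log', 'log10', 'log1p', 'log2']
```

Filtering helps, but it still only scratches the surface of a deeply nested assembly, which is why the ILDasm approach below is usually the better tool.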

Fortunately, you have another alternative in the form of the Intermediate Language Disassembler (ILDasm) utility. This utility looks into the interop assembly and creates a graphic picture of it for you. Using this utility, you can easily drill down into the interop assembly and, with the help of the COM documentation, normally figure out how to work with the COM component — even complex COM components such as the Windows Media Player.

To gain access to ILDasm, you use the same process you use for TLbImp to create a Visual Studio Command Prompt. At the command prompt, type ILDasm WMPLib.DLL and press Enter (see more of the command line options in the “Using the ILDasm Command Line” section of the chapter). The ILDasm utility will start and show entries similar to those shown in Figure 9-8.

ILDasm is an important tool for the IronPython developer who wants to work with COM. With this in mind, the following sections provide a good overview of ILDasm and many of its usage details. Most important, these sections describe how to delve into the innermost parts of any interop assembly.

Figure 9-8: Use ILDasm to explore WMPLib.DLL.

Using the ILDasm Command Line

The ILDasm utility usually works fine when you run it and provide the filename of the interop assembly you want to view. However, sometimes an interop assembly is so complex that you really do want to optimize the ILDasm view. Consequently, you use command line options to change the way ILDasm works. ILDasm has the following command line syntax.

[code]
ildasm [options] <file_name> [options]
[/code]

Even though this section shows the full name of all the command line switches, you can use just the first three letters. For example, you can abbreviate /BYTES as /BYT. In addition, ILDasm accepts both the dash (-) and slash (/) as command line switch prefixes, so /BYTES and -BYTES work equally well.
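The three-letter abbreviation rule is easy to model. This plain-Python helper is our own simplified sketch (real ILDasm additionally requires the abbreviation to be unambiguous among its switches):

```python
def matches_switch(arg, full_name):
    """True when arg abbreviates full_name the way ILDasm allows:
    a dash or slash prefix, then at least the first three letters,
    case-insensitively."""
    body = arg.lstrip('-/').upper()
    return len(body) >= 3 and full_name.upper().startswith(body)

print(matches_switch('/BYT', 'BYTES'))    # True
print(matches_switch('-bytes', 'BYTES'))  # True
print(matches_switch('/BY', 'BYTES'))     # False (fewer than three letters)
```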

The options can appear either before or after the filename. You can divide the options into those that affect output redirection (sending the output to a location other than the display) and those that change the way the file/console output appears. ILDasm further divides the file/console options into those that work with EXE and DLL files, and those that work with EXE, DLL, OBJ, and LIB files. Here are the options for output redirection.

  • /OUT=Filename: Redirects the output to the specified file rather than to a GUI.
  • /TEXT: Redirects the output to a console window rather than to a GUI. This option isn’t very useful for anything but the smallest files because the entire content of the interop assembly simply scrolls by. Of course, you can always use a pipe (|) to send the output to the More utility to view the output one page at a time.
  • /HTML: Creates the file in HTML format (valid with the /OUT option only). This option is handy for making the ILDasm output available to a group of developers on a Web site. For example, if you type ILDasm /OUT=WMPLib.HTML /HTML WMPLib.DLL and press Enter, you obtain WMPLib.HTML. The resulting file is huge: 7.53 MB for WMPLib.HTML. Figure 9-9 shows how this file will appear.
  • /RTF: Creates the file in RTF format (valid with the /OUT option only). This option is handy for making the ILDasm output available to a group of developers on a local network using an application such as Word. For example, if you type ILDasm /OUT=WMPLib.RTF /RTF WMPLib.DLL and press Enter, you obtain WMPLib.RTF. The resulting file is huge: 5.2 MB for WMPLib.RTF, and it may cause Word to freeze.

Of course, you might not want to redirect the output to a file, but may want to change the way the console appears instead. The following options change the GUI or file/console output for EXE and DLL files only.

  • /BYTES: Displays actual bytes (in hex) as instruction comments. Generally, this information isn’t useful unless you want to get into the low-level details of the interop assembly. For example, you might see a series of hex bytes such as // SIG: 20 01 01 08, which won’t be helpful to most developers. (In this case, you’re looking at the signature for the WMPLib.IAppDispatch.adjustLeft() method.)
HTML output is useful for viewing ILDasm output in a browser
Figure 9-9: HTML output is useful for viewing ILDasm output in a browser
  • /RAWEH: Shows the exception handling clauses in raw form. This isn’t a useful command line switch for interop assemblies because interop assemblies don’t require exception handlers in most cases.
  • /TOKENS: Displays the metadata tokens of classes and members as comments in the source code, as shown in Figure 9-10 for the WMPLib.IAppDispatch.adjustLeft() method. For example, the metadata token for mscorlib is /*23000001*/. Most developers won’t require this information.
The metadata tokens appear as comments beside the coded text.
Figure 9-10: The metadata tokens appear as comments beside the coded text.
  • /SOURCE: Shows the original source lines as comments when available. Unfortunately, when working with an interop assembly, there aren’t any original source lines to show, so you won’t need to use this command line switch.
  • /LINENUM: Shows the original source code line numbers as comments when available. Again, when working with an interop assembly, there aren’t any original source code line numbers to show, so you won’t need to use this command line switch.
  • /VISIBILITY=Vis[+Vis…]: Outputs only the items with specified visibility. The valid inputs for this argument are:
    • PUB: Public
    • PRI: Private
    • FAM: Family
    • ASM: Assembly
    • FAA: Family and assembly
    • FOA: Family or assembly
    • PSC: Private scope
  • /PUBONLY: Outputs only the items with public visibility (same as /VIS=PUB).
  • /QUOTEALLNAMES: Places single quotes around all names. For example, instead of seeing mscorlib, you’d see 'mscorlib'. In some cases, using this approach makes it easier to see or find specific names in the code.
  • /NOCA: Suppresses the output of custom attributes.
  • /CAVERBAL: Displays all of the Custom Attribute (CA) blobs in verbal form. The default setting outputs the CA blobs in binary form. Using this command line switch can make the code more readable, but also makes it more verbose (larger).
  • /NOBAR: Tells ILDasm not to display the progress bar as it redirects the interop assembly output to another location (such as a file).

ILDasm includes a number of command line switches that affect file and console output only. The following command line switches work for EXE and DLL files.

  • /UTF8: Forces ILDasm to use UTF-8 encoding for output in place of the default ANSI encoding.
  • /UNICODE: Forces ILDasm to use Unicode encoding for output in place of the default ANSI encoding.
  • /NOIL: Suppresses Intermediate Language (IL) assembler code output. Unfortunately, this option isn’t particularly useful because it creates a file that contains just the disassembly comments, not any of the class or method information. You do get the resource (.RES) file containing the resource information for the interop assembly (such as the version number). To use this command line switch, you must include redirection, such as ILDasm /OUT=WMPLib.HTML /HTML /NOIL WMPLib.DLL, to produce WMPLib.HTML as output.
  • /FORWARD: Forces ILDasm to use forward class declaration. In some cases, this command line switch can reduce the size of the disassembly.
  • /TYPELIST: Outputs a full list of types. Using this command line switch can help preserve type ordering.
  • /HEADERS: Outputs the file header information in the output.
  • /ITEM=Class[::Method[(Signature)]]: Disassembles only the specified item. Using this command line switch can greatly reduce the confusion of looking over an entire interop assembly.
  • /STATS: Provides statistical information about the image. The statistics appear at the beginning of the file in comments. Here’s a small segment of the statistics you might see (telling you about the use of space in the file).
    [code]
    // File size : 331776
    // PE header size : 4096 (496 used) ( 1.23%)
    // PE additional info : 1015 ( 0.31%)
    // Num.of PE sections : 3
    // CLR header size : 72 ( 0.02%)
    // CLR meta-data size : 256668 (77.36%)
    // CLR additional info : 0 ( 0.00%)
    // CLR method headers : 9086 ( 2.74%)
    // Managed code : 51182 (15.43%)
    // Data : 8192 ( 2.47%)
    // Unaccounted : 1465 ( 0.44%)
    [/code]
  • /CLASSLIST: Outputs a list of the classes defined in the module. The class list appears as a series of comments at the beginning of the file. Here’s an example of the class list output for WMPLib.DLL:
    [code]
    // Classes defined in this module:
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    // Interface IWMPEvents (public) (abstract) (auto) (ansi) (import)
    // Class WMPPlaylistChangeEventType (public) (auto) (ansi) (sealed)
    // Interface IWMPEvents2 (public) (abstract) (auto) (ansi) (import)
    // Interface IWMPSyncDevice (public) (abstract) (auto) (ansi) (import)
    // Class WMPDeviceStatus (public) (auto) (ansi) (sealed)
    // Class WMPSyncState (public) (auto) (ansi) (sealed)
    // Interface IWMPEvents3 (public) (abstract) (auto) (ansi) (import)
    // Interface IWMPCdromRip (public) (abstract) (auto) (ansi) (import)
    [/code]
  • /ALL: Performs the combination of the /HEADER, /BYTES, /STATS, /CLASSLIST, and /TOKENS command line switches.

This set of command line switches also affects just file and console output. However, you can use them for EXE, DLL, OBJ, and LIB files.

  • /METADATA[=Specifier]: Shows the interop assembly metadata for the elements defined by Specifier. Here are the values you can use for Specifier.
    • MDHEADER: MetaData header information and sizes
    • HEX: More data in hex as well as words
    • CSV: Record counts and heap sizes
    • UNREX: Unresolved externals
    • SCHEMA: MetaData header and schema information
    • RAW: Raw MetaData tables
    • HEAPS: Raw heaps
    • VALIDATE: MetaData consistency validation

The final set of command line switches affects file and console output for LIB files only.

  • /OBJECTFILE=Obj_Filename: Shows the MetaData of a single object file in the library.

Working with ILDasm Symbols

When working with ILDasm, you see a number of special symbols. Unfortunately, the utility often leaves you wondering what the symbols mean. Here are some of the most common symbols you encounter when working with COM components.

  • Interface: Represents an interface with which you can interact.
  • Private Class: Represents an abstract or sealed class in most cases.
  • Enumeration: Contains a list of enumerated items you use to provide values for method calls and other tasks.
  • Attribute: Provides access to the attributes that describe a COM component. Common attributes and attribute containers include:
    • Manifest (and its associated attributes)
    • Extends (defines a class that the class extends)
    • Implements (defines an interface that the class implements)
    • ClassInterface (see http://msdn.microsoft.com/library/system.runtime.interopservices.classinterfaceattribute.aspx for details)
    • GuidAttribute (see http://msdn.microsoft.com/library/system.runtime.interopservices.guidattribute.aspx for details)
    • TypeLibTypeAttribute (see http://msdn.microsoft.com/library/system.runtime.interopservices.typelibtypeattribute.aspx for details)
    • InterfaceTypeAttribute (see http://msdn.microsoft.com/library/system.runtime.interopservices.interfacetypeattribute.aspx for details)
  • Method: Describes a method that you can use within an interface or private class.
  • Property: Describes a property that you can use within an interface or private class.
  • Variable: Defines a variable of some type within an interface or private class. The variable could be an interface, such as IConnectionPoint, or an array, such as ArrayList, or anything else that the developer wanted to include.
  • Event: Specifies an event that occurs within the interface or private class.

Exploring ILDasm Entries

It’s important to remember that interop assemblies simply provide a reference to the actual code found in the COM component. Even so, you can use ILDasm to find out all kinds of interesting information about the component. At the top level, you can see a list of all of the interfaces, classes, and enumerations, as shown in Figure 9-8. The next level is to drill down into specific methods and properties, as shown in Figure 9-11.

Opening an interface displays all the methods it contains.
Figure 9-11: Opening an interface displays all the methods it contains.

The information shown in this figure is actually the most valuable information that ILDasm provides because you can use it to discover the names of methods and properties you want to use in your application. In addition, these entries often provide clues about where to look for additional information in the vendor help files. Sometimes these help files are a little disorganized and you might not understand how methods are related until you see this visual presentation of them.

It’s possible to explore the interop assembly at one more level. Double-click any of the methods, properties, or attributes and you’ll see a dialog box like the one shown in Figure 9-12. The amount of information you receive may seem paltry at first. However, look closer and you’ll discover that this display often tells you about calling requirements. For example, you can discover the data types you need to rely on to work with the COM component (something that COM documentation can’t tell you because the vendor doesn’t know that you’re using the component from .NET).

Discover the calling requirements for methods by reviewing the methods’ underlying code.
Figure 9-12: Discover the calling requirements for methods by reviewing the methods’ underlying code.

Using the Windows Media Player Interop DLL

It’s finally time to use early binding to create a connection to the Windows Media Player. This example uses the Windows Media Player as a control. You might find a number of online sources that say it’s impossible to use the Windows Media Player as a control, but it’s actually quite doable. Of course, you need assistance from yet another of Microsoft’s handy utilities, Resource Generator (ResGen), to do it. The example itself relies on the normal combination of a form file and associated application file. The following sections provide everything needed to create the example.

Working with ResGen

Whenever you drop a control based on a COM component onto a Windows Forms dialog box, the IDE creates an entry for it in the .RESX file for the application. This entry contains binary data that describes the properties for the COM component. You may not know it, but most COM components have a Properties dialog box that you access by right-clicking the control and choosing Properties from the context menu. These properties are normally different from those shown in the Properties window for the managed control. Figure 9-13 shows the Properties dialog box for the Windows Media Player.

The COM component has properties that differ from the managed control.
Figure 9-13: The COM component has properties that differ from the managed control.

It’s essential to remember that the managed control is separate from the COM component in a Windows Forms application. The COM component properties appear in a separate location and the managed environment works with them differently. If you look in the .RESX file, you see something like this:

[code]
<data name="MP.OcxState" mimetype="application/x-microsoft.net.object.binary.base64">
<value>
AAEAAAD/////AQAAAAAAAAAMAgAAAFdTeXN0ZW0uV2luZG93cy5Gb3JtcywgVmVyc2lvbj00LjAuMC4w
LCBDdWx0dXJlPW5ldXRyYWwsIFB1YmxpY0tleVRva2VuPWI3N2E1YzU2MTkzNGUwODkFAQAAACFTeXN0
ZW0uV2luZG93cy5Gb3Jtcy5BeEhvc3QrU3RhdGUBAAAABERhdGEHAgIAAAAJAwAAAA8DAAAAywAAAAIB
AAAAAQAAAAAAAAAAAAAAALYAAAAAAwAACAAUAAAAQgBlAGwAbABzAC4AdwBhAHYAAAAFAAAAAAAAAPA/
AwAAAAAABQAAAAAAAAAAAAgAAgAAAAAAAwABAAAACwD//wMAAAAAAAsA//8IAAIAAAAAAAMAMgAAAAsA
AAAIAAoAAABmAHUAbABsAAAACwAAAAsAAAALAP//CwD//wsAAAAIAAIAAAAAAAgAAgAAAAAACAACAAAA
AAAIAAIAAAAAAAsAAAAuHgAAfhsAAAs=
</value>
</data>
[/code]

This binary data contains the information needed to configure the COM aspects of the component. When the application creates the form, the binary data is added to the component using the OcxState property like this:

[code]
this.MP.OcxState =
    ((System.Windows.Forms.AxHost.State)(resources.GetObject("MP.OcxState")));
[/code]

Because of the managed code/COM component duality of a Windows Forms application, you can’t simply embed the COM component into an IronPython application using techniques such as the one shown at http://msdn.microsoft.com/library/dd564350.aspx. You must provide the binary data to the COM component using the OcxState property. Unfortunately, IronPython developers have an added twist to consider. The C# code shown previously won’t work because you don’t have access to a ComponentResourceManager for the IronPython form. Instead, you must read the resource from disk using code like this:

[code]
self.resources = System.ComponentModel.ComponentResourceManager.CreateFileBasedResourceManager(
    'frmUseWMP', 'C:/0255 - Source Code/Chapter09', None)
[/code]

Now, here’s where the tricky part begins (you might have thought we were there already, but we weren’t). The CreateFileBasedResourceManager() method doesn’t support .RESX files. Instead, it supports .RESOURCES files. The ResGen utility can create .RESOURCES files. You might be tempted to think that you can duplicate the binary data from the .RESX file using .TXT files as suggested by the ResGen documentation. Unfortunately, .TXT files can only help you create string data in .RESOURCES files.

So your first step is to create a Windows Forms application, add the component to it, perform any required COM component configuration (no need to perform the managed part), save the result, and then take the resulting .RESX file for your IronPython application. You can then use ResGen to create the .RESOURCES file using a command line like this:

[code]
ResGen frmUseWMP.RESX
[/code]

ResGen outputs a .RESOURCES file you can use within your application. Of course, like every Microsoft utility, ResGen offers a little more than simple conversion. Here’s the command line syntax for ResGen:

[code]
ResGen inputFile.ext [outputFile.ext] [/str:lang[,namespace[,class[,file]]]]
ResGen [options] /compile inputFile1.ext[,outputFile1.resources] […]
[/code]

Here are the options you can use.

  • /compile: Performs a bulk conversion of files from one format to another format. Typically, you use this feature with a response file where you provide a list of files to convert.
  • /str:language[, namespace[, classname[, filename]]]: Defines a strongly typed resource class using the specified programming language that relies on Code Document Object Model (CodeDOM) (see http://msdn.microsoft.com/library/y2k85ax6.aspx for details). To ensure that the strongly typed resource class works properly, the name of your output file, without the .RESOURCES extension, must match the [namespace.]classname of your strongly typed resource class. You may need to rename your output file before using it or embedding it into an assembly.
  • /useSourcePath: Specifies that ResGen uses each source file’s directory as the current directory for resolving relative file paths.
  • /publicClass: Creates the strongly typed resource class as a public class. You must use this command line switch with the /str command line switch.
  • /r:assembly: Tells ResGen to load types from the assemblies that you specify. A .RESX file automatically uses newer assembly types when you specify this command line switch. You can’t force the .RESX file to rely on older assembly types.
  • /define:A[,B]: Provides a means for performing optional conversions specified by #ifdef structures within a .RESTEXT (text) file.
  • @file: Specifies the name of a response file to use for additional command line options. You can only provide one response file for any given session.

Creating the Media Player Form Code

As normal, the example relies on two files to hold the form and the client code. Because we’re using a COM component for this example, the form requires a number of special configuration steps. Listing 9-1 shows the form code.

Listing 9-1: Creating a Windows Forms application with a COM component

[code]
# Set up the path to the .NET Framework.
import sys
sys.path.append(r'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727')

# Make clr accessible.
import clr

# Add any required references.
clr.AddReference('System.Windows.Forms.DLL')
clr.AddReference('System.Drawing.DLL')
clr.AddReference('AxWMPLib.DLL')

# Import the .NET assemblies (namespaces, not individual classes).
import System
import System.Windows.Forms
import System.Drawing
import AxWMPLib

class frmUseWMP(System.Windows.Forms.Form):
    # This function performs all of the required initialization.
    def InitializeComponent(self):
        # Create a Component Resource Manager from the .RESOURCES file on disk.
        self.resources = System.ComponentModel.ComponentResourceManager.CreateFileBasedResourceManager(
            'frmUseWMP', 'C:/0255 - Source Code/Chapter09', None)

        # Configure Windows Media Player.
        self.MP = AxWMPLib.AxWindowsMediaPlayer()
        self.MP.Dock = System.Windows.Forms.DockStyle.Fill
        self.MP.Enabled = True
        self.MP.Location = System.Drawing.Point(0, 0)
        self.MP.Name = 'MP'
        self.MP.Size = System.Drawing.Size(292, 266)
        self.MP.OcxState = self.resources.GetObject('MP.OcxState')

        # Configure the form.
        self.ClientSize = System.Drawing.Size(350, 200)
        self.Text = 'Simple Windows Media Player Example'

        # Add the controls to the form.
        self.Controls.Add(self.MP)
[/code]

The code begins with the normal steps of adding the .NET Framework path, making clr accessible, importing the required DLLs, and importing the required assemblies. Notice that the example uses the AxWMPLib.DLL file and AxWMPLib assembly. Remember that the Ax versions of the files provide wrapping around the ActiveX controls to make them usable as a managed control.

The InitializeComponent() method starts by creating a ComponentResourceManager from a file, using the CreateFileBasedResourceManager() method. Normally, a managed application would create the ComponentResourceManager directly from the data stored as part of the form. This is a special step for IronPython that could cause you grief later if you forget about it.

Even though Listing 9-1 shows the CreateFileBasedResourceManager() method call wrapped across multiple lines, the wrapping is an artifact of the printed page. Python (and therefore IronPython) does allow line continuation: implicitly inside parentheses, brackets, and braces, and explicitly with a trailing backslash. What it never allows is a break immediately after the dot of an attribute access. In your own source, keep the call on one line or break it inside the parentheses of the argument list.
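As a quick plain-Python sketch of which line breaks are legal (the strings and names here are illustrative only, not taken from the listing):

```python
# Legal: the break falls inside the parentheses of the argument list,
# so Python continues the statement implicitly.
result = ' '.join(
    ['Hello',
     'World'])

# Also legal: explicit continuation with a trailing backslash.
total = 1 + \
        2

# Illegal (SyntaxError if uncommented): breaking right after the dot.
# result = ' '.
#     join(['Hello', 'World'])
```

The same rule governs the CreateFileBasedResourceManager() call: either keep the whole statement on one line or wrap it inside the parentheses of its argument list.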

Media Player (MP) configuration comes next. You must instantiate the control from the AxWMPLib.AxWindowsMediaPlayer() constructor, rather than using the COM component constructor. The Ax constructor provides a wrapper with additional features you need within the Windows Forms environment. Like most controls, you need to specify control position and size on the form. However, because of the nature of the Windows Media Player, you want it to fill the client area of the form, so you set the Dock property to System.Windows.Forms.DockStyle.Fill.

The one configuration item that you must perform correctly is setting the COM component values using the MP.OcxState property. The ComponentResourceManager, resources, contains this value. You simply set the MP.OcxState property to resources.GetObject("MP.OcxState") — this technique is also different from what you’d use in a C# or Visual Basic.NET application. The rest of the form code isn’t anything special — you’ve seen it in all of the Windows Forms examples so far.

Creating the Media Player Application Code

Some COM components require a lot of tinkering by the host application, despite being self-contained for the most part. However, the Windows Media Player is an exception to the rule. Normally, you want to tinker with it as little as possible to meet your programming requirements. In some cases, you won’t want to tinker at all, as shown in Listing 9-2.

Listing 9-2: Interacting with the COM component

[code]
# Set up the path to the .NET Framework.
import sys
sys.path.append(r'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727')

# Make clr accessible.
import clr

# Add any required references.
clr.AddReference('System.Windows.Forms.DLL')

# Import the .NET assemblies.
import System
import System.Windows.Forms

# Import the form.
from frmUseWMP import *

# Define the Windows Form and the elements of this specific instance.
WMPForm = frmUseWMP()
WMPForm.InitializeComponent()

# Run the application.
System.Windows.Forms.Application.Run(WMPForm)
[/code]

This code does the minimum possible for a Windows Forms application. It contains no event handlers or anything of that nature. In fact, the code simply displays the form. Believe it or not, the actual settings for the application appear as part of the .RESOURCES file. What you see when you run this application appears in Figure 9-14.

This is a fully functional Windows Media Player. You can adjust the volume, set the starting position, pause the play, or do anything else you normally do with the Windows Media Player. It’s even possible to right-click the Windows Media Player to see the standard context menu. The context menu contains options to do things like slow the play time, see properties, and change options. Play with the example a bit to see just how fully functional it is.

The example application shows a form with Windows Media Player on it.
Figure 9-14: The example application shows a form with Windows Media Player on it.

A Quick View of the Windows Media Player Component Form

You may encounter times when you really don’t want to display the Windows Media Player as a control; you simply want it to work in the background. In this case, you can use the Windows Media Player as a component. The following code snippet shows the fastest way to perform this task in IronPython. (You can find the entire source in the MPComponent example supplied with the book’s source code.)

[code]
# Set up the path to the .NET Framework.
import sys
sys.path.append(r'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727')

# Make clr accessible.
import clr

# Add any required references.
clr.AddReference('System.Windows.Forms.DLL')
clr.AddReference('WMPLib.DLL')

# Import the .NET assemblies.
import System
import System.Windows.Forms
import WMPLib

# Import the form.
from frmMPComponent import *

# Define the event handlers.
def btnPlay_Click(*args):
    # Create the Media Player object.
    MP = WMPLib.WindowsMediaPlayerClass()

    # Assign the media player event handler (defined in the full example).
    MP.MediaError += PlayerError

    # Assign a sound to the Media Player.
    MP.URL = 'Bells.WAV'

    # Play the sound.
    MP.controls.play()
[/code]

Notice that you start by adding a reference to WMPLib.DLL and importing WMPLib into IronPython, rather than using the Ax versions. The next step appears in the btnPlay_Click() event handler. After the code imports the required support, it instantiates an object (MP) of the WindowsMediaPlayerClass, not WindowsMediaPlayer (an interface) as many of the Microsoft examples show.

Now you can perform various tasks with the resulting component. The example is simple. All it does is assign a filename to the URL property, and then call on controls.play() to play the file. You can find additional information on using this technique at http://msdn.microsoft.com/library/dd562692.aspx.

Performing Late Binding Using Activator.CreateInstance()

The Activator.CreateInstance() method is one of the more powerful ways to work with objects of all kinds. In fact, this particular method can give your IronPython applications the same kind of support as the Windows scripting engines CScript and WScript.

When working with the Activator.CreateInstance() method, you describe the type of object you want to create. The object can be anything. In fact, if you look through the HKEY_CLASSES_ROOT hive of the registry, you’ll find a number of objects to try on your system.

The example in this section does something a bit mundane, but also interesting: it demonstrates how to interact with the Shell objects. You can get a description of the Shell objects at http://msdn.microsoft.com/library/bb774122.aspx. The main reason to look at the Shell objects is that every Windows machine has them and they’re pretty useful for detecting user preferences. Listing 9-3 shows the code used for this example.

Listing 9-3: Working with Shell objects

[code]
# We only need the System assembly for this example.
from System import Activator, Type

# Import the time module to help with a pause.
import time

# Constants used for Shell settings.
from ShellSettings import *

# Create the Shell object.
ShObj = Activator.CreateInstance(Type.GetTypeFromProgID('Shell.Application'))

# Toggle the Desktop.
raw_input('Press Enter to show and then hide the Desktop')
ShObj.ToggleDesktop()
time.sleep(2)
ShObj.ToggleDesktop()

# Show some of the settings.
print '\nThe user wants to show file extensions:',
print ShObj.GetSetting(SSF_SHOWEXTENSIONS)
print 'The user wants to see system files:',
print ShObj.GetSetting(SSF_SHOWSYSFILES)
print 'The user also wants to see operating system files:',
print ShObj.GetSetting(SSF_SHOWSUPERHIDDEN)

# Check Explorer policies.
print '\nThe NoDriveTypeAutoRun policies are:'

# Obtain the bit values. These values are:
# 0 Unknown drives
# 1 No root directory
# 2 Removable drives (Floppy, ZIP)
# 3 Hard disk drives
# 4 Network drives
# 5 CD-ROM drives
# 6 RAM disk drives
# 7 Reserved
MyBits = ShObj.ExplorerPolicy('NoDriveTypeAutoRun')

# Display the results by testing each bit of the mask.
if MyBits & 0x01:
    print '\tAutorun Disabled for Unknown Drives'
else:
    print '\tAutorun Enabled for Unknown Drives'
if MyBits & 0x02:
    print '\tAutorun Disabled for No Root Directory'
else:
    print '\tAutorun Enabled for No Root Directory'
if MyBits & 0x04:
    print '\tAutorun Disabled for Removable (Floppy/ZIP) Drives'
else:
    print '\tAutorun Enabled for Removable (Floppy/ZIP) Drives'
if MyBits & 0x08:
    print '\tAutorun Disabled for Hard Disk Drives'
else:
    print '\tAutorun Enabled for Hard Disk Drives'
if MyBits & 0x10:
    print '\tAutorun Disabled for Network Drives'
else:
    print '\tAutorun Enabled for Network Drives'
if MyBits & 0x20:
    print '\tAutorun Disabled for CD-ROM Drives'
else:
    print '\tAutorun Enabled for CD-ROM Drives'
if MyBits & 0x40:
    print '\tAutorun Disabled for RAM Disk Drives'
else:
    print '\tAutorun Enabled for RAM Disk Drives'

# Pause after the debug session.
raw_input('Press any key to continue...')
[/code]

This example starts by showing a different kind of import call. In this case, the import retrieves only the Activator and Type classes from the System assembly. Using this approach reduces namespace clutter, trims the memory requirements of your application, and may let it load slightly faster. The example also imports the time module.
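The same selective-import idiom applies to ordinary Python modules, not just .NET assemblies. A small sketch using the standard math module as a stand-in:

```python
# Bind only the names you need; the containing module's name is not bound.
from math import sqrt, pi   # analogous to: from System import Activator, Type

area = pi * sqrt(16)        # sqrt and pi are usable directly
print(round(area, 2))       # prints 12.57
```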

The first step in this application can seem a little complicated, so it pays to break it down into two pieces. First, you get the type of a particular object by passing its registry identifier to the Type.GetTypeFromProgID() method. As previously mentioned, the object used in this example is Shell.Application. After the code obtains the type, it can create an instance of the object using Activator.CreateInstance().
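A pure-Python analog may make the two-step pattern clearer. Here getattr() plays the role of Type.GetTypeFromProgID() and calling the resolved type plays the role of Activator.CreateInstance(); all of the names below are illustrative, not part of the COM API:

```python
import builtins  # in modern Python; the lookup table standing in for the ProgID registry

prog_id = 'dict'                            # stand-in for a ProgID such as 'Shell.Application'
resolved_type = getattr(builtins, prog_id)  # step 1: resolve the type from its string name
instance = resolved_type()                  # step 2: create an instance of the resolved type

print(type(instance).__name__)  # prints dict
```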

The Shell.Application object, ShObj, provides several interesting methods and this example works with three of them. The first method, ToggleDesktop(), provides the same service as clicking the Show Desktop icon in the Quick Launch toolbar. Calling ToggleDesktop() the first time shows the desktop, while the second call restores the application windows to their former appearance. Notice the call to time.sleep(2), which provides a 2-second pause between the two calls.

The second method, GetSetting(), accepts a constant value as input. Listing 9-4 shows common settings you can query using GetSetting(). The example shows the results of three queries about Windows Explorer settings for file display. You can see these results (as well as the results for the third method) in Figure 9-15.

Listing 9-4: Queryable information for GetSetting()

[code]
SSF_SHOWALLOBJECTS = 0x00000001
SSF_SHOWEXTENSIONS = 0x00000002
SSF_HIDDENFILEEXTS = 0x00000004
SSF_SERVERADMINUI = 0x00000004
SSF_SHOWCOMPCOLOR = 0x00000008
SSF_SORTCOLUMNS = 0x00000010
SSF_SHOWSYSFILES = 0x00000020
SSF_DOUBLECLICKINWEBVIEW = 0x00000080
SSF_SHOWATTRIBCOL = 0x00000100
SSF_DESKTOPHTML = 0x00000200
SSF_WIN95CLASSIC = 0x00000400
SSF_DONTPRETTYPATH = 0x00000800
SSF_SHOWINFOTIP = 0x00002000
SSF_MAPNETDRVBUTTON = 0x00001000
SSF_NOCONFIRMRECYCLE = 0x00008000
SSF_HIDEICONS = 0x00004000
SSF_FILTER = 0x00010000
SSF_WEBVIEW = 0x00020000
SSF_SHOWSUPERHIDDEN = 0x00040000
SSF_SEPPROCESS = 0x00080000
SSF_NONETCRAWLING = 0x00100000
SSF_STARTPANELON = 0x00200000
SSF_SHOWSTARTPAGE = 0x00400000
[/code]

The shell objects provide access to all sorts of useful information.
Figure 9-15: The shell objects provide access to all sorts of useful information.

The third method, ExplorerPolicy(), is a registry-based query that relies on bit positions to define a value. You find these values in the HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer registry key. The two most common policies are NoDriveAutorun and NoDriveTypeAutoRun. When working with the NoDriveAutorun policy, Windows enables or disables autorun on a drive letter basis where bit 0 is drive A and bit 25 is drive Z. Listing 9-3 shows how to work with the bits for the NoDriveTypeAutoRun policy, while Figure 9-15 shows the results for the host machine.
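To make the bit-per-drive-letter scheme concrete, here is a small hypothetical helper (plain Python, not part of the Shell API) that decodes a NoDriveAutorun-style mask into drive letters:

```python
def drives_with_autorun_disabled(mask):
    # Bit 0 corresponds to drive A, bit 1 to drive B, ..., bit 25 to drive Z.
    return [chr(ord('A') + bit) for bit in range(26) if mask & (1 << bit)]

# Bits 0 and 2 set: autorun is disabled for drives A and C.
print(drives_with_autorun_disabled(0x05))  # prints ['A', 'C']
```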

You can find a number of other examples of this kind of late binding for IronPython on the Internet. For example, you can see a Word late binding example at http://www.ironpython.info/index.php/Extremely_Late_Binding. For many developers, an example like that is a natural next step in working with Activator.CreateInstance(). The important thing to remember is that this method is extremely flexible, so think of the impossible, as well as the possible, when using it.

Performing Late Binding Using Marshal.GetActiveObject()

Sometimes you need to interact with an application that’s already running. In this case, you don’t want to create a new object; you want to gain access to an existing object. The technique used to perform this type of late binding is to call Marshal.GetActiveObject() with the type of object you want to access. Typically, you use this technique with application objects, such as a running copy of Word. Listing 9-5 shows an example of how to use Marshal.GetActiveObject() to gain access to a running Word application.

Listing 9-5: Working with a running copy of Word

[code]
# Import only the required classes from System.
from System.Runtime.InteropServices import Marshal

# Obtain a pointer to the running Word application.
# Word must be running or this call will fail.
WordObj = Marshal.GetActiveObject('Word.Application')

# Add a new document to the running copy of Word.
MyDoc = WordObj.Documents.Add()

# Get the Application object.
App = MyDoc.Application

# Type some text in the document.
App.Selection.TypeText('Hello World')
App.Selection.TypeParagraph()
App.Selection.TypeText('Goodbye!')
[/code]

The import statement differs from the earlier examples. Notice that you can drill down into the namespace you want and import just the class you need. In this case, the example requires only the Marshal class from System.Runtime.InteropServices.

The first step is to get the running application. You must have a copy of Word running for this step to work; otherwise, you get an error. The call to Marshal.GetActiveObject() with Word.Application returns a Word object, WordObj. This object is the same object you get when working with Visual Basic for Applications (VBA). In fact, if you can do it with VBA, you can do it with IronPython.

After gaining access to Word, the application adds a new document using WordObj.Documents.Add(). It then creates an Application object, App. Using the App.Selection.TypeText() method, the application types some text into Word, as shown in Figure 9-16. Of course, you can perform any task required — the example does something simple for demonstration purposes.

You can control Word using IronPython as easily as you can using VBA.
Figure 9-16: You can control Word using IronPython as easily as you can using VBA.