Using IronPython with Mono

What Is Mono?

Mono (http://www.mono-project.com/) is a run time along the same lines as the .NET Framework, and it includes much of the functionality of the .NET Framework. In fact, with each release, Mono gets a bit closer to .NET Framework functionality. However, don’t get the idea that Mono will ever exactly match the .NET Framework. Platform differences, Microsoft copyrights, and other issues will always keep Mono just a bit different from the .NET Framework. Even so, Mono can run a considerable number of .NET applications. The following sections describe Mono, its advantages and limitations, in greater detail.

An Overview of the Mono Family

You can obtain Mono for a considerable number of platforms. In fact, the makers of Mono add new platforms with every release. At one time, Mono worked on just a few Linux implementations, Windows, and Mac OS X. Over time, Mono support has increased to the exciting list of platforms that follows.

  • LiveCD: This is actually an openSUSE 11.2.1 (http://www.opensuse.org/en/) LiveCD (a CD or DVD that contains a bootable image — see http://en.wikipedia.org/wiki/Live_CD for details) that includes Mono 2.6.1.
  • Mac OS X: You can use this installation on a number of Mac versions including Mac OS X Tiger (10.4), Leopard (10.5), and Snow Leopard (10.6) (it may work on other versions as well, but you’re on your own for support). The download includes Mono, Cocoa#, and Gtk# (GIMP Toolkit Sharp). You need to download the Client Software Development Kit (CSDK), available on the Mono site, separately. There are separate downloads for the Intel and PowerPC platforms. You can learn more about Mac OS X at http://www.apple.com/macosx/.
  • openSUSE: You can use this download for the openSUSE 11.0, 11.1, and 11.2 platforms. You must have your own system with openSUSE installed to use it. You can download openSUSE at http://software.opensuse.org/. Just in case you’re interested, the SUSE part of the name stands for Software und System-Entwicklung, which translates to software and systems development.
  • SLES/SLED: You can use this download for SUSE Linux Enterprise Server (SLES) or SUSE Linux Enterprise Desktop (SLED). SLES and SLED are the paid versions of SUSE from Novell. As with openSUSE, you must have your own system with SLES or SLED installed to use this version of Mono. You can find out more about SLES and SLED at http://www.novell.com/linux/.
  • Virtual PC: This is actually an openSUSE 11.2.1 virtual PC image that includes Mono 2.6.1. You could use this download to check out Linux functionality for your IronPython application on your PC without leaving Windows. Of course, performance won’t be very good, but it will get the job done.
  • VMware: This is actually an openSUSE 11.2.1 VMware image that includes Mono 2.6.1. You’d use it to check your application for Linux functionality without leaving the host operating system.
  • Windows: You can officially use this download for Windows 2000, XP, 2003, and Vista. Testing shows that it also works fine for Windows 7 and Windows Server 2008. The download includes Mono for Windows, Gtk# (a graphics library to display a user interface onscreen), and XSP (eXtensible Server Pages, an alternate Web server for serving ASP.NET pages). You can also get the Mono Migration Analyzer tool as a separate download.
  • Other: This is a group of less supported platforms including Debian and Ubuntu. At least these two platforms have supported packages. You can also get Mono in an unsupported form for Solaris, Nokia, and Maemo. Theoretically, you could support yet other platforms by compiling the source code found at http://ftp.novell.com/pub/mono/sources-stable/.

Of course, this list contains only a summary of the main Mono downloads. There are a large number of Mono add-ons as well. For example, you can obtain Mono Tools for Visual Studio (http://go-mono.com/monotools/download/) if you want to work with Mono directly from Visual Studio. Unfortunately, the current version of this product only works with Visual Studio 2008. The developer should provide a Visual Studio 2010 version soon. You can obtain a trial version of Mono Tools for Visual Studio (registration is required), but you must pay for the full version.

IronPython does include support for Silverlight development. If you plan to use IronPython for Web applications and need to support multiple platforms, you might want to look at Moonlight (http://mono-project.com/Moonlight) instead. This Silverlight replacement works on the same platforms that Mono does and should also work fine with IronPython.

Some of the extensions to Mono are well outside the scope of this book, but are interesting to contemplate. For example, you can get Mono Touch (http://monotouch.net/) to develop applications for the iPhone and iPod Touch devices. The point is that you can probably find some form of Mono to meet just about any need, but using Mono fully means learning some new techniques, such as creating user interfaces using Gtk#.

Considering the Reasons for Using Mono

You already know the reasons that you’re using the .NET Framework and this chapter isn’t about changing your mind. The .NET Framework is stable and many developers love the functionality it provides them for building great applications. However, you could think of Mono as another tool to extend the range of your applications. If for no other reason, the fact that you could run your IronPython application on Linux or the Mac OS X makes Mono a good choice for some forms of application development. In sum, the main reason for using Mono in place of the .NET Framework is flexibility.

As previously mentioned, Mono and the .NET Framework aren’t precisely the same. The first thought that most developers will have is that compatibility issues will be bad, and to a certain extent, they do cause problems. However, Mono also provides functionality that you won’t find when working with the .NET Framework. Features such as Gtk# actually make Mono a better product. In addition, with Mono you have a lightweight Web server for ASP.NET pages, XSP, that works on every Mono platform. Therefore, the differences between Mono and the .NET Framework aren’t always bad — sometimes they become downright useful.

Mono does provide direct support for IronPython, but you need to use a newer version of Mono (see http://www.mono-project.com/Python for details). The support isn’t all that good. The section “Running the Application from the Command Line” later in this chapter demonstrates the problem of using the Mono implementation of IronPython. Even so, you do get IronPython support that will likely improve as Mono improves, so this is an area where you can expect Mono to grow as an IronPython platform. In reality, the Mono community is quite excited about IronPython. You can find tutorials for using IronPython in a Mono environment at http://zetcode.com/tutorials/ironpythontutorial/. If you want to see IronPython running under Mono on a Linux system, see the screenshot and description at http://www.ironpython.info/index.php/Mono.

Understanding Mono Limitations

Don’t get the idea that every .NET application will instantly run on Mono. For example, while Mono includes support for Language Integrated Query (LINQ), the support isn’t perfect. The LINQ to SQL support works fine for many applications, but not all of them. The Mono developers realize that the support isn’t complete and they plan to work on it (see the release notes at http://www.mono-project.com/Release_Notes_Mono_2.6.1 for details).

There are some obvious limitations for using Mono that should come to mind immediately. Because the purpose of Mono is to work across platforms, the P/Invoke calls in your extensions aren’t going to work. A P/Invoke call causes your extension to provide Windows-specific support, so using it on Linux wouldn’t work no matter what product you tried. The previous chapters in the book have emphasized when a particular technique is unlikely to produce useful cross-platform results.

The current version of Mono doesn’t work with .NET Framework 4.0 applications. The applications won’t start at all — you see an error message instead. However, Mono does work fine with older versions of the .NET Framework. It’s only a matter of time before Mono supports the .NET Framework 4.0, so this is a short-term limitation that you can easily overcome by using an older version of the .NET Framework when building your application. Given that IronPython doesn’t currently support the .NET Framework 4.0 in many respects, this particular problem isn’t much of an issue for IronPython developers.

In a few cases, you have to look around to determine whether you’ll encounter problems using Mono for a particular task. For example, if your ASP.NET application uses Web Parts, you can’t use Mono (see http://www.mono-project.com/ASP.NET). You also can’t use a precompiled updateable Web site.

Using Mono on Windows Server 2008 Server Core

Early versions of Windows Server 2008 Server Core (Server Core for short) don’t come with any form of the .NET Framework. Consequently, you can’t run any form of .NET application on early versions of Server Core unless you use Mono. The lack of .NET Framework support on Server Core led some people to come up with odd solutions to problems, such as running PowerShell (see the solution at http://dmitrysotnikov.wordpress.com/2008/05/15/powershell-on-server-core/).

Fortunately, Microsoft decided to provide a limited version of the .NET Framework for Windows Server 2008 Server Core Edition R2. You can read about it at http://technet.microsoft.com/library/dd883268.aspx. However, this version of the .NET Framework still has significant limitations and you might actually find it better to use Mono for your .NET applications. For example, while you can now provide limited support for ASP.NET on Server Core, you might actually find the Mono alternative, XSP, to provide the solutions you need for your application.

Mono has generated quite a bit of interest from the Server Core crowd, especially anyone who uses Server Core as their main server. Server Core has a number of advantages that makes it popular with small- to medium-sized companies. It uses far less memory and other resources, runs faster, runs more reliably, and has a far smaller attack surface for those nefarious individuals who want to ruin your day by attacking your server. You can find a complete article about running applications on Server Core using Mono at http://www.devsource.com/c/a/Architecture/Mixing-Server-Core-with-NET-Applications/.

Obtaining and Installing Mono

It’s time to obtain and install your copy of Mono. Of course, the first step is to download the product. You can find the various versions of Mono at http://www.go-mono.com/mono-downloads/download.html. This section assumes you’re installing Mono version 2.6.1 on a Windows system. If you need to install Mono on another system, follow the instructions that the Mono Web site provides for those versions. After you complete the download, follow these steps to perform the installation.

  1. Double-click the mono-2.6.1-gtksharp-2.12.9-win32-1.exe file you downloaded from the Mono Web site. You see a Welcome page.
  2. Click Next. You see a License page.
  3. Read the licensing information. Select I Accept the Agreement, and then click Next. You see the Information page shown in Figure 19-1. Unlike most Information pages, this one actually contains a lot of useful information. Make sure you review the information it contains and click on the links it provides as needed. Especially important for keeping updated on Mono is joining the mailing list (http://www.mono-project.com/Mailing_Lists) or forums (http://www.go-mono.org/forums/). You can find these links at the bottom of the Information page.

    Figure 19-1: Make sure you review this Information page because it contains useful information.
  4. Read the release information and then click Next. You see the Select Destination Location page shown in Figure 19-2. Normally, you can accept the default installation location. Some developers prefer a less complex path to Mono, such as simply C:\Mono, to make it easier to access from the command line. The chapter uses the default installation location.

    Figure 19-2: Select an installation location for Mono.
  5. Provide an installation location for Mono and then click Next. You see the Select Components page shown in Figure 19-3. The components you select depend on what you plan to do with Mono — you can always change your setup later if necessary. If your only goal is to try Mono for your existing .NET applications and to create some simple IronPython applications, you really don’t need the Gtk# and XSP support. This chapter assumes that you perform a Compact Installation to obtain a minimum amount of support for working with the IronPython sample application.

    Figure 19-3: Choose the Mono components that you want to install.
  6. Select the components you want to install and then click Next. You see the Select Start Menu Folder page. This is where you choose a name for the folder that holds the Mono components. The default name normally works fine.
  7. Type a name for the Start menu folder (or simply accept the default) and then click Next. You see the Ready to Install page. This page provides a summary of the options that you’ve selected.
  8. Review the installation options and then click Install. You see the Installing page while the installer installs Mono on your machine. After a few minutes, you see a completion dialog box.
  9. Click Finish. You’re ready to begin using Mono.

Creating an IronPython Application with Mono

It’s time to begin working with Mono and IronPython to create an application. Of course, you’ll want to know a bit more about how Mono works before you just plunge into the project, so the first step is to look at Mono from a command line perspective. The first section that follows shows how to create an IPY environment variable and use it to open the IronPython console using Mono whenever you need it. The sections that follow show how to create a project, build a simple IronPython application, and then test the application in a number of ways.

Working at the Command Line

Mono works differently than the .NET Framework. When you want to use the .NET Framework to execute an application, you simply double-click the application and it starts. The same doesn’t hold true for Mono. If you want to execute an application using Mono, you must open the Mono command prompt and launch the application by explicitly invoking Mono. Unfortunately, this limitation has an unusual effect on working with IronPython because you can no longer access IPY.EXE using the Path environment variable. Instead, you must create a special IPY environment variable using the following steps.

  1. Double-click the System applet in the Control Panel and choose the Advanced tab. You see the System Properties dialog box.
  2. Click Environment Variables. You see the Environment Variables dialog box.
  3. Click New in the System Variables section of the Environment Variables dialog box if you want to use IronPython from any account on the machine or the User Variables section if you want to use IronPython only from your personal account. You see a New System Variable or New User Variable dialog box. Except for the title, both dialog boxes are the same.
  4. Type IPY in the Variable Name field.
  5. Type C:\Program Files\IronPython 2.6 or the location of your IronPython installation in the Variable Value field.
  6. Click OK three times to add the new environment variable, close the Environment Variables dialog box, and close the System Properties dialog box. You’re ready to begin working with IronPython.
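As a quick sanity check, you can read the variable back from any Python-capable console. The following minimal sketch assumes the chapter’s default install location as a fallback, which may differ on your machine:

```python
import os

# Read the IPY variable created in the steps above; fall back to the
# chapter's default install location when the variable isn't defined.
ipy_home = os.environ.get("IPY", r"C:\Program Files\IronPython 2.6")

# Build the full path to the interpreter that Mono will launch.
ipy_exe = os.path.join(ipy_home, "ipy.exe")
print(ipy_exe)
```

If the printed path doesn’t point at your IronPython folder, revisit step 5 before continuing.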

At this point, you’re ready to begin working with Mono. Choose Start ➪ Programs ➪ Mono 2.6.1 for Windows ➪ Mono-2.6.1 Command Prompt to display a Mono command prompt. When you see the Mono command prompt, type mono "%IPY%\ipy.exe" and press Enter. You’ll see the usual IronPython console.

The first thing you should notice is that the .NET Framework version reported by the IronPython console is slightly different from the one you normally see. There isn’t any problem with this difference. In fact, it’s the only difference you’re going to notice as you work with the IronPython console. Let’s give it a try so you can see for yourself. Type the following code and you’ll see the standard responses shown in Figure 19-4.

[code]
import sys
for ThisPath in sys.path:
    print ThisPath
[/code]

Figure 19-4: Running IronPython under Mono doesn’t really look any different.

If you compare the results you see when running IronPython under the .NET Framework with the results you see when running IronPython under Mono, you won’t notice any differences. In fact, you can try out the applications in this book, and you won’t see any differences at all unless you need to work with an extension or other outside code source (and you might not even see any differences then). Working with Mono simply means you have access to more platforms when working with IronPython, not that you have more limitations.

Defining the Project

The project you create for working with Mono is going to be just a little different from the one you create when working strictly with the .NET Framework. You’ll still start up IronPython using the Visual Studio IDE, but there’s an extra step now: you must start Mono first.

  1. Choose File ➪ Open ➪ Project/Solution. You see the Open Project dialog box shown in Figure 19-5.
    Figure 19-5: Use Mono as the starting point for your project.


  2. Highlight Mono.EXE in the \Program Files\Mono-2.6.1\bin folder of your machine (unless you used a different installation folder) and click Open. Visual Studio creates a solution based on Mono.
  3. Right-click the Mono entry in Solution Explorer and choose Properties from the context menu. You see the General tab of the Properties window shown in Figure 19-6.

    Figure 19-6: Set the Mono configuration for your project.
  4. Type "C:\Program Files\IronPython 2.6\ipy.exe" -D TestMono.py in the Arguments field (change the folder location to match your IronPython installation).
  5. Click the ellipses in the Working Directory field to display the Browse for Folder dialog box. Locate the folder that contains the project you’re working on and click OK. The project folder appears in the Working Directory field of the Properties window.
  6. Choose File ➪ Save All. You see a Save File As dialog box.
  7. Type the solution name in the Object Name field and click Save.
  8. Right-click the solution entry in Solution Explorer and choose Add ➪ New Item. You see the Add New Item dialog box.
  9. Highlight the Text File template. Type TestMono.py in the Name field and click Add. Visual Studio adds the Python file to your project and automatically opens it for you.

Creating the Code

It’s time to add some code to the IronPython file. This example provides a listing of the modules that IronPython is using. If you compare this list to the one that IronPython provides when you run the application using the .NET Framework, you’ll see the modules in a different order, but otherwise the output is the same. Listing 19-1 shows the code used for this example.

Listing 19-1: Creating a simple Mono test program

[code]
# Obtain access to the sys module.
import sys

# Output a list of modules.
print 'IronPython Module Listing\n'
for ThisMod in sys.modules:
    print ThisMod, sys.modules[ThisMod]

# Pause after the debug session.
raw_input('\nPress any key to continue...')
[/code]

This example demonstrates a simple for loop to iterate through the list of modules found in the sys.modules dictionary. In this case, the code prints out two items. First, it prints out the module name. Second, it prints out the module information, which normally includes the module location. As always, the code ends with a pause, raw_input(), so that you can see the output before the window closes.

Running the Application from the IDE

Running the application is the first place you see some potential problems with using Mono. If you click Start Debugging, you see the No Debugging Information dialog box shown in Figure 19-7. If you click Yes, the program will run, but you won’t get any debugging support. This is one of the problems with using Mono exclusively. You’ll probably want to use the normal .NET Framework setup to debug your application first, and then move on to the Mono configuration described in this chapter to test the application under Mono.

Figure 19-7: Mono doesn’t provide any debugging support that Visual Studio understands.

To start the application successfully, choose Debug ➪ Start Without Debugging or press Ctrl+F5. The program will run normally and you’ll see the usual message at the end. Pressing Enter displays a second pause as shown in Figure 19-8. It seems that Mono provides its own pause so that you can see the results of executing the program, which is a nice touch for those times when you forget to add a pause of your own.

Figure 19-8: IronPython displays the list of modules found in the current setup.

Running the Application from the Command Line

Interestingly enough, Mono does come with direct support for IronPython, but Mono supports IronPython 1.1, and the IronPython console supplied with Mono seems to do odd things. Open a Mono command prompt, type IPY, and press Enter. Now try typing 1+1 and pressing Enter. You’ll probably see results like those in Figure 19-9.

Figure 19-9: The IronPython console provided with Mono leaves a lot to be desired.

Of course, the question isn’t about the IronPython console, but whether it can run the example application. Press Ctrl+C to break out of the mess you’re seeing onscreen. Type Y and press Enter when you’re asked whether you want to stop the batch file. Then type IPY TestMono.py and press Enter. You’ll see that the application does indeed work, as shown in Figure 19-10. The number of modules is far smaller than the list shown in Figure 19-8, but it’s correct for the version of IronPython provided with Mono.

Figure 19-10: You can run the test application using the Mono version of IronPython.

The picture isn’t completely gloomy. Developers are constantly trying new solutions for working with IronPython. You can find a potential fix for the problems described in this section of the chapter at http://ironpython-urls.blogspot.com/2009/06/mono-can-now-compile-ironpython-20.html. The solution comes with the caveat that it might not work for you.

Interacting with Other .NET Languages under Mono

Mono originally focused its attention on C# development, but later added support for Visual Basic .NET as well. At this point, you can run any Visual Studio 2008–created application written under C# or Visual Basic.NET using Mono within the limitations described in the section “Understanding Mono Limitations” earlier in this chapter. Even DLR code appears to run fine in Mono within the current limits of the product, which aren’t many.


Using IronPython from Other .NET Languages

Understanding the Relationship between Dynamic and Static Languages

Something that most developers fail to consider is that, at some point, all languages generate the same thing — machine code. Without machine code, the software doesn’t execute. Your computer cares nothing at all about the idiosyncrasies of human language and it doesn’t care about communicating with you at all. Computers are quite selfish when you think about it. The circuitry that makes up your computer relies on software to change the position of switches — trillions of them in some cases. So computers use machine code and only machine code; languages are for humans.

When it comes to dynamic and static languages, it’s the way that humans view the languages that make them useful. A dynamic language offers the developer freedom of choice, call it the creative solution. A static language offers a reliable and stable paradigm — call it the comfort solution, the one that everyone’s used. How you feel about the languages partly affects your use of them. In the end, both dynamic and static language output ends up as machine code. Dynamic and static languages end up being tools that help you create applications faster and with fewer errors. If you really wanted to do so, you could write any application today using assembler (a low-level language just above machine code, see http://www.bing.com/reference/semhtml/Assembly_language for more information), but assembler is hardly the correct tool any longer — humans need a better tool to put applications together. The point is that you should use the tool that works best for a particular development process and not think that the tool is doing anything for your computer.

Anytime you use multiple languages, you must consider issues that have nothing to do with the dynamic or static nature of that language. For example, you must consider the data types that the languages support and provide a method for marshaling data from one language to the other. In fact, marshaling data is an important element in many areas of coding. If you want to communicate with the Win32 API from a .NET-managed language such as C# or Visual Basic.NET, you must marshal the data between the two environments. It’s important not to confuse communication and infrastructure requirements with differences between dynamic and static languages. Many resources you find do confuse these issues, which makes it hard for anyone to truly understand how dynamic and static languages differ.

Before you can use IronPython from other languages, it’s important to consider the way in which IronPython performs tasks. When an IronPython session starts, nothing exists — the environment begins with an empty slate. You’ve discovered throughout this book that IronPython calls upon certain script files as it starts to configure the environment automatically. These configuration tasks aren’t part of the startup; they are part of the configuration — something that occurs after the startup. The dynamic nature of IronPython means that all activity begins and ends with adding, changing, and removing environment features. There aren’t any compiled bits that you can examine statically. Everything in IronPython is dynamic.

When a static language such as C# or Visual Basic.NET attempts to access IronPython, it must accommodate the constant change. If you got nothing else out of Chapter 14 but this one fact, then the chapter was worth reading. In order to do this, C# and Visual Basic.NET rely upon events because they can’t actually accommodate change as part of the language. An event signals a change — an IronPython application has modified a class to contain a new method or property. It isn’t just the idea that the output or value has changed, but the method or property itself is new. In some cases, C# or Visual Basic.NET will also need to deal with the situation where a method or property simply goes away as well. The underlying mechanism of events, delegates, and caches is inspired and all but invisible, but to be successful at using the languages together, you must know they’re present.
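The kind of run-time change described above is easy to see in plain Python. The class and method names in this minimal sketch are invented for illustration, not part of the chapter’s examples:

```python
# An initially empty class, comparable to a freshly started dynamic
# environment that contains no members yet.
class Calculator(object):
    pass

def do_add(self, first, second):
    # The behavior the class gains at run time.
    return first + second

# Attach the method after the class already exists; from a static
# caller's point of view, the type just changed shape.
Calculator.do_add = do_add

calc = Calculator()
print(calc.do_add(2, 3))   # 5

# Members can disappear again just as easily.
del Calculator.do_add
print(hasattr(calc, "do_add"))   # False
```

A static language binding to this class can’t rely on do_add() being present at compile time, which is exactly why the hosting layer leans on events and caches to track such changes.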

The differences between dynamic and static languages go further than simply not knowing what code will execute next in a dynamic language. There’s also the matter of data typing. A static language assigns a type to the data it manages, which means that the compiler can make assumptions about the data and optimize access to it. A dynamic language also assigns types to the data it manages, but only does so at run time and even then the data type can change. Now, consider how this changeability complicates the matter of marshaling data from one language to the other. Because the data no longer has a stable type, the marshaling code can’t assume anything about it and must constantly check type to ensure the data it marshals appears in the right form in the target language.
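A minimal plain-Python sketch shows why marshaling code can’t assume a fixed type: the same name can be rebound to values of different types during execution, so the marshaling layer must branch on the current type every time (the marshal() helper here is a hypothetical illustration, not a real API):

```python
# The same variable holds an int, then a string -- its type is a
# run-time property, not a compile-time one.
value = 5
print(type(value).__name__)   # int

value = "five"
print(type(value).__name__)   # str

# A hypothetical marshaling helper therefore has to check the current
# type instead of assuming one fixed representation.
def marshal(value):
    if isinstance(value, int):
        return ("int", value)
    return ("str", str(value))

print(marshal(5))        # ('int', 5)
print(marshal("five"))   # ('str', 'five')
```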

The difference between dynamic and static languages, at least from a programming perspective, comes down to flexible coding and data typing. Everything else you may have heard either relates to differences between any two languages (such as the need to marshal data) or the political drama of which tool works best. This book won’t endeavor to tell you what tool to use. Certainly, I don’t tell anyone that a hammer works best for driving screws or that screwdrivers make wonderful ice picks (not that I believe either of these statements myself). The tool you use for a particular task is the one you can use best or the one called for by a particular job requirement. The point of this chapter and the rest of the book is to demonstrate that dynamic and static languages can work together successfully and in more than one way. The tool you use is up to you.

Creating an Externally Accessible IronPython Module

The first requirement for building an application that allows external access is to create the IronPython script you want to use. Ideally, this script will contain code that is fully debugged. You also want to test the code before you try to use it within C# or Visual Basic.NET. The following sections provide you with the techniques you use to create an IronPython script that you access from C# or Visual Basic .NET.

Considering Requirements for Externally Accessible Modules

The mistake that many developers will make is to think they must do something special in IronPython to make the code accessible. What you really need to do is create an IronPython script using the same techniques as always, and then test it directly. After you test the script using IronPython code, work with the target static language to gain the required access. This pretesting process is important to ensure that you aren’t fighting with a bad script in addition to potential problems marshaling data or interacting with methods that change.

Creating the IronPython Script

The IronPython script used for this example is quite simple in approach. All that the example call really does is add two numbers together. You could perform the task with far less code, but the point of this class is to demonstrate access techniques, so it’s purposely simple. Listing 15-1 shows the external module code and the code used to test it. As previously mentioned, testing your IronPython script is essential if you want the application to work properly.

Listing 15-1: A test IronPython class for use in the examples

[code]
# The class you want to access externally.
class DoCalculations():

    # A method within the class that adds two numbers.
    def DoAdd(self, First, Second):
        # Provide a result.
        return First + Second

# A test suite in IronPython.
def __test__():
    # Create the object.
    MyCalc = DoCalculations()

    # Perform the test.
    print MyCalc.DoAdd(5, 10)

    # Pause after the test session.
    raw_input('\nPress any key to continue...')

# Execute the test.
# Comment this call out when you finish testing the code.
__test__()
[/code]

The class used for this example is DoCalculations(). It contains a single method, DoAdd(), that returns the sum of two numbers, First and Second. Overall, the class is simple.

The TestClass.py file also contains a __test__() function. This function creates an instance of DoCalculations(), MyCalc. It then prints the result of calling the DoAdd() method with values of 5 and 10. The example waits until you press Enter to exit.

At the bottom of the script, you see a call to __test__(). You can execute the example at the command line, as shown in Figure 15-1. Make sure you use the -D command line switch to place the interpreter in debug mode. You could also open IPY.EXE interactively, load the file, and execute it inside the interpreter. When you know that the code works properly, be sure to comment out the call to __test__() at the bottom of the script.

Figure 15-1: Test the external module before you use it with your application.
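As an alternative to commenting out the call, you can guard it with Python's standard module check so the test runs only when the script is executed directly, not when a host application loads it. The following is a sketch of that idiom, not the chapter's exact listing (the pause is omitted so the script can run unattended):

```python
# A sketch (not the chapter's exact listing): guard the test call with the
# standard __name__ check so it runs only when the script is executed
# directly, not when a host application loads the module.
class DoCalculations(object):
    def DoAdd(self, First, Second):
        # Provide a result.
        return First + Second

def __test__():
    # Create the object and exercise the method.
    MyCalc = DoCalculations()
    print(MyCalc.DoAdd(5, 10))

if __name__ == '__main__':
    # This call never fires when the module is merely imported or hosted.
    __test__()
```

With the guard in place, a static-language host that executes the file never triggers the test code, so there is nothing to comment out before deployment.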

Accessing the Module from C#

Now that you have an external module to use, you’ll probably want to access it from an application. This section considers the requirements for accessing IronPython from C#. Don’t worry; the section “Accessing the Module from Visual Basic.NET” later in this chapter discusses access from Visual Basic.NET as well. If you follow these steps, you’ll find that access is relatively straightforward, even if it does get a bit convoluted at times. Microsoft promises that future versions of C# will make dynamic language access even easier.

Adding the Required C# References

Any application you create requires access to the dynamic language assemblies. The IronPython assemblies appear in the Program Files\IronPython 2.6 folder on your machine. Right-click References and choose Add Reference from the context menu to display the Add Reference dialog box. Select the Browse tab. In most cases, you only need the three DLLs shown in Figure 15-2 to access any IronPython script. (You may also need to add the IronPython.Modules.DLL file to the list in some cases.)

Figure 15-2: Add the required references from your IronPython setup.

Select the assemblies you require by Ctrl-clicking them in the Add Reference dialog box. Click OK when you’re finished. You’ll see the assemblies added to the References folder in Solution Explorer.

Adding the Required References to the Host Language

You can perform a multitude of tasks with IronPython. In fact, later chapters in the book show how to perform tasks such as testing your static application code. IronPython really is quite flexible. However, most people will start by executing external scripts and only need a few of the namespaces in the IronPython assemblies to do it. The following using statements provide everything needed to execute and manage most IronPython scripts.

[code]
using System;
using IronPython.Hosting;
using IronPython.Runtime;
using Microsoft.Scripting.Hosting;
[/code]

Understanding the Use of ScriptEngine

You have many options for working with IronPython scripts. This first example takes an approach that works fine for Visual Studio 2008 developers, as well as those using Visual Studio 2010. It doesn’t require anything fancy and it works reliably for most scripts. Ease and flexibility concerns aside, this isn’t the shortest technique for working with IronPython scripts. This is the Method1 approach to working with IronPython scripts — the technique that nearly everyone can use and it appears in Listing 15-2.

Listing 15-2: Using the script engine to access the script

[code]
static void Main(string[] args)
{
    // Create an engine to access IronPython.
    ScriptEngine Eng = Python.CreateEngine();

    // Describe where to load the script.
    ScriptSource Source = Eng.CreateScriptSourceFromFile("TestClass.py");

    // Obtain the default scope for executing the script.
    ScriptScope Scope = Eng.CreateScope();

    // Create an object for performing tasks with the script.
    ObjectOperations Ops = Eng.CreateOperations();

    // Create the class object.
    Source.Execute(Scope);

    // Obtain the class object.
    Object CalcClass = Scope.GetVariable("DoCalculations");

    // Create an instance of the class.
    Object CalcObj = Ops.Invoke(CalcClass);

    // Get the method you want to use from the class instance.
    Object AddMe = Ops.GetMember(CalcObj, "DoAdd");

    // Perform the add.
    Int32 Result = (Int32)Ops.Invoke(AddMe, 5, 10);

    // Display the result.
    Console.WriteLine("5 + 10 = {0}", Result);

    // Pause after running the test.
    Console.WriteLine("\r\nPress any key when ready...");
    Console.ReadKey();
}
[/code]

Now that you have access to Eng, you can use it to perform various tasks. For example, you must tell Eng what scope to use when executing code, so the example creates a ScriptScope object, Scope. In order to perform tasks, you must also have an ObjectOperations object, Ops. The example uses the defaults provided for each of these objects. However, in a production application, you might decide to change some properties to make the application execute faster or with better security.

At this point, you can execute the script. The act of executing the script using Source.Execute() loads the script into memory and compiles it in a form that the static application can use. The Source.Execute() method associates Scope with the execution environment. At this point, the parameters for executing the script are set in stone — you can’t change them.

The script is in memory, but you can’t access any of its features just yet. The script contains a DoCalculations class that you access by calling Scope.GetVariable() to obtain CalcClass. The code then gains access to the class by creating an instance of it, CalcObj, using Ops.Invoke(). At this point, CalcObj contains an instance of DoCalculations() from the IronPython module, but you can’t use it directly. Remember that you must marshal data between C# and IronPython. In addition, C# has to have a way to deal with potential changes in the IronPython script.

Calling Ops.GetMember() on the instance retrieves the DoAdd() method as AddMe. This seems like a lot of work just to gain access to a single method, but you can finally use AddMe to perform the addition. A call to Ops.Invoke() with AddMe and the arguments you want to use performs all of the required marshaling for you. You must coerce the output to an Int32 (something that C# understands). Finally, the application outputs the result, as shown in Figure 15-3.

Figure 15-3: The example application calls the DoAdd() method and displays the result onscreen.

Using the dynamic Keyword

One of the new ways in which you can access IronPython in C# 4.0 is to use the dynamic keyword. This keyword makes it possible for you to cut out a lot of the code shown in Listing 15-2 to perform tasks with IronPython. It’s still not perfect, but you’ll do a lot less work. Listing 15-3 shows a short example that accesses the __test__() function found in Listing 15-1.

Listing 15-3: Accessing IronPython using the dynamic keyword

[code]
static void Main(string[] args)
{
    // Obtain the runtime.
    var IPY = Python.CreateRuntime();

    // Create a dynamic object containing the script.
    dynamic TestPy = IPY.UseFile("TestClass.py");

    // Execute the __test__() method.
    TestPy.__test__();
}
[/code]

After obtaining the runtime, the next step is to load the script. The dynamic type, TestPy, contains all the features of the TestClass.py script after you load it using IPY.UseFile(). Figure 15-4 shows how TestPy appears after the script loads. Notice that the Locals window correctly identifies all the IronPython types in the file. (Visual Basic.NET developers will have to wait for an update.)

In this case, the example calls the __test__() function. This function outputs the same information shown in Figure 15-1.

Figure 15-4: Loading the script provides access to all of the features it contains.

Working with the App.CONFIG File

In some cases, you might want to configure your application using an App.CONFIG file. Using the App.CONFIG file tends to ensure that your application works better between development machines. In addition, using the App.CONFIG file can make it easier to work with DLR using older versions of Visual Studio. Most important of all, using the App.CONFIG file ensures that anyone working with the application uses the correct version of the DLLs so that any DLL differences aren’t a problem.

Your project won’t contain an App.CONFIG file at the outset. To add this file, right-click the project entry in Solution Explorer and choose Add ➪ New Item from the context menu. You see the Add New Item dialog box shown in Figure 15-5. Highlight the Application Configuration File entry as shown and click Add. Visual Studio automatically opens the file for you.

The App.CONFIG file contains entries that describe the Microsoft scripting configuration. In most cases, you begin by defining a <section> element, which describes a <microsoft.scripting> element. The <microsoft.scripting> element contains a list of languages you want to use in a <languages> element, as shown in Listing 15-4.

Listing 15-4: Defining the App.CONFIG file content

[code]
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="microsoft.scripting"
             type="Microsoft.Scripting.Hosting.Configuration.Section,
                   Microsoft.Scripting, Version=1.0.0.0, Culture=neutral,
                   PublicKeyToken=31bf3856ad364e35"
             requirePermission="false" />
  </configSections>
  <microsoft.scripting>
    <languages>
      <language names="IronPython,Python,py"
                extensions=".py"
                displayName="IronPython 2.0 Beta"
                type="IronPython.Runtime.PythonContext, IronPython,
                      Version=2.6.10920.0, Culture=neutral,
                      PublicKeyToken=31bf3856ad364e35" />
    </languages>
  </microsoft.scripting>
</configuration>
[/code]

Figure 15-5: Use an App.CONFIG file to hold DLR configuration information.

The <section> element includes attributes for name, type, and requirePermission. The type attribute should appear on one line, even though it appears on multiple lines in the book. This attribute describes the Microsoft.Scripting.DLL assembly. Especially important are the Version and PublicKeyToken entries.

The <microsoft.scripting> element contains a <languages> element at a minimum. Within the <languages> element you find individual <language> elements that are descriptions of the languages you want to use in your application.

For this example, you create a <language> element for IronPython that starts with a names attribute. It’s important to define all the names you plan to use to access the language; the example defines three of them. The extensions attribute describes the file extensions associated with the language, which is .py in this case. The displayName attribute simply tells how to display the language. Finally, the type attribute contains a description of the IronPython.DLL file, just as the <section> element’s type attribute describes Microsoft.Scripting.DLL. Again, you need to exercise special care with the Version and PublicKeyToken entries.

Now that you have the App.CONFIG file created, it’s time to look at the application code. Listing 15-5 contains the source for this example.

Listing 15-5: Using the App.CONFIG file in an application

[code]
static void Main(string[] args)
{
    // Read the configuration information from App.CONFIG.
    ScriptRuntimeSetup srs = ScriptRuntimeSetup.ReadConfiguration();

    // Create a ScriptRuntime object from the configuration information.
    ScriptRuntime runtime = new ScriptRuntime(srs);

    // Create an engine to access IronPython.
    ScriptEngine Eng = runtime.GetEngine("Python");

    // Describe where to load the script.
    ScriptSource Source = Eng.CreateScriptSourceFromFile("TestClass.py");

    // Obtain the default scope for executing the script.
    ScriptScope Scope = Eng.CreateScope();

    // Create an object for performing tasks with the script.
    ObjectOperations Ops = Eng.CreateOperations();

    // Create the class object.
    Source.Execute(Scope);

    // Obtain the class object.
    Object CalcClass = Scope.GetVariable("DoCalculations");

    // Create an instance of the class.
    Object CalcObj = Ops.Invoke(CalcClass);

    // Get the method you want to use from the class instance.
    Object AddMe = Ops.GetMember(CalcObj, "DoAdd");

    // Perform the add.
    Int32 Result = (Int32)Ops.Invoke(AddMe, 5, 10);

    // Display the result.
    Console.WriteLine("5 + 10 = {0}", Result);

    // Pause after running the test.
    Console.WriteLine("\r\nPress any key when ready...");
    Console.ReadKey();
}
[/code]

The biggest difference between this example and the one shown in Listing 15-2 is that you don’t create the script engine immediately. Rather, the code begins by reading the configuration from the App.CONFIG file using ScriptRuntimeSetup.ReadConfiguration(). This information appears in srs and is used to create a ScriptRuntime object, runtime.

At this point, the code finally creates the ScriptEngine, Eng, as in the previous example. However, instead of using Python.CreateEngine(), this example relies on the runtime.GetEngine() method. For this example, the result is the same, except that you’ve had better control over how the ScriptEngine is created, which is the entire point of the example — exercising control over the IronPython environment. The rest of the example works the same as the example shown in Listing 15-2. The output is the same, as shown in Figure 15-3.

Accessing the Module from Visual Basic.NET

You might get the idea from the lack of Visual Basic.NET examples online that Microsoft has somehow forgotten Visual Basic.NET when it comes to DLR. Surprise! Just because the examples are nowhere to be seen (send me an e‑mail at [email protected] if you find a stash of Visual Basic.NET examples somewhere) doesn’t mean that you can’t work with IronPython from Visual Basic. In fact, the requirements for working with Visual Basic.NET are much the same as those for working with C#, as shown in the following sections.

Adding the Required Visual Basic.NET References

Visual Basic requires the same DLL references as C# does to work with IronPython. Figure 15-2 shows the assemblies you should add to your application to make it work properly. In this case, you right-click the project entry and choose Add Reference from the context menu to display an Add Reference dialog box similar to the one shown in Figure 15-2. Select the Browse tab and add the IronPython assemblies shown in Figure 15-2 by Ctrl-clicking on each of the assembly entries. Click OK. Visual Basic will add the references, but you won’t see them in Solution Explorer unless you click Show All Files at the top of the Solution Explorer window.

As with C#, you need to add some Imports statements to your code to access the various IronPython assemblies with ease. Most applications will require the following Imports statements at a minimum.

[code]
Imports System
Imports IronPython.Hosting
Imports IronPython.Runtime
Imports Microsoft.Scripting.Hosting
[/code]

Creating the Visual Basic.NET Code

As with all the other examples, you shouldn’t let the IronPython example dictate what you do in your own applications. You can obtain full access to any IronPython script from Visual Basic.NET and fully use every feature it provides.

Accessing IronPython scripts from Visual Basic.NET is much the same as accessing them from C# using the ScriptEngine object. Listing 15-6 shows the code you need to access the IronPython script used for all the examples.

Listing 15-6: Accessing IronPython from Visual Basic.NET

[code]
Sub Main()
    ' Create an engine to access IronPython.
    Dim Eng As ScriptEngine = Python.CreateEngine()

    ' Describe where to load the script.
    Dim Source As ScriptSource = Eng.CreateScriptSourceFromFile("TestClass.py")

    ' Obtain the default scope for executing the script.
    Dim Scope As ScriptScope = Eng.CreateScope()

    ' Create an object for performing tasks with the script.
    Dim Ops As ObjectOperations = Eng.CreateOperations()

    ' Create the class object.
    Source.Execute(Scope)

    ' Obtain the class object.
    Dim CalcClass As Object = Scope.GetVariable("DoCalculations")

    ' Create an instance of the class.
    Dim CalcObj As Object = Ops.Invoke(CalcClass)

    ' Get the method you want to use from the class instance.
    Dim AddMe As Object = Ops.GetMember(CalcObj, "DoAdd")

    ' Perform the add.
    Dim Result As Int32 = Ops.Invoke(AddMe, 5, 10)

    ' Display the result.
    Console.WriteLine("5 + 10 = {0}", Result)

    ' Pause after running the test.
    Console.WriteLine(vbCrLf + "Press any key when ready...")
    Console.ReadKey()
End Sub
[/code]

As you can see from the listing, Visual Basic.NET code uses precisely the same process as C# does to access IronPython scripts. In fact, you should compare this listing to the content of Listing 15-2. The two examples are similar so that you can compare them. The output is also precisely the same. You’ll see the output shown in Figure 15-3 when you execute this example.

Developing Test Procedures for External Modules

Many developers are beginning to realize the benefits of extensive application testing. There are entire product categories devoted to the testing process now because testing is so important. Most, if not all, developer tools now include some idea of application testing with them. In short, you should have all the testing tools you need to test the static portion of your IronPython application.

Unfortunately, the testing tools might not work particularly well with the dynamic portion of the application. Creating a test that goes from the static portion of the application to the dynamic portion is hard. Consequently, you need to include a test harness with your dynamic code and perform thorough testing of the dynamic code before you use it with the static application. (When you think about a test harness, think about a horse: the application has a harness added externally for testing purposes. You add the harness for testing and remove it for production work without modifying the application.) Listing 15-1 shows an example of how you might perform this task.

The test harness you create has to test everything, which is a daunting task to say the least. In addition, you need to expend extra effort to make the test harness error free — nothing would be worse than to chase an error through your code, only to find out that the error is in the test harness. At a minimum, your test harness should perform the following checks on your dynamic code:

  • Outputs with good inputs
  • Outputs with erroneous inputs
  • Exception handling within methods
  • Property value handling
  • Exceptions that occur on public members that would normally be private

Of course, you want to check every method and property of every class within the dynamic code. To ensure you actually test everything, create a checklist and use it to verify your test harness. Because IronPython isn’t compiled, you’ll find that you must manually perform some checks to ensure the code works precisely as planned, but use as much automation as possible.
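To make the checklist concrete, the following sketch shows a minimal harness for the DoCalculations class that exercises good inputs and erroneous inputs. The run_harness function and its check names are illustrative, not part of the chapter's code:

```python
# A minimal harness sketch for DoCalculations; run_harness and its check
# names are illustrative, not part of the chapter's listings.
class DoCalculations(object):
    def DoAdd(self, First, Second):
        return First + Second

def run_harness():
    results = {}
    calc = DoCalculations()

    # Outputs with good inputs.
    results['good input'] = (calc.DoAdd(5, 10) == 15)

    # Outputs with erroneous inputs: mixing incompatible types must fail
    # cleanly with an exception rather than return garbage.
    try:
        calc.DoAdd(5, None)
        results['bad input raises'] = False
    except TypeError:
        results['bad input raises'] = True

    return results

for check, passed in sorted(run_harness().items()):
    print(check, 'PASS' if passed else 'FAIL')
```

Because the harness lives in the same script as the class, you can run it with IPY.EXE before the static application ever touches the module, and simply avoid calling run_harness() in production.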

Debugging the External Module

Debugging isn’t hard, but it also isn’t as straightforward as you might think when working with IronPython. The debugger won’t take you directly to an error. You can’t test variables using the debugger from within the static language. In short, you have to poke and prod the external script to discover what ails it. Fortunately, you do have three tools at your disposal for discovering errors.

  • Exceptions
  • print Statements
  • An ErrorListener object

Let’s begin with the easiest of the three tools. The static language application won’t ignore outright errors in the script code. For example, you might have the following error in the script:

[code]
# Introduce an error.
print 1/0
[/code]

If your code has this error (and it really shouldn’t), you’ll see an exception dialog box like the one shown in Figure 15-6. Unfortunately, when you click View Detail, the content of the View Detail dialog box is nearly useless. The exception information won’t tell you where to find the error in your script. In fact, it may very well lead you on a wild goose chase that ends in frustration.

Figure 15-6: The static language application displays exceptions for your script.

The name of the exception will provide clues as to where the error might exist, but you can’t confirm your suspicions without help. The second tool, besides vigorous script testing, is to include print statements such as these in your code.

[code]
# Display the values of First and Second.
print 'Values in IronPython Script'
print 'First = ', First
print 'Second = ', Second
[/code]

When you run the script, you see the output shown in Figure 15-7. Most developers view print statements as a bit old school, but they do work if you use them correctly. Make sure you provide enough information to know where the script is failing to perform as expected. Even so, using print statements may feel a bit like wandering around in the dark, so you should place an emphasis on testing the script before you use it and after each change you make.

Figure 15-7: Using print statements may seem old school, but they work.
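One way to keep such diagnostics manageable is to gate them behind a flag, so the print statements can stay in the script and simply fall silent for production runs. This is a sketch; the DEBUG flag and debug_print helper are illustrative names, not part of the chapter's code:

```python
# Sketch: gate diagnostic output behind a module-level flag (DEBUG and
# debug_print are illustrative names, not from the chapter's listings).
DEBUG = True

def debug_print(label, value):
    # Emit a labeled value only while debugging is switched on.
    if DEBUG:
        print('%s = %s' % (label, value))

def DoAdd(First, Second):
    debug_print('First', First)
    debug_print('Second', Second)
    return First + Second

print(DoAdd(5, 10))
```

Setting DEBUG to False silences every diagnostic at once, which beats hunting down individual print statements before you hand the script to the static application.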

In some cases, you might make a small change to a script and it stops running completely — you might not see a script exception, just an indicator that something’s wrong because the application raises an unrelated exception. Syntax errors and other problems where the interpreter simply fails can cause the developer a lot of woe. For example, your application might have the following syntax error:

[code]
# Create a syntax error.
while True print 'This is an error!'
[/code]

This code obviously won’t run. Because of the nature of the error, you might even pass it by while looking through your code. The answer to this problem is to create an ErrorListener class like the one shown in Listing 15-7.

Listing 15-7: Create an ErrorListener to hear script semantic errors

[code]
class MyListener : ErrorListener
{
    public override void ErrorReported(ScriptSource source,
                                       string message,
                                       SourceSpan span,
                                       int errorCode,
                                       Severity severity)
    {
        Console.WriteLine("Script Error {0}: {1}", errorCode, message);
        Console.WriteLine("Source: {0}", source.GetCodeLine(span.Start.Line));
        Console.WriteLine("Severity: {0}", severity.ToString());
    }
}
[/code]

The ErrorListener contains just one method, ErrorReported(). This method can contain anything you need to diagnose errors. The example provides an adequate amount of information for most needs. However, you might decide to provide additional information based on the kind of script you’re using.

In order to use this approach, you must compile the script before you execute it. The compilation process must include the ErrorListener, as shown here.

[code]
// Compile the script.
Source.Compile(new MyListener());
[/code]

When you run the application now, you get some useful information about the syntax error, as shown in Figure 15-8.

Figure 15-8: The ErrorListener provides useful output on syntax errors.

 

Interacting with the DLR

The Dynamic Language Runtime (DLR) is a new feature of the .NET platform. Its intended purpose is to support dynamic languages, such as Python (through IronPython) and Ruby (through IronRuby). Without DLR, the .NET Framework can’t really run dynamic languages. In addition, DLR provides interoperability between dynamic languages, the .NET Framework, and static languages such as C# and Visual Basic.NET. Without DLR, dynamic and static languages can’t communicate. In order to meet these goals, DLR must provide basic functionality that marshals data and code calls between the dynamic and static environments. This functionality comes in a number of forms that are discussed in this chapter. You might be surprised to find that you’ve already used many of these features throughout the book. Here’s the list of features that DLR supports in order to accomplish its goals.

  • Hosting Application Programming Interfaces (APIs): In order to run dynamic language scripts, the host language must have access to the scripting engine. The Hosting APIs provide the support needed to host the dynamic language within the host environment through the scripting engine. This marshaling of code and data makes it possible to seamlessly integrate static and dynamic languages.
  • Extensions to Language Integrated Query (LINQ) ExpressionTree: Normally, a language would need to convert data, objects, and code into Microsoft Intermediate Language (MSIL) before it could translate anything into another language. Because all .NET languages eventually end up as MSIL, MSIL is the common language for all higher-level .NET languages. These extensions make it possible for language compilers to create higher-level constructs for communication purposes, rather than always relying on MSIL. The result is that the marshaling process takes less time and the application runs faster.
  • DynamicSite: This feature provides a call-site cache that dynamic languages use in place of making constant calls to other .NET languages. Because the call-site cache is already in a form that the dynamic language can use, the overall speed of the dynamic language application improves.
  • IDynamicObject: An interface used to interact with dynamic objects directly. If you create a class that implements IDynamicObject, DLR lets the class perform the required conversions, rather than rely on the built-in functionality. Essentially, you create an object that can have methods, properties, and events added dynamically during run time. You use IDynamicObject when you want to implement custom behaviors in your class.
  • ActionBinder: The ActionBinder is a utility that helps support .NET interoperability. The ActionBinder is language specific. It ensures that conversions of variable data, return values, and arguments all follow language-specific behaviors so that the host language sees the data in a form it understands.

These are the main tasks that DLR performs. Of course, it also provides other compiler utilities that you need to know about. The final section in this chapter provides an overview of these other features.
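The run-time member addition that IDynamicObject models is easiest to see from Python itself, where attaching members to a live object is ordinary behavior. A small sketch (the Bag class here is purely illustrative):

```python
# Python illustrates the behavior IDynamicObject models for static
# languages: members can be attached to an instance while the program
# runs (the Bag class is purely illustrative).
class Bag(object):
    pass

b = Bag()
b.value = 42                                  # property added at run time
b.describe = lambda: 'value=%d' % b.value     # method added at run time
print(b.describe())
```

A static language such as C# normally resolves members at compile time; IDynamicObject is the hook that lets DLR defer that resolution to run time so behavior like the above survives the language boundary.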

DLR is a constantly changing technology today, so you’ll want to keep up with the additions and changes to DLR. One of the better places to find general DLR resources online is at http://blogs.msdn.com/ironpython/archive/2008/03/16/dlr-resources.aspx. This chapter also provides a number of specific resources you can use to discover more about DLR. The point is to keep track of what’s going on with this exciting technology and review your code as things change.

Obtaining DLR

It’s important to remember that IronPython relies on DLR to perform just about every task that IronPython executes. Therefore, you already have access to a certain level of DLR, even if you don’t install anything or do anything special. In fact, you’re using DLR in the background every time you use IronPython. However, you’re using DLR without really knowing it exists and without understanding what DLR itself can do for your application. So while you can use the direct approach to DLR, it can prove frustrating and less than friendly.

In order to truly understand DLR, you at least need documentation. Better yet, you can download the entire DLR package and begin to understand the true impact of this product. If nothing else, spend some time viewing the available components at http://www.codeplex.com/dlr. The following sections describe various methods of gaining access to DLR so you can use it to perform some custom tasks.

Using the Direct Method

The direct method is the easiest way to obtain the benefits of DLR, but it’s also the most limited. You simply add a reference to the IronPython.DLL file located in the Program Files\IronPython 2.6 folder of your hard drive. This technique works fine for embedding IronPython scripts in your C# or Visual Basic.NET application. In fact, you gain access to the following namespaces:

  • IronPython
  • IronPython.Compiler
  • IronPython.Compiler.Ast
  • IronPython.Hosting
  • IronPython.Modules
  • IronPython.Runtime
  • IronPython.Runtime.Binding
  • IronPython.Runtime.Exceptions
  • IronPython.Runtime.Operations
  • IronPython.Runtime.Types

For many developers, this is all the DLR support you need, especially if your application only requires cross-language support through the Hosting APIs. (You’ll still want to download the documentation that’s available on the main DLR Web site — the section “Downloading the Documentation” later in this chapter explains how to perform this task.) The following steps describe how to add the required reference to gain access to these classes.

  1. Create the .NET project.
  2. Right-click References in Solution Explorer and choose Add Reference from the context menu. You see the Add Reference dialog box.
  3. Select the Browse tab and locate the IronPython.DLL file, as shown in Figure 14-1.
  4. Click OK. Visual Studio adds the required reference to your project.
Figure 14-1: Add the IronPython.DLL file to your project.

You make use of IronPython.DLL as you would any other .NET assembly. Simply add the required using or Imports statement to your code. The examples throughout the book tell you about these requirements for the individual example.

Downloading the Full DLR

If you really want to experience DLR, you need the complete package. The full DLR consists of a number of components and even the source code weighs in at a hefty 10.5 MB.

Before you begin the download, check out the release notes at http://dlr.codeplex.com/wikipage?title=0.92_release_notes for additional information about DLR. For example, you might decide to get an IronPython- or IronRuby-specific download. The full release includes both language products (which can be helpful, even if you use only one of them).

You obtain the full DLR from http://dlr.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=34834. When you click the DLR-0.92-Src.zip link, you see a licensing dialog box. Click I Agree to begin the download process.

After the download completes, extract the resulting DLR-0.92-Src.ZIP file into its own folder. The resulting Codeplex-DLR-0.92 folder contains the following items.

  • License.HTML and License.RTF: You can read the same licensing information in two different formats. Use whichever form works best for you.
  • Docs: A folder containing the complete documentation for DLR. The best place to begin is the DLR-Overview.DOC file.
  • Samples: A folder containing a number of sample applications that demonstrate DLR features. There’s only one IronPython sample in the whole batch — you’ll find it in the Codeplex-DLR-0.92\Samples\Silverlight\App\Python\python folder.
  • Source: A folder that contains the complete DLR source code that you need to compile in order to use DLR to create applications. This folder should be your first stop after you read the DLR-Overview.DOC file.

Building the Full DLR

Before you can use DLR, you must build it. The previous section explains how to download a copy of the DLR source. The following sections describe three methods you can use to build DLR. For most developers, the easiest and fastest method is the command line build. However, if you want to review the code before you use it, you might want to load the solution in Visual Studio and take a peek.

Performing a Command Line Build

The command line build option requires that you use the Visual Studio command line, not a standard command line (which doesn’t contain a path to the utilities you need). The following steps describe how to perform the command line build:

  1. Choose Start ➪ Programs ➪ Microsoft Visual Studio 2008 ➪ Visual Studio Tools ➪ Visual Studio 2008 Command Prompt or Start ➪ Programs ➪ Microsoft Visual Studio 2010 ➪ Visual Studio Tools ➪ Visual Studio Command Prompt (2010). You’ll see a command prompt.
  2. Type CD Codeplex-DLR-0.92\Src and press Enter. This command places you in the DLR source code directory.
  3. Type MSBuild Codeplex-DLR.SLN (when using Visual Studio 2008) or MSBuild Codeplex-DLR-Dev10.SLN (when using Visual Studio 2010) and press Enter. By default, you get a debug build. Use the /p:Configuration=Release command line switch (as in MSBuild Codeplex-DLR.SLN /p:Configuration=Release or MSBuild Codeplex-DLR-Dev10.SLN /p:Configuration=Release) to obtain a release build. You see a lot of text appear onscreen as MSBuild creates the DLR DLLs for you. Some of the text will appear unreadable (Microsoft uses some odd color combinations). When the build process is complete, you should see 0 Error(s) as the output, along with a build time, as shown in Figure 14-2. (If you don’t see 0 Error(s) in the output, download the files again; the copy you have is probably damaged.)
Figure 14-2: The build process should show 0 Error(s) as the output message.

Don’t look for the output in the source code folders. The output from the build process appears in the Codeplex-DLR-0.92\Bin\40 folder when working with Visual Studio 2010, no matter which technique you use to build DLR. Visual Studio 2008 developers will find their output in the Codeplex-DLR-0.92\Bin\Debug or Codeplex-DLR-0.92\Bin\Release folder, depending on the kind of build created. Visual Studio 2008 developers will also find a separate Codeplex-DLR-0.92\Bin\Silverlight Debug or Codeplex-DLR-0.92\Bin\Silverlight Release folder for Silverlight use.

Performing a Visual Studio 2008 Build

Some developers will want to perform a build from within Visual Studio 2008. To perform this task, simply double-click the Codeplex-DLR.SLN icon in the Codeplex-DLR-0.92\Src folder. Choose Build ➪ Build Solution or press Ctrl+Shift+B. You’ll see a series of messages in the Output window. When the process is complete, you should see, “Build: 23 succeeded or up-to-date, 0 failed, 1 skipped” as the output.

You must select each of the options in the Solution Configurations combo box in turn and perform a build to create a complete setup. Otherwise, you’ll end up with just the Release build or just the Debug build. If you need Silverlight or FxCop support, you must also create these builds individually.

Don’t worry if you see a number of messages stating

[code]
Project file contains ToolsVersion="4.0", which is not supported by this
version of MSBuild. Treating the project as if it had ToolsVersion="3.5".
[/code]

because this is normal when using Visual Studio 2008. You’ll also see a number of warning messages (a total of 59 for the current DLR build) in the Errors window, which you can ignore when using the current release.

Performing a Visual Studio 2010 Build

The DLR solution builds more cleanly if you have a copy of Visual Studio 2010 on your system. To perform this task, simply double-click the Codeplex-DLR-Dev10.SLN icon in the Codeplex-DLR-0.92\Src folder. Set the Solution Configurations option to Release or Debug as needed (there aren’t any options to build Silverlight or FxCop output). Choose Build ➪ Build Solution or press Ctrl+Shift+B. You’ll see a series of messages in the Output window. When the process is complete, you should see, “Build: 15 succeeded or up-to-date, 0 failed, 2 skipped” as the output. The Warnings tab of the Error List window should show 24 warnings.

Downloading the Documentation

The download you performed earlier provides code and documentation, but you might find that the documentation is outdated. As with everything else about DLR, the documentation is in a constant state of flux. If you want to use DLR directly, then you need the documentation found at http://dlr.codeplex.com/wikipage?title=Docs and specs&referringTitle=Home. Unfortunately, you have to download each document separately.

Reporting Bugs and Other Issues

At some point, you’ll run into something that doesn’t work as expected. Of course, this problem even occurs with production code, but you’ll definitely run into problems when using the current release of DLR. In this case, check out the listing of issues at http://www.codeplex.com/dlr/WorkItem/List.aspx. If you don’t find an issue entry that matches the problem you’re experiencing, make sure you report the bug online so it gets fixed. Of course, reporting applies equally to code and documentation. Documentation errors are often harder to find and fix than coding errors — at least where developers are concerned — because it’s easier to see the coding error in many cases.

Working with Hosting APIs

You may have wondered whether it’s possible to use IronPython as a scripting language for your next application. Fortunately, you can, by relying on the Hosting APIs. It turns out that a lot of people have considered IronPython an optimal language for the task. The following sections consider a number of Hosting API questions, such as how you can use the APIs in an actual application, what the host application needs in order to use them, and what you’d need to do to embed IronPython as a scripting language in an application.

Using the Hosting APIs

The DLR specification lists a number of hosting scenarios, such as operating on dynamic objects you create within C# or Visual Basic.NET applications. (See the section “Working with IDynamicObject” later in this chapter for details on dynamic objects in C# and Visual Basic.NET.) It’s also possible to use the Hosting APIs to create a scripting environment within Silverlight or other types of Web applications.

Whatever sort of host environment you create, you can use it to execute code snippets or entire applications found in files. The script run time can appear locally or within a remote application so you can use this functionality to create agent applications or scripting that relies on server support. The Hosting APIs make it possible to choose a specific scripting engine to execute the code or to let DLR choose the most appropriate scripting engine for the task. This second option might seem foolhardy, but it can actually let your code use the most recent scripting engine, even if that engine wasn’t available at the time you wrote the host environment code.

Chaos could result if you couldn’t control the extent (range) of the script execution in some way. For example, two developers could create variables with the same name in different areas of the application. The Hosting APIs make it possible to add scope to script execution. The scope acts much like a namespace does when writing code. Just as a namespace eliminates naming conflicts in assemblies, scoping eliminates them in the scripting environment. Executing the code within a scope also provides it with an execution context (controlled using a ScriptScope). Scopes are either public or private, with private scopes providing a measure of protection for the scripting environment. A script can also import scopes for use within the environment or require the host to support a certain scope to execute the script.
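You can see the flavor of this isolation in plain Python, where separate dictionaries serve as namespaces for exec(). This is a CPython analogy only, not the Hosting APIs themselves, but it shows how two scripts can use the same variable name without colliding, much as separate ScriptScopes allow:

```python
# Two scripts both define a variable named "total".
script_a = "total = 2 + 3"
script_b = "total = 10 * 4"

# Giving each script its own dictionary keeps the names from
# colliding, much as separate ScriptScopes do.
scope_a = {}
scope_b = {}
exec(script_a, scope_a)
exec(script_b, scope_b)

print(scope_a["total"])   # 5
print(scope_b["total"])   # 40
```

Each script sees only its own namespace, so neither one can accidentally overwrite the other’s variables.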

The Hosting APIs also provide support for other functionality. For example, you can employ reflection to obtain information about object members, obtain parameter information, and view documentation. You can also control how the scripting engine resolves file content when dynamic languages import code files.

Understanding the Hosting APIs Usage Levels

The DLR documentation specifies that most developers will use the Hosting APIs at one of three levels that are dictated by application requirements. Here are the three basic levels.

  • Basic code: The basic code level (Level 1 in the documentation) relies on a few basic types to execute code within scopes. The code can interact with variable bindings within those scopes.
  • Advanced code execution: The next level (Level 2 in the documentation) adds intermediate types that provide additional control over how code executes. In addition, this level supports using compiled code in various scopes and permits use of various code sources.
  • Support overrides: The final level (Level 3 in the documentation) provides methods to override how DLR resolves filenames. The application can also use custom source content readers, reflect over objects for design-time tool support, provide late bound variable values from the host, and use remote ScriptRuntime objects.

The concept of a ScriptRuntime object is central to working with the Hosting APIs. A host always begins a session by creating the ScriptRuntime object and then using that object to perform tasks. You can create a ScriptRuntime object using several methods. Of course, the easiest method is to use the standard constructor, which requires a ScriptRuntimeSetup object as input. It’s also possible to create a ScriptRuntime object using these methods:

  • ScriptRuntime.CreateFromConfiguration(): A factory method that lets you use a preconfigured scope to create the ScriptRuntime object. In fact, this factory method is just shorthand for new ScriptRuntime(ScriptRuntimeSetup.ReadConfiguration()).
  • ScriptRuntime.CreateRemote(): A factory method that helps you to create a ScriptRuntime object in another domain. The code must meet strict requirements to perform remote execution. See Section 4.1.3, “Create* Methods,” in the Hosting APIs specification for details.

As its name implies, a ScriptRuntimeSetup object gives a host full control over the ScriptRuntime object configuration. The ScriptRuntimeSetup object contains settings for debug mode, private execution, the host type, host arguments, and other setup features. Simply creating a ScriptRuntimeSetup object sets the defaults for executing a script. Once you use a ScriptRuntimeSetup object to create a ScriptRuntime object, you can’t change the settings — doing so will raise an exception.

The Hosting APIs actually support a number of objects that you use to create a scripting environment, load the code you want to execute, and control the execution process. The figure at http://www.flickr.com/photos/john_lam/2220796647/ provides an overview of these objects and how you normally use them within the hosting session.

It’s important to isolate code during execution. The Hosting APIs provide three levels of isolation.

  • AppDomain: The highest isolation level, which affects the entire application. The AppDomain lets you execute code at different trust levels, and load and unload code as needed.
  • ScriptRuntime: Every AppDomain can have multiple ScriptRuntimes within it. Each ScriptRuntime object can have different name bindings, use different .NET assemblies, have different settings (one can be in debug mode, while another might not), and provide other settings and options support.
  • ScriptScope: Every ScriptRuntime can contain multiple ScriptScopes. A ScriptScope can provide variable binding isolation. In addition, you can use a ScriptScope to give executable code specific permissions.

Now that you have a better idea of how the pieces fit together, it’s important to consider which pieces you use to embed scripting support within an application. Generally, if you want basic code (Level 1) support, all you need are the objects shown in green at http://www.flickr.com/photos/john_lam/2220796647/. In fact, if you want to use the default ScriptScope settings, all you really need to do is create the ScriptRuntime and then use the default ScriptScope.

Considering the Host Application

A host has to meet specific requirements before it can run IronPython as a scripting language. Chapter 15 discusses more of the details for C# and Visual Basic.NET developers. You’ll find that C# and Visual Basic.NET provide everything you need. However, it’s interesting to see just what the requirements are, especially if you’re using an older version of these languages. Section 3 of the DLR-Spec-Hosting.DOC file found in the Codeplex-DLR-0.92\Docs folder contains complete information about the hosting requirements. Section 3.3 (and its subsections) is especially important for most developers to read if they plan to use the Hosting APIs for anything special.

Embedding IronPython as a Scripting Language

Imagine that you’ve created a custom editor in your application where users can write IronPython scripts. They then save the script to disk (or you could read it from memory), and then they assign the script to a button or menu in your application. When the user selects the button or menu, your application executes the script. Creating this scenario isn’t as hard as you might imagine. DLR comes with most of the functionality you need built in.

Of course, you need a test script to start. Listing 14-1 shows the test script for this example. The example is purposely simple so that the example doesn’t become more focused on the IronPython code than the code that executes it. However, you could easily use any script you want as long as it’s a legitimate IronPython script.

Listing 14-1: A simple IronPython script to execute

[code]
# A simple function call.
def mult(a, b):
    return a * b

# Create a variable to hold the output.
Output = mult(5, 10)

# Display the output.
print('5 * 10 ='),
print(Output)

# Pause after the debug session.
raw_input('\nPress any key to continue...')
[/code]

In this case, the example has a simple function, mult(), that multiplies two numbers together. The module-level code at the bottom of the script multiplies two numbers and displays the result using the print() function. In short, the script isn’t very complicated.

Now that you have a script, you need to create an application to execute it. The example is a simple console application. In order to create the IronPython ScriptRuntime object, you need access to some of the IronPython assemblies. Right-click References in Solution Explorer and choose Add Reference from the context menu. You see the Add Reference dialog box shown in Figure 14-3. Ctrl+click each of the entries shown in Figure 14-3, then click OK to add them to your project.

Figure 14-3: Add the required references from your IronPython setup.

The example also requires that you add using statements for a number of the assemblies. Here are the using statements that you must add for this example.

[code]
using System;
using IronPython.Hosting;
using IronPython.Runtime;
using Microsoft.Scripting.Hosting;
[/code]

Now that the console project is set up, you can begin coding it. This example is very simple, but it actually works. You can execute an IronPython script using this technique. Of course, you can’t interact with it much. Chapter 15 provides more detailed examples, but this example is a good starting place. Listing 14-2 shows the minimum code you need to execute an IronPython script and display the result of executing it onscreen.

Listing 14-2: Executing the IronPython script

[code]
static void Main(string[] args)
{
    // Create an IronPython ScriptRuntime.
    ScriptRuntime Runtime = IronPython.Hosting.Python.CreateRuntime();

    // Execute the script file and return scope information about
    // the task.
    ScriptScope Scope = Runtime.ExecuteFile("Test.py");

    // Display the name of the file executed.
    Console.WriteLine("\r\nExecuted {0}",
        Scope.GetVariable<string>("__name__"));

    // Keep the output visible.
    Console.WriteLine("\r\nPress any key...");
    Console.ReadLine();
}
[/code]

The code begins by creating the ScriptRuntime object, Runtime. Notice that you create this object by directly accessing the IronPython assemblies, rather than the DLR assemblies. There are many ways to accomplish this task, but using the technique shown is the simplest. The Runtime object contains default settings for everything. For example, this ScriptRuntime doesn’t provide debugging capability. Consequently, this technique is only useful when you have a debugged script to work with and may not do everything needed in a production environment where you let users execute their own scripts as part of an application.

The Runtime.ExecuteFile() method is just one of several ways to execute a script. You use it when a script appears in a file on disk, as is the case for this example. When you call the Runtime .ExecuteFile() method, your application actually calls on the IronPython interpreter to execute the code. The output from the script appears in Figure 14-4. As you can see, the code executes as you expect without any interference from the host. In fact, you can’t even tell that the application has a host.

Figure 14-4: The script output appears as you might expect.

When the Runtime.ExecuteFile() method call returns, the C# application that executed the script receives a ScriptScope object that it can use to interact with the application in various ways. This ScriptScope object, like the ScriptRuntime object, contains all the usual defaults. It’s a good idea to examine both Runtime and Scope in the debugger to see what these objects contain because you’ll find useful information in both.

The script is running in a host application. In fact, they share the same console window. To show how this works, the example writes output to the console window. It retrieves the __name__ property from Scope and displays it onscreen with the message, as shown in Figure 14-5. The point of this example is that the IronPython script truly is hosted and not running on its own. The technique shown here lets you perform simple interactions between C# or Visual Basic.NET and IronPython.

Figure 14-5: The output shows that the host and the IronPython environment share the same console.
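If you want to experiment with the same execute-a-file pattern without a C# host, standard CPython can approximate Listing 14-2 using its runpy module. This is an analogy only; runpy is not part of the Hosting APIs, and the tiny script written here stands in for Test.py:

```python
import os
import runpy
import tempfile

# Write a tiny script to disk (the file name Test.py mirrors the listing).
source = "Output = 5 * 10\n"
path = os.path.join(tempfile.mkdtemp(), "Test.py")
with open(path, "w") as handle:
    handle.write(source)

# run_path plays the role of Runtime.ExecuteFile(): it executes the
# file and hands back the resulting namespace, much like a ScriptScope.
scope = runpy.run_path(path)
print(scope["Output"])    # 50
```

As with the ScriptScope in Listing 14-2, the returned namespace lets the host read variables the script created.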

Understanding the Extensions to LINQ Expression Tree

Part of the premise behind DLR is that every .NET language eventually ends up in Microsoft Intermediate Language (MSIL) form. Whether you use C# or Visual Basic.NET, or even managed C++, the output from the compiler is MSIL. That’s how the various languages can get along. They rely on MSIL as an intermediary so that managed languages can work together.

The problem with compiling everything to MSIL is that MSIL doesn’t necessarily perform tasks quickly or easily when working with dynamic languages such as IronPython. It would be far easier if there were a mechanism for translating the code directly into something that C# or Visual Basic .NET could use. That’s where the LINQ Expression Tree (ET) comes into play. A LINQ ET can represent IronPython or other code (such as JavaScript) in a tree form that DLR can then translate into something that C# or Visual Basic.NET can understand. The result is a DLR tree that presents the code in an easily analyzable and mutable form. The example at http://blogs.msdn.com/hugunin/archive/2007/05/15/dlr-trees-part-1.aspx explains how DLR trees work graphically. In this case, the author explains how a DLR tree can represent a JavaScript application — the same technique also applies to IronPython.

The LINQ ET originally appeared in the .NET Framework 3.5. In its original form, Microsoft used the LINQ ET to model LINQ expressions written in C# and Visual Basic.NET. In the .NET Framework 4.0, Microsoft added extensions for a number of reasons. For the purposes of this book, the most important reason to extend LINQ ETs is to accommodate the DLR semantics used to translate IronPython code into something that C# and Visual Basic.NET can understand.
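To get a feel for code-as-tree representations, you can look at CPython’s standard ast module, which offers a similar (much simpler) facility. This sketch is an analogy, not the LINQ ET API: it parses an expression into a tree, inspects the nodes, and then compiles and evaluates the tree:

```python
import ast

# Parse an expression into a tree of nodes.
tree = ast.parse("a * b", mode="eval")

# The tree is analyzable: the root is a BinOp node with a Mult operator.
print(type(tree.body).__name__)      # BinOp
print(type(tree.body.op).__name__)   # Mult

# The tree is also executable once compiled.
code = compile(tree, "<tree>", "eval")
print(eval(code, {"a": 5, "b": 10})) # 50
```

The DLR tree idea is the same in spirit: code becomes a data structure that tools can analyze, transform, and finally execute.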

DLR trees work in the background. It’s helpful to know they exist, but you generally won’t worry about them when working with IronPython, so this section is short. However, let’s say you want to create a scripting language for your application that isn’t as complex as IronPython. Perhaps you want to implement an editor and everything that goes with it in your application. In this case, you may very well want to work with DLR trees. The examples found at http://weblogs.asp.net/podwysocki/archive/2008/02/08/adventures-in-compilers-building-on-the-dlr.aspx show what you need to do to create your own language compiler. Once you have a compiler like this built, you could execute the code using a technique similar to the one shown in Listing 14-2.

One word of warning, however, when working with the current version of DLR trees. As you scan through the specification, you’ll find that the authors have left behind copious notes about issues that aren’t resolved now or features that were left out of the current implementation due to a lack of time. The result is conversations such as the one at http://stackoverflow.com/questions/250377/are-linq-expression-trees-turing-complete. If you look at section 2.4.1 of the specification, you find that a higher-level looping mechanism was indeed cut, but Microsoft is aware of the problem and plans to implement the feature in the future. In short, DLR trees have limits that you need to consider before implementing them in your application.

Considering DynamicSite

When working with a static language such as C# or Visual Basic.NET, the compiler knows what to emit in the form of MSIL based on the code the developer provides. However, dynamic code isn’t static — it can change based on any of a number of factors. One problem with dynamic languages is that DLR doesn’t always know what to emit at compile time because the actual types involved aren’t known until run time. Of course, the static language still needs some code in place because static languages need to know what to do at compile time. This seeming conundrum is handled by invoking a DynamicSite object. Using a DynamicSite object means that the static language knows what to call at compile time and DLR can fill the DynamicSite object with executable code during run time.

As with many parts of DLR, the action takes place behind the scenes — you don’t even know it occurs. However, it’s useful to know what happens so you at least know what to suspect when an error occurs. The act of invoking the DynamicSite method creates an operation to perform and a delegate. The delegate contains caching logic that is updated every time the arguments change. In short, as the dynamic language changes, DLR generates events that change the content of the cache as well.

At the center of working with DynamicSite is the UpdateBindingAndInvoke() method. The first time that application code calls the DynamicSite object, the UpdateBindingAndInvoke() method queries the arguments for the specified code. For example, the code might be something simple such as x + y, so the query would request the types of x and y. At this point, UpdateBindingAndInvoke() generates a delegate that contains the implementation of the code.

The next time the application invokes the DynamicSite object, the delegate checks the arguments in the call against those in the cache. If the argument types match, then the delegate simply uses the current implementation of the code. However, if the arguments are different, then the delegate calls UpdateBindingAndInvoke(), which creates a new delegate that contains a definition of the new code with the updated arguments. The new delegate contains checks for both sets of argument types and calls the appropriate implementation based on the arguments it receives. Of course, if none of the argument sets match the call, then the process starts over again with a call to UpdateBindingAndInvoke().

Working with IDynamicObject

This section discusses the IDynamicObject interface provided as part of DLR, which doesn’t affect IronPython directly, but could affect how you use other languages to interact with IronPython. You can easily skip this section and leave it for later reading if you plan to work exclusively with IronPython for the time being. This is a very short discussion of the topic that is meant to fill in the information you have about DLR and its use with IronPython.

As mentioned throughout the book, C# and Visual Basic.NET are both static languages. Microsoft doesn’t appear to have any desire to change this situation in upcoming versions of either language. Consequently, you can’t create dynamic types using C# or Visual Basic. There isn’t any technique for defining missing methods or dynamic classes using either language. However, you can consume dynamic types defined using a new interface, IDynamicObject.

The IDynamicObject interface tells DLR that the class knows how to dispatch operations on itself. In some respects, IDynamicObject is a managed analog of the QueryInterface mechanism that C++ developers use when creating COM objects. The concept isn’t new, but the implementation of it in the .NET environment is new.

There are many levels of complexity that you can build into your dynamic implementation. The example in this section is a very simple shell that you can build on when creating a full-fledged application. It’s designed to show a common implementation that you might use in an application. You can see another simple example at http://blogs.msdn.com/csharpfaq/archive/2009/10/19/dynamic-in-c-4-0-creating-wrappers-with-dynamicobject.aspx.

The starting point for this example is a class that implements DynamicObject. In order to create such a class, you need to include the following using statements:

[code]
using System;
using System.Dynamic;
[/code]

The class is called ADynamicObject and appears in Listing 14-3.

Listing 14-3: Creating a class to handle dynamic objects

[code]
// Any dynamic object you create must derive from DynamicObject.
public class ADynamicObject : DynamicObject
{
    // Calls a method provided with the dynamic object.
    public override bool TryInvokeMember(InvokeMemberBinder binder,
        object[] args, out object result)
    {
        Console.WriteLine("InvokeMember of method {0}.", binder.Name);
        if (args.Length > 0)
        {
            Console.WriteLine("\tMethod call has {0} arguments.", args.Length);
            for (int i = 0; i < args.Length; i++)
                Console.WriteLine("\t\tArgument {0} is {1}.", i, args[i]);
        }
        result = binder.Name;
        return true;
    }

    // Gets the property value.
    public override bool TryGetMember(GetMemberBinder binder,
        out object result)
    {
        Console.WriteLine("GetMember of property {0}.", binder.Name);
        result = binder.Name;
        return true;
    }

    // Sets the property value.
    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        Console.WriteLine("SetMember of property {0} to {1}.",
            binder.Name, value);
        return true;
    }
}
[/code]

In this case, the code provides the ability to call methods, get property values, and set property values. Amazingly, DLR automatically calls the correct method without any hints from you.

Notice that each of the methods uses a different binder class: InvokeMemberBinder, GetMemberBinder, or SetMemberBinder as needed. The binder provides you with information about the member of interest. In most cases, you use the member name to locate the member within the dynamic object. In this case, the code simply displays the member name onscreen so you can see that the code called the correct member.

Two of these methods, TryInvokeMember() and TryGetMember(), return something to the caller. It’s important to remember that the data is marshaled, so you must use the out keyword for the argument that returns a value or the application will complain later (the compiler may very well accept the error without comment). In both cases, the code simply returns the binder.Name value. If you were building this dynamic object class for an application, you’d use the binder.Name value to access the actual property or method.

When invoking a method, the TryInvokeMember() method receives an array of arguments to use with the method call. The code shows how you detect the presence of arguments and then displays them onscreen for this example. In an actual application, you’d need to compare the arguments provided by the caller against those required by the method to ensure the caller has supplied enough arguments of the right type.

All three methods return true. If the code were to return false instead, you’d see a RuntimeBinderException in the caller code. This exception tells the caller that the requested method or property doesn’t exist.
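If you know standard Python, the Try* methods may look familiar: they play roughly the role of Python’s own attribute hooks. Here is a plain CPython sketch of the same idea; it is an analogy to help you see the pattern, not the DLR mechanism itself:

```python
class ADynamicObject(object):
    """Intercept unknown members instead of failing, in the spirit of
    the DLR Try* overrides (an analogy, not the DLR mechanism)."""

    def __getattr__(self, name):
        # Called only when normal lookup fails, like TryGetMember().
        print("GetMember of property %s." % name)
        return name

    def __setattr__(self, name, value):
        # Called for every assignment, like TrySetMember().
        print("SetMember of property %s to %s." % (name, value))
        object.__setattr__(self, name, value)

obj = ADynamicObject()
obj.AProp = 5            # prints a SetMember message
print(obj.Missing)       # prints a GetMember message, then "Missing"
```

Returning the member name from __getattr__() mirrors the way Listing 14-3 returns binder.Name so you can see which member was requested.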

When a C# application desires to create a dynamic object, it simply creates an instance of the dynamic class. The instance can create properties, methods, or other constructs as needed. Listing 14-4 shows an example of how a test application might appear.

Listing 14-4: Using the ADynamicObject class

[code]
class Test
{
    static void Main()
    {
        // Create a new dynamic object.
        dynamic DynObject = new ADynamicObject();

        // Set a property to a specific value.
        Console.WriteLine("Setting a Property to a Value");
        DynObject.AProp = 5;

        // Use one property to set another property.
        // You would see a property get, followed by a property set.
        Console.WriteLine("\r\nSetting a Property to another Property");
        DynObject.Prop1 = DynObject.AProp;

        // Call a method and set its output to a property.
        // You would see a method call, followed by a property set.
        Console.WriteLine("\r\nSetting a Property to a Method Output");
        DynObject.Prop2 = DynObject.AMethod();

        // Call a method with a property argument and set a new property.
        // You would see a property get, a method call, and finally a
        // property set.
        Console.WriteLine("\r\nSetting a Property to Method Output with Args");
        DynObject.Prop3 = DynObject.AMethod(DynObject.AProp);

        // Wait to see the results.
        Console.WriteLine("\r\nPress any key when ready...");
        Console.ReadLine();
    }
}
[/code]

Notice that the code begins by creating a new dynamic object using the dynamic keyword. At this point, you can begin adding properties and methods to the resulting DynObject. Properties can receive values directly, from other properties, or from methods. Methods can use arguments to change their output. Figure 14-6 shows the output from this example. The path that the code takes through the various objects helps you understand how dynamic objects work.

The DynamicObject class actually provides support for a number of members. You can use these members to provide a complete dynamic implementation for your application. Here’s a list of the DynamicObject members you can override.

  • GetDynamicMemberNames()
  • GetMetaObject()
  • TryBinaryOperation()
  • TryConvert()
  • TryDeleteIndex()
  • TryDeleteMember()
  • TryGetIndex()
  • TryGetMember()
  • TryInvoke()
  • TryInvokeMember()
  • TrySetIndex()
  • TrySetMember()
  • TryUnaryOperation()
Figure 14-6: The output shows the process used to work with dynamic objects.

The point of all this is that you can implement a kind of dynamic object strategy for static languages, but it’s cumbersome compared to IronPython. You might use this approach when you need to provide a dynamic strategy for something small within C# or Visual Basic. This technique is also useful for understanding how IronPython works, at a very basic level. IronPython is far more robust than the code shown in this example, but the theory is the same.

Understanding the ActionBinder

DLR makes it possible to invoke dynamic code from within a static environment using a DynamicSite object. The actual process for creating the method invocation call is to create an Abstract Syntax Tree (AST). The AST has functions assigned to it using an Assign() method. When DLR wants to assign a new function to AST, it supplies a function name and provides a calling syntax using the Call() method. The Call() method accepts four arguments.

  • An object used to hold the function. Normally, the code calls the Create() method of the host class using GetMethod(“Create”).
  • A constant containing the name of the function as it appears within the host object.
  • The array of arguments supplied to the function.
  • A delegate instance used to invoke the code later. It’s this argument that you consider when working with an ActionBinder.

At this point, you have an object that holds the parameters of the function call, as well as a delegate used to execute the function. The problem now is one of determining how to call the function. After all, the rest of your code knows nothing about the delegate if you create it during run time, as is the case when working with dynamic languages. If none of the code knows about the delegate, there must be some way to call it other than directly.

To make rules work, your code has to include a GetRule() method that returns a StandardRule object. Inside GetRule() is a switch that selects an action based on the kind of action that DLR requests, such as a call (DynamicActionKind.Call). When DLR makes this request, the code creates a StandardRule object that contains an ActionBinder. The ActionBinder determines what kind of action the call performs. For example, you might decide that the ActionBinder should be LanguageContext.Binder, which defines a language context for the function. The language context is a definition of the language’s properties, such as its name, identifier, version, and specialized features. (You can learn more about how a language context works at http://www.dotnetguru.org/us/dlrus/DLR2.htm.) The code then calls SetCallRule() with the StandardRule object, the ActionBinder, and a list of arguments for the function.

Now, here’s the important consideration for this section. The ActionBinder is actually part of the language design. If you wanted to create a new language, then part of the design process is to design an ActionBinder for it. The ActionBinder performs an immense amount of work. For example, a call to ActionBinder.ConvertExpression() provides conversion information about the data types that the language supports. Of course, IronPython already performs this task for you, but it’s important to know how things work under the hood in case you encounter problems.

Understanding the Other DLR Features

DLR is a moving target at the time of this writing. The latest release, 0.92, isn’t yet considered production code. Consequently, you might find that the version of DLR you use has features not described in this chapter because they weren’t available at the time of this writing.

An ExpandoObject is a dynamic property bag. Essentially, you fill it with data you want to move from one language to another. It works just like any other property bag you’ve used in the past. Because the ExpandoObject class implements IDynamicMetaObjectProvider, you can use it with dynamic languages such as IronPython. You use this object when moving data from C# or Visual Basic.NET to IronPython.

Debugging IronPython Applications

Understanding IronPython Warnings

Warnings are simply indicators that something could be wrong with your application or might not work under all conditions. For example, if you use a deprecated (outdated) function, you might later find that the application refuses to work on all machines. You can use warnings for all kinds of purposes, including providing debugging messages for your application.

The main difference between a warning and an exception is that a warning won’t stop the application. When the interpreter encounters a warning, it outputs the warning information to the standard error device unless the interpreter is ignoring the warning. In some cases, you need to tell the interpreter to ignore a warning because the warning is due to a bug in someone else’s code, a known issue that you can’t fix, or simply something that is obscuring other potential errors in your code. A standard warning looks like this:

[code]
__main__:1: UserWarning: deprecated
[/code]

The elements are separated by colons (:) and each warning message contains the following elements (unless you change the message formatting to meet a specific need).

  • Module name (such as __main__)
  • Line number where the warning appears
  • Warning category
  • Message

You’ll discover more about these elements as the chapter progresses. In the meantime, it’s also important to know that you can issue warnings, filter them, change the message formatting, and perform other tasks using the warning-related functions shown in Table 12-1. You see these functions in action in the sections that follow.

Table 12-1: Warning-Related Functions and Their Purpose

Working with Actions

Before you do too much with warnings, it’s important to know that warnings have an action associated with them. For example, you can choose to turn a particular warning into an exception or to ignore it completely. You can apply actions to warnings in a number of ways using either the filterwarnings() or simplefilter() function. Table 12-2 shows the list of standard warning actions.

Table 12-2: Standard Warning Actions

It’s important to work with a few warnings to see how filtering works because filters are exceptionally important. In order to use warnings, you import the warnings module. Figure 12-1 shows a typical instance of the default action. Notice that the first time the code issues the warnings.warn('deprecated', DeprecationWarning) warning, the interpreter displays a message. (Don’t worry too much about the specific arguments for the warnings.warn() function for right now; you see them explained in the “Working with Messages” and “Working with Categories” sections of the chapter.) However, the interpreter ignores the same warning the second time. If you change the message, however, the interpreter displays another message.

Figure 12-1: The default action displays each message just one time.
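The behavior shown in Figure 12-1 is easy to reproduce in code. The following sketch uses the standard warnings module; catch_warnings(record=True) simply collects the warnings in a list instead of printing them, which makes the effect easy to inspect:

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('default')
    for _ in range(3):
        warnings.warn('deprecated', DeprecationWarning)  # same message, same line
    warnings.warn('newmessage', DeprecationWarning)      # new message

print(len(caught))  # 2 -- 'deprecated' once, 'newmessage' once
```

The default action suppresses repeats of the same message from the same location, so only two warnings are recorded even though warn() runs four times.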

Of course, you could always associate a different action with the warnings.warn('deprecated', DeprecationWarning) warning. To make this change, you can use the simplefilter() function as shown in Figure 12-2. Now when you issue the warning, it appears every time.

Figure 12-2: You can set the warning to appear every time.

Unfortunately, as shown in the figure, the change affects every message. Using the simplefilter() function affects every message in every module for a particular message category. Both the newmessage and deprecated messages always appear. Let’s say you want to make just the deprecated message always appear. To perform this task, you use the filterwarnings() function as shown in Figure 12-3 (after first resetting the category using the resetwarnings() function).

Figure 12-3: Use the filterwarnings() function when you need better control over filtering.

In this case, the warnings.warn('deprecated', DeprecationWarning) warning appears every time because its action is set to always. However, the warnings.warn('newmessage', DeprecationWarning) warning appears only once because it uses the default action.
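The difference between simplefilter() and filterwarnings() can be sketched as follows, again recording the warnings rather than printing them; the message names are the ones used in the figures:

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('default')                # everything gets the default action
    warnings.filterwarnings('always',               # ...except this one message
                            message='deprecated',
                            category=DeprecationWarning)
    for _ in range(3):
        warnings.warn('deprecated', DeprecationWarning)  # always: shown three times
    for _ in range(3):
        warnings.warn('newmessage', DeprecationWarning)  # default: shown just once

print(len(caught))  # 4
```

Because filterwarnings() matches only the deprecated message, the newmessage warning still follows the default action and is reported a single time.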

You can also set an action at the command line using the -W command line switch. For example, to set the interpreter to always display warning messages, you’d use the -W always command line switch. The -W command line switch accepts an action, message, category, module, or line number (lineno) as input. You can include as many -W command line switches as needed on the command line to filter the warning messages.

The resetwarnings() function affects every warning category and every message in every module. You might not want to reset an entire filtering configuration by using the resetwarnings() function. In this case, simply use the filterwarnings() or simplefilter() function to set the warning back to the default action.

At this point, you might wonder how to obtain a list of the filters you’ve defined. For that matter, you don’t even know if there are default filters that the interpreter defines for you. Fortunately, the warnings class provides two attributes, default_action and filters, which provide this information to you. Listing 12-1 shows how to use these two attributes.

Listing 12-1: Discovering the default action and installed filters

[code]
# Import the required modules.
import warnings

# Display the default action.
print 'Default action:', warnings.default_action

# Display the default filters.
print '\nDefault Filters:'
for filter in warnings.filters:
    print 'Action:', filter[0],
    print 'Msg:', filter[1],
    print 'Cat:', str(filter[2]).split("'")[1].split('.')[1],
    print 'Module:', filter[3],
    print 'Line:', filter[4]

# Add new filters.
warnings.filterwarnings('always', message='Test', category=UserWarning)
warnings.filterwarnings('always', message='Test2', category=UserWarning,
                        module='Test')
warnings.filterwarnings('always', message='Test3', category=UserWarning,
                        module='Test', append=True)

# Display the updated filters.
print '\nUpdated Filters:'
for filter in warnings.filters:
    print 'Action:', filter[0],
    try:
        print 'Msg:', filter[1].pattern,
    except AttributeError:
        print 'None',
    print 'Cat:', str(filter[2]).split("'")[1].split('.')[1],
    try:
        if len(filter[3].pattern) == 0:
            print 'Module: Undefined',
        else:
            print 'Module:', filter[3].pattern,
    except AttributeError:
        print 'Module: None',
    print 'Line:', filter[4]

# Pause after the debug session.
raw_input('\nPress any key to continue...')
[/code]

The code begins by importing the warnings module. It then displays (using warnings.default_action) the default action that the interpreter will take when it encounters a warning. As shown in Figure 12-4 and described in Table 12-2, the default action is 'default'.

Figure 12-4: The example shows the default actions and filters, along with the output of filter changes.

The next step is to show the default filters that the interpreter provides for you. It may surprise you to know that the interpreter does include some default filters for the PendingDeprecationWarning, ImportWarning, and BytesWarning, as shown in Figure 12-4. These default filters make the interpreter easier and more enjoyable to use, but could also hide important bugs, so you need to be aware of them.

In order to show how actions and filters work, the example adds three filters using the warnings.filterwarnings() function. The first filter simply tells the interpreter to always display warnings about the Test message provided in the UserWarning category. The second filter specifies that the Test2 warning will appear in the Test module. The third filter specifies that the interpreter should append the warning filter to the end of the filter list, rather than add it to the front of the list as is traditional. You can see the result of all three filter additions in Figure 12-4.

The code used to display the filter information is different in this case because the simple display method used earlier won’t work. What you’ll see as output for the message and module information is something like

[code]
<RE_Pattern object at 0x000000000000002C>
[/code]

which isn’t particularly useful. In order to get information from the message and module elements, you must access the pattern attribute. Unfortunately, this attribute isn’t available with the default filters, so the solution is to create a try…except AttributeError structure, as shown in the code. When the code encounters a default filter entry, it simply prints None as it would have done in the past.

Working with modules presents a special problem. If you look at the first filter declaration, it doesn’t include the module argument. Unfortunately, the interpreter takes this omission to mean that you want to create a blank entry, not a null entry. Consequently, the module code also handles the empty entry scenario by saying the module is undefined. If you want to create a null module entry, you must use module=None as part of your filter declaration.

Notice in Figure 12-4 that the first two filters appear at the front of the list and in reverse order. That’s because the interpreter always adds new filters to the beginning of the list unless you include the append=True attribute. Because the third filter includes this attribute, it appears at the end of the list.

Working with Messages

A message is simply the text that you want to appear as part of the warning. The message is specific information about the warning so that someone viewing the warning will know precisely why the warning is issued. For example, if you issue a DeprecationWarning category warning, the output will automatically tell the viewer that something is deprecated. As a result, your message doesn’t have to tell the viewer that something is deprecated, but it does have to tell the viewer what is deprecated. In many cases, this means supplying the name of the feature such as a method name, attribute, function, or even a class.

Simply telling someone that a feature is deprecated usually isn’t enough information. At a minimum, you must include information about an alternative. For example, you might want to suggest another class or a different function. Even if there is no alternative, you should at least tell the viewer that there isn’t an alternative. Otherwise, the viewer is going to spend hours looking for something that doesn’t exist.

You can’t always tell someone why something is deprecated, but you should when you can. For example, it would be helpful to know that an old function is unstable and that the new function fixes this problem. It’s a good idea to extend this information by saying that the old function is supplied for backward compatibility (assuming that this really is the case).

In some cases, you also need to provide some idea of when a feature is deprecated, especially if the action occurs in the future. Perhaps your organization knows that a function is unstable but hasn’t come up with a fix yet. The fix will appear in the next version of a module as a new function. Having this information will help organizations that rely on your module to plan ahead for required updates.

The point of messages is that they should provide robust information — everything that someone needs to make good decisions. Of course, you don’t want to provide too much information either (anything over three well-written sentences is too much). If you feel the viewer needs additional information, you can always provide it as part of the feature’s help. That way, people who are curious can always find more information. Make sure you note the availability of additional information as part of your message.
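Putting these guidelines together, a deprecation warning might look like the following sketch. The function names old_area() and new_area() are hypothetical, used only for illustration; the message names the deprecated feature, the alternative, the reason, and the removal timeline:

```python
import warnings

def new_area(r):
    # The replacement function, which handles negative input correctly.
    return 3.14159 * abs(r) ** 2

def old_area(r):
    # Kept for backward compatibility only.
    warnings.warn(
        'old_area() is unstable with negative input and is deprecated; '
        'use new_area() instead. old_area() will be removed in the next release.',
        DeprecationWarning,
        stacklevel=2,  # report the caller's line, not this one
    )
    return new_area(r)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    result = old_area(2)

print(str(caught[0].message))
```

Notice that the message never says "deprecated" on its own; the DeprecationWarning category already conveys that, so the text spends its space on the specifics.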

Message consistency is another consideration. Remember that filters work with messages as well as categories and other warning elements. If two modules require the same message, make sure you use the same message to ensure filtering works as anticipated. In fact, copying and pasting the message is encouraged to reduce the risk of typographical errors.

If you ever want to see how your message will appear to others, you can use the formatwarning() function to perform the task. Try it out now. Open a copy of the IronPython console and try the following code.

[code]
import warnings
warnings.formatwarning('Bad Input', UserWarning, 'My.py', 5, 'import warnings')
[/code]

You’ll see results similar to those shown in Figure 12-5. Notice that the output contains linefeeds like this: 'My.py:5: UserWarning: Bad Input\n  import warnings\n'. When you work with the printed version, the warning appears on multiple lines, as shown near the bottom of Figure 12-5.

Figure 12-5: Use formatwarning() to see how your warning will appear.

Of course, it’s handy to know the arguments for the formatwarning() function. The following list provides a brief description of each argument.

  • Message: The message you want to display to the user.
  • Category: The warning category you want to use.
  • Filename: The name of the file where the warning occurred (not necessarily the current file).
  • Line number: In most cases, this value contains the line at which the warning is detected, which isn’t always the line at which the warning occurs. For example, it’s possible for a warning to appear at the end of a structure, rather than at the appropriate line within the structure.
  • Line of code: An optional entry that shows the line of code at which the warning occurs. If you don’t supply this argument, the formatwarning() function defaults to a value of None. The IronPython implementation differs from the standard in this regard. According to the standard, the interpreter is supposed to read the file, obtain the correct line of code, and display the specified line when you don’t provide the appropriate text.

Working with Categories

A warning category is a means of identifying a particular kind of warning. The category makes it possible to group like warnings together and reduces the risk that someone will misinterpret the meaning of a message. In short, a category is a way to pigeonhole a particular message so that others know what you intend. Of course, filtering considers the warning category, so you also need to use the correct category to ensure filtering works as expected. Table 12-3 contains a list of the warning message categories, including a general Warning class that you shouldn’t ever use because it’s too general.

Table 12-3: Warning Message Categories

The warning categories are used with almost every warnings module function. For example, you supply a category when setting a filter or creating a new message. There is always an exception. The resetwarnings() function doesn’t require any input, not even a warning category, because it resets the entire warning environment to a default state.

Obtaining Error Information

Errors will happen in your application, even if you use good exception handling. The handlers you create only react to the errors you know about. Applications also encounter unknown errors. In this case, your application has to have a way to obtain error information and display it to the user (or at least record it in a log file).

It’s important to remember that you normally obtain error information in an application using the exception process described earlier in this book. This section of the chapter is designed more for those situations where you need to work with a generic exception or obtain more detailed information than the specific exceptions provide.

As with many things, IronPython provides a number of methods for obtaining error information. In fact, you might be surprised at how many ways you can retrieve information once you really start looking. The following sections discuss the most common methods for obtaining error information.

Using the sys Module

The sys module contains a wealth of useful functions and attributes you use to obtain, track, and manage error information. One of the first things you should know about the sys module is that it contains the sys.stderr attribute, which defines where the interpreter sends error output. Normally, the output goes to the console window, but you can redirect the error output to any object that has a write() method associated with it, such as a file. If you want to later reset the sys.stderr attribute to the console, the sys.__stderr__ attribute always contains the original output location, so using sys.stderr = sys.__stderr__ performs a reset.

Obtaining error information seems like it should be straightforward, but it’s harder than most developers initially think because obtaining error information often affects application execution in unforeseen ways. In addition, ensuring that the caller receives the right information in a multithreaded application is difficult. The caller could also make unfortunate changes to error information objects, such as the traceback object, creating problems with circular references that the garbage collector is unable to handle. Consequently, you find a lot of functions in sys that look like they should do something useful (and this section covers them), but the two functions you need to keep in mind when working with IronPython are:

  • sys.exc_info(): Returns a tuple containing three items:
    • type: The type of the error, such as ZeroDivisionError. You can find a list of all standard exception types in the exceptions module.
    • value: The human readable string that defines the error. For example, a ZeroDivisionError might provide ZeroDivisionError('Attempted to divide by zero.',) as a value.
    • traceback: An object that describes the stack trace for an exception. Normally, you won’t use this information directly unless you truly need to obtain the stack trace information, which can prove difficult. If you need stack trace information, consider using the traceback module features instead.
  • sys.exc_clear(): Clears the existing exceptions from the current thread. After you call this function, sys.exc_info() returns None for all three elements in the tuple.

The sys.exc_info() function isn’t very hard to use, but you can’t really try it out by executing it directly in the IronPython console. You need to place it within a try…except structure instead. The following code shows a quick demonstration you can type directly into the console window.

[code]
import sys

try:
    5/0
except:
    type, value = sys.exc_info()[:2]
    print type
    print value
[/code]

The example uses a simple division by zero to create an exception. As previously noted, you normally need just the first two elements of the tuple, which you can obtain using sys.exc_info()[:2]. When you execute this code, you see the following output.

[code]
<type 'exceptions.ZeroDivisionError'>
Attempted to divide by zero.
[/code]

Some IronPython sys module attributes affect only the interactive thread (which means they’re safe to use in multithreaded applications because there is only one interactive thread in any given session). You could use these attributes to determine the most recent type, value, and traceback for an exception, but only for the interactive session, which makes them useless within your application. In most cases, you avoid using these three attributes.

  • sys.last_traceback
  • sys.last_type
  • sys.last_value

You could run into problems when working with some members of the sys module. For example, these three attributes are global, which means they aren’t specific to the current thread and are therefore unsafe to use in a multithreaded application.

  • sys.exc_type
  • sys.exc_value
  • sys.exc_traceback

Interestingly enough, these three attributes are also listed as deprecated (outdated) in most Python implementations (including IronPython). As with all IronPython modules, you also have access to low-level functions in the sys module. The following list describes low-level functions you can use for special needs, but won’t normally use in your application.

  • sys.excepthook(type, value, traceback): The system calls this low-level function each time it encounters an unhandled exception. To call this function yourself, you supply the same three values that sys.exc_info() returns.
  • sys._getframe([depth]): The system calls this low-level function to display a frame object from the call stack. If the caller supplies a depth value, the frame object is at that call stack depth. The default depth value setting is 0. IronPython doesn’t appear to implement this function, but you may encounter it in other versions of Python, so it pays to know about this function.

If you want to control how much information the interpreter provides when you request a traceback, you can always set the sys.tracebacklimit attribute. The sys.tracebacklimit attribute defaults to 1,000. It doesn’t actually appear when you perform a dir() command. In fact, until you set it, printing the sys.tracebacklimit attribute returns an AttributeError. Use code like this

[code]
sys.tracebacklimit = 3
[/code]

to modify the traceback level. Now when you try to print the sys.tracebacklimit attribute, you get back the value you supplied.
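The effect is easy to see with a pair of nested calls (a() and b() are throwaway names used only for this sketch). The traceback module honors sys.tracebacklimit whenever you don’t pass an explicit limit:

```python
import sys
import traceback

def a():
    return b()

def b():
    return 1 / 0

try:
    a()
except ZeroDivisionError:
    full = traceback.format_exc()       # three frames: caller, a(), b()

sys.tracebacklimit = 1
try:
    a()
except ZeroDivisionError:
    limited = traceback.format_exc()    # just one frame now

del sys.tracebacklimit                  # back to the unlimited default

print(full.count('File'), limited.count('File'))
```

Deleting the attribute restores the default behavior, which matches the observation that the attribute doesn’t exist until you set it.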

Using the traceback Module

The traceback module adds to the capabilities of the sys module described in the “Using the sys Module” section of the chapter. In addition, it adds to the standard exception handling capabilities of IronPython by making it easier to obtain complex information about exceptions in general. The traceback module focuses on tracebacks, which are the IronPython equivalent of a call stack.

The most common call is traceback.print_exc(). Essentially, this call prints out the current exception information. You can use it in a try…except structure, much as you’d use the sys.exc_info() function, but with fewer limitations. Figure 12-6 shows a typical view of the traceback.print_exc() function in action.

Figure 12-6: Obtain traceback information with ease using the traceback.print_exc() function.

You may find that you want a string that you can manipulate, rather than direct output. In this case, you use the traceback.format_exc() function and place its output in a variable. The information is the same as shown in Figure 12-6, but you have the full capability of string manipulation functions to output the information in any form desired.
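A minimal sketch: capture the text, then slice it like any other string:

```python
import traceback

try:
    1 / 0
except ZeroDivisionError:
    details = traceback.format_exc()

# details is a plain string, so normal string tools apply.
lines = details.strip().splitlines()
print(lines[0])    # 'Traceback (most recent call last):'
print(lines[-1])   # the exception type and message
```

From here you could write the text to a log file, extract just the final line for a status bar, or reformat the frames however your application requires.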

All of the traceback output functions include a limit argument that defines how many levels of trace information you want. The default setting provides 1,000 levels, which may be a little more information than you want. Many of the traceback output functions also include a file argument that accepts an open file-like object you can use for output (such as application logging). If you don’t provide the file argument, it defaults to using the sys.stderr device (normally the console).

Some of the traceback functions are macros for longer function combinations. For example, when you type traceback.print_last(), what you’re really doing is executing print_exception(sys.last_type, sys.last_value, sys.last_traceback, limit, file). Obviously, typing traceback.print_last() is a lot less work!

IronPython is missing some extremely important functionality when it comes to the traceback module. You can’t use traceback.print_stack(), traceback.extract_stack(), or traceback.format_stack() to obtain current stack information. The code shown in Figure 12-7 is standard output when working with Python. Figure 12-8 shows what happens when you execute this code in IronPython. Instead of getting a nice stack trace you can use for debugging (see Figure 12-7), you get nothing at all (see Figure 12-8). This is a known issue (see the issue information at http://ironpython.codeplex.com/WorkItem/View.aspx?WorkItemId=25543).

Figure 12-7: Python provides full stack information you can use for debugging.
Figure 12-8: IronPython lacks support for stack traces, making debugging significantly more difficult.

The traceback module contains a number of interesting functions that you can use to debug your application. You can see these functions described at http://docs.python.org/library/traceback.html. Don’t assume that all of these functions work as they do in Python. There are currently a number of outstanding traceback module issues for IronPython.

Debugging with the Python Debugger

You might not know it, but Python and IronPython come with a debugger module, pdb (for Python debugger). Like any module, you have full access to the debugger source code and can modify it as needed. This section describes the default debugger performance.

It’s possible to use pdb with any Python file by invoking the debugger at the command line using the -m command line switch. Here’s how you’d invoke it for the example shown in Listing 12-1.

[code]
IPY -m pdb ShowFilters.py
[/code]

Unfortunately, using this command line format limits what you can do with the debugger. Although you can single step through code, you can’t work with variables easily and some other debugger commands may not work as anticipated.

The debugger works better if you configure your application to use a main() function. Most of the examples in this book don’t use a main() function for the sake of simplicity, but you should use one for any production code you create. The ShowFilters2.py file contains the modifications to provide a main() function. Essentially, you encase the code in Listing 12-1 in the main() function and then call it using the following code:

[code]
# Create an entry point for debugging.
if __name__ == "__main__":
    main()
[/code]

Using the debugger is very much like old-style DOS debuggers such as the Debug utility. You issue commands and the debugger responds with output based on the application environment and variable content. The lack of a visual display may prove troublesome to developers who have never used a character-mode debugger, but pdb is actually more effective than any of the graphical alternatives in helping you locate problems with your application — at least, in the Python code. Use these steps to start the pdb:

  1. Start the IronPython console by selecting it from the Start menu or typing IPY at the command line.
  2. Type import pdb and press Enter to import the Python debugger.
  3. Type import ApplicationName where ApplicationName is the name of the file that contains your application and press Enter. For example, if your application appears in ShowFilters2.py, then you’d type import ShowFilters2 (without the file extension) and press Enter.
  4. Type pdb.run('ApplicationName.FunctionName()') where ApplicationName is the name of the application and FunctionName is the name of the function you want to test, and press Enter. For example, if your application is named ShowFilters2 and the function you want to test is main(), you’d type pdb.run('ShowFilters2.main()') and press Enter. The standard console prompt changes to a pdb prompt, as shown in Figure 12-9.
Figure 12-9: The Python debugger uses a special pdb prompt where you can enter debugging commands.

Now that you have a debugger prompt, you can begin debugging your application. Here is a list of standard debugger commands you can issue:

  • a or args: Displays the list of arguments supplied to the current function. If there aren’t any arguments, the call simply returns without displaying anything.
  • alias: Creates an alias for a complex command. For example, you might need to use a for loop to drill down into a list to see its contents. You could use an alias to create a command to perform that task without having to write the complete code every time. An alias can include replaceable variables, just as you would use for a batch file.
  • b or break: Defines a breakpoint when you supply a line number or a function name. When you provide a function name, the breakpoint appears at the first executable line within the function. If an application spans multiple files, you can specify a filename, followed by a colon, followed by a line number (no function name allowed), such as ShowFilters2:1. A breakpoint can also include a condition. To add the condition, follow the breakpoint specification with a comma and the condition you want to use, such as ShowFilters2:2, Filter == None. If you type just b or break, the debugger shows the current breakpoints. Use the cl or clear command to clear breakpoints you create.
  • bt, w, or where: Prints a stack trace with the most current frame at the bottom of the list. You can use this feature to see how the application arrived at the current point of execution.
  • c, cont, or continue: Continues application execution until the application ends or the debugger encounters a breakpoint.
  • cl or clear: Clears one or more breakpoints. You can specify the breakpoint to clear by providing one or more breakpoint numbers separated by spaces. As an alternative, you can supply a line number or a filename and line number combination (where the filename and line number are separated by a colon).
  • commands: Defines one or more commands that execute when the debugger arrives at a line of code specified by a breakpoint. You include the optional breakpoint as part of the commands command. If you don’t supply a breakpoint, then the commands command refers to the last breakpoint you set. To stop adding commands to a breakpoint, simply type end. If you want to remove the commands for a breakpoint, type commands, press Enter, type end, and press Enter again. A command can consist of any interactive Python or debugger command. For example, if you want to automatically move to the next line of code, you’d simply add step as one of the commands.
  • condition: Adds a condition to a breakpoint. You must supply a breakpoint number and a Boolean statement (in string format) as arguments. The debugger doesn’t honor a breakpoint with a condition unless the condition evaluates to True. The condition command lets you add a condition to a breakpoint after defining the breakpoint, rather than as part of defining the breakpoint. If you use condition with a breakpoint, but no condition, then the debugger removes a condition from a breakpoint, rather than adding one.
  • d or down: Moves the frame pointer down one level in the stack trace to a new frame.
  • debug: Enters a recursive debugger that helps you debug complex statements.
  • disable: Disables one or more breakpoints so that they still exist, but the debugger ignores them. You can separate multiple breakpoint numbers with spaces to disable a group of breakpoints at once.
  • enable: Enables one or more breakpoints so that the debugger responds to them. You can separate multiple breakpoint numbers with spaces to enable a group of breakpoints at once. Enabling a breakpoint doesn’t override any conditions that are set on the breakpoint. The condition must still evaluate to True before the debugger reacts to the breakpoint.
  • EOF: Tells the debugger to handle the End of File (EOF) as a command. Normally, this means ending the debugger session once the debugger reaches EOF.
  • exit or q or quit: Ends the debugging session. Make sure you type exit, and not exit(), which ends the entire IronPython console session.
  • h or help: Displays information about the debugger. If you don’t provide an argument, help displays a list of available debugging commands. Adding an argument shows information about the specific debugging command.
  • ignore: Creates a condition where the debugger ignores a breakpoint a specific number of times. For example, you might want to debug a loop with a breakpoint set at a specific line of code within the loop. You could use the ignore command to ignore the first five times through the loop and stop at the sixth. You must supply a breakpoint number and a count to use this command. The debugger automatically ignores the breakpoint until the count reaches 0.
  • j or jump: Forces the debugger to jump to the line of code specified as an argument.
  • l or list: Displays the specified lines of code. If you don’t supply any arguments with the command, the debugger displays 11 lines of code starting with the current line. When you supply just a starting point (a code line number), the debugger displays 11 lines of code starting with the starting point you specify. To control the listing completely, supply both a starting and ending point.
  • n or next: Continues execution to the next line of code. If the current line of code is a function, the debugger executes all of the code within the function and stops at the next line of code in the current function. In sum, this command works much like a step over command in most other debuggers.
  • p: Prints the value of an expression as the debugger sees it. Don’t confuse this command with the IronPython print() function, which prints an expression based on how IronPython sees it.
  • pp: Performs a pretty print. Essentially, this command is the same as the p command, except that the debugger interprets any control characters within the output so that the output appears with line feeds, carriage returns, tabs, and other formatting in place.
  • r or return: Continues execution until the current function returns. This command works much like a step out command in most other debuggers.
  • restart: Restarts the current application at the beginning so that you can retest it. The command lets you supply optional arguments that appear as part of the sys.argv attribute. This command preserves debugger history, breakpoints, actions, and options.
  • run: Starts the application when used within Python as demonstrated earlier in this section. However, this command is simply an alias for restart when used within the debugger environment.
  • s or step: Executes the current line of code and then moves to the next line of code, even if that line of code appears within another function. This command works much like a step into command in most other debuggers.
  • tbreak: Performs precisely like a break command, except that the debugger removes the breakpoint when the debugger stops at it the first time. This is a useful command when you want to execute a breakpoint just one time.
  • u or up: Moves the frame pointer up one level in the stack trace to an old frame.
  • unalias: Removes the specified alias (see the alias command for additional details).
  • unt or until: Continues execution until such time as the line number is greater than the current line number or the current frame returns. This command works much like a combination of the step over and step out commands in most other debuggers (see next, return, and step for other stepping commands).
  • whatis: Displays the type of the argument that you supply.
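Taken together, these commands support fully scripted sessions. The following sketch drives the standard pdb module described above non-interactively, feeding it the same b, c, and p commands you would type at the (Pdb) prompt. The script contents, file name, and breakpoint condition are all made up for illustration.

```python
import io
import os
import pdb
import tempfile

# A tiny program to debug; pdb resolves breakpoints through the file
# system, so the script is written out to a real file first.
script = (
    "total = 0\n"
    "for i in range(5):\n"
    "    total += i\n"
    "print(total)\n"
)
path = os.path.join(tempfile.mkdtemp(), "demo.py")
with open(path, "w") as handle:
    handle.write(script)

# b <file>:3, i == 3  -> conditional breakpoint on the loop body
# c                   -> continue to the breakpoint
# p total             -> print the running total (0 + 1 + 2 = 3)
# c                   -> continue to the end of the program
commands = io.StringIO("b %s:3, i == 3\nc\np total\nc\n" % path)
output = io.StringIO()

debugger = pdb.Pdb(stdin=commands, stdout=output)
debugger.use_rawinput = False
with open(path) as handle:
    code = compile(handle.read(), path, "exec")
debugger.run(code)

# The captured session shows the breakpoint being set, hit, and queried.
print(output.getvalue())
```

The same command strings work interactively, of course; scripting them is simply a convenient way to reproduce a debugging session exactly.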

Debugging with the CLR Debugger

The CLR debugger, CLRDbg.EXE, is part of the .NET Framework SDK. You find it in the GuiDebug folder of your .NET Framework installation or in the Program Files\Microsoft.NET\SDK\v2.0\GuiDebug folder. However, if you installed Visual Studio without installing the SDK, you might not see a GuiDebug folder. In this case, you can download and install the .NET Framework SDK separately. You can obtain the .NET Framework SDK for various platforms at these locations.

  • .NET Framework 2.0: http://msdn.microsoft.com/en-us/netframework/aa731542.aspx
  • .NET Framework 3.0: http://msdn.microsoft.com/en-us/netframework/bb264589.aspx
  • .NET Framework 3.5: http://msdn.microsoft.com/en-us/netframework/cc378097.aspx
  • .NET Framework 3.5 SP1: http://msdn.microsoft.com/en-us/netframework/aa569263.aspx

This section relies on the CLRDbg.EXE version found in the .NET Framework 2.0 SDK. However, the instructions work fine for every other version of the CLR debugger as well. The newer versions of the debugger may include a few additional features that you won’t likely use or need when working with IronPython. The following steps describe how to start the debugger.

  1. Start the CLR debugger. If you installed the .NET Framework SDK separately, choose Start ➪ Programs ➪ Microsoft .NET Framework SDK v2.0 ➪ Tools ➪ Microsoft CLR Debugger. It’s also possible to start the CLR debugger from the command line by typing CLRDbg and pressing Enter as long as the debugger’s location appears in the path. You see the Microsoft CLR Debugger window.

    Figure 12-10: Provide the information needed to debug your application.
  2. Choose Debug ➪ Program to Debug. You see the Program to Debug dialog box shown in Figure 12-10. This dialog box is where you enter the IronPython executable and script information, along with any command line switches you want to use.
  3. Click the ellipsis (…) in the Program field and use the Find Program to Debug dialog box to locate the IPY.EXE file. Click Open to add the IPY.EXE information to the dialog box.
  4. Type -D NameOfScript.py in the Arguments field (the example uses -D ShowFilters2.py). Type any additional command line arguments you want to use while working with the application.
  5. Click the ellipsis (…) in the Working Directory field and use the Browse for Working Directory dialog box to locate the script directory (not the IPY.EXE directory). Click Open to select the working directory.
  6. Click OK. The CLR debugger prepares the debugging environment. However, you don’t see any files opened. You must open any files you wish to interact with as a separate step.
  7. Choose File ➪ Open ➪ File. Locate the source files you want to debug (ShowFilters2.py for the example). Click Open. You see the source file opened in the Microsoft CLR Debugger window. Figure 12-11 shows an example of how your display should look when working with the example. (The figure shows the debugger in debugging mode.)

    Figure 12-11: Open the source files you want to debug.

At this point, you can begin working with the script just as you would with the Visual Studio debugger. The next section, “Using Visual Studio for IronPython Debugging,” discusses this debugger in more detail.

Using Visual Studio for IronPython Debugging

When you click Start Debugging, the debugger stops at the line of code as you might expect. Now, create a watch for both filter and filters. As shown in Figure 12-12, you can drill down into a complex object and examine it. In many cases, you must look through the Non-Public Members to find what you want, but the data is there for you to peruse. In this case, you can see all five elements in filters and even see the pattern data. Notice that the Type column is truly helpful in showing you which types to use when interacting with the data.

Figure 12-12: Watches let you drill down into both Python and .NET data.

Unfortunately, Figure 12-12 also shows the other side of the coin. You can’t access warnings.filters even though it should be available. The Visual Studio debugger often produces poor results when working with Python-specific objects. If you need to work with these objects, use the Python debugger instead.

As shown in Figure 12-13, you can use the Immediate window to query objects directly. However, you can’t drill down into an object as you might have in the past. Consequently, entering ? filter works just fine, but entering ? filter[0] doesn’t.

Figure 12-13: The Immediate window is only partially useful when working with IronPython.

In general, you’ll find that using the Python debugger works better for some Python-specific applications. Even though the Visual Studio debugger does provide a nice visual display, the quality of information isn’t quite as good. Of course, the picture changes when your application mixes Python and .NET code. In this case, the Visual Studio debugger can be your best friend because it knows how to work with the .NET objects.

 Defining and Using Exceptions

Exceptions are an essential part of any application. In fact, most developers have no problem using them at all. Unfortunately, many developers also misuse exceptions. Instead of providing robust code that handles common problems, the developer simply raises an exception and hopes someone else does something about the issue. Exceptions are generally used to address conditions that you couldn’t anticipate.

IronPython provides access to both Python exceptions and .NET exceptions, so the developer actually has twice as many opportunities to catch errors before they become a problem. It’s important to use the correct kind of exception handling. If you’re working with .NET code, you’ll normally use a .NET exception. Python exceptions address anything that isn’t .NET-specific. The following sections provide additional information about exceptions.

Implementing Python Exceptions

Python provides a number of standard exceptions, just as the .NET Framework does. You find these exceptions in the exceptions module. To see the list of standard exceptions, import the exceptions module and perform a dir() command on it, as shown in Figure 12-14.

Figure 12-14: Python stores its list of standard exceptions in the exceptions module.

The various exceptions provide different amounts of information. For example, when working with an IOError, you can access the errno, filename, message, and strerror attributes. On the other hand, a ZeroDivisionError provides only the message attribute. You can use the dir(exceptions.ExceptionName) command to obtain information about each of the exception attributes.
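For example, the following short sketch shows the IOError attributes in action. It uses the as form of except and the print() call form so it runs under both IronPython 2.6+ and CPython; the file name is made up for illustration.

```python
# Trigger an IOError by opening a file that does not exist (the file name
# is made up for illustration), then inspect the attributes it carries.
info = None
try:
    open('no_such_file.txt')
except IOError as err:
    info = err

print(info.errno)     # numeric error code (2 means "no such file")
print(info.filename)  # the offending file name
print(info.strerror)  # a human-readable description of the problem
```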

As with .NET, you can create custom exceptions using Python. The documentation for creating a custom exception is a bit sketchy, but you can create a custom exception (usually with the word Error in the name by convention) for every need. Listing 12-2 shows all of the Python exception basics, including creating a relatively flexible custom exception.

Listing 12-2: Python exception basics, including a custom exception

[code]
# Import the required modules.
import exceptions

# Define a custom exception.
class MyError(exceptions.Exception):
    errno = 0
    message = 'Nothing'

    def __init__(self, errno=0, message='Nothing'):
        self.errno = errno
        self.message = message

    def __str__(self):
        return repr(self.message)

# Display the Error exception list.
for Error in dir(exceptions):
    if 'Error' in Error:
        print Error

# Create a standard exception.
try:
    5/0
except ZeroDivisionError as errinfo:
    print "\nDivide by Zero error: {0}".format(errinfo)

# Create a custom exception.
try:
    raise MyError(5, 'Hello from MyError')
except MyError, Info:
    print "Custom Error({0}): {1}".format(Info.errno, Info.message)

# Pause after the debug session.
raw_input('\nPress any key to continue...')
[/code]

The code begins by importing exceptions. The for loop lists all of the exceptions (the names of the types) found in exceptions, as shown in Figure 12-15. Notice how the code uses if 'Error' in Error to locate just the exceptions in the module. This technique is useful for a lot of tasks in IronPython where you need to filter the output in some way.

Figure 12-15: The example shows basic exception handling and creation for Python.

The next bit of code raises a standard exception and then handles it. The output shows just a message. Notice that this exception relies on the as clause to access the error information.

It’s time to look at a custom exception, which begins with the MyError class definition. At a minimum, you should define both __init__() and __str__() or the exception won’t work as intended. Notice how __init__() assigns default values to both errno and message. You can’t depend on the caller to provide this information, so including default values is the best way to approach the problem. You can always assign other values later in the code based on the actual errors.

Make sure you create attributes for any amplifying information you want the caller to have. In this case, the example defines two attributes: errno and message.

The __str__() method should return a human-readable message. You can return just the text portion of the exception or return some combination of exception attributes. The important thing is to return something that the developer will find useful should the exception occur. You can test this behavior out with the example by typing raise MyError. Here’s the output you’ll see.

[code]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
__main__.MyError: 'Nothing'
[/code]

Because you didn’t provide any arguments, the output shows the default values. Try various combinations to see how the output works. The example tries the exception in a try…except statement. Notice that a custom exception differs from a standard exception in that you don’t use the as clause and simply provide a comma with a variable (Info in this case) instead. You can then use the variable to access the exception attributes as shown. Figure 12-15 shows how the custom exception outputs information. Of course, your custom exception can provide any combination of values.
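To verify the __str__() behavior in isolation, here is a compact sketch of the same idea. It derives from the built-in Exception class (rather than exceptions.Exception) and uses the as clause throughout, so it runs under CPython as well as IronPython.

```python
# A trimmed-down version of the MyError class from Listing 12-2, based on
# the built-in Exception class so the sketch runs outside IronPython too.
class MyError(Exception):
    def __init__(self, errno=0, message='Nothing'):
        self.errno = errno
        self.message = message

    def __str__(self):
        return repr(self.message)

caught = None
try:
    raise MyError(5, 'Hello from MyError')
except MyError as info:
    caught = info

print('Custom Error({0}): {1}'.format(caught.errno, caught.message))
print(str(caught))  # what a traceback displays: 'Hello from MyError'
```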

Implementing .NET Exceptions

In general, you want to avoid using .NET exceptions in your IronPython applications, except in those cases where you need to provide specific functionality for .NET code. The problem is that IronPython views such exceptions from a Python perspective. Consequently, trapping .NET exceptions can prove tricky unless you spend some time working with them in advance.

Many .NET exceptions are available in the System assembly so you need to import it before you can perform any serious work. After that, you can raise a .NET exception much as you do a Python exception. Handling the exception follows the same route as using a try…except statement. However, the problem is that the exception you get isn’t the exception you raised. Look at Figure 12-16 and you see that the ArgumentException becomes a ValueError and the ArithmeticException becomes an ArithmeticError.

Sloppy programming will cost you so much time as to make the programming experience a nightmare. Using a combination of warnings, error trapping, and exceptions will make your code significantly easier to debug. Of course, choosing the right debugging tool is also a requirement if you want to go home this weekend, rather than spending it in your office debugging your latest application.


Printing Text

Creating the Font Demo Project

A font in XNA is nothing more than a text file—at least, from the programmer’s point of view. When the project is compiled, XNA uses the text file to create a bitmap font on a memory texture and uses that texture for printing text on the screen.

This is a time-consuming process, which is why the font is created at program startup rather than while it’s running. Let’s create a new project and add a font to it.

Creating a New XNA Project

Follow these steps to create a new XNA project in Visual C# 2010:

  1. Start up Visual Studio 2010 Express for Windows Phone (or whichever edition of Visual Studio 2010 you are using).
  2. Bring up the New Project dialog, shown in Figure 3.1, from either the Start Page or the File menu.

    FIGURE 3.1 Creating the Font Demo project.
  3. Choose Windows Phone Game (4.0) from the list of project templates.
  4. Type in a name for the new project (the example is called Font Demo).
  5. Choose the location for the project by clicking the Browse button, or by typing the folder name directly.
  6. Click OK to create the new project.

The new project is generated by Visual Studio and should look similar to the project shown in Figure 3.2.

FIGURE 3.2 The newly generated Font Demo project.

Adding a New Font to the Content Project

At this point, you can go ahead and run the project by pressing F5, but all you will see in the Windows Phone emulator is a blue screen. That is because we haven’t written any code yet to draw anything. Before we can print text on the screen, we have to create a font, which is added to the Content project.

In XNA 4.0, most game assets are added to the Content project within the Solution, where they are compiled or converted into a format that XNA uses. We might use the general term “project” when referring to a Windows Phone game developed with XNA, but there might be more than one project in the Solution. The “main project” will be the one containing source code for a game. Some assets, however, might be located just within the source code project, depending on how the code accesses those assets. Think of the Content project as a container for “managed” assets.

A Visual Studio “Solution” is the overall wrapper or container for a game project, and should not be confused with “projects” that it contains, including the Content project containing game assets (bitmap files, audio files, 3D mesh files, and so on).

In this example, both the Solution and the main project are called “Font Demo,” because Visual Studio uses the same name for both when a new Solution is generated. Now, let’s add a new font to the Content project. Remember that the Content project is where all game assets are located.

  1. Select the Content project in Solution Explorer to highlight it, as shown in Figure 3.3.

    FIGURE 3.3 Highlighting the Content project.
  2. Open the Project menu and choose Add New Item. Optionally, you can right-click the Content project in Solution Explorer (Font DemoContent (Content)) to bring up the context menu, and choose Add, New Item.
  3. The Add New Item dialog, shown in Figure 3.4, appears. Choose Sprite Font from the list. Leave the name as is (SpriteFont1.spritefont).

    FIGURE 3.4 Adding a new Sprite Font.

A new .spritefont file has been added to the Content project, as shown in Figure 3.5. Visual Studio opens the new file right away so that you can make any changes you want to the font details. The default font name is Segoe UI Mono, which is a monospaced font. This means each character of the font has the same width (takes up the same amount of horizontal space). Some fonts are proportional, which means each character has a different width (in which case, “W” and “I” are spaced quite differently, for instance).

FIGURE 3.5 A new Sprite Font has been added to the Content project.

The SpriteFont1.spritefont file is just a text file, like a .CS source code file, but it is formatted in the XML (Extensible Markup Language) format. You can experiment with the font options in the .spritefont descriptor file, but usually the only fields you will need to change are FontName and Size. Here is what the font file looks like with all comments removed:

[code]
<?xml version="1.0" encoding="utf-8"?>
<XnaContent xmlns:Graphics="Microsoft.Xna.Framework.Content.Pipeline.Graphics">
  <Asset Type="Graphics:FontDescription">
    <FontName>Segoe UI Mono</FontName>
    <Size>14</Size>
    <Spacing>0</Spacing>
    <UseKerning>true</UseKerning>
    <Style>Regular</Style>
    <CharacterRegions>
      <CharacterRegion>
        <Start>&#32;</Start>
        <End>&#126;</End>
      </CharacterRegion>
    </CharacterRegions>
  </Asset>
</XnaContent>
[/code]

Visual Studio Solution (.sln) and project (.csproj) files also contain XML-formatted information!

Table 3.1 shows the royalty-free fonts included with XNA 4.0. Note that some fonts come with italic and bold versions even though the SpriteFont description also allows for these modifiers.

TABLE 3.1 XNA Fonts
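For instance, a second descriptor using one of those fonts at a different size and weight changes only a few fields. The fragment below is illustrative: Miramonte is one of the fonts shipped with XNA, and the size and style values here are arbitrary choices.

```xml
<Asset Type="Graphics:FontDescription">
  <FontName>Miramonte</FontName>
  <Size>28</Size>
  <Spacing>0</Spacing>
  <UseKerning>true</UseKerning>
  <Style>Bold</Style>
  <CharacterRegions>
    <CharacterRegion>
      <Start>&#32;</Start>
      <End>&#126;</End>
    </CharacterRegion>
  </CharacterRegions>
</Asset>
```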

Learning to Use the SpriteFont Class

We can create as many fonts as we want in an XNA project and use them at any time to print text with different styles. For each font you want to use in a project, create a new .spritefont file. The name of the file is used to load the font, as you’ll see next. Even if you want to use the same font style with a different point size, you must create a separate .spritefont file (although we will learn how to scale a font as a rendering option).

Loading the SpriteFont Asset

To use a SpriteFont asset, first add a variable at the top of the program. Let’s go over the steps:

  1. Add a new variable called SpriteFont1. You can give this variable a different name if you want. It is given the same name as the asset here only for illustration, to associate one thing with another.
    [code]
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;

        // Create the new font variable.
        SpriteFont SpriteFont1;
    [/code]
  2. Create (instantiate) a new object using the SpriteFont1 variable, and simultaneously load the font with the Content.Load() method. Note the class name in brackets, <SpriteFont>. If you aren’t familiar with template programming, this can look a bit strange. This type of coding makes the code cleaner, because the Content.Load() method has the same call no matter what type of object you tell it to load.
    [code]
    protected override void LoadContent()
    {
        // Create a new SpriteBatch, which can be used to draw textures.
        spriteBatch = new SpriteBatch(GraphicsDevice);

        // TODO: use this.Content to load your game content here
        SpriteFont1 = Content.Load<SpriteFont>("SpriteFont1");
    }
    [/code]

If the Content class did not use a templated Load() method, we would need to call a different method for every type of game asset, such as Content.LoadSpriteFont(), Content.LoadTexture2D(), or Content.LoadSoundEffect().

There is another important reason for using a template form of Load() here: We can create our own custom content loader to load our own asset files! XNA is very extensible with this capability. Suppose you want to load a data file saved by your own custom level editor tool. Instead of manually converting the level file into text or XML, which XNA can already read, you could instead just write your own custom content loader, and then load it with code such as this: Content.Load<Level>("level1").

The ability to write code like this is powerful, and reflects a concept similar to “late binding.” This means the C# compiler might not know exactly what type of object a particular line of code is referring to at compile time, but the issue is sorted out later while the program is running. That’s not exactly what’s happening here, but it is a similar concept, and the easiest illustration of template programming I can think of.

These are just possibilities. Let’s get back to the SpriteFont code at hand!

Printing Text

Now that we have loaded the .spritefont asset file, and XNA has created a bitmap font in memory after running the code in LoadContent(), the font is available for use. We can use the SpriteFont1 object to print text on the screen using SpriteBatch.DrawString(). Just be sure to always have a matching pair of SpriteBatch.Begin() and SpriteBatch.End() statements around any drawing code.

Here are the steps you may follow to print some text onto the screen using the new font we have created:

  1. Scroll down to the Draw() method in the code listing.
  2. Add the new code shown in the following listing (the lines from the string declaration through the spriteBatch.End() call).
    [code]
    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);

        // TODO: Add your drawing code here
        string text = "This is the Segoe UI Mono font";
        Vector2 position = new Vector2(20, 20);

        spriteBatch.Begin();
        spriteBatch.DrawString(SpriteFont1, text, position, Color.White);
        spriteBatch.End();

        base.Draw(gameTime);
    }
    [/code]

Run the program by pressing F5. The WP7 emulator comes up, as shown in Figure 3.6.

FIGURE 3.6 Printing text in the Font Demo program.

The version of SpriteBatch.DrawString() used here is the simplest version of the method, but other overloaded versions of the method are available. An overloaded method is a method such as DrawString() that has two or more different sets of parameters to make it more useful to the programmer. There are actually six versions of DrawString(). Here is an example using the sixth and most complex version. When run, the changes to the text output are dramatic, as shown in Figure 3.7!

[code]
float rotation = MathHelper.ToRadians(15.0f);
Vector2 origin = Vector2.Zero;
Vector2 scale = new Vector2(1.3f, 5.0f);
spriteBatch.DrawString(SpriteFont1, text, position, Color.White,
    rotation, origin, scale, SpriteEffects.None, 0.0f);
[/code]

FIGURE 3.7 Experimenting with different DrawString() options.

As you have learned in this hour, the font support in XNA takes a little time to set up, but after a font has been added, some very useful and versatile text printing capabilities are available. We can print text via the SpriteBatch.DrawString() method, with many options available such as font scaling and different colors.

Getting Started with Visual C# 2010 for Windows Phone

Visual C# 2010 Express

At the time of this writing, the current version of the development tool for Windows Phone 7 is Visual Studio 2010. To make development simple for newcomers to the Windows Mobile platform, Microsoft has set up a package that will install everything you need to develop, compile, and run code in the emulator or on a physical Windows Phone device—for free. The download URL at this time is http://www.microsoft.com/express/Phone. If you are using a licensed copy of Visual Studio 2010, such as the Professional, Premium, or Ultimate edition, then you will find XNA Game Studio 4.0 and related tools at http://create.msdn.com (the App Hub website). The App Hub website, shown in Figure 2.1, also contains links to the development tools.

FIGURE 2.1 The App Hub website has download links to the development tools.

The most common Windows Phone developer will be using the free version of Visual C# 2010, called the Express edition. This continues the wonderful gift Microsoft first began giving developers with the release of Visual Studio 2005. At that time, the usual “professional” versions of Visual Studio were still available, of course, and I would be remiss if I failed to point out that a licensed copy of Visual Studio is required by any person or organization building software for business activities (including both for-profit and nonprofit). The usual freelance developer will also need one of the professional editions of Visual Studio, if it is used for profit. But any single person who is just learning, or any organization that just wants to evaluate Visual Studio for a short time, prior to buying a full license, can take advantage of the free Express editions. I speak of “editions” because each language is treated as a separate product. The professional editions include all the languages, but the free Express editions, listed here, are each installed separately:

  • Visual C# 2010 Express
  • Visual Basic 2010 Express
  • Visual C++ 2010 Express

The version of Visual Studio we will be using is called Visual Studio 2010 Express for Windows Phone. This is a “package” with the Windows Phone SDK already prepackaged with Visual C# 2010 Express. (Despite the name, “Visual Studio” here supports only the C# language.) It’s a nice package that makes it very easy to get started doing Windows Phone development. But if you are using Visual Studio 2010 Professional (or one of the other editions) along with the Windows Phone SDK, you will see a lot more project templates in the New Project dialog, shown in Figure 2.2.

FIGURE 2.2 The New Project dialog in Visual C# 2010 Express.
  • Windows Phone Application (Visual C#)
  • Windows Phone Databound Application (Visual C#)
  • Windows Phone Class Library (Visual C#)
  • Windows Phone Panorama Application (Visual C#)
  • Windows Phone Pivot Application (Visual C#)
  • Windows Phone Game (4.0) (Visual C#)
  • Windows Phone Game Library (4.0) (Visual C#)
  • Windows Game (4.0) (Visual C#)
  • Windows Game Library (4.0) (Visual C#)
  • Xbox 360 Game (4.0) (Visual C#)
  • Xbox 360 Game Library (4.0) (Visual C#)
  • Content Pipeline Extension Library (4.0)
  • Empty Content Project (4.0) (Visual C#)

As you can see, even in this limited version of Visual Studio 2010, all the XNA Game Studio 4.0 project templates are included—not just those limited to Windows Phone. The project templates with “(4.0)” in the name come from the XNA Game Studio SDK, which is what we will be primarily using to build Windows Phone games. The first five project templates come with the Silverlight SDK. That’s all we get with this version of Visual Studio 2010. It’s not even possible to build a basic Windows application here—only Windows Phone (games or apps), Windows (game only), and Xbox 360 (obviously, game only). The first five project templates are covered in the next section, “Using Silverlight for WP7.”

Did you notice that all of these project templates are based on the C# language? Unfortunately for Visual Basic fans, we cannot use Basic to program games or apps for Windows Phone using Visual C# 2010 Express. You can install Visual Basic 2010 Express with Silverlight and then use that to make WP7 applications. XNA, however, supports only C#.

We don’t look at Xbox 360 development in this book at all. If you’re interested in the subject, see my complementary book XNA Game Studio 4.0 for Xbox 360 Developers [Cengage, 2011].

Using Silverlight for WP7

Microsoft Silverlight is a web browser plug-in “runtime.” Silverlight is not, strictly speaking, a development tool. It might be compared to DirectX, in that it is like a library, but for rich-content web apps. It’s similar to ASP.NET in that Silverlight applications run in a web browser, but it is more capable for building consumer applications (while ASP.NET is primarily for business apps). But the way Silverlight applications are built is quite different from ASP.NET—it’s more of a design tool with an editing environment called Expression Blend. The design goal of Silverlight is to produce web applications that are rich in media support, and it supports all standard web browsers (not just Internet Explorer, which is a pleasant surprise!), including Firefox and Safari on Mac.

Using Expression Blend to Build Silverlight Projects

Microsoft Expression Blend 4 is a free tool installed with the Windows Phone package that makes it easier to design Silverlight-powered web pages with rich media content support. Blend can be used to design and create engaging user experiences for Silverlight pages. Windows application support is possible with the WPF (Windows Presentation Foundation) library. A key feature of Blend is that it separates design from programming. As you can see in Figure 2.3, the New Project dialog in Blend lists the same project types found in Visual C# 2010 Express.

Expression Blend is a Silverlight development tool for web designers.
FIGURE 2.3 Expression Blend is a Silverlight development tool for web designers.

Let’s create a quick Expression Blend project to see how it works. While working on this quick first project, keep in mind that we’re not building a “Blend” project, but a “Silverlight” project—using Blend. Blend is a whole new Silverlight design and development tool, not affiliated with Visual Studio (but probably based on it). The Silverlight library is already installed on the Windows Phone emulator and actual phones.

Here’s how to create the project:

  1. Create a Windows Phone Application project using the New Project dialog. Click File, New Project.
  2. Blend creates a standard project for Windows Phone, complete with an application title and opening page for the app.
  3. Run the project with Project, Run Project, or by pressing F5. The running program is shown in Figure 2.4.
Our first project with Expression Blend.
FIGURE 2.4 Our first project with Expression Blend.

This is a useless app, but it shows the steps needed to create a new project and run it in the Windows Phone emulator. Did you notice how large the emulator window appears? That’s full size with respect to the screen resolution of WP7. As you’ll recall from the first hour, the resolution is 480×800. That is enough pixels to support 480p DVD movies, but not the 720p or 1080p HD standards. Still, DVD quality is great for a phone! And when rotated to landscape mode, 800×480 is a lot of screen real estate for a game too.

You can make quick and easy changes to the labels at the top and experiment with the design controls in the toolbox on the left. Here you can see that the application title and page title have been renamed, and some images and shapes have been added to the page. Pressing F5 again brings it up in the emulator, shown in Figure 2.5.

Now that you’ve seen what’s possible with Expression Blend’s more designer-friendly editor, let’s take a look at the same Silverlight project in Visual Studio 2010.

Silverlight Projects

The Silverlight runtime for WP7 supports some impressive media types with many different audio and video codecs, vector graphics, bitmap graphics, and animation. That should trigger the perimeter alert of any game developer worth their salt! Silverlight brings some highly interactive input mechanisms to the Web, including accelerometer motion detection, multitouch input (for devices that support it), camera, microphone input, and various phone-type features (like accessing an address book and dialing).

Making quick changes to the page is easy with Expression Blend.
FIGURE 2.5 Making quick changes to the page is easy with Expression Blend.

To find out whether your preferred web browser supports Silverlight, visit the installer web page at http://www.microsoft.com/getsilverlight/get-started/install.

The Visual Studio 2010 project templates specific to Silverlight are the first five in the list below. These are the same project templates shown in Expression Blend!

  • Windows Phone Application (Visual C#)
  • Windows Phone Databound Application (Visual C#)
  • Windows Phone Class Library (Visual C#)
  • Windows Phone Panorama Application (Visual C#)
  • Windows Phone Pivot Application (Visual C#)
  • Windows Phone Game (4.0) (Visual C#)
  • Windows Phone Game Library (4.0) (Visual C#)
  • Windows Game (4.0) (Visual C#)
  • Windows Game Library (4.0) (Visual C#)
  • Xbox 360 Game (4.0) (Visual C#)
  • Xbox 360 Game Library (4.0) (Visual C#)
  • Content Pipeline Extension Library (4.0)
  • Empty Content Project (4.0) (Visual C#)

Let’s create a quick project in Visual Studio in order to compare it with Expression Blend. You’ll note right away that it is not the same rich design environment, but is more programmer oriented.

Comparing Visual Studio with Expression Blend

Let’s create a new project in Visual C# 2010 in order to compare it with Expression Blend. Follow these steps:

  1. Open the New Project dialog with File, New Project.
  2. Next, in the New Project dialog, choose the target folder for the project and type in a project name, as shown in Figure 2.6.

    Creating a new Silverlight project in Visual C# 2010 Express.
    FIGURE 2.6 Creating a new Silverlight project in Visual C# 2010 Express.
  3. Click the OK button to generate the new project shown in the figure. Not very user-friendly, is it? First of all, double-clicking a label does not make it editable, among other limitations (compared to Expression Blend). Where are the control properties? Oh, yes, in the Properties window in Visual Studio. See Figure 2.7. This is also very data-centric, which programmers love and designers loathe. The view on the left shows how the page appears on the device (or emulator); the view on the right is the XAML source code behind the page (it looks much like HTML), which can be edited.
  4. Bring up the Properties window (if not already visible) by using the View menu. Select a control on the page, such as the application title. Scroll down in the Properties to the Text property, where you can change the label’s text, as shown in Figure 2.8. Play around with the various properties to change the horizontal alignment, the color of the text, and so on. Open the Toolbox (located on the left side of Visual Studio) to gain access to new controls such as the Ellipse control shown here.
    The new Silverlight project has been created.
    FIGURE 2.7 The new Silverlight project has been created.

    Adding content to the Silverlight page.
    FIGURE 2.8 Adding content to the Silverlight page.

XNA Game Studio

XNA Game Studio 4.0 was released in the fall of 2010. (From now on, let’s just shorten this to “XNA” or “XNA 4.0”, even though “Game Studio” is the name of the SDK and “XNA” is the overall product name.) XNA 4.0 introduced several improvements to the graphics system, but because of the Xbox 360 hardware, XNA is still based on Direct3D 9 (not the newer Direct3D 10 or 11). This is actually very good news for a beginner, since Direct3D 9 is much easier to learn than 10 or 11. Although XNA abstracts the C++-based DirectX libraries into the C#-based XNA Framework, there is still much DirectX-ish code that you have to know in order to build a capable graphics engine in XNA. While XNA 4.0 added WP7 support, it simultaneously dropped support for the Zune (the portable multimedia and music player).

I have a Zune HD, and it’s a nice device! It can play 720p HD movies and even export them to an HDTV via an adapter and HDMI cable. It plays music well too. But, like many consumers, I just did not have much incentive to go online and download games for the Zune. This is, of course, purely a subjective matter of opinion, but it’s disappointing for game developers who put effort into making games for Zune. Fortunately, the code base is largely the same (thanks to XNA and C#), so those Zune games can be easily ported to WP7 now.

Rendering states, enumerations, return values, and so forth are the same in XNA as they are in Direct3D, so it could be helpful to study a Direct3D book to improve your skills as an XNA programmer!

The project templates for Windows Phone might surprise you—there are only two! We can build a Windows Phone game or a game library. All the other templates are related to the other platforms supported by XNA.

  • Windows Phone Application (Visual C#)
  • Windows Phone Databound Application (Visual C#)
  • Windows Phone Class Library (Visual C#)
  • Windows Phone Panorama Application (Visual C#)
  • Windows Phone Pivot Application (Visual C#)
  • Windows Phone Game (4.0) (Visual C#)
  • Windows Phone Game Library (4.0) (Visual C#)
  • Windows Game (4.0) (Visual C#)
  • Windows Game Library (4.0) (Visual C#)
  • Xbox 360 Game (4.0) (Visual C#)
  • Xbox 360 Game Library (4.0) (Visual C#)
  • Content Pipeline Extension Library (4.0)
  • Empty Content Project (4.0) (Visual C#)

Let’s build a quick XNA project for Windows Phone to see what it looks like. We’ll definitely be doing a lot of this in upcoming chapters, since XNA is our primary focus (the coverage of Silverlight was only for the curious; grab a full-blown Silverlight or Expression Blend book for more complete and in-depth coverage).

Creating Your First XNA 4.0 Project

Let’s create a new XNA 4.0 project in Visual C# 2010, so we can use this as a comparison with the previous project created with Expression Blend. Follow these steps:

  1. Create a new project. We’ll be basing these tutorials around Visual Studio 2010 Express for Windows Phone. The processes will be similar to using the Professional version, but you will see many more project templates in the New Project dialog. Open the File menu and choose New Project. The New Project dialog is shown in Figure 2.9.

    Creating a new XNA 4.0 project.
    FIGURE 2.9 Creating a new XNA 4.0 project.
  2. The new project has been created. Note, from Figure 2.10, the code that has been automatically generated for the XNA project. If you have ever worked with XNA before, this will be no surprise—the code looks exactly like the generated code for Windows and Xbox 360 projects!

    The new XNA 4.0 project has been created.
    FIGURE 2.10 The new XNA 4.0 project has been created.
  3. Run the project with Build, Run, or by pressing F5. The emulator will come up, as shown in Figure 2.11. Doesn’t look like much—just a blue screen! That’s exactly what we want to see, because we haven’t written any game code yet.
  4. Add a SpriteFont to the Content project. Right-click the content project, called XNA ExampleContent(Content) in the Solution Explorer. Choose Add, New Item, as shown in Figure 2.12.
    Running the XNA project in the Windows Phone emulator.
    FIGURE 2.11 Running the XNA project in the Windows Phone emulator.

    Adding a new item to the content project.
    FIGURE 2.12 Adding a new item to the content project.
  5. In the Add Item dialog, choose the Sprite Font item from the list, as shown in Figure 2.13, and leave the filename as SpriteFont1.spritefont.

    Adding a new SpriteFont content item to the project.
    FIGURE 2.13 Adding a new SpriteFont content item to the project.
  6. Create the font variable. The comments in the code listing have been removed to make the code easier to read. We’ll dig into the purpose of all this code in the next hour, so don’t be concerned with understanding all the code yet. Type in the two new bold lines of code shown here to add the font variable.
    [code]
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        GraphicsDeviceManager graphics;
        SpriteBatch spriteBatch;

        //new font variable
        SpriteFont font;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
            TargetElapsedTime = TimeSpan.FromTicks(333333);
        }
    [/code]
  7. Load the font. Enter the two new lines shown in bold in the LoadContent method.
    [code]
    protected override void Initialize()
    {
        base.Initialize();
    }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        //load the font
        font = Content.Load<SpriteFont>("SpriteFont1");
    }

    protected override void UnloadContent()
    {
    }
    [/code]
  8. Print a message on the screen. Using the SpriteBatch and SpriteFont objects, we can print any text message. This is done from the Draw method—add the code highlighted in bold.
    [code]
    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back ==
                ButtonState.Pressed)
            this.Exit();
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        //print a message
        spriteBatch.Begin();
        string text = "HELLO FROM XNA!";
        Vector2 pos = font.MeasureString(text);
        spriteBatch.DrawString(font, text, pos, Color.White);
        spriteBatch.End();
        base.Draw(gameTime);
    }
    } // end of class Game1
    [/code]
  9. Run the program using Debug, Start Debugging, or by pressing F5. The program will come up in the emulator, shown in Figure 2.14. Now there’s just one big problem: The font is too small, and the screen needs to be rotated to landscape mode so we can read it!
  10. Click the emulator window to cause the little control menu to appear at the upper right. Two of the icons rotate the window left or right, allowing us to switch from portrait to landscape mode. All XNA projects default to portrait mode. Landscape mode is shown in Figure 2.15.
    The text message is displayed in the emulator—sideways!
    FIGURE 2.14 The text message is displayed in the emulator—sideways!

    Rotating the emulator window to landscape mode for XNA projects.
    FIGURE 2.15 Rotating the emulator window to landscape mode for XNA projects.
  11. Enlarge the font. We’re almost done; there’s just one final thing I want to show you how to do here. Open the font file you created, SpriteFont1.spritefont. Change the size value from 14 to 36. Now rerun the project by pressing F5. The new, large font is shown in Figure 2.16.

    Enlarging the font to make it more readable.
    FIGURE 2.16 Enlarging the font to make it more readable.

XNA or Silverlight: What’s the Verdict?

We have now seen two projects developed with two different, somewhat competing tools: XNA and Silverlight. Which should we choose? When it comes to developing a game, this is really a matter of preference. Although XNA is far more capable thanks to its rendering power, Silverlight can be used to make a game as well, with form-based control programming. For portable, touchscreen applications, Silverlight is the obvious choice. But for serious game development, XNA is the only real option.

We covered quite a bit of information regarding Visual Studio 2010, the project templates available for Windows Phone, and the value-added tool Expression Blend. A sample project was presented using Expression Blend with a corresponding Silverlight project in Visual Studio, as well as an XNA project. We’re off to a good start and already writing quite a bit of code! In the next hour, you will create your first Windows Phone game.

Windows Phone Special Effects

Using dual texture effects

Dual texture is very useful when you want to map two textures onto a model. The Windows Phone 7 XNA built-in DualTextureEffect samples the pixel color from two texture images, which is why it is called dual texture. Each texture used in the effect has its own texture coordinates and can be mapped, tiled, and rotated individually. The textures are mixed using the pattern:

[code]
finalTexture.color = texture1.Color * texture2.Color;
finalTexture.alpha = texture1.Alpha * texture2.Alpha;
[/code]

The color and alpha of the final texture come from separate computations. The classic use of DualTextureEffect is to apply a lightmap to a model. In computer graphics, computing lighting and shadows in real time is expensive. A lightmap is a texture that stores pre-computed lighting for the surfaces of a 3D model, which saves the cost of lighting computation at run time. Sometimes you might want an effect such as ambient occlusion, which is costly to evaluate in real time; instead, the lightmap can be used as a second texture and mapped onto the model or scene for a realistic result. Because the lightmap is pre-computed in 3D modeling software (you will learn how to do this in 3DS MAX), it is easy to use even the most complicated lighting effects (shadows, ray-tracing, radiosity, and so on) in Windows Phone 7. You can use the dual texture effect if you just want the game scene to have shadows and lighting. In this recipe, you will learn how to create the lightmap and apply it to your game model using the DualTextureEffect.
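To make the modulate pattern above concrete, here is a small plain-C# sketch. It is illustrative only: the texel values are invented for this example, and the real per-pixel blend is performed on the GPU by DualTextureEffect.

[code]
// Per-channel modulate blend used by DualTextureEffect:
// finalColor = baseColor * lightmapColor, components in the 0..1 range.
float[] baseTexel     = { 0.8f, 0.8f, 0.8f }; // light-gray checker texel
float[] lightmapTexel = { 0.5f, 0.5f, 0.5f }; // half-lit lightmap texel

float[] finalTexel = new float[3];
for (int i = 0; i < 3; i++)
    finalTexel[i] = baseTexel[i] * lightmapTexel[i]; // 0.8 * 0.5 = 0.4

// A fully shadowed lightmap texel (0, 0, 0) multiplies every channel
// down to zero, which is why a missing or black second texture makes
// the model render pure black.
[/code]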

How to do it…

The following steps show you the process for creating the lightmap in 3DS MAX and how to use the lightmap in your Windows Phone 7 game using DualTextureEffect:

  1. Create the Sphere lightmap in 3DS MAX 2011. Open 3DS MAX 2011, in the Create panel, click the Geometry button, then create a sphere by choosing the Sphere push button, as shown in the following screenshot:
    the Geometry button
  2. Add the texture to the Material Compact Editor and apply the material to the sphere. Click the following menu items of 3DS MAX 2011: Rendering | Material Editor | Compact Material Editor. Choose the first material ball and apply the texture you want to the material ball. Here, we use the tile1.png, a checker image, which you can find in the Content directory of the example bundle file. The applied material ball looks similar to the following screenshot:
    Material Compact Editor
  3. Apply the Target Direct Light to the sphere. In the Create panel—the same panel for creating sphere—click the Lights button and choose the Target Direct option. Then drag your mouse over the sphere in the Perspective viewport and adjust the Hotspot/Beam to let the light encompass the sphere, as shown in the following screenshot:
    the Perspective viewport
  4. Render the Lightmap. When the light is set as you want, the next step is to create the lightmap. After you click the sphere that you plan to build the lightmap for, click the following menu items in 3DS MAX: Rendering | Render To Texture. In the Output panel of the pop-up window, click the Add button. Another pop-up window will show up; choose the LightingMap option, and then click Add Elements, as shown in the following screenshot:
    Rendering | Render To Texture
  5. After that, change the setting of the lightmap:
    • Change the Target Map Slot to Self-Illumination in the Output panel.
    • Change the Baked Material Settings to Output Into Source in the Baked Material panel.
    • Change the Channel to 2 in the Mapping Coordinates panel.
    • Finally, click the Render button. The generated lightmap will look similar to the following screenshot:
      the Render button
      By default, the lightmap texture type is .tga, and the maps are placed in the images subfolder of the folder where you installed 3DS MAX. The new textures are flat. In other words, they are organized according to groups of object faces. In this example, the lightmap name is Sphere001LightingMap.tga.
  6. Open the Material Compact Editor again by clicking the menu items Rendering | Material Editor | Compact Material Editor. You will find that the first material ball has a mixed texture combining the original texture and the lightmap. You can also see that Self-Illumination is selected and its value is Sphere001LightingMap.tga. This means the lightmap for the sphere has been applied successfully.
  7. Select the sphere and export to an FBX model file named DualTextureBall.FBX, which will be used in our Windows Phone 7 game.
  8. From this step, we will render the lightmap of the sphere in our Windows Phone 7 XNA game using the new built-in effect DualTextureEffect. Now, create a Windows Phone Game project named DualTextureEffectBall in Visual Studio 2010 and change Game1.cs to DualTextureEffectBallGame.cs. Then, add the texture file tile1.png, the lightmap file Sphere001LightingMap.tga, and the model DualTextureBall.FBX to the content project.
  9. Declare the indispensable variables in the DualTextureEffectBallGame class. Add the following code to the class field:
    [code]
    // Ball Model
    Model modelBall;
    // Dual Texture Effect
    DualTextureEffect dualTextureEffect;
    // Camera
    Vector3 cameraPosition;
    Matrix view;
    Matrix projection;
    [/code]
  10. Initialize the camera. Insert the following code to the Initialize() method:
    [code]
    // Initialize the camera
    cameraPosition = new Vector3(0, 50, 200);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio,
    1.0f, 1000.0f);
    [/code]
  11. Load the ball model and initialize the DualTextureEffect. Paste the following code to the LoadContent() method:
    [code]
    // Load the ball model
    modelBall = Content.Load<Model>("DualTextureBall");
    // Initialize the DualTextureEffect
    dualTextureEffect = new DualTextureEffect(GraphicsDevice);
    dualTextureEffect.Projection = projection;
    dualTextureEffect.View = view;
    // Set the diffuse color
    dualTextureEffect.DiffuseColor = Color.Gray.ToVector3();
    // Set the first and second texture
    dualTextureEffect.Texture = Content.Load<Texture2D>("tile1");
    dualTextureEffect.Texture2 =
        Content.Load<Texture2D>("Sphere001LightingMap");
    [/code]
    Then, define the DrawModel() method in the class:
    [code]
    // Draw model
    private void DrawModel(Model m, Matrix world,
        DualTextureEffect effect)
    {
        foreach (ModelMesh mesh in m.Meshes)
        {
            // Iterate every part of the current mesh
            foreach (ModelMeshPart meshPart in mesh.MeshParts)
            {
                // Replace the original effect with the desired effect
                meshPart.Effect = effect;
                // Update the world matrix
                effect.World *= world;
            }
            mesh.Draw();
        }
    }
    [/code]
  12. Draw the ball model using DualTextureEffect on the Windows Phone 7 screen. Add the following lines to the Draw() method:
    [code]
    // Rotate the ball model around axis Y.
    float timer =
    (float)gameTime.ElapsedGameTime.TotalSeconds;
    DrawModel(modelBall, Matrix.CreateRotationY(timer),
    dualTextureEffect);
    [/code]
  13. Build and run the example. It should run as shown in the following screenshot:
    DualTextureEffect
  14. If you comment the following line in LoadContent() to disable the lightmap texture, you will find the difference when lightmap is on or off:
    [code]
    dualTextureEffect.Texture2 =
        Content.Load<Texture2D>("Sphere001LightingMap");
    [/code]
  15. Run the application without lightmap. The model will be in pure black as shown in the following screenshot:
    dual texture effects

How it works…

Steps 1–7 create the sphere and its lightmap in 3DS MAX 2011 and export them as an FBX model.

In step 9, the modelBall variable is responsible for loading and holding the ball model. The dualTextureEffect is an object of the XNA 4.0 built-in effect DualTextureEffect for rendering the ball model with its original texture and the lightmap. The following three variables, cameraPosition, view, and projection, represent the camera.

In step 11, the first line loads the ball model. The rest of the lines initialize the DualTextureEffect. Notice that we use tile1.png as the first (original) texture and Sphere001LightingMap.tga, the lightmap, as the second texture.

Also in step 11, the DrawModel() method differs from the usual definition: here, we need to replace the original effect of each mesh with the DualTextureEffect. As we iterate the mesh parts of every mesh of the current model, we assign the effect to meshPart.Effect, applying the DualTextureEffect to each mesh part.

Using environment map effects

In computer games, environment mapping is an efficient image-based lighting technique for making a reflective surface appear to reflect the distant environment surrounding the rendered object. In Need for Speed, produced by Electronic Arts, if you turn on the special visual effects option while playing, you will see the car body reflect the scene around it: trees, clouds, mountains, or buildings. It is amazing and attractive. That is environment mapping, and it makes games more realistic. Methods for storing the surrounding environment include sphere mapping, cube mapping, pyramid mapping, and octahedron mapping. In XNA 4.0, the framework uses cube mapping, in which the environment is projected onto the six faces of a cube and stored as six square textures, which can also be unfolded into six square regions of a single texture. In this recipe, you will learn how to make a cube map using the DirectX Texture Tool and apply it to a model using EnvironmentMapEffect.

Getting ready

A cube map is used in real-time engines to fake reflections and refractions. It is much faster than ray-tracing because it is only six textures mapped onto the faces of a cube (one image for each face).
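As a rough sketch of how a cube map is sampled (this helper is not part of XNA; it just illustrates the standard face-selection rule), the face is chosen by whichever component of the lookup direction has the largest magnitude:

[code]
// Pick which of the six cube faces a direction vector samples from.
// A real sampler also derives the (u, v) coordinates from the two
// remaining components; this sketch returns only the face name.
static string SelectCubeFace(float x, float y, float z)
{
    float ax = Math.Abs(x), ay = Math.Abs(y), az = Math.Abs(z);
    if (ax >= ay && ax >= az) return x >= 0 ? "Positive X" : "Negative X";
    if (ay >= az)             return y >= 0 ? "Positive Y" : "Negative Y";
    return                           z >= 0 ? "Positive Z" : "Negative Z";
}
[/code]

For example, a reflection vector pointing mostly upward would sample the Positive Y face that you assign when building the cube map.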

For creating the cube map for the environment map effect, you should use the DirectX Texture Tool from the DirectX SDK Utilities folder. The latest version of the Microsoft DirectX SDK can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=3021d52b-514e-41d3-ad02-438a3ba730ba.

How to do it…

The following steps lead you to create an application using the Environment Mapping effect:

  1. From this step, we will create the cube map in the DirectX Texture Tool. Run the application and create a new cube map by clicking the following menu items: File | New Texture. In the window that pops up, choose Cubemap Texture for the Texture Type, change the dimensions to 512 × 512 in the Dimensions panel, and set the Surface/Volume Format to Four CC 4-bit: DXT1. The final settings should look similar to the following screenshot:
    Cubemap Texture
  2. Set the texture of every face of the cube. Choose a face for setting the texture by clicking the following menu items: View | Cube Map Face | Positive X, as shown in the following screenshot:
    Cube Map Face | Positive X
  3. Then, apply the image for the Positive X face by clicking: File | Open Onto This Cubemap Face, as shown in the following screenshot:
    Open Onto This Cubemap Face
  4. When you click the item, a pop-up dialog will ask you to choose a proper image for this face. In this example, the Positive X face will look similar to the following screenshot:
    Positive X face will look similar
  5. It is similar for the other five faces: Negative X, Positive Y, Negative Y, Positive Z, and Negative Z. When all of the cube faces are appropriately set, we save the cube map as SkyCubeMap.dds. The cube map will look similar to the following figure:
    Negative X, Positive Y, Negative Y, Positive Z, and Negative Z
  6. From this step, we will start to render the ball model using the XNA 4.0 built-in effect called EnvironmentMapEffect. Create a Windows Phone Game project named EnvironmentMapEffectBall in Visual Studio 2010 and change Game1.cs to EnvironmentMapEffectBallGame.cs. Then, add the ball model file ball.FBX, the ball texture file silver.jpg, and the cube map generated with the DirectX Texture Tool, SkyCubeMap.dds, to the content project.
  7. Declare the necessary variables of the EnvironmentMapEffectBallGame class. Add the following lines to the class:
    [code]
    // Ball model
    Model modelBall;
    // Environment Map Effect
    EnvironmentMapEffect environmentEffect;
    // Cube map texture
    TextureCube textureCube;
    // Ball texture
    Texture2D texture;
    // Camera
    Vector3 cameraPosition;
    Matrix view;
    Matrix projection;
    [/code]
  8. Initialize the camera. Insert the following lines to the Initialize() method:
    [code]
    // Initialize the camera
    cameraPosition = new Vector3(2, 3, 32);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio,
    1.0f, 100.0f);
    [/code]
  9. Load the ball model, ball texture, and the sky cube map. Then initialize the environment map effect and set its properties. Paste the following code in the LoadContent() method:
    [code]
    // Load the ball model
    modelBall = Content.Load<Model>("ball");
    // Load the sky cube map
    textureCube = Content.Load<TextureCube>("SkyCubeMap");
    // Load the ball texture
    texture = Content.Load<Texture2D>("Silver");
    // Initialize the EnvironmentMapEffect
    environmentEffect = new EnvironmentMapEffect(GraphicsDevice);
    environmentEffect.Projection = projection;
    environmentEffect.View = view;
    // Set the initial texture
    environmentEffect.Texture = texture;
    // Set the environment map
    environmentEffect.EnvironmentMap = textureCube;
    environmentEffect.EnableDefaultLighting();
    // Set the environment effect factors
    environmentEffect.EnvironmentMapAmount = 1.0f;
    environmentEffect.FresnelFactor = 1.0f;
    environmentEffect.EnvironmentMapSpecular = Vector3.Zero;
    [/code]
  10. Define the DrawModel() of the class:
    [code]
    // Draw Model
    private void DrawModel(Model m, Matrix world,
        EnvironmentMapEffect environmentMapEffect)
    {
        foreach (ModelMesh mesh in m.Meshes)
        {
            foreach (ModelMeshPart meshPart in mesh.MeshParts)
            {
                meshPart.Effect = environmentMapEffect;
                environmentMapEffect.World = world;
            }
            mesh.Draw();
        }
    }
    [/code]
  11. Draw and rotate the ball with EnvironmentMapEffect on the Windows Phone 7 screen. Insert the following code to the Draw() method:
    [code]
    // Draw and rotate the ball model
    float time = (float)gameTime.TotalGameTime.TotalSeconds;
    DrawModel(modelBall,
        Matrix.CreateRotationY(time * 0.3f) * Matrix.CreateRotationX(time),
        environmentEffect);
    [/code]
  12. Build and run the application. It should run similar to the following screenshot:
    environment map effects

How it works…

Steps 1–5 use the DirectX Texture Tool to generate a sky cube map for the XNA 4.0 built-in effect EnvironmentMapEffect.

In step 7, the modelBall variable loads and holds the ball model; environmentEffect will be used to render the ball model with the EnvironmentMapEffect; and textureCube is the cube map texture that the effect receives through its EnvironmentMap property. The texture variable represents the ball texture, and the last three variables, cameraPosition, view, and projection, are responsible for initializing and controlling the camera.

In step 9, the first three lines load the required content: the ball model, the ball texture, and the sky cube map. Then, we instantiate the EnvironmentMapEffect object and set its properties. environmentEffect.Projection and environmentEffect.View are for the camera; environmentEffect.Texture maps the ball texture onto the ball model; environmentEffect.EnvironmentMap is the environment map from which the ball model gets the reflected color that is mixed with its original texture.

The EnvironmentMapAmount is a float that controls how much of the environment map shows up, that is, how much of the cube map texture blends over the model’s own texture. The values range from 0 to 1, and the default value is 1.

The FresnelFactor controls how the viewing angle affects the visibility of the environment map. Use a higher value to make the environment map visible mainly around the edges; use a lower value to make it visible everywhere. Fresnel lighting affects only the environment map color (RGB values); alpha is not affected. The value ranges from 0.0 to 1.0; 0.0 disables Fresnel lighting, and 1.0 is the default value.

The EnvironmentMapSpecular implements cheap specular lighting, by encoding one or more specular highlight patterns into the environment map alpha channel, then setting the EnvironmentMapSpecular to the desired specular light color.

In step 10, we replace the default effect of every mesh part of the model’s meshes with the EnvironmentMapEffect and draw each mesh with the replaced effect.

Rendering different parts of a character into textures using RenderTarget2D

Sometimes, you want to see a special part of a model or an image while still seeing the original view at the same time. This is where a render target helps. By the DirectX definition, a render target is a buffer where the video card draws the pixels of a scene being rendered by an effect class. Windows Phone 7 does not support a discrete video card; instead, the device has an embedded processing unit for graphics rendering. The major application of a render target on Windows Phone 7 is to render the current 2D or 3D scene into a 2D texture. You can then manipulate that texture for special effects such as transitions, partial views, and the like. In this recipe, you will discover how to render different parts of a model into textures and then draw them on the Windows Phone 7 screen.

Getting ready

The default render target is called the back buffer. This is the part of video memory that contains the next frame to be drawn. You can create other render targets with the RenderTarget2D class, reserving new regions of video memory for drawing. Most games render a lot of content to other (offscreen) render targets besides the back buffer, then assemble the different graphical elements in stages, combining them to create the final product in the back buffer.

A render target has a width and height. The width and height of the back buffer are the final resolution of your game. An offscreen render target does not need to have the same width and height as the back buffer. Small parts of the final image can be rendered in small render targets, and copied to another render target later. To use a render target, create a RenderTarget2D object with the width, height, and other options you prefer. Then, call GraphicsDevice.SetRenderTarget to make your render target the current render target. From this point on, any Draw calls you make will draw into your render target. Because RenderTarget2D is a subclass of Texture2D, the result can later be used as an ordinary texture. When you are finished with the render target, call GraphicsDevice.SetRenderTarget again with a new render target (or null for the back buffer).
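The round trip just described can be condensed into a sketch (DrawScene() is a hypothetical placeholder for your own drawing code, and the 256 x 256 size is arbitrary):

```csharp
// Reserve a 256 x 256 offscreen region of video memory
RenderTarget2D offscreen = new RenderTarget2D(GraphicsDevice, 256, 256);

// Redirect all drawing into the offscreen target
GraphicsDevice.SetRenderTarget(offscreen);
GraphicsDevice.Clear(Color.Transparent);
DrawScene();                         // any Draw calls land in 'offscreen'

// Restore the back buffer as the current render target
GraphicsDevice.SetRenderTarget(null);

// Because RenderTarget2D derives from Texture2D, the result is drawable
spriteBatch.Begin();
spriteBatch.Draw(offscreen, Vector2.Zero, Color.White);
spriteBatch.End();
```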

How to do it…

In the following steps, you will learn how to use RenderTarget2D to render different parts of a designated model into textures and present them on the Windows Phone 7 screen:

  1. Create a Windows Phone Game project named RenderTargetCharacter in Visual Studio 2010 and change Game1.cs to RenderTargetCharacterGame.cs. Then, add the character model file character.FBX and the character texture file Blaze.tga to the content project.
  2. Declare the required variables in the RenderTargetCharacterGame class field. Add the following lines of code to the class field:
    [code]
    // Character model
    Model modelCharacter;
    // Character model world position
    Matrix worldCharacter = Matrix.Identity;
    // Camera
    Vector3 cameraPosition;
    Vector3 cameraTarget;
    Matrix view;
    Matrix projection;
    // RenderTarget2D objects for rendering the head, left fist,
    // and right foot of the character
    RenderTarget2D renderTarget2DHead;
    RenderTarget2D renderTarget2DLeftFist;
    RenderTarget2D renderTarget2DRightFoot;
    [/code]
  3. Initialize the camera and render targets. Insert the following code to the Initialize() method:
    [code]
    // Initialize the camera
    cameraPosition = new Vector3(0, 40, 350);
    cameraTarget = new Vector3(0, 0, 1000);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000.0f);
    // Initialize the RenderTarget2D objects with different sizes
    renderTarget2DHead = new RenderTarget2D(GraphicsDevice,
    196, 118, false, SurfaceFormat.Color,
    DepthFormat.Depth24, 0,
    RenderTargetUsage.DiscardContents);
    renderTarget2DLeftFist = new RenderTarget2D(GraphicsDevice,
    100, 60, false, SurfaceFormat.Color,
    DepthFormat.Depth24,
    0, RenderTargetUsage.DiscardContents);
    renderTarget2DRightFoot = new
    RenderTarget2D(GraphicsDevice, 100, 60, false,
    SurfaceFormat.Color, DepthFormat.Depth24, 0,
    RenderTargetUsage.DiscardContents);
    [/code]
  4. Load the character model and insert the following line of code to the LoadContent() method:
    [code]
    modelCharacter = Content.Load<Model>("Character");
    [/code]
  5. Define the DrawModel() method:
    [code]
    // Draw the model on screen
    public void DrawModel(Model model, Matrix world, Matrix view,
    Matrix projection)
    {
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);
    foreach (ModelMesh mesh in model.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.EnableDefaultLighting();
    effect.DiffuseColor = Color.White.ToVector3();
    effect.World =
    transforms[mesh.ParentBone.Index] * world;
    effect.View = view;
    effect.Projection = projection;
    }
    mesh.Draw();
    }
    }
    [/code]
  6. Get the render targets of the head, left fist, and right foot of the character. Then draw the render target textures onto the Windows Phone 7 screen. Insert the following code to the Draw() method:
    [code]
    // Get the rendertarget of character head
    GraphicsDevice.SetRenderTarget(renderTarget2DHead);
    GraphicsDevice.Clear(Color.Blue);
    cameraPosition = new Vector3(0, 110, 60);
    cameraTarget = new Vector3(0, 110, -1000);
    view = Matrix.CreateLookAt(cameraPosition, cameraTarget,
    Vector3.Up);
    DrawModel(modelCharacter, worldCharacter, view,
    projection);
    GraphicsDevice.SetRenderTarget(null);
    // Get the rendertarget of character left fist
    GraphicsDevice.SetRenderTarget(renderTarget2DLeftFist);
    GraphicsDevice.Clear(Color.Blue);
    cameraPosition = new Vector3(-35, -5, 40);
    cameraTarget = new Vector3(0, 5, -1000);
    view = Matrix.CreateLookAt(cameraPosition, cameraTarget,
    Vector3.Up);
    DrawModel(modelCharacter, worldCharacter, view,
    projection);
    GraphicsDevice.SetRenderTarget(null);
    // Get the rendertarget of character right foot
    GraphicsDevice.SetRenderTarget(renderTarget2DRightFoot);
    GraphicsDevice.Clear(Color.Blue);
    cameraPosition = new Vector3(20, -120, 40);
    cameraTarget = new Vector3(0, -120, -1000);
    view = Matrix.CreateLookAt(cameraPosition, cameraTarget,
    Vector3.Up);
    DrawModel(modelCharacter, worldCharacter, view,
    projection);
    GraphicsDevice.SetRenderTarget(null);
    // Draw the character model
    cameraPosition = new Vector3(0, 40, 350);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    GraphicsDevice.Clear(Color.CornflowerBlue);
    DrawModel(modelCharacter, worldCharacter, view,
    projection);
    // Draw the generated rendertargets of different parts of
    // character model in 2D
    spriteBatch.Begin();
    spriteBatch.Draw(renderTarget2DHead, new Vector2(500, 0),
    Color.White);
    spriteBatch.Draw(renderTarget2DLeftFist, new Vector2(200,
    220),
    Color.White);
    spriteBatch.Draw(renderTarget2DRightFoot, new Vector2(500,
    400),
    Color.White);
    spriteBatch.End();
    [/code]
  7. Build and run the application. The application will run as shown in the following screenshot:
    RenderTarget2D

How it works…

In step 2, modelCharacter loads the character's 3D model and worldCharacter represents the world transformation matrix of the character. The next four variables, cameraPosition, cameraTarget, view, and projection, are used to initialize the camera. Here, cameraTarget has the same Y value as cameraPosition and a Z value far behind the scene's center, because we want the camera's look-at direction to be parallel to the XZ plane. The last three RenderTarget2D objects, renderTarget2DHead, renderTarget2DLeftFist, and renderTarget2DRightFoot, are responsible for rendering the different parts of the character from the 3D real-time view into 2D textures.

In step 3, we initialize the camera and the three render targets. The initialization code for the camera is nothing new. RenderTarget2D has three overloaded constructors; the most complex is the third, and once you understand it, the other two are easy. This constructor looks similar to the following code:

[code]
public RenderTarget2D (
GraphicsDevice graphicsDevice,
int width,
int height,
bool mipMap,
SurfaceFormat preferredFormat,
DepthFormat preferredDepthFormat,
int preferredMultiSampleCount,
RenderTargetUsage usage
)
[/code]

Let’s have a look at what all these parameters stand for:

  • graphicsDevice: This is the graphic device associated with the render target resource.
  • width: This is the width, in pixels, of the render target. You can use graphicsDevice.PresentationParameters.BackBufferWidth to get the current screen width. Because RenderTarget2D is a subclass of Texture2D, the width and height values of a RenderTarget2D object define the size of the final render target texture. Notice that the maximum size of a Texture2D on Windows Phone 7 is 2048 pixels, so the width of a RenderTarget2D cannot exceed this limit.
  • height: This is the height, in pixels, of the render target. You can use graphicsDevice.PresentationParameters.BackBufferHeight to get the current screen height. The remaining details are the same as for the width parameter.
  • mipMap: Set this to true to enable generation of a full mipmap chain; otherwise, false.
  • preferredFormat: This is the preferred format for the surface data. This is the format preferred by the application, which may or may not be available from the hardware. In the XNA Framework, all two-dimensional (2D) images are represented by a range of memory called a surface. Within a surface, each element holds a color value representing a small section of the image, called a pixel. An image’s detail level is defined by the number of pixels needed to represent the image and the number of bits needed for the image’s color spectrum. For example, an image that is 800 pixels wide and 600 pixels high with 32 bits of color for each pixel (written as 800 x 600 x 32) is more detailed than an image that is 640 pixels wide and 480 pixels tall with 16 bits of color for each pixel (written as 640 x 480 x 16). Likewise, the more detailed image requires a larger surface to store the data. For an 800 x 600 x 32 image, the surface’s array dimensions are 800 x 600, and each element holds a 32-bit value to represent its color.

    All formats are listed from left to right, most-significant bit to least-significant bit. For example, ARGB formats are ordered from the most-significant bit channel A (alpha), to the least-significant bit channel B (blue). When traversing surface data, the data is stored in memory from least-significant bit to most-significant bit, which means that the channel order in memory is from least-significant bit (blue) to most-significant bit (alpha).

    The default value for formats that contain undefined channels (Rg32, Alpha8, and so on) is 1. The only exception is the Alpha8 format, whose three color channels are initialized to 0. Here, we use the SurfaceFormat.Color option: an unsigned, 32-bit ARGB pixel format with alpha, using 8 bits per channel.

  • preferredDepthFormat: This is the format of the depth buffer, which contains depth data and possibly stencil data. You can control a depth buffer using a state object. The available depth formats are Depth16, Depth24, and Depth24Stencil8.
  • usage: This is a RenderTargetUsage value that determines how the render target data is used once a new target is set. The enumeration has three values: PreserveContents, PlatformContents, and DiscardContents. The default, DiscardContents, means that whenever a render target is set onto the device, the contents of the previous one are destroyed. With PreserveContents, on the other hand, the data associated with the render target is maintained when a new render target is set; this option greatly impacts performance, because the data must be stored and copied back into the render target when you use it again. PlatformContents either clears or keeps the data depending on the current platform: on Xbox 360 and Windows Phone 7, the render target discards its contents; on PC, it discards the contents if multisampling is enabled, and preserves them if not.
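Putting these parameters together, a full-screen color render target that matches the back buffer might be created as follows (a sketch; the option values are the same ones discussed above):

```csharp
PresentationParameters pp = GraphicsDevice.PresentationParameters;

RenderTarget2D fullScreenTarget = new RenderTarget2D(
    GraphicsDevice,
    pp.BackBufferWidth,              // width: current screen width
    pp.BackBufferHeight,             // height: current screen height
    false,                           // mipMap: no mipmap chain
    SurfaceFormat.Color,             // 32-bit ARGB, 8 bits per channel
    DepthFormat.Depth24,             // 24-bit depth buffer
    0,                               // no multisampling
    RenderTargetUsage.DiscardContents);
```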

In step 6, the first part of the Draw() method gets the render target texture for the head of the character; GraphicsDevice.SetRenderTarget() sets a new render target on the device. Because the application runs on Windows Phone 7 and the RenderTargetUsage is set to DiscardContents, every time a new render target is assigned to the device, the contents of the previous one are destroyed. According to the XNA 4.0 SDK, the method has the following restrictions when called:

  • The multi-sample type must be the same for the render target and the depth stencil surface
  • The formats must be compatible for the render target and the depth stencil surface
  • The size of the depth stencil surface must be greater than, or equal to, the size of the render target

These restrictions are validated only when using the debug runtime, when any of the GraphicsDevice drawing methods are called. The lines that follow, up to GraphicsDevice.SetRenderTarget(null), adjust the camera position and look-at target for rendering the head of the character. This block of code sets up the view for rendering the model from the 3D scene into a 2D render target texture, which will be displayed at the designated place on the Windows Phone screen. Calling GraphicsDevice.SetRenderTarget(null) resets the current render target on the graphics device, ready for the next one. The second and third parts of the Draw() method do the same for renderTarget2DLeftFist and renderTarget2DRightFoot. The fourth part draws the actual character 3D model. After that, we present all of the generated render targets on the Windows Phone 7 screen using the 2D drawing methods.
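Because the three render-target passes in step 6 differ only in their target and camera placement, they could be folded into one hypothetical helper method (a refactoring sketch, not part of the recipe's listing; it assumes the fields and the DrawModel() method defined earlier):

```csharp
// Render the character into 'target' as seen from 'eye' toward 'lookAt'
void RenderPartToTexture(RenderTarget2D target, Vector3 eye, Vector3 lookAt)
{
    GraphicsDevice.SetRenderTarget(target);       // redirect drawing
    GraphicsDevice.Clear(Color.Blue);
    Matrix partView = Matrix.CreateLookAt(eye, lookAt, Vector3.Up);
    DrawModel(modelCharacter, worldCharacter, partView, projection);
    GraphicsDevice.SetRenderTarget(null);         // back to the back buffer
}
```

With such a helper, the head pass would reduce to RenderPartToTexture(renderTarget2DHead, new Vector3(0, 110, 60), new Vector3(0, 110, -1000));.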

Creating a screen transition effect using RenderTarget2D

Do you remember the scene transitions in Star Wars? A scene transition is a very common way of smoothly changing a movie scene from the current one to the next. Frequently used transition patterns include swiping, rotating, fading, checkerboard scattering, and so on. With the proper transition effects, the audience knows the plot is moving on when the stage changes. Besides movies, transition effects also have an obvious application in video games, especially 2D games, where every game state change can trigger a transition effect. In this recipe, you will learn how to create a typical transition effect using RenderTarget2D for your Windows Phone 7 game.

How to do it…

The following steps will draw a spinning squares transition effect using the RenderTarget2D technique:

  1. Create a Windows Phone Game named RenderTargetTransitionEffect and change Game1.cs to RenderTargetTransitionEffectGame.cs. Then, add Image1.png and Image2.png to the content project.
  2. Declare the indispensable variables. Insert the following code to the RenderTargetTransitionEffectGame code field:
    [code]
    // The first forefront and background images
    Texture2D textureForeFront;
    Texture2D textureBackground;
    // the width of each divided image
    int xfactor = 800 / 8;
    // the height of each divided image
    int yfactor = 480 / 8;
    // The render target for the transition effect
    RenderTarget2D transitionRenderTarget;
    float alpha = 1;
    // the time counter
    float timer = 0;
    const float TransitionSpeed = 1.5f;
    [/code]
  3. Load the forefront and background images, and initialize the render target for the jumping sprites transition effect. Add the following code to the LoadContent() method:
    [code]
    // Load the forefront and the background image
    textureForeFront = Content.Load<Texture2D>("Image1");
    textureBackground = Content.Load<Texture2D>("Image2");
    // Initialize the render target
    transitionRenderTarget = new RenderTarget2D(GraphicsDevice,
    800, 480, false, SurfaceFormat.Color,
    DepthFormat.Depth24, 0,
    RenderTargetUsage.DiscardContents);
    [/code]
  4. Define the core method DrawJumpingSpritesTransition() for the jumping sprites transition effect. Paste the following lines into the RenderTargetTransitionEffectGame class:
    [code]
    void DrawJumpingSpritesTransition(float delta, float alpha,
    RenderTarget2D renderTarget)
    {
    // Instantiate a new Random object for generating random
    // values to change the rotation, scale, and position
    // of each subdivided image
    Random random = new Random();
    // Divide the image into the designated number of pieces,
    // here 8 * 8 = 64
    for (int x = 0; x < 8; x++)
    {
    for (int y = 0; y < 8; y++)
    {
    // Define the size of each piece
    Rectangle rect = new Rectangle(xfactor * x,
    yfactor * y, xfactor, yfactor);
    // Define the origin center for rotation and
    // scale of the current subimage
    Vector2 origin =
    new Vector2(rect.Width, rect.Height) / 2;
    float rotation =
    (float)(random.NextDouble() - 0.5f) *
    delta * 20;
    float scale = 1 +
    (float)(random.NextDouble() - 0.5f) *
    delta * 20;
    // Randomly change the position of the current
    // divided subimage
    Vector2 pos =
    new Vector2(rect.Center.X, rect.Center.Y);
    pos.X += (float)(random.NextDouble());
    pos.Y += (float)(random.NextDouble());
    // Draw the current sub image
    spriteBatch.Draw(renderTarget, pos, rect,
    Color.White * alpha, rotation, origin,
    scale, SpriteEffects.None, 0);
    }
    }
    }
    [/code]
  5. Get the render target of the forefront image and draw the jumping sprites transition effect by calling the DrawJumpingSpritesTransition() method. Insert the following code to the Draw() method:
    [code]
    // Render the forefront image to render target texture
    GraphicsDevice.SetRenderTarget(transitionRenderTarget);
    spriteBatch.Begin();
    spriteBatch.Draw(textureForeFront, new Vector2(0, 0),
    Color.White);
    spriteBatch.End();
    GraphicsDevice.SetRenderTarget(null);
    // Get the total elapsed game time
    timer += (float)(gameTime.ElapsedGameTime.TotalSeconds);
    // Compute the delta value in every frame
    float delta = timer / TransitionSpeed * 0.01f;
    // Subtract the delta value from alpha to change the image
    // from opaque to transparent
    alpha -= delta;
    // Draw the jumping sprites transition effect
    spriteBatch.Begin();
    spriteBatch.Draw(textureBackground, Vector2.Zero,
    Color.White);
    DrawJumpingSpritesTransition(delta, alpha,
    transitionRenderTarget);
    spriteBatch.End();
    [/code]
  6. Build and run the application. It should run similar to the following screenshots:
    transition effect using RenderTarget2D

How it works…

In step 2, textureForeFront and textureBackground hold the forefront and background images prepared for the jumping sprites transition effect. The xfactor and yfactor define the size of each subdivided image used in the transition effect. transitionRenderTarget is the RenderTarget2D object into which the forefront image is rendered as a texture for the jumping sprites transition effect. The alpha variable controls the transparency of each subimage, and timer accumulates the total elapsed game time. TransitionSpeed is a constant that defines the transition speed.

In step 4, we define the core method DrawJumpingSpritesTransition() for drawing the jumping sprites effect. First of all, we instantiate a Random object; the random values it generates will be used to change the rotation, scale, and position of the divided subimages in the transition effect. In the nested loop, we iterate over every subimage, row by row and column by column. For each subimage, we create a Rectangle object with the pre-defined size. Then, we move the origin point to the image center, which makes the image rotate and scale in place. After that, we randomly change the rotation, scale, and position values. Finally, we draw the current subimage on the Windows Phone 7 screen.

In step 5, we draw the forefront image first, because we want the transition effect applied to the forefront image. Using the render target, we capture the current view into the render target texture by putting the drawing code between the GraphicsDevice.SetRenderTarget(transitionRenderTarget) and GraphicsDevice.SetRenderTarget(null) calls. Next, we use the accumulated elapsed game time to compute the delta value that is subtracted from the alpha value. The alpha is used in the SpriteBatch.Draw() method to make the subimages of the jumping sprites change from opaque to transparent. The last part of the Draw() method draws the background image first, then the transition effect. This drawing order is important: the texture that carries the transition effect must be drawn after the images without it; otherwise, you will not see the effect you want.
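The fade bookkeeping in step 5 can be isolated as a sketch (the constants follow step 2; MathHelper.Clamp is used here as an assumed safeguard that the recipe's listing omits):

```csharp
// Accumulate elapsed time and derive a per-frame fade step
timer += (float)gameTime.ElapsedGameTime.TotalSeconds;
float delta = timer / TransitionSpeed * 0.01f;    // grows as time passes

// Fade the forefront from opaque (1) toward transparent (0)
alpha = MathHelper.Clamp(alpha - delta, 0f, 1f);

// Color.White * alpha yields the premultiplied tint SpriteBatch expects
Color tint = Color.White * alpha;
```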

Windows Phone Game User Interface, Heads Up Display (HUD), Part 2

Creating a progress bar for game content loading and value status

When playing a game, especially a big one, a progress bar at the initialization phase shows you the loading status and percentage of game objects. In a time-based game, the progress bar can represent the remaining time. Moreover, a Role Playing Game (RPG) often uses a progress bar to present the current life value. A progress bar is a very common control in game development and is easy to use. In this recipe, you will write the code for creating a progress bar.

Getting ready

In Windows Phone 7 XNA programming, there are two ways to create a progress bar. The first is to use rectangle primitives for drawing the background and forefront. This is simple but neither flexible nor stylish; if you want some innovative and unusual visual effects, primitives will not meet your needs. The second method gives you much more room to realize your ideas for the progress bar. You can use a graphic design tool to draw the background and forefront images as you like, then render these images, changing the size of the forefront image to match the current percentage; even round or other shapes can be used to present the progress status. In this example, we will use rectangle images (the second method) to implement the progress bar on Windows Phone 7.
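For comparison, the first (primitive rectangle) method can be sketched with a one-pixel white texture stretched to size; the positions and sizes below are made-up examples:

```csharp
// One-pixel white texture, created once (for example, in LoadContent)
Texture2D pixel = new Texture2D(GraphicsDevice, 1, 1);
pixel.SetData(new[] { Color.White });

// A 200 x 20 bar at (100, 100), filled according to 'percent' (0..1)
Rectangle background = new Rectangle(100, 100, 200, 20);
Rectangle fill = new Rectangle(100, 100, (int)(200 * percent), 20);
spriteBatch.Begin();
spriteBatch.Draw(pixel, background, Color.DarkGray);   // bar background
spriteBatch.Draw(pixel, fill, Color.LimeGreen);        // filled portion
spriteBatch.End();
```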

How to do it…

The following steps give you complete guidance for developing a progress bar in your Windows Phone 7 game:

  1. Create a Windows Phone Game project in Visual Studio 2010 named ProgressBar, change Game1.cs to ProgressBarGame.cs, and insert a ProgressBar.cs into the project. Then add ProgressBarBackground.png and ProgressBarForefront.png to the content project.
  2. Add a ProgressBar class in ProgressBar.cs to the main project. Add the code to the ProgressBar class fields:
    [code]
    // SpriteBatch for drawing 2D image
    SpriteBatch spriteBatch;
    // ProgressBar forefront and background images
    Texture2D texForefront;
    Texture2D texBackground;
    // The background and forefront position
    Vector2 backgroundPosition;
    Vector2 forefrontPosition;
    // The offset of forefront image from the background.
    float forefrontStartOffSetX;
    float forefrontStartOffSetY;
    // Current value of progressbar
    public int Value;
    // The Min and Max values of progressbar
    public int Min;
    public int Max;
    // Percentage of the current value out of 100
    float percent;
    // the actual rendering width of forefront image
    float actualWidth;
    // The direction of progress.
    bool increase = false;
    [/code]
  3. Next, we define the Increase property:
    [code]
    // The increasing direction
    public bool Increase
    {
    get
    {
    return increase;
    }
    set
    {
    // When increasing, the Value begins from Min
    if (value)
    {
    increase = value;
    Value = Min;
    }
    // When decreasing, the Value begins from Max
    else
    {
    increase = value;
    Value = Max;
    }
    }
    }
    [/code]
  4. The next step is to define the constructor of the ProgressBar class:
    [code]
    public ProgressBar(Vector2 position, Texture2D forefront,
    Texture2D background, SpriteBatch spriteBatch)
    {
    this.spriteBatch = spriteBatch;
    texForefront = forefront;
    texBackground = background;
    backgroundPosition = position;
    // Calculate the offset for forefront image
    forefrontStartOffSetX = (texBackground.Width -
    texForefront.Width) / 2;
    forefrontStartOffSetY = (texBackground.Height -
    texForefront.Height) / 2;
    // Create the forefront image position
    forefrontPosition = new Vector2(backgroundPosition.X +
    forefrontStartOffSetX,
    backgroundPosition.Y + forefrontStartOffSetY);
    // Initialize the Min and Max
    Min = 0;
    Max = 100;
    // Set the increasing direction from high to low.
    Increase = false;
    }
    [/code]
  5. After the constructor, the following method definition is the Update(), so add the method to the ProgressBar class:
    [code]
    public void Update(GameTime gameTime)
    {
    // If increasing and Value less than Max, increase the
    // Value by one
    if (Increase && Value < Max)
    {
    Value++;
    }
    // Otherwise, if Value greater than Min, decrease it by one
    else if (Value > Min)
    {
    Value--;
    }
    // Compute the actual forefront image width for drawing
    percent = (float)Value / 100;
    actualWidth = percent * texForefront.Width;
    }
    [/code]
  6. The final step of creating the ProgressBar class is to define the Draw() method:
    [code]
    public void Draw()
    {
    spriteBatch.Draw(texBackground, backgroundPosition,
    Color.White);
    spriteBatch.Draw(texForefront, forefrontPosition, new
    Rectangle(0, 0, (int)actualWidth,
    texForefront.Height), Color.White);
    }
    [/code]
  7. Use the ProgressBar class in our game. First, add the code to the class field:
    [code]
    // Texture objects for background and forefront images
    Texture2D texForefront;
    Texture2D texBackground;
    // The background image position
    Vector2 position;
    // Progress bar object
    ProgressBar progressBar;
    [/code]
  8. Then insert the initialization code to the LoadContent() method:
    [code]
    // Load the background and forefront images
    texForefront =
    Content.Load<Texture2D>("ProgressBarForefront");
    texBackground =
    Content.Load<Texture2D>("ProgressBarBackground");
    // Initialize the progress bar
    position = new Vector2(200, 240);
    progressBar = new ProgressBar(position, texForefront,
    texBackground, spriteBatch);
    [/code]
  9. Next, insert the code to the Update() method:
    [code]
    // Update the progress bar
    progressBar.Update(gameTime);
    [/code]
  10. Finally, insert the drawing code to the Draw() method:
    [code]
    // draw the progress bar
    spriteBatch.Begin();
    progressBar.Draw();
    spriteBatch.End();
    [/code]
  11. Now, build and run the application, and it will run as shown in the following screenshots:
    game content loading and value status

How it works…

In step 2, texForefront and texBackground are the Texture2D objects that hold the progress bar's forefront and background images. The next two variables, forefrontStartOffSetX and forefrontStartOffSetY, indicate the offset position of the forefront image relative to the background; Value stores the progress bar's current value; Min and Max define the range of the progress bar; percent and actualWidth are used to calculate and store the current width of the forefront image, respectively; the last variable, increase, represents the direction in which the progress bar's value changes.

In step 3, if the Increase value is false, which means it is decreasing, the Value begins from the right with Max. Otherwise, the Value will begin from the left.

In step 4, notice the computation of the forefront image offset: we subtract the forefront image's width from the background image's width to get the horizontal gap between the two images, then divide that gap by 2 to get the X-axis offset of the forefront image from the background. The Y-axis offset is computed similarly. After getting the offset of the forefront image, we set Min to 0 and Max to 100, the value range of the progress bar. The last line defines the direction of change; false here means the progress value will decrease from 100 to 0, right to left.
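The centering arithmetic reduces to two lines; with made-up image sizes of 300 x 40 (background) and 280 x 24 (forefront):

```csharp
// Split the horizontal and vertical gaps evenly on both sides
float offsetX = (300 - 280) / 2f;   // 10: X offset of the forefront image
float offsetY = (40 - 24) / 2f;     // 8: Y offset of the forefront image
```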

In step 5, the first part of the Update() method changes Value by one according to the direction of change. The second part computes the actual width of the forefront image for rendering.

In step 6, this code draws the background and forefront images on screen. Notice the third parameter of the Draw() call for the forefront image: it is a Rectangle that specifies the part of the forefront image to render in every frame, letting you adjust the rendered width of the forefront image to present the value variation of the progress bar.

In step 7, texForefront stands for the forefront image of the progress bar; texBackground represents its background image; position defines the progress bar's position on the Windows Phone 7 screen; and the last variable, progressBar, is the progress bar object that performs the different progress behaviors.

Creating buttons in your game

In any type of game, the button control is always one of the most basic and important parts. In a GUI system, the button often links the different controls together. When you input some text in a text control, you click a button to send the message or confirm it as a command. When you use a listbox, up and down buttons help you look up the information you need in the game, such as when choosing weapons. In the development phase, programmers can define specific behaviors for button events to implement game logic or effects. Implementing a button in the Windows Phone 7 XNA framework is not a hard mission. In this recipe, you will learn how to build your own button in Windows Phone 7.

How to do it…

The following steps will show you the working code for creating the buttons for your Windows Phone 7 game:

  1. Create a Windows Phone Game project named Button, change Game1.cs to ButtonGame.cs. Then add the Button.cs to the main project and button_image.png and gameFont.spriteFont to the content project.
  2. Create the Button class in the Button.cs file. Add the following lines to the class fields:
    [code]
    // Button texture
    Texture2D texButton;
    // SpriteBatch for drawing the button image
    SpriteBatch spriteBatch;
    // Button position on the screen
    public Vector2 Position;
    // Color alpha value
    public int Alpha = 255;
    // Button color
    Color color;
    // Timer for game elapsed time accumulation
    float timer;
    // The Tapped bool value indicates whether the tap is in
    // the button region
    public bool Tapped;
    // Event handler OnTapped to react with tap gesture
    public event EventHandler OnTapped;
    [/code]
  3. Then, define the HitRegion property of the Button class:
    [code]
    // Get the hit region
    public Rectangle HitRegion
    {
    get
    {
    return new Rectangle((int)Position.X, (int)Position.Y,
    texButton.Width, texButton.Height);
    }
    }
    [/code]
  4. Next, give the class constructor Button() of the Button class:
    [code]
    // Initialize the button without text
    public Button(Texture2D texture, Vector2 position, SpriteBatch
    spriteBatch)
    {
    this.texButton = texture;
    this.Position = position;
    this.spriteBatch = spriteBatch;
    color = Color.White;
    }
    [/code]
  5. After the class constructor, the important Update() method that reacts to the tap gesture looks similar to the following code:
    [code]
    // Update the button
    public void Update(GameTime gameTime, Vector2 touchPosition)
    {
    // React to the tap gesture
    Point point = new Point((int)touchPosition.X,
    (int)touchPosition.Y);
    // If the button is tapped, set Tapped to true and trigger
    // the OnTapped event
    if (HitRegion.Contains(point))
    {
    Tapped = true;
    OnTapped(this, null);
    }
    else
    {
    Tapped = false;
    }
    }
    [/code]
  6. The final step to build the Button class is to define the Draw() method:
    [code]
    // Draw the button
    public virtual void Draw(GameTime gameTime)
    {
    timer += (float)gameTime.ElapsedGameTime.TotalMilliseconds;
    // Draw the button texture
    if (Tapped)
    {
    // Flash the button through the alpha value changing
    if (timer > 100)
    {
    // If the Alpha is 255, set it to 0
    if (Alpha == 255)
    {
    Alpha = 0;
    }
    // If the Alpha value is 0, set it to 255
    else if (Alpha == 0)
    {
    Alpha = 255;
    }
    // Set the color alpha value
    color.A = (byte)Alpha;
    // Set the timer to 0 for next frame
    timer = 0;
    }
    // Draw the button image
    spriteBatch.Draw(texButton, HitRegion, null, color, 0,
    Vector2.Zero, SpriteEffects.None, 0);
    }
    else
    {
    spriteBatch.Draw(texButton, HitRegion,
    null,Color.White, 0,
    Vector2.Zero, SpriteEffects.None, 0);
    }
    }
    [/code]
  7. Use the Button class in our main game class. Insert the following code into the ButtonGame class as fields:
    [code]
    // Sprite Font for showing the text
    SpriteFont font;
    // Text object
    string textTapState = "Random Color Text";
    // Text color;
    Color textColor = Color.White;
    // Random object for showing the random color
    Random random;
    // Button object
    Button button;
    // Button texture;
    Texture2D buttonTexture;
    [/code]
  8. Initialize the random variable in the Initialize() method, and add the following code to the method:
    [code]
    random = new Random();
    [/code]
  9. Load the button image and initialize the button object. Add the code to the LoadContent() method:
    [code]
    font = Content.Load<SpriteFont>("gameFont");
    buttonTexture = Content.Load<Texture2D>("button_image");
    Vector2 position = new Vector2(
    GraphicsDevice.Viewport.Width / 2 - buttonTexture.Width / 2,
    GraphicsDevice.Viewport.Height / 2 - buttonTexture.Height / 2);
    button = new Button(buttonTexture, position, spriteBatch);
    button.OnTapped += new EventHandler(button_OnTapped);
    [/code]
  10. Next is the reaction method for the button OnTapped event:
    [code]
    void button_OnTapped(object sender, EventArgs e)
    {
    textColor.R = (byte)random.Next(0, 256);
    textColor.G = (byte)random.Next(0, 256);
    textColor.B = (byte)random.Next(0, 256);
    }
    [/code]
  11. Get the tapped position and pass it to the Button.Update() method. Paste the following code into the Update() method:
    [code]
    TouchCollection touches = TouchPanel.GetState();
    if(touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    Vector2 tappostion = touches[0].Position;
    button.Update(gameTime, tappostion);
    }
    [/code]
  12. Draw the button on screen by putting the following lines of code into the Draw() method:
    [code]
    spriteBatch.Begin(SpriteSortMode.Immediate,
    BlendState.NonPremultiplied);
    button.Draw(gameTime);
    spriteBatch.DrawString(font, textTapState, new Vector2(0, 0),
    textColor);
    spriteBatch.End();
    [/code]
  13. OK, the code work is done. Build and run the project; the application should run similar to the following screenshot. When we tap the button, it flickers and generates a random color for the text in the top-left corner.
    Creating buttons in your game

How it works…

Steps 2–6 are about creating the Button class.

In step 2, texButton is the button texture; spriteBatch renders the button texture on screen; Position specifies the location of the button on screen; Alpha represents the alpha value of the button color; timer accumulates the game's elapsed time; the bool value Tapped indicates the tapping state of the button; OnTapped is the event handler for the button tap gesture.

In step 3, the HitRegion property will return the bound surrounding the button for tap validation.

In step 4, the constructor initializes the button texture, position, and color.

In step 5, within the Update() method of the Button class, the code checks whether the tapped position is inside the button's HitRegion. If it is, the code sets Tapped to true and triggers the OnTapped event; otherwise, Tapped is set to false.
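Note that Update() raises the event with a direct call, so if no handler has been attached the call throws a NullReferenceException. A minimal defensive variant of the same check, keeping the recipe's field names, might look like this (a sketch, not code from the recipe):

```csharp
// Null-safe raise: copy the delegate first so a handler removed
// between the check and the call cannot leave it null.
if (HitRegion.Contains(point))
{
    Tapped = true;
    EventHandler handler = OnTapped;
    if (handler != null)
    {
        handler(this, EventArgs.Empty);
    }
}
else
{
    Tapped = false;
}
```

The recipe works as printed because the main game class always subscribes button_OnTapped before Update() can run.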

In step 6, the first line accumulates the game's elapsed time in milliseconds. The code that follows draws the button image. If the button is tapped and the elapsed time exceeds 100 milliseconds, the button flickers. The effect is implemented by toggling the Alpha value of the button color: if Alpha equals 255 (opaque), we set it to 0 (transparent); if it equals 0, we set it back to 255. The latest alpha value is then assigned to color.A, the alpha component of the Color. Then the timer is reset for the next frame. The last line renders the flickering effect on screen.
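The same flicker logic can be condensed into a few lines; this is just a sketch of the toggle, not code from the recipe:

```csharp
// Flip the alpha between opaque and transparent every 100 ms
// while the button is tapped.
timer += (float)gameTime.ElapsedGameTime.TotalMilliseconds;
if (Tapped && timer > 100f)
{
    Alpha = (Alpha == 255) ? 0 : 255;  // toggle 255 <-> 0
    color.A = (byte)Alpha;             // apply to the draw color
    timer = 0f;                        // restart the 100 ms window
}
```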

Steps 7–12 are about using the Button class in the main game class.

In step 7, the font object renders the text on screen; textTapState stores the text to be displayed; textColor specifies the text color; random generates a random color for the text; the button variable holds the Button class instance; buttonTexture holds the button image.

In step 10, the button_OnTapped() method runs when the OnTapped event fires. In this event reaction method, we set the R, G, and B channels of the text color randomly; because each RGB channel holds a value from 0 to 255, the random value must fall inside that range.
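The upper bound of Random.Next() is exclusive, which is why 256 is passed even though a color channel tops out at 255:

```csharp
// Random.Next(min, max) returns values in [min, max), so
// Next(0, 256) yields 0..255 -- the full range of a byte channel.
Random random = new Random();
byte channel = (byte)random.Next(0, 256);
```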

In step 11, we get the tapped position for the button hit-region validation.

In step 12, notice that we must use BlendState.NonPremultiplied, because we change the button image's Alpha value directly.

Creating a listbox to speed up your information management in a game

A listbox is a list-style control that collects items into a list. In games, a list control often plays a role in information management. The items in the listbox are presented one by one, vertically or horizontally, and you can choose an information entry through the control. In this recipe, you will master the technique of building your own listbox.

Getting ready

The example will create a listbox in Windows Phone 7. When you tap the scrollbar's up or down button, the listbox scrolls the list items from the current index. Once you tap one of the items, the text of that item is presented at the top-left of the screen. Now, let's begin by building the Button class.

How to do it…

The following steps will show you the complete process of creating the GUI listbox control:

  1. Create the Windows Phone Game project named ListBoxControl, and rename Game1.cs to ListBoxControlGame.cs. Add the Button.cs, ScrollBar.cs, and ListBox.cs files to the main project, and add gameFont.spriteFont, ListBoxBackground.png, ScrollBarDown.png, and ScrollBarUp.png to the content project.
  2. Create the Button class in Button.cs. First, insert the lines as the field of the Button class:
    [code]
    // Button texture
    Texture2D texButton;
    // SpriteBatch for drawing the button image
    SpriteBatch spriteBatch;
    // Button position on the screen
    public Vector2 Position;
    // Button color
    Color color;
    // The Tapped bool value indicates whether the tap is in the button region
    public bool Tapped;
    // Event handler OnTap to react with tap gesture
    public event EventHandler OnTapped;
    [/code]
  3. The next part is the HitRegion property:
    [code]
    // Get the hit region
    public Rectangle HitRegion
    {
    get
    {
    return new Rectangle((int)Position.X, (int)Position.Y,
    texButton.Width, texButton.Height);
    }
    }
    [/code]
  4. After the property definition, the constructor will be:
    [code]
    // Initialize the button without text
    public Button(Texture2D texture, Vector2 position, SpriteBatch
    spriteBatch)
    {
    this.texButton = texture;
    this.Position = position;
    this.spriteBatch = spriteBatch;
    color = Color.White;
    }
    [/code]
  5. Then, we define the Update() method:
    [code]
    // Update the button
    public void Update(GameTime gameTime, Vector2 touchPosition)
    {
    // React to the tap gesture
    Point point = new Point((int)touchPosition.X,
    (int)touchPosition.Y);
    // If the button is tapped, set Tapped to true and trigger
    // the OnTapped event
    if (HitRegion.Contains(point))
    {
    Tapped = true;
    OnTapped(this, null);
    }
    else
    {
    Tapped = false;
    }
    }
    [/code]
  6. The final method in the Button class is the Draw() method:
    [code]
    // Draw the button
    public virtual void Draw(GameTime gameTime)
    {
    // Draw the button texture
    if (Tapped)
    {
    spriteBatch.Draw(texButton, HitRegion, null,
    Color.Red, 0,
    Vector2.Zero, SpriteEffects.None, 0);
    }
    else
    {
    spriteBatch.Draw(texButton, HitRegion,
    null,Color.White, 0,
    Vector2.Zero, SpriteEffects.None, 0);
    }
    }
    [/code]
  7. Create the ScrollBar class in ScrollBar.cs. As the class field, we use the following code:
    [code]
    // SpriteBatch for drawing the scrollbar
    SpriteBatch spriteBatch;
    // ScrollBar up and down buttons
    Button scrollUp;
    Button scrollDown;
    // Textures for scrollbar up and down buttons
    Texture2D texScrollUp;
    Texture2D texScrollDown;
    // The position of scrollbar
    public Vector2 Position;
    // The positions of scrollbar up and down buttons
    public Vector2 scrollUpPosition;
    public Vector2 scrollDownPosition;
    // Event handler when scrollbar up button tapped
    public event EventHandler OnScrollUpTapped;
    // Event handler when scrollbar down button tapped
    public event EventHandler OnScrollDownTapped;
    // The ScrollBar Height and Width
    public int ScrollBarHeight;
    public int ScrollBarWidth;
    [/code]
  8. The following code is the ScrollDownBound and ScrollUpBound property of the ScrollBar class:
    [code]
    // The Bound of Scrollbar down button
    public Rectangle ScrollDownBound
    {
    get
    {
    return new Rectangle((int)scrollDownPosition.X,
    (int)scrollDownPosition.Y,
    (int)texScrollDown.Width,
    (int)texScrollDown.Height);
    }
    }
    // The Bound of Scrollbar up button
    public Rectangle ScrollUpBound
    {
    get
    {
    return new Rectangle((int)scrollUpPosition.X,
    (int)scrollUpPosition.Y,
    (int)texScrollDown.Width,
    (int)texScrollDown.Height);
    }
    }
    [/code]
  9. After the properties, the constructor of the ScrollBar class should be as follows:
    [code]
    // ScrollBar constructor
    public ScrollBar(Vector2 position, int scrollbarHeight,
    ContentManager content, SpriteBatch spriteBatch)
    {
    // Load the textures of scroll bar up and down button
    texScrollDown = content.Load<Texture2D>("ScrollBarDown");
    texScrollUp = content.Load<Texture2D>("ScrollBarUp");
    Position = position;
    this.spriteBatch = spriteBatch;
    // Get the scrollbar width and height
    this.ScrollBarWidth = texScrollDown.Width;
    this.ScrollBarHeight = scrollbarHeight;
    // The position of scrollbar up button
    this.scrollUpPosition = new Vector2(
    Position.X - ScrollBarWidth / 2, Position.Y);
    // The position of scrollbar down button
    this.scrollDownPosition = new Vector2(
    Position.X - ScrollBarWidth / 2,
    Position.Y + ScrollBarHeight - texScrollDown.Height);
    // Instance the scrollbar up and down buttons
    scrollUp = new Button(texScrollUp, scrollUpPosition,
    spriteBatch);
    scrollDown = new Button(texScrollDown, scrollDownPosition,
    spriteBatch);
    }
    [/code]
  10. Next, we define the Update() method of the Scrollbar class:
    [code]
    // Scrollbar Update method
    public void Update(GameTime gameTime, Vector2 tappedPosition)
    {
    // Check whether the tapped position is in the bound of
    // scrollbar up button
    if (ScrollDownBound.Contains((int)tappedPosition.X,
    (int)tappedPosition.Y))
    {
    // If yes, set the Tapped property of scrollbar down
    // button to true
    scrollDown.Tapped = true;
    // Set the Tapped property of scrollbar up button to
    // false
    scrollUp.Tapped = false;
    // Trigger the scrollbar down button event
    OnScrollDownTapped(this, null);
    }
    else if(ScrollUpBound.Contains((int)tappedPosition.X,
    (int)tappedPosition.Y))
    {
    // If yes, set the Tapped property of scrollbar up
    // button to true
    scrollUp.Tapped = true;
    // Set the Tapped property of scrollbar down button to
    // false
    scrollDown.Tapped = false;
    // Trigger the scrollbar up button event
    OnScrollUpTapped(this, null);
    }
    }
    [/code]
  11. Then, draw the scrollbar on screen by using the Draw() method:
    [code]
    // Draw the scrollbar
    public void Draw(GameTime gameTime)
    {
    // Draw the scrollbar down and up buttons
    scrollDown.Draw(gameTime);
    scrollUp.Draw(gameTime);
    }
    [/code]
  12. Create the ListBox class in the ListBox.cs file. We add the following code as the class field:
    [code]
    // Game object holds the listbox
    Game game;
    // SpriteBatch for drawing listbox
    SpriteBatch spriteBatch;
    // SpriteFont object for showing the listbox text items
    SpriteFont font;
    // The listbox background texture
    Texture2D texBackground;
    // The collection of listbox text items
    public List<string> list;
    // The position of listbox on screen
    public Vector2 Position;
    // The count of the listbox text items
    public int Count;
    // Scrollbar object to control the text items for showing
    ScrollBar scrollBar;
    // The Index for locating the specified item in listbox
    public int Index = 0;
    // The bounds of showed items
    List<Rectangle> listItemBounds;
    // The index of selected items
    public int SelectedIndex = 0;
    // The selected item
    public string SelectedItem = "";
    // The selected area for highlighting the selected item
    Texture2D SelectedArea;
    // The offset from the position of listbox as the beginning of
    // drawing the text items
    Vector2 Offset;
    // The width and height of listbox
    int ListBoxWidth;
    int ListBoxHeight;
    // The total number of items presenting in listbox
    int ShowedItemCount = 0;
    [/code]
  13. As properties, the CharacterHeight and Bound look similar to the following:
    [code]
    // Get the character height of a text item
    public float CharacterHeight
    {
    get
    {
    if (font != null && list.Count > 0)
    {
    // The Y value represents the character height in
    // the returned Vector2 value
    // of SpriteFont.MeasureString()
    return font.MeasureString(list[0]).Y;
    }
    else
    {
    throw new InvalidOperationException(
    "The font is not loaded or the listbox is empty");
    }
    }
    }
    // Get the bound of listbox
    public Rectangle Bound
    {
    get
    {
    return new Rectangle((int)Position.X, (int)Position.Y,
    texBackground.Width, texBackground.Height);
    }
    }
    [/code]
  14. The next block of code is the constructor of the ListBox class:
    [code]
    // Listbox constructor
    public ListBox(Vector2 position, ContentManager content,
    SpriteBatch
    spriteBatch, Game game)
    {
    this.game = game;
    this.spriteBatch = spriteBatch;
    listItemBounds = new List<Rectangle>();
    list = new List<string>();
    Position = position;
    font = content.Load<SpriteFont>("gameFont");
    texBackground =
    content.Load<Texture2D>("ListBoxBackground");
    ListBoxWidth = texBackground.Width;
    ListBoxHeight = texBackground.Height;
    // Define the scrollbar position relative to the position
    // of listbox
    Vector2 scrollBarPosition = new Vector2(
    Position.X + ListBoxWidth + 40, Position.Y);
    // Instance the scrollbar
    scrollBar = new ScrollBar(scrollBarPosition, ListBoxHeight,
    content, spriteBatch);
    scrollBar.OnScrollUpTapped += new
    EventHandler(scrollBar_OnScrollUpTapped);
    scrollBar.OnScrollDownTapped += new
    EventHandler(scrollBar_OnScrollDownTapped);
    // Define the offset for drawing the text items
    Offset = new Vector2(20, 4);
    }
    [/code]
  15. Now, we define the reaction method of the tap event of the scrollbar’s up and down buttons:
    [code]
    // The reaction method of scrollbar down button tapping event
    void scrollBar_OnScrollDownTapped(object sender, EventArgs e)
    {
    // If the current item index plus the ShowedItemCount
    // is less
    // than count of list items, increase the Index
    if (Index + ShowedItemCount < Count)
    {
    Index++;
    }
    }
    // The reaction method of scrollbar up button tapping event
    void scrollBar_OnScrollUpTapped(object sender, EventArgs e)
    {
    // If the current item index is greater than 0, decrease
    // the Index
    if (Index > 0)
    {
    Index--;
    }
    }
    [/code]
  16. The following important method in the ListBox class is the Update() method:
    [code]
    // Check the tapping state of scrollbar and the selection of
    // listbox items
    public void Update(GameTime gameTime, Vector2 tapposition)
    {
    scrollBar.Update(gameTime, tapposition);
    CheckSelected(tapposition);
    }
    [/code]
  17. The definition of CheckSelected() is as follows:
    [code]
    // Get the selected index and item in listbox
    private void CheckSelected(Vector2 tappedPosition)
    {
    for (int i = 0; i < ShowedItemCount; i++)
    {
    // Check whether the tapped position is in the region of
    // the listbox and in which one of the item bounds.
    if (Bound.Contains(
    (int)tappedPosition.X, (int)tappedPosition.Y)
    && tappedPosition.Y <
    listItemBounds[i].Y + CharacterHeight)
    {
    SelectedIndex = i;
    SelectedItem = list[Index + i];
    break;
    }
    }
    }
    [/code]
  18. Before defining the AddItem() and RemoveItem() methods, let's define the GetListItemBound() method:
    [code]
    private void GetListItemBound(List<String> list)
    {
    // If the count of the items is greater than 0
    if (list.Count > 0)
    {
    Rectangle itemBound;
    // If the current count of item is less than the
    // ShowedItemCount, set the LoopBound to Count, else,
    // set it to ShowedItemCount.
    int LoopBound = Count < ShowedItemCount ? Count :
    ShowedItemCount;
    // Get the item bounds
    for (int i = 0; i < LoopBound; i++)
    {
    itemBound = new Rectangle(
    (int)Position.X,
    (int)(Position.Y + Offset.Y) +
    font.LineSpacing * i,
    (int)ListBoxWidth, (int)CharacterHeight);
    listItemBounds.Add(itemBound);
    }
    }
    }
    [/code]
  19. Next, it's time to implement the AddItem() and RemoveItem() methods:
    [code]
    // Add text item to listbox
    public void AddItem(string str)
    {
    // Add the text item to the list object
    this.list.Add(str);
    // Update total number of list items
    Count = list.Count;
    // Set the limited count for showing the list items
    if (list.Count == 1)
    {
    ShowedItemCount = (int)(texBackground.Height /
    CharacterHeight);
    }
    // Get the text item bounds
    listItemBounds.Clear();
    GetListItemBound(list);
    }
    [/code]
  20. Now, define the RemoveItem() method:
    [code]
    public void RemoveItem(string str)
    {
    // Delete the text item from the list items
    this.list.Remove(str);
    // Update the total number of list items
    Count = list.Count;
    GetListItemBound(list);
    }
    [/code]
  21. After the item management methods comes the method that creates the selection area:
    [code]
    // Create the texture of the selected area
    private void CreateSelectedArea(Rectangle rectangle)
    {
    // Initialize the selected area texture
    SelectedArea = new Texture2D(game.GraphicsDevice,
    rectangle.Width, rectangle.Height, false,
    SurfaceFormat.Color);
    // Initialize the pixels for the texture
    Color[] pixels = new Color[SelectedArea.Width *
    SelectedArea.Height];
    for (int y = 0; y < SelectedArea.Height; y++)
    {
    for (int x = 0; x < SelectedArea.Width; x++)
    {
    // Color(Vector4) expects components in the 0-1 range;
    // 0.5f here gives a half-transparent mid-gray
    pixels[x + y * SelectedArea.Width] =
    new Color(new Vector4(0.5f, 0.5f, 0.5f, 0.5f));
    }
    }
    // Set the pixels to the selected area texture
    SelectedArea.SetData<Color>(pixels);
    }
    [/code]
  22. The final step in building the ListBox class is to draw the listbox on screen through the Draw() method:
    [code]
    public void Draw(GameTime gameTime)
    {
    // Draw the listbox background
    spriteBatch.Draw(texBackground, Position, Color.White);
    // The text items exist
    if (Count > 0)
    {
    // If current count of items is less than the
    // ShowedItemCount, show the items one by one
    // from the beginning
    if (Count <= ShowedItemCount)
    {
    for (int i = 0; i < Count; i++)
    {
    spriteBatch.DrawString(font, list[i],
    Position + new Vector2(
    Offset.X, Offset.Y + font.LineSpacing * i),
    Color.White);
    }
    }
    // If current count of items is greater than the
    // ShowedItemCount, show the items from the current
    // index.
    else
    {
    for (int i = 0; i < ShowedItemCount; i++)
    {
    spriteBatch.DrawString(font, list[i + Index],
    Position + new Vector2(
    Offset.X, Offset.Y + font.LineSpacing * i),
    Color.White);
    }
    }
    // If the SelectedArea is not created, create a new
    // one
    if (SelectedArea == null)
    {
    CreateSelectedArea(listItemBounds[0]);
    }
    // Draw the SelectedArea texture
    spriteBatch.Draw(SelectedArea,
    listItemBounds[SelectedIndex], Color.White);
    }
    scrollBar.Draw(gameTime);
    }
    [/code]
  23. Woo! The ListBox class and its dependent classes are done. Now we will use the ListBox class in our main project, and this is simple and easy to code. Insert the following lines into the ListBoxControlGame class as fields:
    [code]
    SpriteFont spriteFont;
    ListBox listBox;
    [/code]
  24. Initialize the spriteFont object and listBox. Add the lines in the LoadContent() method:
    [code]
    // Create a new SpriteBatch, which can be used to draw textures
    spriteBatch = new SpriteBatch(GraphicsDevice);
    spriteFont = Content.Load<SpriteFont>("gameFont");
    listBox = new ListBox(new Vector2(200, 100), this.Content,
    spriteBatch, this);
    listBox.AddItem("Item1");
    listBox.AddItem("Item2");
    listBox.AddItem("Item3");
    listBox.AddItem("Item4");
    listBox.AddItem("Item5");
    listBox.AddItem("Item6");
    listBox.AddItem("Item7");
    listBox.AddItem("Item8");
    [/code]
  25. Get the tapped position on listBox. Paste the following code to the Update() method:
    [code]
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    Vector2 tapposition = touches[0].Position;
    listBox.Update(gameTime, tapposition);
    }
    [/code]
  26. Draw the listbox and the selected text item on screen by inserting the following block of code into the Draw() method:
    [code]
    spriteBatch.Begin(SpriteSortMode.Immediate,
    BlendState.NonPremultiplied);
    listBox.Draw(gameTime);
    spriteBatch.DrawString(spriteFont,
    " SelectedItem: " + listBox.SelectedItem,
    new Vector2(0, 0), Color.White);
    spriteBatch.End();
    [/code]
  27. The whole project is complete. Build and run the application. It should look similar to the following screenshots:
    listbox to speed up your information management

How it works…

Steps 2–6 are about creating the Button class.

In step 2, texButton stores the button texture; spriteBatch renders the button texture on screen; Position defines the button position on screen; the color object represents the button color; Tapped shows the tapping state of the button; OnTapped is the event handler for the tap gesture.

In step 3, the HitRegion property returns the button hit region around the button background texture.

In step 5, the Update() method gets the tapped position and checks whether it’s inside the button hit region. If yes, set Tapped to true and trigger the OnTapped event. Else, set Tapped to false.

In step 6, the code draws a button on the Windows Phone 7 screen. If the button is tapped, it is drawn in red; otherwise, in white.

Steps 7–11 are about implementing the Scrollbar class.

In step 7, the spriteBatch renders the scrollbar on screen; the scrollUp and scrollDown buttons will be used to increase or decrease the index of the listbox items; Position stores the position of the scrollbar; scrollUpPosition and scrollDownPosition hold the positions of the scrollUp and scrollDown buttons; the two event handlers that follow fire when the two scroll buttons are tapped. The variables ScrollBarWidth and ScrollBarHeight define the width and height of the scrollbar.

In step 8, the ScrollDownBound property returns the bound around the scrollDown button, similar to the ScrollUpBound property.

In step 9, the constructor initializes the two scrollbar buttons and gets the scrollbar width and height.

In step 10, the Update() method checks whether the tapped position is inside ScrollDownBound. If it is, the code sets the Tapped property of scrollDown to true and that of scrollUp to false, then triggers the OnScrollDownTapped event. Otherwise, it performs the equivalent check for ScrollUpBound with the roles reversed.

Steps 12–22 are to build the ListBox class using Button and Scrollbar classes.

In step 12, game is the Game object that supplies the GraphicsDevice for drawing the selection area texture; spriteBatch draws the listbox on screen; font draws the text of the listbox items; texBackground holds the listbox background texture; list is the collection of listbox text items; Position specifies the position of the listbox; Count holds the current total number of listbox items; scrollBar is the ScrollBar object used to navigate the listbox items; Index is the current beginning index into the listbox items; listItemBounds is the collection of the bounds of the list items. SelectedIndex and SelectedItem record the index and text of the selected item; when an item is selected, SelectedArea presents a rectangle texture around it. Offset is the position for drawing a text item relative to the position of the listbox, and ShowedItemCount stores the maximum number of listbox items that can be rendered at once.

In step 13, CharacterHeight returns the character height of a listbox text item. The Bound property gets the rectangle surrounding the listbox.
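SpriteFont.MeasureString() returns the rendered size of a string as a Vector2, and its Y component is what AddItem() later uses to work out how many rows fit. A rough sketch of that calculation, assuming the recipe's font and background texture:

```csharp
// How many text rows fit inside the listbox background:
// divide the background height by one line's rendered height.
float lineHeight = font.MeasureString(list[0]).Y;
int showedItemCount = (int)(texBackground.Height / lineHeight);
```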

In step 15, when the scrollDown button is tapped, Index increases by 1 if the sum of Index and ShowedItemCount is less than the number of listbox items. When the scrollUp button is tapped and Index is greater than 0, we decrease Index by one.

In step 16, first the Update() method checks the tapped position, to see whether it’s inside the buttons of the scrollbar. Then, use the CheckSelected() method to check the listbox item selection.

In step 17, because ShowedItemCount limits the number of items shown, we only need to loop over that many items. In the body of the for loop, we first check whether the tapped position is inside the bound of the listbox, and then whether the tapped position's Y value lies above the lower edge of the current item's bound. If both hold, the tapped position is inside the current item's bound, so that item is selected: we assign the current i to SelectedIndex and the content of the current item to SelectedItem. The break here is important, because we only need the first matching item.

In step 18, we implement the GetListItemBound() method. When list.Count is greater than 0, LoopBound is set to Count if the current number of listbox items is less than ShowedItemCount; otherwise, LoopBound is set to ShowedItemCount. In the loop, the code generates each item bound from CharacterHeight and ListBoxWidth.

In step 19, once every new listbox item is added to the list, we will update the Count. This stands for the total number of listbox items. We get the ShowedItemCount when there is an item in the listbox. After that, we obtain the bounds of items through the GetListItemBound() method defined in step 18.

In step 21, the CreateSelectedArea() method first creates a new texture, SelectedArea, with the same size as the rectangle parameter. The second line allocates a pixel array whose dimensions match SelectedArea. In the nested for loop, we assign a semi-transparent color to every pixel of the new texture. Finally, the SetData() method copies the pixels into SelectedArea so the texture can be drawn.
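The pixel array uses row-major indexing, which is why the loop writes to x + y * Width. A small sketch with made-up dimensions:

```csharp
// For a texture w pixels wide, the pixel at column x, row y
// lives at index x + y * w in the flat Color array.
int w = 4, h = 3;
Color[] pixels = new Color[w * h];   // 12 entries
int index = 3 + 2 * w;               // column 3, row 2 -> index 11
pixels[index] = new Color(0.5f, 0.5f, 0.5f, 0.5f);
```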

In step 22, the first line of the Draw() method draws the listbox background texture. When Count is greater than 0 and less than or equal to ShowedItemCount, the list items are drawn one by one from the beginning of the list; otherwise, the items are drawn starting from the current Index. After that, if one of the list items is selected, the SelectedArea is also rendered around the selected item.
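The windowing logic in Draw() can be summarized as a sketch (not recipe code): pick the first visible item, then draw at most ShowedItemCount rows:

```csharp
// Which slice of the list is visible: everything when it fits,
// otherwise ShowedItemCount items starting at Index.
int first = (Count <= ShowedItemCount) ? 0 : Index;
int visible = Math.Min(Count, ShowedItemCount);
for (int i = 0; i < visible; i++)
{
    string item = list[first + i];
    // ... DrawString(item) at row i, offset by font.LineSpacing * i ...
}
```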

Steps 23–26 are for drawing the listbox on screen in the main game class ListBoxControlGame.

Creating a text input control to communicate with others in a game

A textbox is a very common and useful control in applications: it reads input and displays the characters in its main area. In multiplayer games, players use this control to communicate with each other and exchange their thoughts. A textbox control can also act like a command line for controlling game settings. With a textbox control and the corresponding functions, you can do a lot of things. In this recipe, you will learn how to make your own textbox control in Windows Phone 7.

How to do it…

The following steps will help you to implement a text input control for communicating in your own Windows Phone 7 game:

  1. Create a Windows Phone Game project in Visual Studio 2010 named TextBox, and rename Game1.cs to TextBoxGame.cs. Then, add cursor.png, button.png, backspace.png, TextboxBackground.png, and gameFont.spriteFont to the content project.
  2. Now, let’s develop a button class for input. First of all, in the Button.cs file we declare the class field and property:
    [code]
    // Button texture
    Texture2D texButton;
    // SpriteBatch for drawing the button image
    SpriteBatch spriteBatch;
    // SpriteFont for drawing the button text
    SpriteFont font;
    // Button text
    public String Text = "";
    // Button text position on the screen
    public Vector2 TextPosition;
    // Button text size
    public Vector2 TextSize;
    // Button position on the screen
    public Vector2 Position;
    // The Clicked bool value indicates whether the tap is in the button region
    public bool Clicked;
    // Event handler when tap on the button
    public event EventHandler OnClicked;
    // Get the hit region
    public Rectangle HitRegion
    {
    get
    {
    return new Rectangle((int)Position.X, (int)Position.Y,
    texButton.Width, texButton.Height);
    }
    }
    [/code]
  3. Next, we define two overload constructors of the Button class:
    [code]
    // Initialize the button without text
    public Button(Texture2D texture, Vector2 position, SpriteFont
    font, SpriteBatch spriteBatch)
    {
    this.texButton = texture;
    this.Position = position;
    this.spriteBatch = spriteBatch;
    this.font = font;
    }
    // Initialize the button with text
    public Button(Texture2D texture, Vector2 position, String
    text, SpriteFont font, SpriteBatch spriteBatch)
    {
    this.texButton = texture;
    this.Position = position;
    this.spriteBatch = spriteBatch;
    this.Text = text;
    // Compute the text size and place the text in the center
    // of the button
    TextSize = font.MeasureString(Text);
    this.TextPosition = new Vector2(position.X +
    texture.Width / 2 - TextSize.X / 2, position.Y);
    this.font = font;
    }
    [/code]
  4. In the following step, we will make the button react to the tap gesture. Add the Update() code as follows:
    [code]
    // Update the button
    public void Update(GameTime gameTime, Vector2 touchPosition)
    {
    // React to the tap gesture
    Point point = new Point((int)touchPosition.X,
    (int)touchPosition.Y);
    // If the button was tapped, set Clicked to true and raise
    // the OnClicked event
    if (HitRegion.Contains(point))
    {
    Clicked = true;
    if (OnClicked != null)
    {
    OnClicked(this, null);
    }
    }
    }
    [/code]
  5. The final step for the Button class is to draw it on the screen. To do this, we use this block of code:
    [code]
    // Draw the button
    public virtual void Draw()
    {
    // Draw the button texture
    if (!Clicked)
    {
    spriteBatch.Draw(texButton, HitRegion, Color.White);
    }
    else
    {
    spriteBatch.Draw(texButton, HitRegion, Color.Red);
    }
    // Draw the button text
    spriteBatch.DrawString(font, Text, TextPosition,
    Color.White);
    }
    [/code]
  6. In this step, we begin to write the TextBoxControl class. In TextBoxControl.cs, add the lines to the TextBoxControl class as fields:
    [code]
    // SpriteBatch for drawing the textbox texture
    SpriteBatch spriteBatch;
    // SpriteFont for drawing the textbox font
    SpriteFont spriteFont;
    // Textbox background texture
    Texture2D texBackGround;
    // Textbox cursor texture
    Texture2D texCursor;
    // Textbox Bound for showing the text
    public Rectangle Bound;
    // Textbox position
    public Vector2 Position;
    // Textbox cursor position
    public Vector2 CursorPosition;
    // Timer used to control the cursor alpha value
    float timer;
    // Text position in the textbox
    public Vector2 TextPosition;
    // The text size of the showing text
    public Vector2 textSize;
    // The character size of the textbox text
    private float characterSize;
    // Alpha value for the cursor
    int alpha = 255;
    // The cursor color
    Color cursorColor;
    // TypedText stores the typed letters
    public string TypedText = "";
    // ShowedText saves the text shown in the textbox
    public string ShowedText = "";
    [/code]
  7. Next, we add the properties to the TextBoxControl class:
    [code]
    // Get the character size
    public float CharacterSize
    {
    get
    {
    // Guard against dividing by zero when no text is typed yet
    if (TypedText.Length == 0)
    {
    return 0;
    }
    textSize = spriteFont.MeasureString(TypedText);
    characterSize = textSize.X / TypedText.Length;
    return characterSize;
    }
    }
    // Get the text size
    public Vector2 TextSize
    {
    get
    {
    return textSize = spriteFont.MeasureString(TypedText);
    }
    }
    // Get the bound for showing the text
    public int ShowedCharacterBound
    {
    get
    {
    return (int)(Bound.Width / CharacterSize);
    }
    }
    [/code]
  8. The following part is about the TextBoxControl class initialization, and the constructor looks as follows:
    [code]
    // Initialize the textbox
    public TextBoxControl(Vector2 position, Texture2D texCursor,
    Texture2D texBackground, SpriteFont font, SpriteBatch
    spriteBatch)
    {
    this.Position = position;
    this.spriteBatch = spriteBatch;
    this.texCursor = texCursor;
    this.spriteFont = font;
    this.texBackGround = texBackground;
    // Set the bound of textbox control
    Bound = new Rectangle((int)position.X, (int)position.Y,
    texBackGround.Width, texBackGround.Height);
    // Set the cursor position
    this.CursorPosition = new Vector2(position.X + 10,
    position.Y + 10);
    // Set the text position
    this.TextPosition = new Vector2(position.X + 10,
    position.Y);
    // Set the cursor color with alpha value
    cursorColor = new Color(255, 255, 255, alpha);
    }
    [/code]
  9. After the initialization, the following code is the definition of the Update() method:
    [code]
    public void Update(GameTime time)
    {
    // Accumulate the game elapsed milliseconds
    timer += (float)time.ElapsedGameTime.TotalMilliseconds;
    // Every 500 milliseconds the alpha value of the cursor will
    // change from 255 to 0 or from 0 to 255.
    if (timer > 500)
    {
    if (alpha == 255)
    {
    alpha = 0;
    }
    else if (alpha == 0)
    {
    alpha = 255;
    }
    cursorColor.A = (byte)alpha;
    timer = 0;
    }
    }
    [/code]
  10. Then we define the Draw() method:
    [code]
    public void Draw()
    {
    // Draw the textbox control background
    spriteBatch.Draw(texBackGround, Position, Color.White);
    // Draw the textbox control cursor
    spriteBatch.Draw(texCursor, CursorPosition, cursorColor);
    // Draw the textbox showing text
    spriteBatch.DrawString(spriteFont, ShowedText,
    TextPosition, Color.White);
    }
    [/code]
  11. From this step on, we will use the Button class and the TextBoxControl class in the main game class. Now, add the following lines as fields of the TextBoxGame class:
    [code]
    // SpriteFont object
    SpriteFont font;
    // TextboxControl object
    TextBoxControl textBox;
    // Button objects
    Button buttonA;
    Button buttonB;
    Button buttonBackspace;
    [/code]
  12. Initialize the textbox control and buttons. Insert the code to the LoadContent() method:
    [code]
    // Load the textbox textures
    Texture2D texCursor = Content.Load<Texture2D>("cursor");
    Texture2D texTextboxBackground =
    Content.Load<Texture2D>("TextboxBackground");
    // Load the button textures
    Texture2D texButton = Content.Load<Texture2D>("button");
    Texture2D texBackSpace = Content.Load<Texture2D>("Backspace");
    font = Content.Load<SpriteFont>("gameFont");
    // Define the textbox position
    Vector2 position = new Vector2(400, 240);
    // Initialize the textbox
    textBox = new TextBoxControl(position, texCursor,
    texTextboxBackground, font, spriteBatch);
    // Initialize the buttonA
    buttonA = new Button(texButton, new Vector2(400, 350), "A",
    font, spriteBatch);
    buttonA.OnClicked += new EventHandler(button_OnClicked);
    // Initialize the buttonB
    buttonB = new Button(texButton, new Vector2(460, 350), "B",
    font, spriteBatch);
    buttonB.OnClicked += new EventHandler(button_OnClicked);
    // Initialize the backspace button
    buttonBackspace = new Button(texBackSpace,
    new Vector2(520, 350), font, spriteBatch);
    buttonBackspace.OnClicked += new
    EventHandler(buttonBackspace_OnClicked);
    [/code]
  13. Define the event handling code for buttonA and buttonB, which is the same for both:
    [code]
    void button_OnClicked(object sender, EventArgs e)
    {
    // Add the button text to the textbox TypedText
    textBox.TypedText += ((Button)sender).Text;
    // Update the position of the textbox cursor
    textBox.CursorPosition.X = textBox.TextPosition.X +
    textBox.TextSize.X;
    // Get the textbox showed character bound
    int showedCharacterBound = textBox.ShowedCharacterBound;
    // Check whether the textbox cursor goes outside of the
    // textbox bound
    if (textBox.CursorPosition.X > textBox.Bound.X +
    textBox.Bound.Width)
    {
    // If yes, set the cursor position at the right side of
    // the textbox
    textBox.CursorPosition.X = textBox.TextPosition.X +
    textBox.CharacterSize * showedCharacterBound;
    // Show the TypedText from the end to the left in
    // the range for showing characters of the textbox
    textBox.ShowedText =
    textBox.TypedText.Substring(textBox.TypedText.Length -
    showedCharacterBound, showedCharacterBound);
    }
    else
    {
    // If not, just set the current TypedText to the
    // ShowedText
    textBox.ShowedText = textBox.TypedText;
    }
    }
    [/code]
  14. The next block of code is the handling code for the backspace button:
    [code]
    void buttonBackspace_OnClicked(object sender, EventArgs e)
    {
    // Get the length of TypedText
    int textLength = textBox.TypedText.Length;
    // Check whether the TypedText is greater than 0
    if (textLength > 0)
    {
    // If yes, delete the last character
    textBox.TypedText = textBox.TypedText.Substring(0,
    textLength - 1);
    // Get the current showed character count
    int showedCharacterCount = (int)(textBox.TextSize.X /
    textBox.CharacterSize);
    // Check whether the current showed character count is
    // less than the textbox showed character bound
    if (showedCharacterCount <=
    textBox.ShowedCharacterBound)
    {
    // If yes, just update the cursor position with
    // current text size and the showedText with
    // current text
    textBox.CursorPosition.X = textBox.TextPosition.X
    + textBox.TextSize.X;
    textBox.ShowedText = textBox.TypedText;
    }
    else
    {
    // If not, show the TypedText from end to the
    // left in the range for showing characters
    // of textbox
    textBox.ShowedText = textBox.TypedText.Substring(
    textBox.TypedText.Length -
    textBox.ShowedCharacterBound,
    textBox.ShowedCharacterBound);
    }
    }
    }
    [/code]
  15. Trigger the button event. Add the code to the Update() method:
    [code]
    TouchCollection touches = TouchPanel.GetState();
    if(touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    buttonA.Update(gameTime, touches[0].Position);
    buttonB.Update(gameTime, touches[0].Position);
    buttonBackspace.Update(gameTime, touches[0].Position);
    }
    textBox.Update(gameTime);
    [/code]
  16. Draw the textbox and buttons on screen. Paste the code into the Draw() method:
    [code]
    spriteBatch.Begin();
    textBox.Draw();
    buttonA.Draw();
    buttonB.Draw();
    buttonBackspace.Draw();
    spriteBatch.End();
    [/code]
  17. Now, build and run the application. When you tap button A and button B, the textbox will show the input as shown in the following screenshot to the left. When you tap the backspace button, it will look similar to the following screenshot on the right:
    text input control to communicate

How it works…

Steps 2–5 are responsible for creating the Button class:

In step 2, texButton stores the button texture; font is used to render the button text; Text holds the button's caption, which doubles as the input text; Position tells you where the button is on the screen; the bool value Clicked indicates whether the tap gesture took place inside the button's hit region; when the button is tapped, the OnClicked event is triggered. The HitRegion property returns the clickable bounds of the button.

In step 3, the first constructor initializes the button without text. The second constructor initializes the button with text and places the text in the center of the button. The SpriteFont.MeasureString() method computes and returns the text size as a Vector2: the X value holds the text width and the Y value holds the text height.
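The centering arithmetic in the second constructor can be sketched independently of XNA. The following Python function is illustrative only (the name is mine, not part of the recipe):

```python
def center_text_x(button_x, button_width, text_width):
    """X coordinate that horizontally centers text on a button.

    Mirrors the constructor's expression:
    position.X + texture.Width / 2 - TextSize.X / 2
    """
    return button_x + button_width / 2 - text_width / 2

# A 100-pixel-wide button at x = 400 with 40-pixel-wide text:
print(center_text_x(400, 100, 40))  # -> 430.0
```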

In step 4, the reacting code first gets the tapped position, then uses the Rectangle.Contains() method to check whether the position is inside the hit region. If it is, we set Clicked to true and raise the OnClicked event.

Steps 6–10 are about creating the TextBoxControl class:

In step 6, the first four variables deal with the textbox textures and font; the Bound variable stores the textbox bound for showing text; Position indicates the location of the textbox control on the screen; CursorPosition represents the cursor's place within the textbox control bound; the timer variable is used to control the alpha value of the cursor for the flashing effect; TextPosition holds the text position inside the textbox control; textSize represents the size of the TypedText; characterSize defines the size of a single character of the TypedText; TypedText stores every character the user has typed, while ShowedText stores the portion of the text that is actually presented in the textbox.

In step 7, CharacterSize returns the size of a single character in the TypedText: we use SpriteFont.MeasureString() to compute the size of the TypedText, then divide the X value of textSize by the TypedText length to get the width of one character; TextSize returns the size of the TypedText; ShowedCharacterBound returns how many characters fit in the region for showing the TypedText.
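As a framework-independent sketch of that arithmetic (names are mine; `measure` stands in for SpriteFont.MeasureString):

```python
def showed_character_bound(bound_width, typed_text, measure):
    """How many characters fit inside the textbox bound.

    measure(text) plays the role of SpriteFont.MeasureString and
    returns the pixel width of a string.
    """
    char_size = measure(typed_text) / len(typed_text)
    return int(bound_width / char_size)

# Assume a fixed-width font where every character is 12 pixels wide:
measure = lambda s: 12 * len(s)
print(showed_character_bound(300, "HELLO", measure))  # -> 25
```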

In step 9, the Update() method checks whether the accumulated milliseconds exceed 500. If they do and the alpha value equals 255 (opaque), it is set to 0 (transparent), and vice versa. After assigning the latest alpha value to the cursor color's alpha channel, cursorColor.A, we reset the timer for the next interval.
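The blink logic boils down to a toggle driven by a time accumulator. A minimal Python sketch (function name is mine):

```python
def update_cursor(timer, alpha, elapsed_ms, interval=500):
    """Toggle the cursor alpha between 255 and 0 every `interval` ms.

    Mirrors TextBoxControl.Update(): accumulate elapsed time, flip
    the alpha once the interval passes, then reset the timer.
    """
    timer += elapsed_ms
    if timer > interval:
        alpha = 0 if alpha == 255 else 255
        timer = 0
    return timer, alpha

timer, alpha = 0.0, 255
for _ in range(40):                 # 40 frames at 16 ms each = 640 ms
    timer, alpha = update_cursor(timer, alpha, 16.0)
print(alpha)  # -> 0
```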

Steps 11–16 are about using the Button and TextBoxControl class in the main class. We will draw the button and textbox control on the Windows Phone 7 screen and perform the reactions for text input and delete.

In step 11, textBox is the TextBoxControl; buttonA is the button for inputting the character A; buttonB inputs the character B; buttonBackspace deletes characters from the end of the TypedText.

In step 12, the code loads the textures for the textbox and buttons, constructs the controls, and then wires up the buttons' event handlers.

In step 13, the code reacts to the event raised by buttonA or buttonB. It first casts the sender to Button and appends its Text value to TextBoxControl.TypedText. The cursor position is then updated to follow the new TypedText. The rest of the code handles the situation where the TypedText is wider than the textbox bound: in that case the cursor stays at the right side of the textbox, and ShowedText becomes the substring of the TypedText, taken from the end, that fits in the textbox's character range. Otherwise, the entire TypedText is drawn.
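The overflow handling reduces to taking a substring from the end of the text. A simplified Python sketch that ignores pixel positions (the function name is mine):

```python
def visible_text(typed, bound_chars):
    """Return the portion of the typed text shown in the textbox.

    Simplified sketch of the button_OnClicked overflow handling:
    once the text is longer than the textbox can show, keep only
    the rightmost `bound_chars` characters.
    """
    if len(typed) > bound_chars:
        return typed[-bound_chars:]
    return typed

print(visible_text("ABABABAB", 5))  # -> BABAB
print(visible_text("ABA", 5))       # -> ABA
```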

In step 14, the reaction code for the backspace button first gets the length of the TypedText and checks whether it is greater than 0. If so, it deletes the last character. The rest of the code handles the cases where the shortened TypedText is wider or narrower than the textbox bound. If wider, ShowedText ranges from the end of the shortened TypedText back over the textbox's showed character count. Otherwise, the cursor follows the current TypedText, which is rendered in full on the screen.
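The same simplification applies to the backspace handler; a hedged Python sketch of the logic (names mine, pixel positions omitted):

```python
def backspace(typed, bound_chars):
    """Delete the last character and recompute the visible text.

    Sketch of buttonBackspace_OnClicked: trim one character, then
    show either the whole text or its rightmost `bound_chars`
    characters.
    """
    if typed:
        typed = typed[:-1]
    shown = typed if len(typed) <= bound_chars else typed[-bound_chars:]
    return typed, shown

print(backspace("ABABABAB", 5))  # -> ('ABABABA', 'ABABA')
```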

Windows Phone Game User Interface: Heads Up Display (HUD), Part 1

Scaling an image

Controlling the size of images is a basic technique in Windows Phone 7 XNA programming. Mastering it will help you implement many special 2D visual effects. In this recipe, you will learn how to zoom an image in and out for a special visual effect: the image will appear to jump toward you and then fall back. While jumping, it gradually fades out; once fully transparent, it falls back while fading in.

How to do it…

The following steps will lead you through completing the recipe:

  1. Create a Windows Phone Game project in Visual Studio 2010 named ImageZoomInOut, and change Game1.cs to ImageZoomGame.cs. Then add the Next.png file from the code bundle to the content project. After this preparation work, insert the following code as fields of the ImageZoomGame class:
    [code]
    // Image texture object
    Texture2D texImage;
    // Image position
    Vector2 Position;
    // The scale factor
    float scale = 1;
    // The rotate factor
    float rotate = 0;
    // Alpha value for controlling the image transparency
    float alpha = 255;
    // The color of image
    Color color;
    // Timer object
    float timer;
    // Bool value for zooming out or in.
    bool ZoomOut = true;
    [/code]
  2. In the Initialize() method, we position the image in the middle of the screen and define its color:
    [code]
    Position = new Vector2(GraphicsDevice.Viewport.Width / 2,
    GraphicsDevice.Viewport.Height / 2);
    color = Color.White;
    [/code]
  3. Load the Next.png file in the LoadContent() method with the following code:
    [code]
    texImage = Content.Load<Texture2D>("Next");
    [/code]
  4. Add the code to the Update() method to control the size, transparency, and rotation of the image:
    [code]
    // Accumulates the game elapsed time
    timer += (float)gameTime.ElapsedGameTime.TotalMilliseconds;
    // Zoom out
    if (ZoomOut && alpha >= 0 && timer > 50)
    {
    // If alpha equals 0, zoom the image in
    if (alpha == 0.0f)
    {
    ZoomOut = false;
    }
    // Amplify the image
    scale += 0.1f;
    // Rotate the image clockwise
    rotate += 0.1f;
    // Fade the image out
    if (alpha > 0)
    {
    alpha -= 5;
    }
    color.A = (byte)alpha;
    // Reset the timer to 0 for the next interval
    timer = 0;
    }
    // Zoom in
    else if (!ZoomOut && timer > 50)
    {
    // If alpha equals 255, zoom the image out
    if (alpha == 255)
    {
    ZoomOut = true;
    }
    // Scale down the image
    scale -= 0.1f;
    // Rotate the image counter-clockwise
    rotate -= 0.2f;
    // Fade the image in
    if (alpha < 255)
    {
    alpha += 5;
    }
    color.A = (byte)alpha;
    // Reset the timer to 0 for the next interval
    timer = 0;
    }
    [/code]
  5. To draw the image and effects on the Windows Phone 7 screen, add the code to the Draw() method:
    [code]
    spriteBatch.Begin(SpriteSortMode.Immediate,
    BlendState.NonPremultiplied);
    spriteBatch.Draw(texImage, Position, null, color, rotate, new
    Vector2(texImage.Width / 2, texImage.Height / 2), scale,
    SpriteEffects.None, 0f);
    spriteBatch.End();
    [/code]
  6. Now, build and run the application. The application runs as shown in the following screenshots:
    Scaling an image

How it works…

In step 1, texImage holds the image being controlled; the Position variable represents the image position; scale is used to control the image size; the rotate variable stores the rotation factor; alpha controls the image's degree of transparency through its color; timer accumulates the game's elapsed milliseconds; the last variable, ZoomOut, determines whether the image is currently zooming out.

In step 2, we define the Position of the image in the center of the screen and set the color to white.

In step 4, the accumulated timer is used to control the period of time between the two frames. The next part is to check the direction of scale. If ZoomOut is true, we will increase the image size, decrease the alpha value and rotate the image in a clockwise direction, then reset the timer. Otherwise, the behavior is opposite.
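The zoom-out/zoom-in cycle is a ping-pong state machine. Here is a slightly simplified Python sketch of one 50 ms tick (names mine; the book's version also rotates the image, omitted here):

```python
def zoom_step(scale, alpha, zoom_out):
    """One tick of the zoom effect.

    While zooming out, grow the image and fade it toward alpha 0;
    once fully transparent, reverse direction, and vice versa.
    """
    if zoom_out:
        if alpha == 0:
            zoom_out = False
        else:
            scale += 0.1
            alpha = max(0, alpha - 5)
    else:
        if alpha == 255:
            zoom_out = True
        else:
            scale -= 0.1
            alpha = min(255, alpha + 5)
    return scale, alpha, zoom_out

scale, alpha, zoom_out = 1.0, 255, True
for _ in range(52):      # 51 ticks to fade out fully, 1 to reverse
    scale, alpha, zoom_out = zoom_step(scale, alpha, zoom_out)
print(alpha, zoom_out)   # -> 0 False
```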

In step 5, we set the BlendState in SpriteBatch.Begin() to NonPremultiplied because we change the alpha value of the color linearly; we then draw the image with the scale and rotation factors applied around the image center.

Creating a Simple Sprite Sheet animation in a 2D game

In Windows Phone 7 game programming, rendering a large number of individual images carries an obvious performance cost: you need to allocate a texture object for every image, and if this happens in the game initialization phase, loading can take a long time. For these reasons, game programmers in the early days of the industry devised the Sprite Sheet technique, which packs a set of smaller images into one big image. It is very convenient for 2D game programming, especially for sprite animation. The big advantage of a Sprite Sheet is the ability to create character animations, complex effects, explosions, and so on, while keeping game content loading fast.

The Sprite Sheet has two types:

  • Simple Sprite Sheet, where all the smaller images have the same dimensions
  • Complex Sprite Sheet, where all the images in the sheet have different dimensions

In this recipe, you will learn how to create the Simple Sprite Sheet and use it in your Windows Phone 7 game.

In a Simple Sprite Sheet, every subimage has the same size, which is defined when the Sprite Sheet is created. To locate a designated subimage, you need its column and row indices along with the subimage width and height. For a Simple Sprite Sheet, the following equation gives a subimage's pixel position:

[code]Position = (Column * SubImageWidth, Row * SubImageHeight)[/code]
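A framework-independent sketch of this lookup, assuming equal-size cells (the function name is mine):

```python
def source_rect(frame, row, sprite_w, sprite_h):
    """Locate a sub-image in a Simple Sprite Sheet.

    Every cell has the same size, so the pixel rectangle is just
    the column/row index times the cell dimensions:
    (x, y, width, height).
    """
    return (frame * sprite_w, row * sprite_h, sprite_w, sprite_h)

# Third frame (index 2) of the second animation row, 50x58 cells:
print(source_rect(2, 1, 50, 58))  # -> (100, 58, 50, 58)
```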

For convenience, you can define the size of the subimage in digital image processing software, such as Adobe Photoshop, Microsoft Expression Design, Paint.Net, GIMP, and so on.


Getting ready

Using an image processing tool, we create a Sprite Sheet as shown in the next screenshot. In the image, every subimage is surrounded by a rectangle 50 pixels wide and 58 pixels high, as shown in the left-hand image in the following screenshot.

In a real Windows Phone game, I am sure you do not want to see the borders of the rectangles. As part of the export process, we make the rectangles transparent; you just need to change their alpha value from 100 to 0. The final Sprite Sheet should look similar to the right-hand image in the following screenshot:

SimpleSpriteSheet

We name the Sprite Sheet used in our example SimpleSpriteSheet.png.

Now that the Sprite Sheet is ready, the next step is to animate it in our Windows Phone 7 game.

How to do it…

The following steps will give you complete guidance for animating a Simple Sprite Sheet:

  1. Create a Windows Phone Game project in Visual Studio 2010 named SimpleSpriteSheetAnimation, change Game1.cs to SimpleSpriteSheetGame.cs, and add the following lines as fields of the SimpleSpriteSheetGame class:
    [code]
    // Sprite Texture
    Texture2D sprite;
    // A Timer variable
    float timer = 0f;
    // The interval
    float interval = 200;
    // Frame Count
    int FrameCount = 4;
    // Animation Count
    int AnimationCount = 2;
    // Current frame holder
    int currentFrame = 0;
    // Width of a single sprite image, not the whole Sprite
    int spriteWidth = 50;
    // Height of a single sprite image, not the whole Sprite
    int spriteHeight = 58;
    // A rectangle to store which 'frame' is currently being
    // displayed
    Rectangle sourceRect;
    // The center of the current 'frame'
    Vector2 origin;
    // Index of Row
    int row = 0;
    // Position Center
    Vector2 screenCenter;
    [/code]
  2. Load the Sprite Sheet image. Put the code into the LoadContent() method:
    [code]
    sprite = Content.Load<Texture2D>("SimpleSpriteSheet");
    screenCenter = new Vector2(GraphicsDevice.Viewport.Width / 2,
    GraphicsDevice.Viewport.Height / 2);
    [/code]
  3. This step animates the Simple Sprite Sheet. Insert the code into the Update() method:
    [code]
    // Increase the timer by the number of milliseconds since
    // update was last called
    timer += (float)gameTime.ElapsedGameTime.TotalMilliseconds;
    // Check the timer is more than the chosen interval
    if (timer > interval)
    {
    //Show the next frame
    currentFrame++;
    //Reset the timer
    timer = 0f;
    }
    // If reached the last frame, reset the current frame back to
    // the one before the first frame
    if (currentFrame == FrameCount)
    {
    currentFrame = 0;
    }
    // React to the tap gesture
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    // Change the sprite sheet animation
    row = (row + 1) % AnimationCount;
    }
    // Compute which subimage will be rendered
    sourceRect = new Rectangle(currentFrame * spriteWidth,
    row * spriteHeight, spriteWidth, spriteHeight);
    // Compute the origin position for image rotation and scale.
    origin = new Vector2(sourceRect.Width / 2,
    sourceRect.Height / 2);
    [/code]
  4. Draw the Sprite Sheet on the screen. Add the code in the Draw() method:
    [code]
    spriteBatch.Begin();
    //Draw the sprite in the center of the screen
    spriteBatch.Draw(sprite, screenCenter, sourceRect,
    Color.White, 0f, origin, 3.0f, SpriteEffects.None, 0);
    spriteBatch.End();
    [/code]
  5. Now, build and run the application. When you tap on the screen, the animation will change, as shown in the following screenshots:
    Simple Sprite Sheet animation in a 2D game

How it works…

In step 1, the sprite object holds the Sprite Sheet image; the timer variable accumulates the game's elapsed time; the interval variable defines the period of time between two frames in milliseconds; FrameCount is the number of frames in each animation; AnimationCount is the number of animations available; currentFrame indicates the frame being played, which is also the column in the Sprite Sheet image; spriteWidth and spriteHeight define the width and height of the currently rendered subimage (in this example, 50 and 58 pixels respectively); the sourceRect rectangle tells SpriteBatch.Draw() which part of the Sprite Sheet image to draw; the row variable selects which animation row is rendered (0 means the first animation).

In step 3, we accumulate the game's elapsed time. If the accumulated time on the timer is greater than the interval, the current frame advances to the next frame and the timer is reset to 0 for the next interval. We then check whether the current frame equals FrameCount; if so, the animation has ended and we set currentFrame back to 0 to replay it. The currentFrame actually represents the current column of the Sprite Sheet image. Next, we decide which animation to render: here we react to the tap gesture by cycling the row value within AnimationCount to change the animation. Once the currentFrame (column) and the row are ready, we use them to locate the subimage with the rectangle object sourceRect. The last line of this code computes the origin position for image rotation and scale, which we set to the center of the Sprite Sheet subimage.
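The frame-advance logic above can be sketched without XNA; a minimal Python version (names mine), assuming 16 ms updates and the recipe's 200 ms interval and 4-frame animation:

```python
def advance(current_frame, timer, elapsed_ms, frame_count,
            interval=200):
    """Advance the sprite sheet animation by one game update.

    Mirrors step 3: accumulate elapsed time, step to the next
    frame once the interval passes, and wrap back to frame 0
    after the last frame.
    """
    timer += elapsed_ms
    if timer > interval:
        current_frame = (current_frame + 1) % frame_count
        timer = 0.0
    return current_frame, timer

frame, timer = 0, 0.0
for _ in range(50):             # 50 updates of 16 ms each = 800 ms
    frame, timer = advance(frame, timer, 16.0, 4)
print(frame)  # -> 3
```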

In step 4, the Draw() call receives: the texture; the position at which to draw it on screen; the rectangle defining the part of the texture to draw; the color (White means render the texture in its original colors); the rotation angle (0 means no rotation); the scale factor, which tells the rendering device how big the texture will be (3 here means the final texture will be three times its original size); the SpriteEffects to apply (SpriteEffects.None tells the application not to use any effect); and the final parameter, layerDepth, which specifies the texture drawing order.

Creating a Complex Sprite Sheet animation in a 2D game

A Complex Sprite Sheet contains subimages of different sizes. Moreover, every Complex Sprite Sheet has an additional description file, which defines the location and size of every subimage; this is the key difference between a Simple Sprite Sheet and a Complex Sprite Sheet. For a Simple Sprite Sheet, you can compute the location and size of each subimage because they all share the same width and height; for a Complex Sprite Sheet this is harder, because the subimages are usually packed for efficient use of space. To help identify the coordinates of the sprites in the Complex Sprite Sheet, the description file gives you each subimage's location and size. For Sprite Sheet animation, the description file also provides the animation name and attributes. The following screenshot shows an example of a Complex Sprite Sheet:

Complex Sprite Sheet animation

Getting ready

In the Sprite Sheet, you can see that the subimages have different sizes and locations. They are not placed in a regular grid, and that’s why we need the description file to control these images. You may ask how we can get the description file for a designated Complex Sprite Sheet. Here, we use the tool SpriteVortex, which you can download from Codeplex at http://spritevortex.codeplex.com/.

The following steps show you how to process a Complex Sprite Sheet using SpriteVortex:

  1. When you run SpriteVortex, click Import Spritesheet Image, at the top-left, as shown in the following screenshot:
    SpriteVortex,  Import Spritesheet Image
  2. After choosing MegaManSheet.png from the image folder of this chapter, click on Alpha Cut, at the top bar of the main working area. You will see the subimages in the Complex Sprite Sheet individually surrounded by yellow rectangles, as shown in the following screenshot:
    Alpha Cut
  3. Next, you can choose the subimages to create your own animation. In the left Browser panel, we change the animation name from Animation 0 to Fire.
  4. Then, we choose the sub-images from the Complex Sprite Sheet and click the Add Selected to Animation button. The selected images show up in the Animation Manager frame by frame, from the first image to the last.
  5. Following the same steps, we add two more animations to the project, named Changing and Jump.
  6. The final step is to export the animation definition as XML; this file will be used in our project to animate the subimages. The entire process can be seen in the following screenshot:
    XML, the important XML

The exported animation XML file named SpriteDescription.xml can be found in the project content directory for this recipe.

In the XML file, you will find the Texture element, which saves the Sprite Sheet path. The Animation element includes the animation name and its frames; the frames contain each subimage's location and size. Next, you will learn how to use the Complex Sprite Sheet and the XML description to animate the sprite.
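To make the schema concrete, here is a hedged sketch: a hand-written fragment in the shape the parsing classes of the next section expect (element and attribute names mirror those classes; the real SpriteVortex output may differ), read with Python's standard xml module rather than the .NET XmlSerializer:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment only, not actual SpriteVortex output.
DESCRIPTION = """
<Animations>
  <Texture Path="MegaManSheet.png" />
  <Animation Name="Fire" FrameRate="12">
    <Frames>
      <Frame Num="0" X="0" Y="0" Width="32" Height="40" />
      <Frame Num="1" X="34" Y="0" Width="30" Height="40" />
    </Frames>
  </Animation>
</Animations>
"""

root = ET.fromstring(DESCRIPTION)
# The Texture element carries the Sprite Sheet path
print(root.find("Texture").get("Path"))       # -> MegaManSheet.png
# Each Animation element carries a name and a list of Frame elements
for anim in root.findall("Animation"):
    frames = anim.find("Frames").findall("Frame")
    print(anim.get("Name"), len(frames))      # -> Fire 2
```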

How to do it…

The following steps will show you how to animate a Complex Sprite Sheet:

  1. Create a Windows Phone Game project named ComplexSpriteSheetAnimation and change Game1.cs to ComplexSpriteSheetAnimationGame.cs. Then add the exported Sprite Sheet description file SpriteDescription.xml to the content project, changing the Build Action property of the XML file from Compile to None (we process this XML format ourselves rather than through the content pipeline) and the Copy to Output Directory property to Copy Always. This will always copy the description file to the game content output directory of the application.
  2. The description file is an XML document, so we need to parse the animation information when loading it into our game. To that end, we add the description-parsing classes Frame, Animation, SpriteTexture, AnimationSet, and SpriteAnimationManager in SpriteAnimationManager.cs in the main project. Before coding the classes, one more reference, System.Xml.Serialization, must be added to the project reference list, because we will use XML serialization to parse the animation. Now, let's define the basic classes:
    [code]
    // Animation frame class
    public class Frame
    {
    // Frame Number
    [XmlAttribute("Num")]
    public int Num;
    // Sub image X position in the Sprite Sheet
    [XmlAttribute("X")]
    public int X;
    // Sub image Y position in the Sprite Sheet
    [XmlAttribute("Y")]
    public int Y;
    // Sub image width
    [XmlAttribute("Width")]
    public int Width;
    // Sub image height
    [XmlAttribute("Height")]
    public int Height;
    // The X offset of the sub image
    [XmlAttribute("OffSetX")]
    public int OffsetX;
    // The Y offset of the sub image
    [XmlAttribute("OffsetY")]
    public int OffsetY;
    // The duration between two frames
    [XmlAttribute("Duration")]
    public float Duration;
    }
    // Animation class to hold the name and frames
    public class Animation
    {
    // Animation Name
    [XmlAttribute("Name")]
    public string Name;
    // Animation Frame Rate
    [XmlAttribute("FrameRate")]
    public int FrameRate;
    public bool Loop;
    public bool Pingpong;
    // The Frames array in an animation
    [XmlArray("Frames"), XmlArrayItem("Frame", typeof(Frame))]
    public Frame[] Frames;
    }
    // The Sprite Texture stores the Sprite Sheet path
    public class SpriteTexture
    {
    // The Sprite Sheet texture file path
    [XmlAttribute("Path")]
    public string Path;
    }
    // Animation Set contains the Sprite Texture and animations
    [XmlRoot("Animations")]
    public class AnimationSet
    {
    // The sprite texture object
    [XmlElement("Texture", typeof(SpriteTexture))]
    public SpriteTexture SpriteTexture;
    // The animation array in the Animation Set
    [XmlElement("Animation", typeof(Animation))]
    public Animation[] Animations;
    }
    [/code]
  3. Next, we will extract the Animation information from the XML description file using the XML deserialization technique:
    [code]
    // Sprite Animation Manager class
    public static class SpriteAnimationManager
    {
    public static int AnimationCount;
    // Read the Sprite Sheet Description information from the
    // description xml file
    public static AnimationSet Read(string Filename)
    {
    AnimationSet animationSet = new AnimationSet();
    // Create an XML reader for the sprite sheet animation
    // description file
    using (System.Xml.XmlReader reader =
    System.Xml.XmlReader.Create(Filename))
    {
    // Create an XMLSerializer for the AnimationSet
    XmlSerializer serializer = new
    XmlSerializer(typeof(AnimationSet));
    // Deserialize the Animation Set from the
    // XmlReader to the animation set object
    animationSet =
    (AnimationSet)serializer.Deserialize(reader);
    }
    // Count the animations to Animation Count
    AnimationCount = animationSet.Animations.Length;
    return animationSet;
    }
    }
    [/code]
  4. Now, from this step on, we will use the parsed AnimationSet to animate the Complex Sprite Sheet and switch between the animations. Add the following code to the fields of the ComplexSpriteSheetAnimationGame class:
    [code]
    // A Timer variable
    float timer;
    // The interval
    float interval = 200;
    // Animation Set stores the animations in the sprite sheet
    // description file
    AnimationSet animationSet;
    // Texture object loads and stores the Sprite Sheet image
    Texture2D texture;
    // The location of subimage
    int X = 0;
    int Y = 0;
    // The size of subimage
    int height = 0;
    int width = 0;
    // A rectangle to store which 'frame' is currently being shown
    Rectangle sourceRectangle;
    // The center of the current 'frame'
    Vector2 origin;
    // Current frame holder
    int currentFrame = 0;
    // Current animation
    int currentAnimation = 0;
    [/code]
  5. Read the Complex Sprite Sheet description into the AnimationSet object. Add the following line to the Initialize() method:
    [code]
    animationSet =
    SpriteAnimationManager.Read(@"Content\SpriteDescription.xml");
    [/code]
  6. In this step, we animate the Complex Sprite Sheet using the parsed animation set. Add the following lines to the Update() method:
    [code]
    // Change the animation when tapping the Touch screen
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    // make the animation index vary within the total
    // animation count
    currentAnimation = (currentAnimation + 1) %
    SpriteAnimationManager.AnimationCount;
    }
    // Accumulate the game elapsed time
    timer += (float)gameTime.ElapsedGameTime.TotalMilliseconds;
    // If the current frame is equal to the length of the current
    // animation frames, reset the current frame to the beginning
    if (currentFrame ==
    animationSet.Animations[currentAnimation].Frames.Length - 1)
    {
    currentFrame = 0;
    }
    // Check whether the timer is more than the chosen interval
    if (timer > interval && currentFrame <=
    animationSet.Animations[currentAnimation].Frames.Length - 1)
    {
    // Get the size of the current subimage
    height = animationSet.Animations[currentAnimation].
    Frames[currentFrame].Height;
    width = animationSet.Animations[currentAnimation].
    Frames[currentFrame].Width;
    // Get the location of the current subimage
    X = animationSet.Animations[currentAnimation].
    Frames[currentFrame].X;
    Y = animationSet.Animations[currentAnimation].
    Frames[currentFrame].Y;
    // Create the rectangle for drawing the part of the
    // sprite sheet on the screen
    sourceRectangle = new Rectangle(X, Y, width, height);
    // Show the next frame
    currentFrame++;
    // Reset the timer
    timer = 0f;
    }
    // Compute the origin position for image rotation and scale.
    origin = new Vector2(sourceRectangle.Width / 2,
    sourceRectangle.Height / 2);
    [/code]
  7. Draw the animation on the Windows Phone 7 Touch screen, add the code to the Draw() method:
    [code]
    spriteBatch.Begin();
    spriteBatch.Draw(texture, new Vector2(400, 240),
    sourceRectangle,
    Color.White, 0f, origin, 2.0f, SpriteEffects.None, 0);
    spriteBatch.End();
    [/code]
  8. Now, build and run the application. When you tap on the Touch screen, the animation will change as shown in the following screenshots:
    Complex Sprite Sheet animation in a 2D game

How it works…

In step 2, the Frame class corresponds to the Frame XML element and its attributes in the XML file. The same applies to the Animation, SpriteTexture, and AnimationSet classes. The XmlSerializer maps a member variable to an XML element or attribute when the corresponding XmlElement or XmlAttribute attribute is applied to it. If the member is of a custom class type, we also pass the type via typeof() so that the serializer knows which class to instantiate. For the array members Frames and Animations, we use the XmlArray attribute with the array's root name in the XML file and XmlArrayItem with its element name. Additionally, XmlRoot helps the XmlSerializer locate the root of the whole XML file; here, it is Animations.

In step 3, since loading this data does not depend on any instance state of the manager class, we declare it static to simplify our code. The Read() method creates an XmlReader object to read the Sprite Sheet XML description file; the XmlReader class supports reading XML data from a stream or file and lets you read the contents of each node. Once the XmlReader for the description file is created, we instantiate an XmlSerializer for the AnimationSet type and call XmlSerializer.Deserialize() to decode the description XML file according to the XML attributes we defined on the AnimationSet class; the XmlSerializer then recursively reads the subclasses through their XML attributes. After deserialization, the code stores the number of parsed animations in the AnimationCount variable, which we will use to control the animation index when playing animations in the game.
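The attribute-to-XML mapping can be tried outside XNA. The following minimal console sketch (the class shapes are cut down and the XML string is invented for illustration) deserializes a small description the same way the Read() method does, only from a string instead of a file:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Frame
{
    [XmlAttribute("Num")]
    public int Num;
    [XmlAttribute("X")]
    public int X;
}

[XmlRoot("Animations")]
public class AnimationSet
{
    [XmlArray("Frames"), XmlArrayItem("Frame", typeof(Frame))]
    public Frame[] Frames;
}

public static class Demo
{
    // Deserialize an AnimationSet from an XML string
    public static AnimationSet Parse(string xml)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(AnimationSet));
        using (StringReader reader = new StringReader(xml))
        {
            return (AnimationSet)serializer.Deserialize(reader);
        }
    }

    public static void Main()
    {
        string xml = "<Animations><Frames>" +
            "<Frame Num=\"0\" X=\"0\" /><Frame Num=\"1\" X=\"64\" />" +
            "</Frames></Animations>";
        AnimationSet set = Parse(xml);
        Console.WriteLine(set.Frames.Length); // → 2
        Console.WriteLine(set.Frames[1].X);   // → 64
    }
}
```

Swapping the StringReader for an XmlReader over the real file, as the recipe does, changes nothing about the mapping itself.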

In step 4, the timer variable accumulates the game's elapsed time; the interval variable defines the period of time between two frames; the animationSet variable stores the animations from the Sprite Sheet description file; the texture object holds the Complex Sprite Sheet image, with X and Y for the location and height and width for the size of the current subimage. The sourceRectangle variable uses that location and size information to define the region of the subimage drawn on screen; currentFrame defines the frame currently playing; currentAnimation indicates the currently playing animation.

In step 6, the first part of the code reacts to the tap gesture by changing the currently playing animation. The second part increases the currently playing frame; if the current frame reaches the maximum frame count of the current animation, we replay the animation from the beginning. The last part gets the location and size of the current subimage according to the current frame and uses that information to build the source region for drawing. Then we advance to the next frame and reset the timer to 0 for a new interval. All of this information is driven by the XML file we exported with the CodePlex tool earlier.
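Stripped of the XNA types, the frame-advance logic in step 6 reduces to a timer plus a wrap-around index. The following framework-free sketch (class and member names invented here) captures the same two pieces, the tap handler and the per-update advance:

```csharp
public class FrameTicker
{
    public int CurrentFrame;     // frame currently shown
    public int CurrentAnimation; // index into the animation list
    float timer;                 // accumulated elapsed milliseconds
    readonly float interval;     // time between frames, in ms
    readonly int[] frameCounts;  // number of frames in each animation

    public FrameTicker(float intervalMs, int[] counts)
    {
        interval = intervalMs;
        frameCounts = counts;
    }

    // Cycle to the next animation, wrapping at the end (the tap handler)
    public void NextAnimation()
    {
        CurrentAnimation = (CurrentAnimation + 1) % frameCounts.Length;
        CurrentFrame = 0;
    }

    // Advance the frame once enough time has accumulated (the Update logic)
    public void Update(float elapsedMs)
    {
        timer += elapsedMs;
        if (timer > interval)
        {
            CurrentFrame = (CurrentFrame + 1) % frameCounts[CurrentAnimation];
            timer = 0f;
        }
    }
}
```

For example, with an interval of 200 ms and updates of 150 ms and 100 ms, the frame advances only on the second update, when the accumulated time first exceeds the interval.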

Creating a text animation in Adventure Genre (AVG) game

An Adventure Genre (AVG) game uses text or dialog to describe the game's plot; with different branches, you will experience different directions in the story. In most AVG games, text presentation is the key part. If you have ever played one of these games, you might have noticed that the text is rendered character-by-character or word-by-word. In this recipe, you will learn how to work with a sprite font and how to use the font-related methods to measure text and properly adjust character and word positions in real time.

How to do it…

The following steps will lead you to a complete text animation effect in Windows Phone 7:

  1. Create a Windows Phone Game in Visual Studio 2010 named AVGText, change Game1.cs to AVGTextGame.cs, and add the following code to the AVGTextGame class fields:
    [code]
    // SpriteFont object
    SpriteFont font;
    // Game text
    string text = "";
    // The text shown on the screen
    string showedText = "";
    // The bound for the showedText
    const int TextBound = 20;
    // Game timer
    float timer = 0;
    // Interval time
    float interval = 100;
    // Index for iterating the whole original text
    int index = 0;
    [/code]
  2. Initialize the original text and process it for wrapped showing. Add the initialization and processing code to the Initialize() method:
    [code]
    // The original text
    text = "This is an AVG game text, you will find the text is "
    + "shown character by character, I hope "
    + "this recipe is useful for you.";
    // Split the original string to a string array
    string[] strArray = text.Split(' ');
    // Declare the temp string for each row, aheadString for
    // looking ahead one word in this row
    string tempStr = strArray[0];
    string aheadString = "";
    // Declare the StringBuilder object for holding the sliced
    // lines
    StringBuilder stringBuild = new StringBuilder();
    // Iterate the word array, i for current word, j for next word
    for (int i = 0, j = i; j < strArray.Length; j++)
    {
    // i is before j
    i = j - 1;
    // Check whether the temp string length is less than the
    // TextBound
    if (aheadString.Length <= TextBound)
    {
    // If yes, check whether the string that looks ahead one
    // more word still fits within the TextBound
    if ((aheadString = tempStr + " " + strArray[j]).Length
    <= TextBound)
    {
    // If yes, set the look-ahead string to the temp
    // string
    tempStr = aheadString;
    }
    }
    else
    {
    // If not, add the temp string as a row
    // to the StringBuilder object
    stringBuild.Append(tempStr.Trim() + "\n");
    // Set the current word to the temp string
    tempStr = strArray[i];
    aheadString = tempStr;
    j = i;
    }
    }
    // Append the last row and use the wrapped text for showing
    stringBuild.Append(tempStr);
    text = stringBuild.ToString();
    [/code]
  3. Load the SpriteFont content in the LoadContent() method:
    [code]
    font = Content.Load<SpriteFont>("gameFont");
    [/code]
  4. Update for drawing the showedText character-by-character. Add the lines to the Update() method:
    [code]
    // Accumulate the game elapsed time
    timer += (float)gameTime.ElapsedGameTime.TotalMilliseconds;
    // Show the text character by character
    if (timer > interval && index < text.Length)
    {
    // Every interval add the current index character to the
    // showedText.
    showedText += text.Substring(index, 1);
    // Increase the index
    index++;
    // Set the timer to 0 for the next interval
    timer = 0;
    }
    [/code]
  5. Draw the AVG text effect on the Windows Phone 7 Touch screen. Add the code to the Draw() method:
    [code]
    // Draw the string on screen
    spriteBatch.Begin();
    spriteBatch.DrawString(font, showedText, new Vector2(0, 0),
    Color.White);
    spriteBatch.End();
    [/code]
  6. Build and run the application. It should look similar to the following screenshots:
    text animation in Adventure Genre (AVG) game

How it works…

In step 1, the font object will be used to render the AVG game text; the text variable holds the original text for processing; showedText stores the text currently shown; TextBound limits the width of a line of shown text; the timer counts the game's elapsed time; the interval variable represents the period of time between showing two characters; the index variable indicates the current position while showing the original text.

In step 2, we use the text.Split() method to split the original text into a word array, then declare three objects: tempStr, aheadString, and stringBuild. tempStr stores the current row, composed of words from the original text; aheadString holds tempStr plus one more word, which prevents the actual length of tempStr from exceeding TextBound. In the for loop, we declare two indices, i and j: j iterates over the words and i trails one word behind it. In each loop step, if the length of aheadString is within TextBound, the next word is appended to tempStr to form the new look-ahead string; if that string is still within TextBound, we assign aheadString to tempStr. On the other hand, if the length of aheadString exceeds TextBound, we break the line: tempStr is appended to the stringBuild object as a row with the line-break symbol \n, then we set the current word as the new tempStr and aheadString, and reset j to i for the next row.
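The same greedy wrapping can be written as a small standalone function, which may be easier to follow than the index juggling above (a sketch of the idea, not the book's exact loop):

```csharp
using System.Text;

public static class WordWrap
{
    // Greedily pack words into lines no longer than maxChars,
    // breaking before the word that would overflow.
    public static string Wrap(string text, int maxChars)
    {
        StringBuilder result = new StringBuilder();
        string line = "";
        foreach (string word in text.Split(' '))
        {
            string candidate = line.Length == 0 ? word : line + " " + word;
            if (candidate.Length <= maxChars)
            {
                line = candidate;           // the word still fits on this line
            }
            else
            {
                result.Append(line + "\n"); // flush the full line
                line = word;                // start a new line with this word
            }
        }
        result.Append(line);                // flush the last line
        return result.ToString();
    }
}
```

For example, Wrap("aa bb cc", 5) packs "aa" and "bb" onto the first line, then breaks before "cc".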

In step 4, the first line accumulates the game's elapsed milliseconds. If the elapsed time is greater than the interval and the index is less than the processed text length, we append the current character to showedText; after that, we move the index pointer to the next character and set the timer to zero for the next interval.
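Because one character appears per interval, the visible portion of the text can also be expressed as a pure function of total elapsed time: after t milliseconds, roughly t / interval characters are visible, capped at the text length. A sketch (this closed form ignores the small drift the per-update timer reset can introduce):

```csharp
using System;

public static class TextReveal
{
    // Returns the portion of text visible after elapsedMs milliseconds,
    // revealing one character per intervalMs.
    public static string Visible(string text, float elapsedMs, float intervalMs)
    {
        int count = Math.Min(text.Length, (int)(elapsedMs / intervalMs));
        return text.Substring(0, count);
    }
}
```

With a 100 ms interval, Visible("abcd", 250, 100) yields "ab", and after a full second the whole string is shown.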

Creating a text-based menu—the easiest menu to create

The menu plays a key role in a complete game; the player uses the menu to navigate to the different parts of the game. As the guide to the game, the menu can take on many appearances depending on the game type, animated or static, and so on. In this chapter, you will learn how to work with three kinds of menu: text-based, image-based, and model-based. As a start, this recipe shows you the text-based menu.

Getting ready

The text-based menu is made up of text; every menu item is an independent string. In this example, when you tap an item, it reacts by changing the text color and popping up a text visual effect. OK, let's begin!

How to do it…

The following steps will show you how to create a simple text-based menu:

  1. Create a Windows Phone Game project named TextMenu, change Game1.cs to TextMenuGame.cs, and add gameFont.spriteFont to content project. Then add TextMenuItem.cs to the main project.
  2. Open TextMenuItem.cs file, in the field of the TextMenuItem class, add the following lines:
    [code]
    // SpriteBatch
    SpriteBatch spriteBatch;
    // Menu item text font
    SpriteFont font;
    // Menu item text
    public string Text;
    // Menu Item position
    public Vector2 Position;
    public Vector2 textOrigin;
    // Menu Item size
    public Vector2 Size;
    // Bool tap value shows whether tap on the screen
    public bool Tap;
    // Tap event handler
    public event EventHandler OnTap;
    // Timer object
    float timer = 0;
    // Alpha value of text color
    float alpha = 1;
    Color color;
    // The scale of text
    float scale = 1;
    [/code]
  3. Next, we define the Bound property of the text menu item:
    [code]
    // The Bound of menu item
    public Rectangle Bound
    {
    get
    {
    return new Rectangle((int)Position.X,
    (int)Position.Y, (int)Size.X, (int)Size.Y);
    }
    }
    [/code]
  4. Define the constructor of the TextMenuItem class:
    [code]
    // Text menu item constructor
    public TextMenuItem(Vector2 position, string text, SpriteFont
    font, SpriteBatch spriteBatch)
    {
    Position = position;
    Text = text;
    this.font = font;
    this.spriteBatch = spriteBatch;
    // Compute the text size
    Size = font.MeasureString(Text);
    textOrigin = new Vector2(Size.X / 2, Size.Y / 2);
    color = Color.White;
    }
    [/code]
  5. Then, we implement the Update() method:
    [code]
    // Text menu item update method; gets the tapped position on
    // screen
    public void Update(Vector2 tapPosition)
    {
    // If the tapped position is within the text menu item bound,
    // set Tap to true and trigger
    // the OnTap event
    if (Bound.Contains((int)tapPosition.X, (int)tapPosition.Y))
    {
    Tap = true;
    // Guard against the case where no handler is attached
    if (OnTap != null)
    {
    OnTap(this, null);
    }
    }
    else
    {
    Tap = false;
    }
    }
    [/code]
  6. The last method in the TextMenuItem class is the Draw() method, let’s add the code:
    [code]
    public void Draw(GameTime gameTime)
    {
    timer += (float)gameTime.ElapsedGameTime.TotalMilliseconds;
    // Draw the text menu item
    if (Tap)
    {
    // Draw text visual effect
    if (alpha >= 0 && timer > 100)
    {
    // Decrease alpha value of effect text
    alpha -= 0.1f;
    color *= alpha;
    // Increase the effect text scale
    scale++;
    // Draw the first layer of effect text
    spriteBatch.DrawString(font, Text, Position,
    color, 0, new Vector2(Size.X / 2, Size.Y / 2),
    scale, SpriteEffects.None, 0);
    // Draw the second layer of effect text
    spriteBatch.DrawString(font, Text, Position,
    color, 0, new Vector2(Size.X / 2, Size.Y / 2),
    scale / 2, SpriteEffects.None, 0);
    // Reset the timer for the next interval
    timer = 0;
    }
    // Draw the original text
    spriteBatch.DrawString(font, Text, Position,
    Color.Red);
    }
    else
    {
    // Reset the scale, alpha and color value of original
    // text
    scale = 1;
    alpha = 1;
    color = Color.White;
    // Draw the original text
    spriteBatch.DrawString(font, Text, Position,
    Color.White);
    }
    }
    [/code]
  7. When the TextMenuItem class is done, the following work is about using the class in our game class. Add the code to the TextMenuGame field:
    [code]
    // SpriteFont object for text menu item
    SpriteFont font;
    // Menu collection of text menu item
    List<TextMenuItem> Menu;
    // Random color for background
    Random random;
    Color backgroundColor;
    [/code]
  8. This step is to initialize the variables—Menu, random, and backgroundColor; add the code to the Initialize() method:
    [code]
    Menu = new List<TextMenuItem>();
    random = new Random();
    backgroundColor = Color.CornflowerBlue;
    [/code]
  9. Load the game sprite font and text menu items in Menu, and add the following code to the LoadContent() method:
    [code]
    font = Content.Load<SpriteFont>(“gameFont”);
    // Initialize the text menu items in Menu
    int X = 100;
    int Y = 100;
    for (int i = 0; i < 5; i++)
    {
    TextMenuItem item = new TextMenuItem(
    new Vector2(X, Y + 60 * i), “TextMenuItem”, font,
    spriteBatch);
    item.OnTap += new EventHandler(item_OnTap);
    Menu.Add(item);
    }
    [/code]
  10. Define the text menu item event reaction method item_OnTap():
    [code]
    void item_OnTap(object sender, EventArgs e)
    {
    // Set a random background color on every valid tap
    backgroundColor.R = (byte)random.Next(0, 256);
    backgroundColor.G = (byte)random.Next(0, 256);
    backgroundColor.B = (byte)random.Next(0, 256);
    }
    [/code]
  11. Get the tapped position and pass it to the text menu items for valid tap checking. Insert the code to the Update() method:
    [code]
    // Get the tapped position
    Vector2 tapPosition = new Vector2();
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    tapPosition = touches[0].Position;
    // Check whether the tapped position is inside one of the text
    // menu items
    foreach (TextMenuItem item in Menu)
    {
    item.Update(tapPosition);
    }
    }
    [/code]
  12. Draw the Menu, paste the code into the Draw() method:
    [code]
    // Replace the existing Clear code with this to
    // simulate the effect of the menu item selection
    GraphicsDevice.Clear(backgroundColor);
    // Draw the Menu
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.
    AlphaBlend);
    foreach (TextMenuItem item in Menu)
    {
    item.Draw(gameTime);
    }
    spriteBatch.End();
    [/code]
  13. Now, build and run the application, and tap the first text menu item. It should look similar to the following screenshot:
    easiest menu to create

How it works…

In step 2, spriteBatch is the main object used to draw the text; the font object holds the SpriteFont text definition file; Text is the actual string shown in the text menu; Position indicates the location of the text menu item on screen; textOrigin defines the center of the text menu item for scaling or rotation; Size, a Vector2, holds the width and height of Text in its X and Y components; Tap represents whether the tap gesture has taken place; the event handler OnTap listens for the occurrence of the tap gesture; the timer accumulates the game's elapsed time for the text visual effects; the alpha value is used to change the transparency of the text color; the last variable, scale, stores the scale factor of the text menu item's size.

In step 3, Bound returns a rectangle around the text menu item, and the property will be used to check whether the tapped position is inside the region of the text menu item.

In step 4, notice, we use the font.MeasureString() method to compute the size of the Text. Then set the origin position to the center of the Text.

In step 5, the Update() method receives the tapped position and checks whether it is inside the region of the text menu item; if so, it sets the Boolean value Tap to true and triggers the OnTap event; otherwise, it sets Tap to false.

In step 6, the first line accumulates the game's elapsed time. When Tap is false, which means no tap on the text menu item, we set scale and alpha to 1 and the color to Color.White. Otherwise, we draw the text visual effect with two copies of the menu item's text; the scale of the first layer is twice that of the second. As time goes by, the two layers grow and gradually fade out. After that, we draw the original text of the text menu item in Color.Red.
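The pop-up effect is driven by two counters that step every 100 ms: alpha decreases by 0.1 while scale increases by 1, so the effect text is fully transparent after roughly eleven ticks. Stripped of the drawing calls, the progression can be sketched as follows (a framework-free sketch of the same counters; names invented here):

```csharp
public class PopEffect
{
    public float Alpha = 1f; // text transparency multiplier
    public float Scale = 1f; // text scale factor
    float timer;             // accumulated elapsed milliseconds

    // Step the effect; returns true while the effect text is still visible
    public bool Update(float elapsedMs)
    {
        timer += elapsedMs;
        if (Alpha >= 0f && timer > 100f)
        {
            Alpha -= 0.1f; // fade out
            Scale += 1f;   // grow
            timer = 0f;    // wait for the next 100 ms tick
        }
        return Alpha >= 0f;
    }
}
```

After one 150 ms update the scale has doubled and the alpha has dropped to about 0.9, matching the first tick of the recipe's effect.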

In step 7, the font object holds the game sprite font for rendering the Text of the text menu item; Menu is the collection of text menu items; the random variable is used to generate random numbers for creating a random backgroundColor.

In step 10, because the range of the red, green, and blue components of Color is from 0 to 255, the random numbers generated for them stay within that range. With these random numbers, every valid text menu item tap changes the background color randomly.
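Note that Random.Next(0, 256) returns values from 0 through 255 inclusive, since the upper bound is exclusive, which is exactly the range a byte channel needs. A standalone sketch of the same idea, without the XNA Color type:

```csharp
using System;

public static class RandomColor
{
    // Produce three random channel values in the valid byte range 0..255
    public static byte[] NextRgb(Random random)
    {
        return new byte[]
        {
            (byte)random.Next(0, 256), // red
            (byte)random.Next(0, 256), // green
            (byte)random.Next(0, 256)  // blue
        };
    }
}
```

Because the result of Next() never reaches 256, the cast to byte never wraps around.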

Creating an image-based menu

The image-based menu is another menu-presentation approach. Unlike the text-based menu, the image-based menu uses a 2D picture as the content of every menu item, making the game's navigation user interface more attractive and innovative. It gives graphic designers much more room for conceiving ideas for a game and easily stimulates designers' and programmers' creativity. An image menu item can swipe in and out, jump in and out, or fade in and out. In this recipe, you will learn how to create an image-based menu system and use it in your own Windows Phone 7 game.

Getting ready

As an example, the image menu items are placed horizontally on the screen. When you tap one of them, it grows and the current item index is shown at the top-left of the screen; once the tapped position falls outside of its bound, the menu item restores its initial state. Now, let's build the application.

How to do it…

Follow these steps to create your own image-based menu:

  1. Create a Windows Phone Game project in Visual Studio 2010 named ImageMenu, change Game1.cs to ImageMenuGame.cs, and add the ImageMenuItem.cs in the main project. Then add Imageitem.png and gameFont.spriteFont to the content project.
  2. Create the ImageMenuItem class in the ImageMenuItem.cs file. First, add the code to the ImageMenuItem class field:
    [code]
    // SpriteBatch object
    SpriteBatch spriteBatch;
    // Menu item texture
    Texture2D texture;
    // Menu Item position
    public Vector2 Position;
    // Menu Item origin position for translation and rotation
    public Vector2 Origin;
    // Bool tap value shows whether tap on the screen
    public bool Tap;
    // Timer object
    float timer = 0;
    // The scale range from MinScale to MaxScale
    const float MinScale = 0.8f;
    const float MaxScale = 1;
    // The scale of text
    float scale = 0.8f;
    // Image menu item index
    public int Index = 0;
    [/code]
  3. Next, add the Bound property:
    [code]
    // The Bound of menu item
    public Rectangle Bound
    {
    get
    {
    return new Rectangle(
    (int)(Position.X - Origin.X * scale),
    (int)(Position.Y - Origin.Y * scale),
    (int)(texture.Width * scale),
    (int)(texture.Height * scale));
    }
    }
    [/code]
  4. Then, define the constructor of the ImageMenuItem class:
    [code]
    // Image menu item constructor
    public ImageMenuItem(Vector2 Location,Texture2D Texture,
    SpriteBatch SpriteBatch)
    {
    Position = Location;
    texture = Texture;
    spriteBatch = SpriteBatch;
    Origin = new Vector2(texture.Width / 2,
    texture.Height / 2);
    }
    [/code]
  5. The following method is the Update() method of the ImageMenuItem class, so let’s add its implementation code:
    [code]
    // Image menu item update method; gets the tapped position on
    // screen
    public void Update(GameTime gameTime, Vector2 tapPosition)
    {
    // Set Tap to true if the tapped position is within the
    // image menu item bound
    Tap = Bound.Contains((int)tapPosition.X,
    (int)tapPosition.Y);
    // Accumulate the game elapsed time
    timer += (float)gameTime.ElapsedGameTime.TotalMilliseconds;
    }
    [/code]
  6. The last method of the ImageMenuItem class is the Draw() method, so let’s add its implementation code:
    [code]
    public void Draw(GameTime gameTime)
    {
    // Draw the image menu item
    if (Tap)
    {
    // If the tap gesture is valid, gradually scale up to
    // MaxScale in every frame
    if (scale <= MaxScale && timer > 200)
    {
    scale += 0.1f;
    }
    spriteBatch.Draw(texture, Position, null, Color.Red,
    0f, Origin, scale, SpriteEffects.None, 0f);
    }
    else
    {
    // If no valid tap, gradually restore scale to
    // MinScale in every frame
    if (scale > MinScale && timer > 200)
    {
    scale -= 0.1f;
    }
    spriteBatch.Draw(texture, Position, null, Color.White,
    0f, Origin, scale, SpriteEffects.None, 0f);
    }
    }
    [/code]
  7. So far, we have seen the ImageMenuItem class. Our next job is to use the class in the main class. Add the code to the ImageMenuGame class:
    [code]
    // SpriteFont object for the current index value
    SpriteFont font;
    // The image menu item texture
    Texture2D texImageMenuItem;
    // The collection of image menu items
    List<ImageMenuItem> Menu;
    // The count of image menu items
    int TotalMenuItems = 4;
    // The index for every image menu item
    int index = 0;
    // Current index of tapped image menu item
    int currentIndex;
    [/code]
  8. Initialize the Menu object in the Initialize() method:
    [code]
    // Initialize Menu
    Menu = new List<ImageMenuItem>();
    [/code]
  9. Load the SpriteFont and ImageMenuItem texture, and initialize the ImageMenuItem in Menu. Next, insert the code to the LoadContent() method:
    [code]
    texImageMenuItem = Content.Load<Texture2D>("ImageItem");
    font = Content.Load<SpriteFont>("gameFont");
    // Initialize the image menu items
    int X = 150;
    int Y = 240;
    // Instance the image menu items horizontally
    for (int i = 0; i < TotalMenuItems; i++)
    {
    ImageMenuItem item = new ImageMenuItem(
    new Vector2(
    X + i * (texImageMenuItem.Width + 20), Y),
    texImageMenuItem, spriteBatch);
    item.Index = index++;
    Menu.Add(item);
    }
    [/code]
  10. In this step, we will check the valid tapped position and get the index of the tapped image menu item, paste the code to the Update() method:
    [code]
    // Get the tapped position
    Vector2 tapPosition = new Vector2();
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    tapPosition = touches[0].Position;
    // Check whether the tapped position is inside one of the
    // image menu items
    foreach (ImageMenuItem item in Menu)
    {
    item.Update(gameTime, tapPosition);
    // Get the current index of tapped image menu item
    if (item.Tap)
    {
    currentIndex = item.Index;
    }
    }
    }
    [/code]
  11. Draw the menu and the current index value on the screen, and insert the lines to the Draw() method:
    [code]
    spriteBatch.Begin();
    // Draw the Menu
    foreach (ImageMenuItem item in Menu)
    {
    item.Draw(gameTime);
    }
    // Draw the current index on the top-left of screen
    spriteBatch.DrawString(font, "Current Index: " +
    currentIndex.ToString(), new Vector2(0, 0), Color.White);
    spriteBatch.End();
    [/code]
  12. Now, build and run the application. Tap the second image menu item, and it should look similar to the following screenshot:
    image-based menu

How it works…

In step 2, spriteBatch renders the image of the menu item on screen; texture loads the graphic content of the image menu item; Position represents the position of every image menu item; Origin defines the center for menu item rotation and translation; Tap is the mark for a valid menu item tap; the timer variable accumulates the game's elapsed time for changing the scale of the image menu item. The two constants MinScale and MaxScale limit the range of the scale change; the scale variable indicates the current scale value of the menu item; Index holds the sequential position of the menu item.

In step 3, the Bound property returns a rectangle around the image of the menu item according to the menu item position and the image size.

In step 6, we draw the zoom-in and zoom-out visual effect for the menu item. If the tapped position is inside the image menu item, the item gradually grows as the scale value increases toward MaxScale; otherwise, it gradually shrinks back to its initial state.
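The zoom behaviour is a clamped step toward one of two targets. The following sketch shows the same idea without the drawing code (names invented here):

```csharp
public static class ZoomStep
{
    public const float MinScale = 0.8f;
    public const float MaxScale = 1f;

    // Step scale toward MaxScale while tapped, back toward MinScale otherwise
    public static float Step(float scale, bool tapped)
    {
        if (tapped && scale <= MaxScale)
        {
            scale += 0.1f;
        }
        else if (!tapped && scale > MinScale)
        {
            scale -= 0.1f;
        }
        return scale;
    }
}
```

Note that, as in the recipe, the tapped branch tests scale before adding the step, so the scale can briefly overshoot MaxScale by one step (to about 1.1) before it stops growing.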

In step 7, the font object renders the current index of the tapped image menu item at the top-left of the screen; texImageMenuItem holds the image of the menu item; Menu is the collection of ImageMenuItem objects; TotalMenuItems declares the total number of image menu items in Menu; index is used to assign the index of every menu item in Menu; the currentIndex variable saves the index of the tapped image menu item.

In step 9, we instantiate the image menu items horizontally and set the gap between adjacent items to about 20 pixels.

In step 10, after calling the ImageMenuItem.Update() method, you can get the current index of the menu item when its Tap value is true.

Creating a 3D model-based menu

Text- or image-based menus are very common in games. Both are 2D, but sometimes you want exceptional menu effects in 3D, such as rotation. A 3D menu renders a model as a menu item, so any 3D transformation can be applied to it. This offers a way to implement your own innovative menu presentation in 3D. This recipe shows you the technique on Windows Phone 7.

Getting ready

Programming the 3D model-based menu is an amazing adventure. You can use 3D model rendering and transformation techniques to control the menu items, and place the camera at different positions, moving closer or taking a bird's-eye view. In this demo, the selected model menu item pops up. I hope this recipe will impress you. Let's look at the code.

How to do it…

The following steps will lead you to build an impressive 3D model-based menu:

  1. Create the Windows Phone Game project named ModelMenu3D and change Game1.cs to ModelMenuGame.cs. Add a ModelMenuItem.cs to the project; add gameFont.spriteFont and ModelMenuItem3D.FBX to the content project.
  2. Create the ModelMenuItem class in ModelMenuItem.cs. Add the code to its field:
    [code]
    // Model of menu item
    Model modelItem;
    // Translation of model menu item
    public Vector3 Translation;
    // The view and projection of camera for model view item
    public Matrix View;
    public Matrix Projection;
    // The index of model menu item
    public int Index;
    // The mark for selection
    public bool Selected;
    // The offset from menu item original position when selected
    public int Offset;
    [/code]
  3. Next, define the constructor of the ModelMenuItem class and set the default offset of the selected model menu item.
    [code]
    // Constructor
    public ModelMenuItem(Model model, Matrix view, Matrix projection)
    {
    modelItem = model;
    View = view;
    Projection = projection;
    Offset = 5;
    }
    [/code]
  4. This step is to give the definition of the Draw() method of the ModelMenuItem class:
    [code]
    // Draw the model menu item
    public void Draw()
    {
    Matrix[] modelTransforms = new Matrix[modelItem.Bones.Count];
    modelItem.CopyAbsoluteBoneTransformsTo(modelTransforms);
    foreach (ModelMesh mesh in modelItem.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    // Enable lighting
    effect.EnableDefaultLighting();
    // Set the ambient light color to white
    effect.AmbientLightColor = Color.White.ToVector3();
    if (Selected)
    {
    // If the item is selected, it stands out
    effect.World =
    modelTransforms[mesh.ParentBone.Index]
    * Matrix.CreateTranslation(Translation +
    new Vector3(0, 0, Offset));
    }
    else
    {
    // If the item is not selected, it restores
    // to the original state
    effect.World =
    modelTransforms[mesh.ParentBone.Index]
    * Matrix.CreateTranslation(Translation);
    }
    effect.View = View;
    effect.Projection = Projection;
    }
    mesh.Draw();
    }
    }
    [/code]
  5. Use the ModelMenuItem class in our game. Add the code to the field of the ModelMenuGame class:
    [code]
    // Sprite font object
    SpriteFont font;
    // Model of menu Item
    Model menuItemModel;
    // Camera position
    Vector3 cameraPositon;
    // Camera view and projection matrices
    Matrix view;
    Matrix projection;
    // The collection of Model Menu items
    List<ModelMenuItem> Menu;
    // The count of model menu items in Menu
    int TotalMenuItems = 4;
    // The left and right hit regions for menu item selection
    Rectangle LeftRegion;
    Rectangle RightRegion;
    // Current index of model menu item in Menu
    int currentIndex = 0;
    // Event handler of hit regions
    public event EventHandler OnTap;
    [/code]
  6. Initialize the camera, menu, and hit regions. Insert the code to the Initialize() method:
    [code]
    // Define the camera position
    cameraPositon = new Vector3(-40, 10, 40);
    // Define the camera view and projection matrices
    view = Matrix.CreateLookAt(cameraPositon, Vector3.Zero,
    Vector3.Up);
    projection =
    Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio, 1.0f, 1000.0f);
    // Initialize the Menu object
    Menu = new List<ModelMenuItem>();
    // Left hit region occupies the left half of screen
    LeftRegion = new Rectangle(
    0, 0,
    GraphicsDevice.Viewport.Width / 2,
    GraphicsDevice.Viewport.Height);
    // Right hit region occupies the right half of screen
    RightRegion = new Rectangle(GraphicsDevice.Viewport.Width / 2,
    0, GraphicsDevice.Viewport.Width / 2,
    GraphicsDevice.Viewport.Height);
    // Define the event handler OnTap with the delegate method
    OnTap = new EventHandler(item_OnTap);
    [/code]
  7. Next, define the handler method item_OnTap for the OnTap event:
    [code]
    // Make the current index value change within the range of
    // total menu items
    currentIndex = currentIndex % TotalMenuItems;
    // If the current index is less than 0, set it to the last item
    if (currentIndex < 0)
    {
    // From the last item
    currentIndex = TotalMenuItems - 1;
    }
    // If the current index is greater than the last index, set it
    // to the first item
    else if (currentIndex > TotalMenuItems - 1)
    {
    // From the beginning item
    currentIndex = 0;
    }
    // Select the menu item, of which the index equals the
    // current index
    foreach (ModelMenuItem item in Menu)
    {
    if (item.Index == currentIndex)
    {
    item.Selected = true;
    }
    else
    {
    item.Selected = false;
    }
    }
    [/code]
  8. Load the game content and initialize the menu items of Menu. Insert the code in the Initialize() method:
    [code]
    // Load and initialize the model and font objects
    menuItemModel = Content.Load<Model>("ModelMenuItem3D");
    font = Content.Load<SpriteFont>("gameFont");
    // Initialize the model menu items in Menu horizontally
    for (int i = 0; i < TotalMenuItems; i++)
    {
    int X = -20;
    ModelMenuItem item = new ModelMenuItem(
    menuItemModel, view,
    projection);
    item.Translation = new Vector3(X + (i * 20), 0, 0);
    // Set the index of menu item
    item.Index = i;
    Menu.Add(item);
    }
    // Setting the first menu item to be selected by default
    Menu[0].Selected = true;
    [/code]
  9. In this step, we make the current index value react to the tap on the hit regions. Paste the code to the Update() method:
    [code]
    // Get the tapped position
    Vector2 tapPosition = new Vector2();
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    tapPosition = touches[0].Position;
    Point point = new Point((int)tapPosition.X,
    (int)tapPosition.Y);
    // Check whether the tapped position is in the left region
    if (LeftRegion.Contains(point))
    {
    // If yes, decrease the current index
    --currentIndex;
    OnTap(this, null);
    }
    // Check whether the tapped position is in the right region
    else if (RightRegion.Contains(point))
    {
    // If yes, increase the current index
    ++currentIndex;
    OnTap(this, null);
    }
    }
    [/code]
  10. The last step is to draw the menu on screen. Insert the code to the Draw() method:
    [code]
    // The following lines ensure that the models are
    // drawn correctly
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    GraphicsDevice.BlendState = BlendState.AlphaBlend;
    // Draw the Menu
    foreach (ModelMenuItem item in Menu)
    {
    item.Draw();
    }
    // Draw the current index on the top-left of screen
    spriteBatch.Begin();
    spriteBatch.DrawString(font, "Current Index: " +
    currentIndex.ToString(), new Vector2(0, 0), Color.White);
    spriteBatch.End();
    [/code]
  11. Now, build and run the application. When you tap the right region, the second model menu will pop up, as shown in the following screenshot:
    3D model-based

How it works…

In step 2, modelItem holds the model object of the menu item; Translation stores the world position of the model menu item; View and Projection stand for the view and projection matrices of the camera, respectively; Index saves the index of the menu item in the menu; Selected indicates the selection state of the model menu item; Offset is the offset from the model menu item's original position when it is selected.

In step 4, within the iteration over mesh.Effects, we enable lighting by calling effect.EnableDefaultLighting() and set effect.AmbientLightColor to Color.White.ToVector3(). Notice that, to pop up the model menu item, we create the translation matrix with a positive 5-unit offset along the Z axis from the original position. If a menu item is selected, it pops up; otherwise, it goes back to, or remains in, its initial state.

In step 5, the font object draws the current index value at the top-left of the screen; menuItemModel stores the model object for the model menu items; cameraPositon defines the position of the camera; view and projection are the camera view and projection matrices, respectively; Menu is the collection of model menu items; TotalMenuItems indicates the total number of menu items; LeftRegion and RightRegion are the hit areas for item selection.

In step 6, the first part of the code defines the camera; the second part initializes the left and right hit regions. LeftRegion takes up the left half of the screen; RightRegion occupies the other half.

In step 7, the first line keeps the currentIndex value within the range (-TotalMenuItems, TotalMenuItems); note that in C#, the % operator can return a negative remainder. Next, if the current index is less than 0, the last menu item is selected; otherwise, if the current index is greater than TotalMenuItems minus 1, it wraps back to the first item. The following foreach loop marks as selected the item whose index equals the current index.
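The wrapping rule can be isolated into a small helper (WrapIndex is a hypothetical name, not part of the recipe; it simply restates the step 7 logic):

[code]
// Hypothetical helper restating the index-wrapping logic of step 7
static int WrapIndex(int index, int total)
{
    // In C#, % can return a negative remainder,
    // so index is now in the range (-total, total)
    index = index % total;
    if (index < 0)
    {
        // Wrap backward taps around to the last item
        index = total - 1;
    }
    return index;
}
// With TotalMenuItems = 4: WrapIndex(-1, 4) == 3, WrapIndex(4, 4) == 0
[/code]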

In step 8, the first two lines are to load the menu item model and the font for presenting the current index. The following for loop initializes the menu items of Menu horizontally and assigns i value to the item index. The last line sets the first selected item.

In step 9, the code first gets the tapped position on screen. If the tapped position is in the left region, the current index decreases by 1; if it is in the right region, the current index increases by 1. Any valid tap on the regions triggers the OnTap event.

First Step into XNA Game, Coordinates and View

Drawing the axes for a 2D game

In 2D or 3D game programming, the axes are the basis for positioning objects. With a coordinate system, it is convenient to place or locate objects. In 2D games, the coordinate system is made up of two axes, X and Y; in 3D games, there is a third axis, the Z axis. Drawing the axes on screen is a handy debugging tool for your game. In this recipe, you will learn how to render the 2D axes in Windows Phone 7.

How to do it…

Follow the given steps to draw the 2D axis:

  1. Create a Windows Phone Game project in Visual Studio 2010, change the name from Game1.cs to Draw2DAxesGame.cs. Then add a new class named Axes2D.cs. This class is responsible for drawing the 2D line on screen. We declare the field variables in the Axes2D class:
    [code]
    // Pixel Texture
    Texture2D pixel;
    public int Thickness = 5;
    // Render depth of the primitive line object
    // (0 = front, 1 = back)
    public float Depth;
    [/code]
  2. Then, we define the overload constructor of the Axes2D class:
    [code]
    //Creates a new primitive line object.
    public Axes2D(GraphicsDevice graphicsDevice, Color color)
    {
    // create pixels
    pixel = new Texture2D(graphicsDevice, 1, 1);
    Color[] pixels = new Color[1];
    pixels[0] = color;
    pixel.SetData<Color>(pixels);
    Depth = 0;
    }
    [/code]
  3. When the pixel data size and color are ready, the following code will draw the line object:
    [code]
    public void DrawLine(SpriteBatch spriteBatch, Vector2 start,
    Vector2 end)
    {
    // calculate the distance between the two vectors
    float distance = Vector2.Distance(start, end);
    // calculate the angle between the two vectors
    float angle = (float)Math.Atan2((double)(end.Y - start.Y),
    (double)(end.X - start.X));
    // stretch the pixel between the two vectors
    spriteBatch.Draw(pixel,
    start,
    null,
    Color.White,
    angle,
    Vector2.Zero,
    new Vector2(distance, Thickness),
    SpriteEffects.None,
    Depth);
    }
    [/code]
  4. Use the Axes2D class in the main game class and insert the following code at the top of the class:
    [code]
    // The axis X line object
    Axes2D axisX;
    // The axis Y line object
    Axes2D axisY;
    // The start and end of axis X line object
    Vector2 vectorAxisXStart;
    Vector2 vectorAxisXEnd;
    // The start and end of axis Y line object
    Vector2 vectorAxisYStart;
    Vector2 vectorAxisYEnd;
    [/code]
  5. Initialize the axes objects and their start and end positions, add the following code to the Initialize() method:
    [code]
    // Set the color of axis X to red
    axisX = new Axes2D(GraphicsDevice, Color.Red);
    // Set the color of axis Y to green
    axisY = new Axes2D(GraphicsDevice, Color.Green);
    // Set the start and end positions of axis X
    vectorAxisXStart = new Vector2(100,
    GraphicsDevice.Viewport.Height / 2);
    vectorAxisXEnd = new Vector2(700,
    GraphicsDevice.Viewport.Height / 2);
    // Set the start and end positions of axis Y
    vectorAxisYStart = new Vector2(
    GraphicsDevice.Viewport.Width / 2, 50);
    vectorAxisYEnd = new Vector2(
    GraphicsDevice.Viewport.Width / 2, 450);
    [/code]
  6. Draw the two line objects on the screen and insert the following code in to the Draw() method:
    [code]
    spriteBatch.Begin();
    axisX.DrawLine(spriteBatch, vectorAxisXStart, vectorAxisXEnd);
    axisY.DrawLine(spriteBatch, vectorAxisYStart, vectorAxisYEnd);
    spriteBatch.End();
    [/code]
  7. Now, build and run the application, and it will run similar to the following screenshot. Please make sure that the Windows Phone screen has been rotated to landscape mode:
    Windows Phone screen landscape mode

How it works…

In step 1, for 2D line drawing on Windows Phone 7, we use a one-pixel texture to build the line; the Thickness variable changes the pixel thickness of the line object; the Depth value defines the drawing order.

In step 2, the constructor receives a GraphicsDevice parameter and a Color parameter. We use them to create the pixel texture, which is one pixel in width and height, and set its color through the SetData() method; this is another way of creating a texture in code.

In step 3, the SpriteBatch is the main object for drawing the line objects on the screen. The start parameter represents the start position of the line object; the end parameter indicates the end position. In the method body, the first line computes the distance between the start and end points, and the second line computes the angle between the two positions; the final spriteBatch.Draw() call stretches the one-pixel texture from the start position to span the whole distance, rotated by that angle. This is a generic method for drawing any line.
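A quick worked example of that math, using illustrative values, for a line from (0, 0) to (100, 100):

[code]
Vector2 start = Vector2.Zero;
Vector2 end = new Vector2(100, 100);
// distance is 100 * sqrt(2), roughly 141.42
float distance = Vector2.Distance(start, end);
// angle is Atan2(100, 100) = Pi / 4, that is, 45 degrees
float angle = (float)Math.Atan2(end.Y - start.Y, end.X - start.X);
// spriteBatch.Draw() then scales the 1 x 1 pixel texture to
// (distance, Thickness) and rotates it by angle, so the stretched
// pixel exactly covers the line segment
[/code]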

In step 5, the X axis is located in the middle of screen height, the Y axis is located in the middle of screen width.

In step 6, within the SpriteBatch rendering code, we call the axisX.DrawLine() and axisY.DrawLine() to draw the lines.

Setting up the position, direction, and field of view of a fixed camera

In the 2D world of Windows Phone 7 game programming, presenting images or animations with X and Y axes is straightforward. You just need to know that the origin (0, 0) is located at the top-left of the touchscreen, plus the screen width and height. In a 3D world, things are different: there are now X, Y, and Z axes, and the origin no longer simply sits at the top-left of the touchscreen. In this recipe, you will learn how to deal with the new coordinate system.

Getting ready

In 3D programming, especially for Windows Phone 7, the first thing we must establish is the coordinate system, which can be either right-handed or left-handed. In Windows Phone 7 XNA 3D programming, the coordinate system is right-handed, which means the positive Z axis points towards you when you are playing a Windows Phone 7 game.

The next step is to set up the camera, which acts like your eye, making the objects in the 3D world visible. We save the camera position and direction in a matrix called the View matrix. To create the View matrix, XNA provides the CreateLookAt() method, which needs the Position, Target, and Up vectors of the camera:

[code]
public static Matrix CreateLookAt (
Vector3 Position,
Vector3 Target,
Vector3 Up
)
[/code]

The Position indicates the position of the camera in the 3D world; the Target defines where you want the camera to face; the Up vector is very important because it represents the roll of your camera. If you flip it, the rendered scene is inverted. In XNA, Vector3.Up is equal to (0, 1, 0); Vector3.Forward stands for (0, 0, -1); Vector3.Right is the same as (1, 0, 0); Vector3.Down stands for (0, -1, 0). These predefined vectors are easy to apply in your game. Once you have understood the View matrix, the next important matrix for the camera is the Projection matrix. In the 3D world, every object has its own 3D position. If we want to render the 3D objects on the screen, which is a 2D plane, how do we do it? The Projection matrix gives us a hand.
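For example, a minimal view matrix for a camera 50 units up the positive Z axis, looking back at the origin, could be built like this (the values are illustrative):

[code]
// Camera placed on the positive Z axis, looking at the origin
Vector3 cameraPosition = new Vector3(0, 0, 50);
Vector3 cameraTarget = Vector3.Zero;
Matrix view = Matrix.CreateLookAt(cameraPosition, cameraTarget,
    Vector3.Up);
// Passing Vector3.Down as the up vector instead would render
// the whole scene upside down
[/code]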

Before rendering objects from the 3D environment to the 2D screen, we must know which of them need to be rendered, and over what range. From the computer graphics perspective, this viewing volume is called the frustum, as shown in the following figure:

The Near, Right, Left, and Far

The near, far, and side planes compose the frustum, which determines whether objects are inside it. In Windows Phone 7, you can use Matrix.CreatePerspectiveFieldOfView() to create the Projection matrix:

[code]
public static Matrix CreatePerspectiveFieldOfView (
float fieldOfView,
float aspectRatio,
float nearPlaneDistance,
float farPlaneDistance
)
[/code]

The first parameter here is the field of view angle around the Y axis, commonly set to 45 degrees, similar to the human eye's view. You can use MathHelper.PiOver4, that is, a quarter of Pi, which is the radian value of 45 degrees. The aspectRatio parameter specifies the view width divided by the height, and this value should correspond to the ratio of the back buffer. The last two parameters represent the near plane and far plane of the frustum. The near plane defines the beginning of the frustum: any object nearer than this plane will not be rendered, and likewise any object beyond the far plane will not be rendered.
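Putting those parameters together, a typical projection for a landscape Windows Phone screen (800 x 480; the plane distances are illustrative) looks like this:

[code]
float fieldOfView = MathHelper.PiOver4;   // 45 degrees around the Y axis
float aspectRatio = 800f / 480f;          // should match the back buffer
Matrix projection = Matrix.CreatePerspectiveFieldOfView(
    fieldOfView,
    aspectRatio,
    1.0f,       // near plane: anything closer is clipped
    1000.0f);   // far plane: anything farther is clipped
[/code]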

In your Windows Phone 7 game, you can update the View matrix every frame, as an FPS game does, based on the screen input. In the drawing phase of your 3D game, you must pass the View and Projection matrices to the effect when rendering an object, so that the rendering hardware knows how to transform the 3D object to its proper position on the screen.

Now that you have learned the essential ideas behind the camera, it's time to program your own application.

How to do it…

  1. First, you need to create a Windows Phone Game project in Visual Studio 2010. Then change the name from Game1.cs to FixedCameraGame.cs and add Tree.fbx from the code bundle to your content project. For 3D model creation, you can use commercial tools, such as Autodesk 3ds Max or Maya, or the free alternative Blender. Then, in the field of the FixedCameraGame class, insert the following lines:
    [code]
    Matrix view;
    Matrix projection;
    Model model;
    [/code]
  2. Then, in the Initialize() method, we will add the following lines:
    [code]
    Vector3 position = new Vector3(0, 40, 50);
    Vector3 target = new Vector3(0, 0, 0);
    view = Matrix.CreateLookAt(position, target, Vector3.Up);
    projection =
    Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio,
    1, 1000.0f);
    [/code]
  3. Next, we load and initialize the Tree.fbx 3D model located at the associate content project, into our game. Add the following code to the LoadContent() method:
    [code]
    model = Content.Load<Model>("Tree");
    [/code]
  4. The last step for our game is to draw the model on the screen. Insert the following lines into the Draw() method:
    [code]
    // Define and copy the transforms of model
    Matrix[] transforms = new Matrix[this.model.Bones.Count];
    this.model.CopyAbsoluteBoneTransformsTo(transforms);
    // Draw the model. A model can have multiple meshes, so loop.
    foreach (ModelMesh mesh in this.model.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.EnableDefaultLighting();
    // Get the transform information from its parent
    effect.World = transforms[mesh.ParentBone.Index];
    // Pass the View and Projection matrices to the effect
    // so the rendering hardware knows how to transform
    // the model
    effect.View = view;
    effect.Projection = projection;
    }
    mesh.Draw();
    }
    [/code]
  5. Now, build and run the application. It will run similar to the following screenshot:
    run similar to the following screenshot

How it works…

In step 1, the model variable will be used to load and show the 3D model.

In step 2, we define the View matrix with the position, target, and up vectors. After that, we define the Projection matrix.

In step 4, the first two lines define a matrix array whose size depends on the bone count; then we use the CopyAbsoluteBoneTransformsTo() method to copy the actual values into the transform array. In the foreach loop, we iterate over all the meshes in the model. In the loop body, we use BasicEffect to render each mesh. Windows Phone 7 XNA programming currently supports five built-in effects; here we use the simplest one. For the effect, Effect.World indicates the mesh's position; Effect.View represents the View matrix; Effect.Projection represents the Projection matrix. When all of the effects in the inner loop are set, mesh.Draw(), in the outer loop over the meshes, renders the mesh to the touchscreen.

Drawing the axes for a 3D game

The presentation of 3D lines is completely different from 2D; drawing the axes in 3D helps you get an intuitive sense of the 3D world. The key to drawing the axes is the vertex format and the vertex buffer, which holds the vertex data for rendering the lines. A VertexBuffer is a sequence of allocated memory for storing vertices, each of which can carry a position, color, texture coordinates, and a normal vector for rendering shapes or models. In other words, you can think of a vertex buffer as an array of vertices. When the XNA application begins to render, it reads the vertex buffer and draws each vertex with the corresponding saved information into the game world. Rendering from a vertex buffer is much faster than submitting vertices one at a time on request. In this recipe, you will learn how to use the vertex buffer to draw the axes in 3D. For a better view, the example runs in landscape mode.

How to do it…

  1. Create a Windows Phone Game in Visual Studio 2010. Change the name from Game1.cs to Draw3DAxesGame.cs and then add the following class-level variables:
    [code]
    // Basic Effect object
    BasicEffect basicEffect;
    // Vertex data with position and color
    VertexPositionColor[] pointList;
    // Vertex Buffer to hold the vertex data for drawing
    VertexBuffer vertexBuffer;
    // Camera View and Projection matrix
    Matrix viewMatrix;
    Matrix projectionMatrix;
    // The Left and right hit region on the screen for rotating the
    // axes
    Rectangle recLeft;
    Rectangle recRight;
    // The rotation value
    float rotation = 45;
    [/code]
  2. Initialize the 3D world for the axes and axes vertex data. Insert the following code to the Initialize() method:
    [code]
    // Define the camera View matrix
    viewMatrix = Matrix.CreateLookAt(
    new Vector3(0.0f, 0.0f, 150f),Vector3.Zero,
    Vector3.Up
    );
    // Define the camera Projection matrix
    projectionMatrix = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio,
    0.5f,
    1000.0f);
    // Initialize the basic effect
    basicEffect = new BasicEffect(GraphicsDevice);
    // Initialize the axes world matrix of the position in 3D world
    basicEffect.World = Matrix.Identity;
    // Initialize the vertex data
    pointList = new VertexPositionColor[6];
    // Define the vertex data of axis X
    pointList[0] = new VertexPositionColor(new Vector3(0, 0, 0),
    Color.Red);
    pointList[1] = new VertexPositionColor(new Vector3(50, 0, 0),
    Color.Red);
    // Define the vertex data of axis Y
    pointList[2] = new VertexPositionColor(new Vector3(0, 0, 0),
    Color.White);
    pointList[3] = new VertexPositionColor(new Vector3(0, 50, 0),
    Color.White);
    // Define the vertex data of axis Z
    pointList[4] = new VertexPositionColor(new Vector3(0, 0, 0),
    Color.Blue);
    pointList[5] = new VertexPositionColor(new Vector3(0, 0, 50),
    Color.Blue);
    // Initialize the vertex buffer and allocate the space in
    //vertex buffer for the vertex data
    vertexBuffer = new VertexBuffer(GraphicsDevice,
    VertexPositionColor.VertexDeclaration, 6,
    BufferUsage.None);
    // Set the vertex buffer data to the array of vertices.
    vertexBuffer.SetData<VertexPositionColor>(pointList);
    // Define the Left and Right hit region on the screen
    recLeft = new Rectangle(0, 0,
    GraphicsDevice.Viewport.Width / 2,
    GraphicsDevice.Viewport.Height);
    recRight = new Rectangle(GraphicsDevice.Viewport.Width / 2, 0,
    GraphicsDevice.Viewport.Width / 2,
    GraphicsDevice.Viewport.Height);
    [/code]
  3. Now, we need to check if the user has tapped the screen, so that we can rotate the axis. Add the following lines to the Update() method:
    [code]
    // Check whether the tapped position is in the left or
    // the right hit region
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    Point point = new Point((int)touches[0].Position.X,
    (int)touches[0].Position.Y);
    // Rotate the axis in the landscape mode
    if (recLeft.Contains(point))
    {
    rotation += 10f;
    }
    if (recRight.Contains(point))
    {
    rotation -= 10f;
    }
    }
    [/code]
  4. Draw the axes on screen. Insert the following code to the Draw() method:
    [code]
    // Rotate the axes
    basicEffect.World =
    Matrix.CreateRotationY(MathHelper.ToRadians(rotation)) *
    Matrix.CreateRotationX(MathHelper.ToRadians(50));
    // Give the view and projection to the basic effect
    basicEffect.View = viewMatrix;
    basicEffect.Projection = projectionMatrix;
    // Enable the vertex color in Basic Effect
    basicEffect.VertexColorEnabled = true;
    // Draw the axes on screen, iterate the pass in Basic Effect
    foreach (EffectPass pass in
    basicEffect.CurrentTechnique.Passes)
    {
    // Begin Drawing
    pass.Apply();
    // Set the vertex buffer to graphic device
    GraphicsDevice.SetVertexBuffer(vertexBuffer, 0);
    // Draw the axes with LineList Type
    GraphicsDevice.DrawUserPrimitives<VertexPositionColor>(
    PrimitiveType.LineList, pointList, 0, 3);
    }
    [/code]
  5. Let’s build and run the example. It will run similar to the following screenshot:
    build and run the example

How it works…

In step 1, we declare the BasicEffect for drawing the axes and the VertexPositionColor array to store the vertex information of the axes, and then use the declared VertexBuffer object to hold the VertexPositionColor data. The two matrices that follow hold the camera view and projection. The two Rectangle objects define the left and right hit regions on the screen. The last variable, rotation, is the controlling factor for rotating the axes in 3D.

In step 2, we first define the camera View and Projection matrices and then initialize the vertex data for the 3D axes. We use Vector3 and Color objects to initialize the VertexPositionColor structures and then define the VertexBuffer object. The VertexBuffer class has two overloaded constructors. The first one is:

[code]
public VertexBuffer (
GraphicsDevice graphicsDevice,
VertexDeclaration vertexDeclaration,
int vertexCount,
BufferUsage usage
)
[/code]

The first parameter is the graphics device; the second is the vertex declaration, which describes the per-vertex data, and the size and usage of each vertex element; the vertexCount parameter indicates how many vertices the vertex buffer will store; and the last parameter, BufferUsage, defines the access rights for the vertex buffer: read-write or write-only.

The second overloaded constructor differs in its second parameter, which takes the type of the vertex instead, and is especially useful for custom vertex types:

[code]
public VertexBuffer (
GraphicsDevice graphicsDevice,
Type vertexType,
int vertexCount,
BufferUsage usage
)
[/code]

Here, we use the first overload and pass 6 as the total vertex count. Once the vertex buffer is allocated, you need to set the vertex data into it for drawing. The last two lines define the left and right hit regions on the touchscreen.
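For comparison, an equivalent buffer built with the second overload would pass the vertex type instead of the declaration (a sketch, reusing the pointList array from step 2):

[code]
// Same six-vertex buffer, created via the Type-based overload
VertexBuffer buffer = new VertexBuffer(GraphicsDevice,
    typeof(VertexPositionColor), 6, BufferUsage.None);
buffer.SetData<VertexPositionColor>(pointList);
[/code]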

In step 3, the code checks whether the tapped position is located within the left or right rectangle region and changes the rotation value according to the different rectangles.

In step 4, the first line rotates the axes around the Y axis by the rotation value and tilts them 50 degrees around the X axis. Then you pass the View and Projection matrices for the camera. Next, you enable BasicEffect.VertexColorEnabled to color the axes. The last foreach loop draws the 3D axes' vertex data on the screen. The DrawUserPrimitives() method has four parameters:

[code]
public void DrawUserPrimitives<T> (
PrimitiveType primitiveType,
T[] vertexData,
int vertexOffset,
int primitiveCount
)
[/code]

The PrimitiveType describes the type of primitive to render. Here, we use PrimitiveType.LineList, which draws a separate line segment for each consecutive pair of vertices, in the order of the vertex data. The vertexData parameter is the vertex array; vertexOffset tells the rendering function the start index within the vertex data; primitiveCount indicates the number of primitives to render: in this example, three, one line for each axis.
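Since a line list consumes two vertices per primitive, the primitive count can also be derived from the array length rather than hardcoded; the call in step 4 is equivalent to:

[code]
// Each line primitive uses two vertices, so six vertices give
// three lines, one for each axis
GraphicsDevice.DrawUserPrimitives<VertexPositionColor>(
    PrimitiveType.LineList, pointList, 0, pointList.Length / 2);
[/code]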

Implementing a first-person shooter (FPS) camera in your game

Have you ever played a first-person shooter (FPS) game, such as Counter-Strike, Quake, or Doom? In this kind of game, your eyes are the main view. While you play, the game updates the eye view and makes you feel as if it were real. On a computer, it is easy to change the view using the mouse or the keyboard; the challenge for a Windows Phone 7 FPS camera is how to realize these typical behaviors without a keyboard or mouse. In this recipe, you will master the technique to overcome this.

Getting ready

It is amazing and exciting to play an FPS game on the PC, and you would want a similar experience on Windows Phone 7. Actually, the experience will be a little different: you use the touchscreen for everything. A Windows Phone FPS game also needs to define the camera first. The difference from a third-person shooter (TPS) camera is that, with an FPS camera, you update the position of the camera itself, whereas with a TPS camera you make the camera follow the updating position of the main player object at a reasonable distance. In a PC FPS game, you use the arrow keys to move the player's position and the mouse to change the view direction. In Windows Phone 7, we can use different regions of the touchscreen to move, and use the FreeDrag gesture to update the view.
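The view update this recipe builds toward can be sketched as follows, using the angle and position fields declared in step 1; the forward-vector math is one common way to build an FPS view, not necessarily the exact code the recipe arrives at:

[code]
// Rotate the camera's forward direction by the accumulated angles
Matrix rotationMatrix = Matrix.CreateRotationX(angle.X) *
    Matrix.CreateRotationY(angle.Y);
Vector3 forward = Vector3.Transform(Vector3.Forward, rotationMatrix);
// The camera looks from its position along the rotated forward vector
view = Matrix.CreateLookAt(position, position + forward, Vector3.Up);
[/code]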

How to do it…

Now, let’s begin the exciting work:

  1. Create a Windows Phone Game project in Visual Studio 2010, and change the name from Game1.cs to FPSCameraGame.cs. Then, add the 3D models box.fbx and tree.fbx and the XNA font object gameFont.font to the content project. After this preparation work, insert the following variables as fields of the class:
    [code]
    // Game Font
    SpriteFont spriteFont;
    // Camera View matrix
    Matrix view;
    // Camera Projection matrix
    Matrix projection;
    // Position of Camera
    Vector3 position;
    // Models
    Model modelTree;
    Model modelBox;
    // Hit regions on the touchscreen
    Rectangle recUp;
    Rectangle recDown;
    Rectangle recRight;
    Rectangle recLeft;
    // Angle for rotation
    Vector3 angle;
    // Gesture delta value
    Vector2 gestureDelta;
    [/code]
  2. You need to initialize the camera View and Projection matrices and the hit regions on the touchscreen. Now, add the following lines to the Initialize() method:
    [code]
    angle = new Vector3();
    // Enable the FreeDrag gesture
    TouchPanel.EnabledGestures = GestureType.FreeDrag;
    // Define the camera position and the target position
    position = new Vector3(0, 40, 50);
    Vector3 target = new Vector3(0, 0, 0);
    // Create the camera View matrix and Projection matrix
    view = Matrix.CreateLookAt(position, target, Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio, 1, 1000.0f);
    // Define the four hit regions on touchscreen
    recUp = new Rectangle(GraphicsDevice.Viewport.Width / 4, 0,
    GraphicsDevice.Viewport.Width / 2,
    GraphicsDevice.Viewport.Height / 2);
    recDown = new Rectangle(GraphicsDevice.Viewport.Width / 4,
    GraphicsDevice.Viewport.Height / 2,
    GraphicsDevice.Viewport.Width / 2,
    GraphicsDevice.Viewport.Height / 2);
    recRight = new Rectangle(GraphicsDevice.Viewport.Width -
    GraphicsDevice.Viewport.Width / 4, 0,
    GraphicsDevice.Viewport.Width / 4,
    GraphicsDevice.Viewport.Height);
    recLeft = new Rectangle(0, 0,
    GraphicsDevice.Viewport.Width -
    GraphicsDevice.Viewport.Width / 4,
    GraphicsDevice.Viewport.Height);
    [/code]
  3. In this step, load the models and the font into your game by adding the following code to the LoadContent() method:
    [code]
    modelBox = Content.Load<Model>("box");
    modelTree = Content.Load<Model>("Tree");
    spriteFont = Content.Load<SpriteFont>("gameFont");
    [/code]
  4. Add the core logic for updating the FPS camera to the Update() method. This code reacts to the tap and FreeDrag gestures to change the camera view:
    [code]
    // Get the touch data
    TouchCollection touches = TouchPanel.GetState();
    // Check whether the tapped point is inside a hit region
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    // Get the tapped position
    Point point = new Point((int)touches[0].Position.X,
    (int)touches[0].Position.Y);
    // Check whether the point is inside the UP region
    if(recUp.Contains(point))
    {
    // Move the camera forward
    view.Translation += new Vector3(0, 0, 5);
    }
    // Check whether the point is inside the DOWN region
    else if (recDown.Contains(point))
    {
    // Move the camera backward
    view.Translation += new Vector3(0, 0, -5);
    }
    // Check whether the point is inside the LEFT region
    else if (recLeft.Contains(point))
    {
    // Rotate the camera around Y in clockwise
    view *= Matrix.CreateRotationY(
    MathHelper.ToRadians(-10));
    }
    // Check whether the point is inside the RIGHT region
    else if (recRight.Contains(point))
    {
    // Rotate the camera around Y in counter-
    // clockwise
    view *= Matrix.CreateRotationY(
    MathHelper.ToRadians(10));
    }
    }
    // Check the available gestures
    while (TouchPanel.IsGestureAvailable)
    {
    // Read the on-going gesture
    GestureSample gestures = TouchPanel.ReadGesture();
    switch (gestures.GestureType)
    {
    // If the GestureType is FreeDrag
    case GestureType.FreeDrag:
    // Read the Delta.Y to angle.X, Delta.X to angle.Y
    // Because the rotation value around axis Y
    // depends on the Delta changing on axis X
    angle.X = gestures.Delta.Y * 0.001f;
    angle.Y = gestures.Delta.X * 0.001f;
    gestureDelta = gestures.Delta;
    // Identify the view and rotate it
    view *= Matrix.Identity;
    view *= Matrix.CreateRotationX(angle.X);
    view *= Matrix.CreateRotationY(angle.Y);
    // Reset the angle to next coming gesture.
    angle.X = 0;
    angle.Y = 0;
    break;
    }
    }
    [/code]
  5. Render the models on the screen. We define a DrawModel() method, which is called from the main Draw() method to show the models:
    [code]
    public void DrawModel(Model model)
    {
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);
    // Draw the model. A model can have multiple meshes.
    foreach (ModelMesh mesh in model.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.EnableDefaultLighting();
    effect.World = transforms[mesh.ParentBone.Index];
    effect.View = view;
    effect.Projection = projection;
    }
    mesh.Draw();
    }
    }
    [/code]
  6. Then insert the following code to the Draw() method:
    [code]
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    GraphicsDevice.BlendState = BlendState.Opaque;
    DrawModel(modelTree);
    DrawModel(modelBox);
    spriteBatch.Begin();
    spriteBatch.DrawString(spriteFont, gestureDelta.ToString(),
    new Vector2(0, 0), Color.White);
    spriteBatch.End();
    [/code]
  7. Now, build and run the application. It will look similar to the following screenshots. Flick on the screen and you will see a different view:
    build and run the application

How it works…

In step 1, we declare two matrices, one for the camera's View matrix and one for its Projection matrix. A Vector3 variable, position, holds the camera position. The two Model variables, modelTree and modelBox, will be used to load the 3D models. The four Rectangle variables represent the Up, Down, Right, and Left hit regions on the Windows Phone 7 touchscreen. The angle variable tells the game how to rotate the View matrix, and the last variable, gestureDelta, records the actual delta value of each gesture.

In step 2, the initialization process, you enable GestureType.FreeDrag so that the view rotation can be changed in the Update() method. Then you define the camera View and Projection matrices. The block of code after that defines the hit regions on the screen. You can understand the basic layout from the following figure:

 GestureType.FreeDrag

We use four rectangles to divide the screen into four parts: UP, LEFT, RIGHT, and DOWN. The UP and DOWN rectangles are half of the screen width across (in landscape mode) and half of the screen height tall. The effective LEFT and RIGHT regions are each a quarter of the screen width across and the full screen height tall; note that recLeft is actually declared three quarters of the screen width wide, but because the regions are tested in UP, DOWN, LEFT, RIGHT order, any point in its overlap with UP or DOWN is claimed first, leaving only the left quarter strip. Given the width and height of each rectangle, it is easy to work out their start positions.
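
The region layout and the else-if test order can be sketched in plain Python (not XNA), assuming a hypothetical 800x480 landscape screen. The rectangles use the same arithmetic as the Initialize() code, and classify() checks them in the same UP, DOWN, LEFT, RIGHT order as step 4:

```python
# Illustrative sketch of the hit-region test, assuming an 800x480 screen.
W, H = 800, 480

def contains(r, p):
    """Rectangle containment with an exclusive right/bottom edge,
    as in XNA's Rectangle.Contains."""
    x, y, w, h = r
    px, py = p
    return x <= px < x + w and y <= py < y + h

rec_up    = (W // 4, 0,       W // 2,     H // 2)
rec_down  = (W // 4, H // 2,  W // 2,     H // 2)
rec_right = (W - W // 4, 0,   W // 4,     H)
rec_left  = (0, 0,            W - W // 4, H)   # declared 3/4 wide

def classify(point):
    """Same else-if order as the Update() code in step 4."""
    if contains(rec_up, point):
        return "UP"
    elif contains(rec_down, point):
        return "DOWN"
    elif contains(rec_left, point):
        return "LEFT"
    elif contains(rec_right, point):
        return "RIGHT"
    return None
```

Because UP and DOWN are tested first, the wide recLeft only ever catches points in the left quarter strip, matching the figure.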

In step 3, all models and fonts must be loaded through the ContentManager, the same class you have used for loading 2D images.

In step 4, the part of the code before the while loop gets the tapped position, checks which of the four hit regions it falls in, and performs the corresponding operation: if the tapped position is inside the UP or DOWN rectangle, we translate the camera; if it is inside the LEFT or RIGHT rectangle, we rotate the camera view. The next part reacts to the FreeDrag gesture, which changes the direction of the camera freely. After checking that a gesture is available, you read it and determine which gesture type is taking place; here, we handle GestureType.FreeDrag. If you drag horizontally, the delta changes along the X axis and you rotate the camera around the Y axis by that delta; if you drag vertically, the delta changes along the Y axis and you rotate the camera around the X axis. Following this rule, we assign Delta.X to angle.Y for the yaw rotation around the Y axis and Delta.Y to angle.X for the pitch rotation around the X axis. Once the gesture delta values are ready, we rotate the camera: Matrix.CreateRotationX() and Matrix.CreateRotationY() rotate the view around the X and Y axes respectively. Finally, we reset the angle for the next gesture.
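
The drag-to-rotation mapping can be sketched in plain Python (not XNA): the horizontal drag distance (Delta.X) becomes a small yaw around the Y axis and the vertical drag distance (Delta.Y) becomes a small pitch around the X axis, both scaled by 0.001 as in step 4. The rotation helper follows the right-handed convention XNA's Matrix.CreateRotationY uses.

```python
# Illustrative sketch of mapping FreeDrag deltas to rotation angles.
import math

def drag_to_angles(delta_x, delta_y, scale=0.001):
    """Return (pitch_about_x, yaw_about_y) in radians for one drag sample."""
    return delta_y * scale, delta_x * scale

def rotate_about_y(v, angle):
    """Rotate a 3D vector around the Y axis (right-handed, as in XNA)."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (x * c + z * s, y, -x * s + z * c)

# A 200-pixel horizontal drag yaws the forward vector by 0.2 radians.
pitch, yaw = drag_to_angles(delta_x=200, delta_y=0)
forward = (0.0, 0.0, -1.0)
turned = rotate_about_y(forward, yaw)
```

A purely horizontal drag produces no pitch, only yaw, which is exactly why the code routes Delta.X into angle.Y.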

In step 5, we copy the absolute bone transforms of the model and apply the effect settings to every mesh in the model.

In step 6, you may be curious about DepthStencilState. As the XNA SDK explains, the depth stencil state controls how the depth buffer and the stencil buffer are used.

During rendering, the z position (or depth) of each pixel is stored in the depth buffer. When rendering pixels more than once—such as when objects overlap—depth data is compared between the current pixel and the previous pixel to determine which pixel is closer to the camera. When a pixel passes the depth test, the pixel color is written to a render target and the pixel depth is written to the depth buffer.

A depth buffer may also contain stencil data, which is why a depth buffer is often called a depth-stencil buffer. Use a stencil function to compare a reference stencil value—a global value you set—to the per-pixel value in the stencil buffer to mask which pixels get saved and which are discarded.

The depth buffer stores floating-point depth or z data for each pixel while the stencil buffer stores integer data for each pixel. The depth-stencil state class, DepthStencilState, contains the state that controls how depth and stencil data impact rendering.
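
The depth test described above can be sketched as a toy software rasterizer in plain Python. Depths here are raw z distances rather than the normalized values real hardware stores; the comparison uses less-or-equal, matching DepthStencilState.Default.

```python
# Toy depth-buffer sketch: each pixel keeps the fragment closest to the camera.

def render(fragments, width, height, far=float("inf")):
    """fragments: list of (x, y, depth, color). Returns color/depth buffers."""
    color_buffer = {(x, y): None for x in range(width) for y in range(height)}
    depth_buffer = {(x, y): far for x in range(width) for y in range(height)}
    for x, y, depth, color in fragments:
        # LessEqual depth comparison, as in DepthStencilState.Default
        if depth <= depth_buffer[(x, y)]:
            depth_buffer[(x, y)] = depth
            color_buffer[(x, y)] = color
    return color_buffer, depth_buffer

# Two overlapping fragments at pixel (0, 0): the nearer red one wins,
# even though the farther blue one is drawn last.
frags = [(0, 0, 5.0, "red"), (0, 0, 9.0, "blue")]
colors, depths = render(frags, width=1, height=1)
```

The blue fragment fails the depth test against the stored depth of 5.0, so draw order no longer matters for overlapping opaque geometry.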

Implementing a round rotating camera in a 3D game

When a 3D game reaches its end, the camera sometimes rises and rotates around the player. Likewise, when a 3D game begins, the camera may fly in very fast from a distant point to the player's position, like a Hollywood movie shot. It is an impressive effect, and in this recipe you will learn how to create it.

How to do it…

  1. First of all, create a Windows Phone Game project and change the name from Game1.cs to RoundRotateCameraGame.cs. Then, add two 3D models, tree.fbx and box.fbx, to the content project.
  2. Declare the variables used in the game in the RoundRotateCameraGame class:
    [code]
    // View matrix for camera
    Matrix view;
    // Projection matrix for camera
    Matrix projection;
    // Camera position
    Vector3 position;
    // Tree and box models
    Model modelTree;
    Model modelBox;
    [/code]
  3. Define the View and Projection matrices and add the following code to the Initialize() method:
    [code]
    // Camera position
    position = new Vector3(0, 40, 50);
    // Camera lookat target
    Vector3 target = new Vector3(0, 0, 0);
    // Define the View matrix
    view = Matrix.CreateLookAt(position, target, Vector3.Up);
    // Define the Projection matrix
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio, 1, 1000.0f);
    [/code]
  4. We load and initialize the 3D models and insert the following lines into LoadContent():
    [code]
    modelBox = Content.Load<Model>("box");
    modelTree = Content.Load<Model>("Tree");
    [/code]
  5. This step is the most important one for rotating the camera. Paste the following code into the Update() method:
    [code]
    // Get the game time
    float time = (float)gameTime.TotalGameTime.TotalSeconds;
    // Get a rotation value between -0.1 and +0.1 using Sin(time)
    Matrix rotate = Matrix.CreateRotationY(
    (float)Math.Sin(time) * 0.1f);
    // Update the camera's position according to the rotation
    // value
    position = (Matrix.CreateTranslation(position) *
    rotate).Translation;
    view = Matrix.CreateLookAt(position, Vector3.Zero,
    Vector3.Up);
    [/code]
  6. The last step is to draw the models on the screen. We define a model-drawing method:
    [code]
    public void DrawModel(Model model)
    {
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);
    // Draw the model. A model can have multiple meshes, so
    // loop.
    foreach (ModelMesh mesh in model.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.EnableDefaultLighting();
    effect.World =
    transforms[mesh.ParentBone.Index];
    effect.View = view;
    effect.Projection = projection;
    }
    mesh.Draw();
    }
    }
    [/code]
  7. Then we add the other code to the Draw() method:
    [code]
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;
    GraphicsDevice.BlendState = BlendState.Opaque;
    DrawModel(modelTree);
    DrawModel(modelBox);
    [/code]
  8. All done! Build and run the application. You will see a rotating camera around the 3D objects, as shown in the following screenshots:
    rotating camera around the 3D objects

How it works…

In step 2, we declare the view and projection matrices for the camera and the position vector for the camera’s location. The last two model variables will be used to load the 3D model objects.

In step 3, the camera is located at (X:0, Y:40, Z:50), looking toward (X:0, Y:0, Z:0) with an up vector of (0, 1, 0); the Projection matrix uses a 45-degree field of view with near and far planes at 1 and 1000.

In step 4, you should always load the content from ContentManager with different types as you need. Here, the type is Model—a 3D object.

In step 5, we read the game time so that the camera rotates automatically as time passes. We then use Math.Sin() to bound the value, because without it the time would keep increasing and the camera would rotate faster and faster. Matrix.CreateRotationY() receives a radian value to rotate around the Y axis; since Math.Sin() returns a value between -1 and +1, the angle here stays between -0.1 and +0.1. The last part updates the view matrix: we translate and rotate the camera's position, then build a new view matrix from it.
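
A quick numeric check in plain Python shows why Math.Sin() is needed: the elapsed time grows without bound, but sin(time) * 0.1 always stays within [-0.1, +0.1], so the per-frame rotation never accelerates no matter how long the game runs.

```python
# Illustrative check of the bounded rotation angle from step 5.
import math

def rotation_angle(total_seconds):
    """Per-frame Y rotation used by the round rotating camera."""
    return math.sin(total_seconds) * 0.1

# Sample the angle at small and very large game times.
samples = [rotation_angle(t) for t in (0.0, 1.5, 100.0, 123456.0)]
bounded = all(-0.1 <= a <= 0.1 for a in samples)
```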

In step 6, this is the basic code for drawing a static model, which we have discussed in earlier recipes. Here you just need to pay attention to the View and Projection matrices, since these two matrices determine the rendered result.

In step 7, notice the GraphicsDevice.DepthStencilState, which is important for the rendering order.

Implementing a chase camera

A chase camera moves smoothly around a 3D object, and no matter how the camera view is changed, the camera restores itself to its original position. This kind of camera is useful for a racing game or an acceleration effect. In this recipe, you will learn how to make your own chase camera in Windows Phone 7.

How to do it…

  1. Create a Windows Phone Game project in Visual Studio 2010 and change the name from Game1.cs to ChaseCameraGame.cs. Then add the box.fbx 3D model to the content project. After this initial work, insert the following code into the ChaseCameraGame class as fields:
    [code]
    // Loading for box model
    Model boxModel;
    // Camera View and Projection matrix
    Matrix view;
    Matrix projection;
    // Camera’s position
    Vector3 position;
    // Camera look at target
    Vector3 target;
    // Offset distance from the target.
    Vector3 offsetDistance;
    // Yaw, Pitch values
    float yaw;
    float pitch;
    // Angle delta for GestureType.FreeDrag
    Vector3 angle;
    [/code]
  2. Instantiate the variables. Add the following lines into the Initialize() method:
    [code]
    // Enable the FreeDrag gesture type
    TouchPanel.EnabledGestures = GestureType.FreeDrag;
    // Define the camera position
    position = new Vector3(0, 1000, 1000);
    // Define the target position
    target = new Vector3(0, 0, 0);
    // the offset from target
    offsetDistance = new Vector3(0, 50, 100);
    yaw = 0.0f;
    pitch = 0.0f;
    // Identify the camera View matrix
    view = Matrix.Identity;
    // Define the camera Projection matrix
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    1f, 1000f);
    // Initialize the angle
    angle = new Vector3();
    [/code]
  3. Load the box model in the LoadContent() method:
    [code]
    boxModel = Content.Load<Model>("box");
    [/code]
  4. Update the chase camera. First, we define the UpdateView() method, as follows:
    [code]
    private void UpdateView(Matrix World)
    {
    // Normalize the right and up vectors of the camera world
    // matrix (World.Right.Normalize() would only mutate a copy,
    // since Matrix.Right is a property returning a struct)
    Vector3 right = World.Right;
    right.Normalize();
    Vector3 up = World.Up;
    up.Normalize();
    // Assign the actual world matrix translation to target
    target = World.Translation;
    // Offset the target along the right vector by yaw
    target += right * yaw;
    // Offset the target along the up vector by pitch
    target += up * pitch;
    // Interpolate the position in every frame until it
    // reaches the offset distance from the target
    position = Vector3.SmoothStep(position, offsetDistance,
    0.15f);
    // Ease the yaw value back toward 0 in every frame
    yaw = MathHelper.SmoothStep(yaw, 0f, 0.1f);
    // Ease the pitch value back toward 0 in every frame
    pitch = MathHelper.SmoothStep(pitch, 0f, 0.1f);
    // Update the View matrix.
    view = Matrix.CreateLookAt(position, target, up);
    }
    [/code]
  5. In the Update() method, we insert the following code:
    [code]
    // Check the available gestures
    while (TouchPanel.IsGestureAvailable)
    {
    // Read the on-going gesture
    GestureSample gestures = TouchPanel.ReadGesture();
    // Make sure which gesture type is taking place
    switch (gestures.GestureType)
    {
    // If the gesture is GestureType.FreeDrag
    case GestureType.FreeDrag:
    // Read the Delta.Y to angle.X, Delta.X to angle.Y
    // Because the rotation value around axis Y
    // depends on the Delta changing on axis X
    angle.Y += gestures.Delta.X;
    angle.X += gestures.Delta.Y;
    // assign the angle value to yaw and pitch
    yaw = angle.Y;
    pitch = angle.X;
    // Reset the angle value for next FreeDrag gesture
    angle.Y = 0;
    angle.X = 0;
    break;
    }
    }
    // Update the viewMatrix
    UpdateView(Matrix.Identity);
    [/code]
  6. The final step is to draw the model. The drawing code will be as follows:
    [code]
    protected override void Draw(GameTime gameTime)
    {
    GraphicsDevice.Clear(Color.CornflowerBlue);
    // The following three lines are to ensure that the
    // models
    // are drawn correctly
    GraphicsDevice.DepthStencilState =
    DepthStencilState.Default;
    GraphicsDevice.BlendState = BlendState.AlphaBlend;
    DrawModel(boxModel);
    base.Draw(gameTime);
    }
    // Draw the model
    private void DrawModel(Model model)
    {
    Matrix[] modelTransforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(modelTransforms);
    foreach (ModelMesh mesh in model.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.EnableDefaultLighting();
    effect.World =
    modelTransforms[mesh.ParentBone.Index];
    effect.View = view;
    effect.Projection = projection;
    }
    mesh.Draw();
    }
    }
    [/code]
  7. Now, build and run the application. You will see the application runs similar to the following screenshots:
    build and run the application

How it works…

In step 1, we declare a boxModel object for loading the box model, and the view and projection matrices for the camera. The position vector specifies the camera position, and offsetDistance indicates the camera's distance from the target. The yaw and pitch variables represent the rotation values of the camera, and angle stores the actual value that the gesture generates.

In step 2, the initialization phase, you enable the FreeDrag gesture, give the camera its startup position and target, define the offset distance from the target, and assign initial values to yaw and pitch. You then set the camera View matrix to the identity, which will later be updated with the camera's rotation, specify the camera Projection matrix, and instantiate the angle variable.

In step 4, the UpdateView() method does the actual rotation and chase operations based on the gesture values. First, we normalize the right and up vectors of the world matrix so the directions are unit length and easy to work with. Then we assign the world translation to the target variable, which is used as the camera's look-at position. For the camera rotation, we offset the target along the world's right vector by yaw, which turns the view around the Y axis, and along the up vector by pitch, which tilts it around the X axis. Next, Vector3.SmoothStep() with a factor of 0.15 moves the camera's position a little closer to the predefined offset distance in every frame. MathHelper.SmoothStep() generates smooth values between a start and an end value; calling it with a factor of 0.1 every frame eases the current yaw value back toward 0, so the camera rotates back to its original position around the Y axis. Similarly, we ease the pitch value back toward 0. The final step rebuilds the view matrix from the position and target values.
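
The easing behavior can be sketched in plain Python. The cubic t*t*(3 - 2*t) below is the smooth-step curve as commonly described for XNA's MathHelper.SmoothStep; applying it every frame with a small fixed amount is what pulls yaw and pitch back toward 0 and snaps the chase camera back behind its target after a drag.

```python
# Illustrative sketch of per-frame smooth-step easing toward zero.

def smooth_step(start, end, amount):
    """Cubic smooth-step easing between start and end (amount clamped to [0, 1])."""
    t = max(0.0, min(1.0, amount))
    t = t * t * (3.0 - 2.0 * t)
    return start + (end - start) * t

# Simulate the per-frame decay of a 30-degree yaw offset.
yaw = 30.0
history = [yaw]
for _ in range(120):  # two seconds at 60 frames per second
    yaw = smooth_step(yaw, 0.0, 0.1)
    history.append(yaw)
```

Each frame removes the same small fraction of the remaining offset, so the yaw decays smoothly instead of snapping back in one step.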

In step 5, the first part handles the FreeDrag gesture. When a gesture is caught, the code stores the gesture delta values in angle: angle.Y accumulates Delta.X, the horizontal drag that drives the yaw rotation around the Y axis, while angle.X accumulates Delta.Y, the vertical drag that drives the pitch rotation around the X axis. We then pass angle.Y to the yaw variable and angle.X to the pitch variable, and reset angle for the next gesture. Eventually, we call UpdateView() to update the camera view.

Using culling to remove the unseen parts and texture mapping

In a real 3D game, a large number of objects exist in the game world, and every object has hundreds or thousands of faces. Rendering all of these faces carries a big performance cost. Therefore, we use the view frustum to filter out objects that lie outside it, and then use a culling algorithm to remove the unseen parts of the remaining objects. These approaches cut the unnecessary work in a 3D game and improve performance significantly. In this recipe, you will learn how to use the culling method in Windows Phone 7 game development.

Getting ready

In Windows Phone 7 XNA, the culling method uses the back-face culling algorithm to remove unseen parts. This method is based on the observation that, if all objects in the world are closed, the polygons that do not face the viewer cannot be seen. This translates directly into a test on the angle between the face normal and the vector from the face toward the viewer: if the angle is more than 90 degrees (a negative dot product), the polygon faces away and can be discarded. Back-face culling is performed automatically by XNA, and it can be expected to cull roughly half of the polygons inside the view frustum.
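
The angle test above reduces to a single dot product, which can be sketched in plain Python (an illustration of the idea, not XNA's internal implementation, which works from triangle winding order):

```python
# Illustrative back-face test: cull a face whose normal points away
# from the camera (angle to the camera vector exceeds 90 degrees).

def is_back_facing(normal, to_camera):
    """True when the dot product of the face normal and the
    face-to-camera vector is negative, i.e. the face can be culled."""
    nx, ny, nz = normal
    cx, cy, cz = to_camera
    return nx * cx + ny * cy + nz * cz < 0

# Camera on the +Z axis looking toward the origin.
to_cam = (0.0, 0.0, 1.0)
front = is_back_facing((0.0, 0.0, 1.0), to_cam)   # normal toward camera: kept
back = is_back_facing((0.0, 0.0, -1.0), to_cam)   # normal away from camera: culled
```

For a closed mesh, roughly half the faces fail this test at any moment, which is where the "roughly half of the polygons" estimate comes from.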

How to do it…

Now, let’s see how Windows Phone 7 XNA performs the culling method:

  1. Create a Windows Phone Game project in Visual Studio 2010, change the name from Game1.cs to CullingGame.cs, and add the Square.png file to the content project.
  2. Declare the variables for the project. Add the following lines to the CullingGame class:
    [code]
    // Texture
    Texture2D texSquare;
    // Camera’s Position
    Vector3 position;
    // Camera look at target
    Vector3 target;
    //Camera World matrix
    Matrix world;
    //Camera View matrix
    Matrix view;
    //Camera Projection matrix
    Matrix projection;
    BasicEffect basicEffect;
    // Vertex Structure
    VertexPositionTexture[] vertexPositionTextures;
    // Vertex Buffer
    VertexBuffer vertexBuffer;
    // Rotation for the texture
    float rotation;
    // Translation for the texture
    Matrix translation;
    // Whether to keep rotating the texture
    bool KeepRotation = false;
    [/code]
  3. Initialize the basic effect, the camera, and the vertexPositionTextures array, and set the culling mode in Windows Phone 7 XNA. Insert the following code into the Initialize() method:
    [code]
    // Initialize the basic effect
    basicEffect = new BasicEffect(GraphicsDevice);
    // Define the world matrix of texture
    translation = Matrix.CreateTranslation(new Vector3(25, 0, 0));
    // Initialize the camera position and look-at target
    position = new Vector3(0, 0, 200);
    target = Vector3.Zero;
    // Initialize the camera transformation matrices
    world = Matrix.Identity;
    view = Matrix.CreateLookAt(position, target, Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    1, 1000);
    // Allocate the VertexPositionTexture array
    vertexPositionTextures = new VertexPositionTexture[6];
    // Define the vertex information
    vertexPositionTextures[0] = new VertexPositionTexture(
    new Vector3(-25, -25, 0), new Vector2(0, 1));
    vertexPositionTextures[1] = new VertexPositionTexture(
    new Vector3(-25, 25, 0), new Vector2(0, 0));
    vertexPositionTextures[2] = new VertexPositionTexture(
    new Vector3(25, -25, 0), new Vector2(1, 1));
    vertexPositionTextures[3] = new VertexPositionTexture(
    new Vector3(-25, 25, 0), new Vector2(0, 0));
    vertexPositionTextures[4] = new VertexPositionTexture(
    new Vector3(25, 25, 0), new Vector2(1, 0));
    vertexPositionTextures[5] = new VertexPositionTexture(
    new Vector3(25, -25, 0), new Vector2(1, 1));
    // Define the vertex buffer
    vertexBuffer = new VertexBuffer(
    GraphicsDevice,
    VertexPositionTexture.VertexDeclaration, 6,
    BufferUsage.None);
    // Set the VertexPositionTexture array to vertex buffer
    vertexBuffer.SetData<VertexPositionTexture>(
    vertexPositionTextures);
    // Set the cull mode
    RasterizerState rasterizerState = new RasterizerState();
    rasterizerState.CullMode = CullMode.CullCounterClockwiseFace;
    GraphicsDevice.RasterizerState = rasterizerState;
    // Set graphic sample state to PointClamp
    graphics.GraphicsDevice.SamplerStates[0] =
    SamplerState.PointClamp;
    [/code]
  4. Load the square texture in the LoadContent() method:
    [code]
    texSquare = Content.Load<Texture2D>("Square");
    [/code]
  5. React to taps to rotate the square texture. Add the following code to the Update() method:
    [code]
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    Point point = new Point((int)touches[0].Position.X,
    (int)touches[0].Position.Y);
    if (GraphicsDevice.Viewport.Bounds.Contains(point))
    {
    KeepRotation = true;
    }
    }
    if (KeepRotation)
    {
    rotation += 0.1f;
    }
    [/code]
  6. Draw the texture on the screen and add the following lines to the Draw() method:
    [code]
    // Set the matrix information to basic effect
    basicEffect.World = world * translation *
    Matrix.CreateRotationY(rotation);
    basicEffect.View = view;
    basicEffect.Projection = projection;
    // Set the texture
    basicEffect.TextureEnabled = true;
    basicEffect.Texture = texSquare;
    // Iterate the passes in the basic effect
    foreach (var pass in basicEffect.CurrentTechnique.Passes)
    {
    pass.Apply();
    GraphicsDevice.SetVertexBuffer(vertexBuffer, 0);
    GraphicsDevice.DrawUserPrimitives<VertexPositionTexture>
    (PrimitiveType.TriangleList, vertexPositionTextures,
    0, 2);
    }
    [/code]
  7. Now, build and run the application. The application will run similar to the following screenshot:
    Coordinates and View
  8. When you tap on the screen, the texture will rotate, similar to the following screenshots. The last one is blank because the rotation has reached 90 degrees, leaving the square edge-on to the camera:
    rotation is 90 degrees

How it works…

In step 2, basicEffect represents the effect used to render the texture. The VertexPositionTexture array positions and scales the texture on screen, and the VertexBuffer holds that data for the graphics device. The rotation variable determines how much the quad rotates around the Y axis, and the translation matrix indicates the texture's world position. The bool KeepRotation is a flag that tells the object whether to keep rotating; it is set by touching the Windows Phone 7 screen.

In step 3, notice the initialization of the VertexPositionTexture array. The texture is mapped onto a square composed of two triangles with six vertices in total, and we define the position and the texture UV coordinates of every vertex. You can find a detailed explanation of texture coordinates in a computer graphics introduction such as Computer Graphics with OpenGL by Donald D. Hearn, M. Pauline Baker, and Warren Carithers. After the vertex initialization, we create the vertex buffer with the VertexPositionTexture vertex declaration, pass 6 as the vertex count, and allow the buffer to be read and written by setting BufferUsage to None. Next, we fill the vertex buffer with the VertexPositionTexture data defined previously.

After that, the CullMode setting in RasterizerState controls the culling applied to the quad. Here, we set it to CullMode.CullCounterClockwiseFace: following the back-face algorithm, the face whose normal points toward the camera is kept, while the polygons facing backward are removed and never seen.

The last setting, on GraphicsDevice.SamplerStates, is also important. The SamplerState class determines how texture data is sampled. When covering a triangle mesh with a 2D texture, you supply texture coordinates that range from (0, 0), the upper-left corner, to (1, 1), the lower-right corner. You can also supply coordinates outside that range; depending on the texture address mode, the image is then clamped (the outside rim of pixels is repeated), tiled in a repeating pattern, or mirrored in a flip-flop effect. SamplerState supports Wrap, Mirror, and Clamp address modes.
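
The three address modes mentioned above can be sketched as plain functions of a single texture coordinate (an illustration of the standard address-mode math, not XNA's internal sampler code): Wrap keeps only the fractional part, Clamp pins the coordinate to the [0, 1] edge, and Mirror flip-flops it every whole unit.

```python
# Illustrative texture address modes for one coordinate outside [0, 1].
import math

def wrap(u):
    """Tile the texture: keep only the fractional part."""
    return u - math.floor(u)

def clamp(u):
    """Pin the coordinate to the edge (the outside rim of pixels repeats)."""
    return min(max(u, 0.0), 1.0)

def mirror(u):
    """Flip-flop the texture every whole unit."""
    period = u - 2.0 * math.floor(u / 2.0)  # position within a 2-wide cycle
    return period if period <= 1.0 else 2.0 - period
```

For example, the out-of-range coordinate 1.25 samples at 0.25 under Wrap, at 1.0 under Clamp, and at 0.75 under Mirror.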

In step 5, the code first checks whether the tapped position is inside the screen bounds; if so, it sets KeepRotation to true. While KeepRotation is true, the update increases the rotation value by 0.1 in every frame.

In step 6, BasicEffect.World translates and rotates the texture in 3D, while BasicEffect.View and Projection define the camera view. Then we assign the texture to the basic effect. When all the necessary settings are in place, the foreach loop applies each pass of the basic effect technique and draws the primitives. We set the vertexBuffer on the graphics device, although DrawUserPrimitives() actually takes the vertex array directly, so the draw call reads the vertex data from the vertexPositionTextures array.