IronPython: Interacting with COM Objects

An Overview of COM Access Differences with Python

COM access is an area where IronPython and Python take completely different approaches. In fact, it’s safe to say that any Python COM code you want to use definitely won’t work in IronPython. Python developers normally rely on a library such as Python for Windows Extensions (http://sourceforge.net/projects/pywin32/). This library, originally created by Mark Hammond (http://starship.python.net/crew/mhammond/win32/), includes not only COM support but also a really nice Python editor. You can see a basic example of using this library to access COM at http://www.boddie.org.uk/python/COM.html. Even if you download the required library and try to follow the tutorial, you won’t get past step 1. The tutorial works fine with standard Python, but doesn’t work at all with IronPython.

It’s important to remember that IronPython is a constantly moving target. The developers who support IronPython constantly come out with new features and functionality, as do the third parties that support it. You may find at some point that there’s a COM interoperability solution that does work for both Python and IronPython. The solution doesn’t exist today, but there’s always hope for tomorrow. If you do encounter such a solution, please be sure to contact me at [email protected]

Fortunately, IronPython developers aren’t left out in the cold. COM support is built right into IronPython in the form of the .NET Framework. An IronPython developer uses the same techniques as a C# or Visual Basic.NET developer to access COM — at least at a code level.

When you work with COM in Visual Studio in either a C# or Visual Basic.NET project, the IDE does a lot of the work for you. If you want to use a COM component in your application, you right-click References in Solution Explorer and choose Add Reference from the context menu. At this point, you see the Add Reference dialog box where you choose the COM tab shown in Figure 9-1.

When you highlight an item, such as the Windows Media Player, and click OK, the IDE adds the COM component to the References folder of Solution Explorer, as shown in Figure 9-2. The IDE writes code for you in the background that adds the COM component and makes it accessible. You’ll find this code in the .CSProj file and it looks something like this:

[code]
<COMReference Include="MediaPlayer">
<Guid>{22D6F304-B0F6-11D0-94AB-0080C74C7E95}</Guid>
<VersionMajor>1</VersionMajor>
<VersionMinor>0</VersionMinor>
<Lcid>0</Lcid>
<WrapperTool>tlbimp</WrapperTool>
<Isolated>False</Isolated>
<EmbedInteropTypes>True</EmbedInteropTypes>
</COMReference>
[/code]

Figure 9-1: The Add Reference dialog box provides you with a list of COM components you can use.

In addition, the IDE creates Interop.MediaPlayer.DLL, which resides in the project’s obj\x86\Debug or obj\x86\Release folder. This interoperability (interop for short) assembly makes it easy for you to access the COM component features.

Figure 9-2: Any reference you add appears in the References folder of Solution Explorer.

Of course, if the COM component you want to use is actually a control, you right-click the Toolbox instead and select Choose Items from the context menu. The COM Components tab looks much like the one shown in Figure 9-3.

In this case, check the controls you want to use and click OK. Again, the IDE does some work for you in the background to make the control accessible and usable. For example, it creates the same interop assembly as it would for a reference. You’ll see the control in the Toolbox, as shown in Figure 9-4.

The tasks that the IDE performs for you as part of adding a reference or Toolbox item when working with C# or Visual Basic.NET are manual tasks when working with IronPython. As you might imagine, all of this manual labor makes IronPython harder to use with COM than when you work with Python. While a Python developer simply imports a module and then writes a little specialized code, you’re saddled with creating interop assemblies and jumping through coding hoops.

Figure 9-3: COM components and controls can also appear in the Choose Toolbox Items dialog box.

Figure 9-4: The control or controls you selected appear in the Toolbox.

You do get something for the extra work, though. IronPython provides considerably more flexibility than Python does and you can use IronPython in more places. For example, you might find it hard to access Word directly in Python. The bottom line is that IronPython and Python are incompatible when it comes to COM support, so you can’t use all the online Python sources of information you normally rely on when performing a new task.

Choosing a Binding Technique

Before you can use a COM component, you must bind to it (create a connection to it). The act of binding gives you access to an instance of the component. You use binding to work with COM because, in actuality, you’re taking over another application. For example, you can use COM to create a copy of Word, do some work with it, save the resulting file, and then close Word — all without user interaction. A mistake that many developers make is thinking of COM as just another sort of class, but it works differently and you need to think about it differently. For the purposes of working with COM in IronPython, the act of binding properly is one of the more important issues. The following sections describe binding in further detail.

Understanding Early and Late Binding

When you work with a class, you create an instance of the class, set the resulting object’s properties, and then use methods to perform a particular task. COM lets you perform essentially the same set of steps in a process called early binding. When you work with early binding, you define how to access the COM object during design time. In order to do this, you instantiate an object based on the COM class.

These sections provide an extremely simplified view of COM. You can easily become mired in all kinds of details when working with COM because COM has been around for so long. For example, COM supports multiple interface types, which in turn determines the kind of binding you can perform. This chapter looks at just the information you need to work with COM from IronPython. If you want a better overview of COM, check the site at http://msdn.microsoft.com/library/ms809980.aspx. In fact, you can find an entire list of COM topics at http://msdn.microsoft.com/library/ms877981.aspx.

The COM approach relies on a technique called a virtual table (vtable) — essentially a list of interfaces that you can access, with IUnknown as the interface that’s common to all COM components. Your application gains access to the IUnknown interface and then calls the QueryInterface() method to obtain a list of other interfaces that the component supports (you can read more about this method at http://msdn.microsoft.com/library/ms682521.aspx). Using this approach means that your application can understand a component without really knowing anything about it at the outset.
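The mechanism is easier to see in miniature. The following plain-Python model is an illustration only; it doesn’t touch real COM, and the interface names are simply examples chosen to resemble the Windows Media Player interfaces discussed later.

```python
# Illustrative model only, not real COM: a plain-Python sketch of how
# QueryInterface() lets a caller discover the interfaces an object supports.
class IUnknown(object):
    _interfaces = {"IUnknown"}

    def query_interface(self, iid):
        """Return the object itself when it supports the requested interface."""
        if iid in self._interfaces:
            return self
        raise LookupError("E_NOINTERFACE: " + iid)

class MediaPlayer(IUnknown):
    # Example interface names; a real component advertises its own set.
    _interfaces = {"IUnknown", "IWMPCore", "IWMPControls"}

player = MediaPlayer()
core = player.query_interface("IWMPCore")   # succeeds: same object, new "view"
```

Asking for an unsupported interface fails, which is exactly how a caller learns what a component can and cannot do.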

It’s also possible to tell COM to create an instance of an object after the application is already running. This kind of access is called late binding because you bind after the application starts. In order to support late binding, a COM component must support the IDispatch interface. This interface lets you create the object using CreateObject(). Visual Basic was the first language product to rely on late binding. You can read more about IDispatch at http://msdn.microsoft.com/library/ms221608.aspx.

Late binding also offers the opportunity to gain access to a running copy of a COM component. For example, if the system currently has a copy of Excel running, you can access that copy, rather than create a new Excel object. In this case, you use GetObject() instead of CreateObject() to work with the object. If you call GetObject() where there isn’t any copy of the component already executing, you get an error message — Windows doesn’t automatically start a new copy of the application for you.
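IronPython has no built-in CreateObject() or GetObject(), but the .NET Framework supplies equivalents. The hedged sketch below assumes a Windows system with Excel installed; the COM calls only run under IronPython, where sys.platform reports "cli".

```python
# Hedged sketch: .NET equivalents of CreateObject() and GetObject().
import sys

def excel_progid():
    # ProgID that Windows resolves through the registry; version independent.
    return "Excel.Application"

if sys.platform == "cli":  # only under IronPython/.NET
    from System import Activator, Type
    from System.Runtime.InteropServices import Marshal

    # CreateObject() equivalent: start a new instance of the application.
    excel_type = Type.GetTypeFromProgID(excel_progid())
    excel = Activator.CreateInstance(excel_type)
    excel.Quit()

    # GetObject() equivalent: attach to an instance that's already running.
    # This raises a COMException when no copy of Excel is executing.
    running = Marshal.GetActiveObject(excel_progid())
```

Note that Marshal.GetActiveObject() mirrors the GetObject() behavior described above: it never starts a new copy of the application for you.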

If a COM component supports both the vtable and IDispatch technologies, then it has a dual interface that works with any current application language. Most COM components today are dual interface because adding both technologies is relatively easy and developers want to provide the greatest exposure for their components. However, it’s always a good idea to consider the kind of binding that your component supports. You can read more about dual interfaces at http://msdn.microsoft.com/library/ekfyh289.aspx.

Using Early Binding

As previously mentioned, using early binding means creating a reference to the COM component and then using that reference to interact with the component. IronPython doesn’t support the standard methods of early binding that you might have used in other languages. What you do instead is create an interoperability DLL and then import that DLL into your application. The “Defining an Interop DLL” section of the chapter describes this process in considerably more detail. Early binding provides the following benefits:

  • Faster execution: Generally, your application will execute faster if you use early binding because you rely on compiled code for the interop assembly. However, you won’t get the large benefits in speed that you see when working with C# or Visual Basic.NET because IronPython itself is interpreted.
  • Easier debugging: In most cases, using early binding reduces the complexity of your application, making it easier to debug. In addition, because much of the access code for the COM component resides in the interop assembly, you won’t have to worry about debugging it.
  • Fuller component access: Even though both early and late binding provide access to the component interfaces, trying to work through those interfaces in IronPython is hard. Using early binding provides you with tools that you can use to explore the interop assembly, and therefore discover more about the component before you use it.
  • Better access to enumerations and constants: Using early binding provides you with access to features that you might not be able to access when using late binding. In some cases, IronPython will actually hide features such as enumerations or constants when using late binding.
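In IronPython, the early-binding workflow described above boils down to referencing the interop assembly and importing its namespace. This is a hedged sketch: the WMPLib namespace follows the default naming rule covered later in this chapter, the WindowsMediaPlayerClass name is an assumption about the generated wrapper, and the clr calls only run under IronPython (sys.platform == "cli").

```python
# Hedged sketch: early binding by importing an interop assembly in IronPython.
import sys

def interop_namespace(typelib_filename):
    """Default namespace the import tools assign: base name + 'Lib'
    (for example, WMP.DLL becomes WMPLib)."""
    base = typelib_filename.rsplit(".", 1)[0]
    return base + "Lib"

if sys.platform == "cli":  # IronPython only; CPython lacks the clr module
    import clr
    clr.AddReferenceToFile(interop_namespace("WMP.DLL") + ".DLL")
    import WMPLib  # namespace defined inside the interop assembly
    player = WMPLib.WindowsMediaPlayerClass()  # coclass wrapper (name assumed)
```

Once the import succeeds, the component behaves much like any other .NET class, which is what makes the debugging and discovery benefits listed above possible.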

Using Late Binding

When using late binding, you create a connection to the COM component at run time by creating a new object or reusing a running object. Some developers prefer this kind of access because it’s less error prone than early binding, where runtime issues may not be apparent at design time. Here are some other reasons that you might use late binding.

  • More connectivity options: You can use late binding to create a connection to a new instance of a COM component (see the “Performing Late Binding Using Activator.CreateInstance()” section of this chapter) or to a running instance of the COM component.
  • Fewer modules: When you use late binding, you don’t need an interop assembly for each of the COM components you want to use, which decreases the size and complexity of your application.
  • Better version independence: Late binding relies on registry entries to make the connection. Consequently, when Windows looks up the string you use to specify the application, it looks for any application that satisfies that string. If you specify the Microsoft Excel 9.0 Object Library COM component (Office 2000 specific), Windows will substitute any newer version of Office on the system for the component you requested.
  • Fewer potential compatibility issues: Some environments don’t work well with interop assemblies. For example, you might be using IronPython within a Web-based application. In this case, the client machine would already have to have the interop assembly, too, and it probably doesn’t. In this case, using late binding allows your application to continue working when early binding would fail.

Defining an Interop DLL

Before you can do much with COM, you need to provide some means for .NET (managed code) and the component (native code) to talk. The wrapper code that marshals data from one environment to another, and that translates calls from one language to the other, is an interoperability (interop) assembly, which always appears as a DLL. Fortunately, you don’t have to write this code by hand because the task is somewhat mundane. Microsoft was able to automate the process required to create an interop DLL.

Of course, Microsoft couldn’t make the decision straightforward or simple. You use different utilities for controls and components. The Type Library Import (TLbImp) utility produces a DLL suitable for component work, while the ActiveX Import (AxImp) utility produces a pair of DLLs suitable for control work. In many cases, the decision is easy — a COM component that supports a visual interface should use AxImp. However, some COM components, such as Windows Media Player (WMP.DLL), are useful as either controls or components. The example in this chapter uses the control form because that’s the way you’ll use Windows Media Player most often, but it’s important to make the decision deliberately. The following sections describe how to use both the TLbImp and AxImp utilities.

Accessing the Visual Studio .NET Utilities

You want to create an interop assembly in the folder that you’ll use for your sample application. However, you also need access to the .NET utilities. The best way to gain this access is to open a Visual Studio command prompt by choosing Start ➪ Programs ➪ Microsoft Visual Studio 2010 ➪ Visual Studio Tools ➪ Visual Studio Command Prompt (2010). If you’re working with Vista or Windows 7, right-click the Visual Studio Command Prompt (2010) entry and choose Run As Administrator from the context menu to ensure you have the rights required to use the utilities. Windows will open a command prompt that provides the required access to the .NET utilities.

Understanding the Type Library Import Utility

Remember that you always use Type Library Import (TLbImp) for components, not for controls. Before you can use TLbImp, you need to know a bit more about it. Here’s the command line syntax for the tool:

[code]
TlbImp TypeLibName [Options]
[/code]

The TypeLibName argument is simply the filename of the COM component that you want to use to create an interop assembly. A COM component can have a number of file extensions, but the most common extensions are .DLL, .EXE, and .OCX.

The TypeLibName argument can specify a resource identifier when the library contains more than one resource. Simply follow the filename with a backslash and the resource number. For example, the command line TLbImp MyModule.DLL\1 would create an output assembly that contains only resource 1 in the MyModule.DLL file.

You can also include one or more options that modify the behavior of TLbImp. The following list describes these options.

  • /out:FileName: Provides the name of the file you want to produce as output. If you don’t provide this argument, the default is to add Lib to the end of the filename for the type library. For example, WMP.DLL becomes WMPLib.DLL.
  • /namespace:Namespace: Defines the namespace of the classes within the interop assembly. The default is to add Lib to the filename of the type library. For example, if the file has a name of WMP.DLL, the namespace is WMPLib.
  • /asmversion:Version: Specifies the file version number of the output assembly. This information appears on the Version tab of the file Properties dialog box shown in Figure 9-5. The default version number is 1.0.0.0.

You must specify a version number using dotted syntax. The four version number elements are: major version, minor version, build number, and revision number. For example, 1.2.3.4 would specify a major version number of 1, minor version number of 2, a build number of 3, and a revision number of 4.

  • /reference:FileName: Determines the name of the assembly that TLbImp uses to resolve references. There’s no default value. You may use this command line switch as many times as needed to provide a complete list of assemblies.
  • /tlbreference:FileName: Determines the name of the type library that TLbImp uses to resolve references. There’s no default value. You may use this command line switch as many times as needed to provide a complete list of assemblies.
  • /publickey:FileName: Specifies the name of a file containing a strong name public key used to sign the assembly. There’s no default value.
  • /keyfile:FileName: Specifies the name of a file containing a strong name key pair used to sign the assembly. There’s no default value.
  • /keycontainer:FileName: Specifies the name of a key container containing a strong name key pair used to sign the assembly. There’s no default value.
  • /delaysign: Sets the assembly to force a delay in signing. Use this option when you want to use the assembly for experimentation only.
Figure 9-5: Include version information for the assembly so others know about it.
  • /product:Product: Defines the name of the product that contains this assembly. This information appears on the Version tab of the file Properties dialog box shown in Figure 9-5. The default is to say that the assembly is imported from a specific type library.
  • /productversion:Version: Defines the product version number of the output assembly. This information appears on the Version tab of the file Properties dialog box shown in Figure 9-5. The default version number is 1.0.0.0.
  • /company:Company: Defines the name of the company that produced the output assembly. This information appears on the Version tab of the file Properties dialog box shown in Figure 9-5. There’s no default value.
  • /copyright:Copyright: Defines the copyright information that applies to the output assembly. This information appears on the Version tab of the file Properties dialog box shown in Figure 9-5. There’s no default value.
  • /trademark:Trademark: Defines the trademark and registered trademark information that applies to the output assembly. This information appears on the Version tab of the file Properties dialog box shown in Figure 9-5. There’s no default value.
  • /unsafe: Creates an output assembly that lacks runtime security checks. Using this option will make the assembly execute faster and reduce its size. However, you shouldn’t use this option for production systems because it does reduce the security features that the assembly would normally possess.
  • /noclassmembers: Creates an output assembly that has classes, but the classes have no members.
  • /nologo: Prevents the TLbImp utility from displaying a logo when it starts execution. This option is useful when performing batch processing.
  • /silent: Prevents the TLbImp utility from displaying any output, except error information. This option is useful when performing batch processing.
  • /silence:WarningNumber: Prevents the TLbImp utility from displaying output for the specified warning number. This option is useful when an assembly contains a number of warnings that you already know about and you want to see only the warnings that you don’t know about. You can’t use this option with the /silent command line switch.
  • /verbose: Tells the TLbImp utility to display every available piece of information about the process used to create the output assembly. This option is useful when you need to verify the assembly before placing it in a production environment or when you suspect a subtle error is causing application problems (or you’re simply curious).
  • /primary: Creates a Primary Interop Assembly (PIA). A COM component may have only one PIA and you must sign the PIA (use the /publickey, /keyfile, or /keycontainer switches to sign the assembly). See http://msdn.microsoft.com/library/aax7sdch.aspx for additional information.
  • /sysarray: Specifies that the assembly should import COM-style SAFEARRAYs as the managed System.Array type.
  • /machine:MachineType: Creates an assembly for the specified machine type. The valid inputs for this command line switch are:
    • X86
    • X64
    • Itanium
    • Agnostic
  • /transform:TransformName: Performs the specified transformations on the assembly. You may use any of these values as a transformation.
    • SerializableValueClasses: Forces TLbImp to mark all of the classes as serializable.
    • DispRet: Transforms [out, retval] parameters of methods on dispatch-only interfaces into return values.
  • /strictref: Forces TLbImp to use only the assemblies that you specify using the /reference command line switch, along with PIAs, to produce the output assembly, even if the source file contains other references. The output assembly might not work properly when you use this option.
  • /strictref:nopia: Forces TLbImp to use only the assemblies that you specify using the /reference command line switch to produce the output assembly, even if the source file contains other references. This command line switch ignores PIAs. The output assembly might not work properly when you use this option.
  • /VariantBoolFieldToBool: Converts all VARIANT_BOOL fields in structures to bool.
  • /? or /help: Displays a help message containing a list of command line options for the version of TLbImp that you’re using.
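To see how these switches combine in practice, here is a small, hedged helper that assembles a TLbImp command line; the option values are examples only, and the final call would only work from a Visual Studio command prompt on Windows, so it’s left commented out.

```python
# Hedged sketch: assembling a TLbImp command line from the options above.
import subprocess

def tlbimp_args(typelib, out=None, namespace=None, sysarray=False, silent=False):
    """Build the argument list for a TLbImp invocation."""
    args = ["TlbImp", typelib]
    if out:
        args.append("/out:" + out)
    if namespace:
        args.append("/namespace:" + namespace)
    if sysarray:
        args.append("/sysarray")
    if silent:
        args.append("/silent")
    return args

cmd = tlbimp_args("WMP.DLL", out="WMPComLib.DLL",
                  namespace="WMPComLib", sysarray=True)
# subprocess.call(cmd)  # uncomment on Windows with the .NET tools on the PATH
```

Building the command programmatically like this is also handy for batch processing, which is where the /nologo and /silent switches earn their keep.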

Understanding the ActiveX Import Utility

The example in this chapter relies on the ActiveX Import (AxImp) utility because it produces the files you need to create a control (with a visual interface) rather than a component. When you use this utility, you obtain two files as output. The first contains the same information you receive when using the TLbImp utility. The second, the one with the Ax prefix, contains the code for a control. Before you can use AxImp, you need to know a bit more about it. Here’s the command line syntax for the tool:

[code]
AxImp OcxName [Options]
[/code]

The OcxName argument is simply the filename of the COM component that you want to use to create a control version of an interop assembly. A COM component can have a number of file extensions, but the most common extensions are .DLL, .EXE, and .OCX. It’s uncommon for an OLE Control eXtension (OCX), a COM component with a visual interface, to have a .EXE file extension.

You can also include one or more options that modify the behavior of AxImp. The following list describes these options.

  • /out:FileName: Provides the name of the ActiveX library file you want to produce as output. If you don’t provide this argument, the default is to add Lib to the end of the filename for the type library. For example, WMP.DLL becomes WMPLib.DLL and AxWMPLib.DLL. Using this command line switch changes the name of the AxWMPLib.DLL file. For example, if you type AxImp WMP.DLL /out:WMPOut.DLL and press Enter, the utility now outputs WMPLib.DLL and WMPOut.DLL.
  • /publickey:FileName: Specifies the name of a file containing a strong name public key used to sign the assembly. There’s no default value.
  • /keyfile:FileName: Specifies the name of a file containing a strong name key pair used to sign the assembly. There’s no default value.
  • /keycontainer:FileName: Specifies the name of a key container containing a strong name key pair used to sign the assembly. There’s no default value.
  • /delaysign: Sets the assembly to force a delay in signing. Use this option when you want to use the assembly for experimentation only.
  • /source: Generates the C# source code for a Windows Forms wrapper. You don’t need to use this option when working in IronPython because the code doesn’t show how to use the wrapper — it simply shows the wrapper code itself.
  • /rcw:FileName: Specifies an assembly to use for Runtime Callable Wrapper (RCW) rather than generating a new one. In most cases, you want to generate a new RCW when working with IronPython.
  • /nologo: Prevents the AxImp utility from displaying a logo when it starts execution. This option is useful when performing batch processing.
  • /silent: Prevents the AxImp utility from displaying any output, except error information. This option is useful when performing batch processing.
  • /verbose: Tells the AxImp utility to display every available piece of information about the process used to create the output assembly. This option is useful when you need to verify the assembly before placing it in a production environment or when you suspect a subtle error is causing application problems (or you’re simply curious).
  • /? or /help: Displays a help message containing a list of command line options for the version of AxImp that you’re using.

Creating the Windows Media Player Interop DLL

Now that you have an idea of how to use the AxImp utility, it’s time to see the utility in action. The following command line creates an interop assembly for the Windows Media Player.

[code]
AxImp %SystemRoot%\System32\WMP.DLL
[/code]

This command line doesn’t specify any options. It does include %SystemRoot%, which points to the Windows directory on your machine (making it possible to use the command line on more than one system, even if those systems have slightly different configurations). When you execute this command line, you see the AxImp utility logo. After a few minutes’ work, you’ll see one or more warning or error messages if the AxImp utility encounters problems. Eventually, you see a success message, as shown in Figure 9-6.

Figure 9-6: AxImp tells you that it has generated the two DLLs needed for a control.

Exploring the Windows Media Player Interop DLL

When working with imported Python modules, you use the dir() function to see what those modules contain. You often use dir() when working with .NET assemblies as well, even though you have the MSDN documentation at hand. Theoretically, you can also use dir() with imported COM components, but things turn quite messy when you do. The “Using the Windows Media Player Interop DLL” section of this chapter describes how to import and use an interop assembly, but for now, let’s just look at WMPLib.DLL using dir(). Figure 9-7 shows typical results.

Figure 9-7: Using dir() won’t work well with interop assemblies in many cases.

The list goes on and on. Unfortunately, this is only the top level. You still need to drill down into the interop assembly, so things can become confusing and complex. Figuring out what you want to use is nearly impossible. Making things worse is the fact that any documentation you obtain for the interop assembly probably won’t work because the documentation will take the COM perspective of working with the classes and you need the IronPython perspective. Using dir() won’t be very helpful in this situation.
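One way to make dir() slightly more manageable is to filter its output. This hedged helper is demonstrated against the standard math module so it runs anywhere; under IronPython you would pass the imported WMPLib namespace instead.

```python
# Hedged helper: narrow dir() output to public names containing a substring.
import math

def members(obj, contains=""):
    """Return public attribute names of obj that contain the given substring."""
    return [name for name in dir(obj)
            if not name.startswith("_") and contains.lower() in name.lower()]

print(members(math, "log"))  # just the logarithm functions, not everything
```

Filtering helps with the sheer volume, but it still can’t tell you how the classes fit together, which is where ILDasm comes in.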

Fortunately, you have another alternative in the form of the Intermediate Language Disassembler (ILDasm) utility. This utility looks into the interop assembly and creates a graphic picture of it for you. Using this utility, you can easily drill down into the interop assembly and, with the help of the COM documentation, normally figure out how to work with the COM component — even complex COM components such as the Windows Media Player.

To gain access to ILDasm, open a Visual Studio command prompt, just as you did for TLbImp. At the command prompt, type ILDasm WMPLib.DLL and press Enter (see more of the command line options in the “Using the ILDasm Command Line” section of the chapter). The ILDasm utility will start and show entries similar to those shown in Figure 9-8.

ILDasm is an important tool for the IronPython developer who wants to work with COM. With this in mind, the following sections provide a good overview of ILDasm and many of its usage details. Most important, these sections describe how to delve into the innermost parts of any interop assembly.

Figure 9-8: Use ILDasm to explore WMPLib.DLL.

Using the ILDasm Command Line

The ILDasm utility usually works fine when you run it and provide the filename of the interop assembly you want to view. However, sometimes an interop assembly is so complex that you really do want to optimize the ILDasm view. Consequently, you use command line options to change the way ILDasm works. ILDasm has the following command line syntax.

[code]
ildasm [options] <file_name> [options]
[/code]

Even though this section shows the full name of all the command line switches, you can use just the first three letters. For example, you can abbreviate /BYTES as /BYT. In addition, ILDasm accepts both the dash (-) and slash (/) as command line switch prefixes, so /BYTES and -BYTES work equally well.

The options can appear either before or after the filename. You can divide the options into those that affect output redirection (sending the output to a location other than the display) and those that change the way the file/console output appears. ILDasm further divides the file/console options into those that work with EXE and DLL files, and those that work with EXE, DLL, OBJ, and LIB files. Here are the options for output redirection.

  • /OUT=Filename: Redirects the output to the specified file rather than to a GUI.
  • /TEXT: Redirects the output to a console window rather than to a GUI. This option isn’t very useful for anything but the smallest files because the entire content of the interop assembly simply scrolls by. Of course, you can always use a pipe (|) to send the output to the More utility to view the output one page at a time.
  • /HTML: Creates the file in HTML format (valid with the /OUT option only). This option is handy for making the ILDasm available for a group of developers on a Web site. For example, if you type ILDasm /OUT=WMPLib.HTML /HTML WMPLib.DLL and press Enter, you obtain WMPLib.HTML. The resulting file is huge — 7.53 MB for WMPLib.HTML. Figure 9-9 shows how this file will appear.
  • /RTF: Creates the file in RTF format (valid with the /OUT option only). This option is handy for making the ILDasm available for a group of developers on a local network using an application such as Word. For example, if you type ILDasm /OUT=WMPLib.RTF /RTF WMPLib.DLL and press Enter, you obtain WMPLib.RTF. The resulting file is huge — 5.2 MB for WMPLib.RTF, and may cause Word to freeze.

Of course, you might not want to redirect the output to a file, but may want to change the way the console appears instead. The following options change the GUI or file/console output for EXE and DLL files only.

  • /BYTES: Displays actual bytes (in hex) as instruction comments. Generally, this information isn’t useful unless you want to get into the low-level details of the interop assembly. For example, you might see a series of hex bytes such as // SIG: 20 01 01 08, which won’t be helpful to most developers. (In this case, you’re looking at the signature for the WMPLib.IAppDispatch.adjustLeft() method.)
Figure 9-9: HTML output is useful for viewing ILDasm output in a browser.
  • /RAWEH: Shows the exception handling clauses in raw form. This isn’t a useful command line switch for interop assemblies because interop assemblies don’t require exception handlers in most cases.
  • /TOKENS: Displays the metadata tokens of classes and members as comments in the source code, as shown in Figure 9-10 for the WMPLib.IAppDispatch.adjustLeft() method. For example, the metadata token for mscorlib is /*23000001*/. Most developers won’t require this information.
Figure 9-10: The metadata tokens appear as comments beside the coded text.
  • /SOURCE: Shows the original source lines as comments when available. Unfortunately, when working with an interop assembly, there aren’t any original source lines to show, so you won’t need to use this command line switch.
  • /LINENUM: Shows the original source code line numbers as comments when available. Again, when working with an interop assembly, there aren’t any original source code line numbers to show so you won’t need to use this command line switch.
  • /VISIBILITY=Vis[+Vis…]: Outputs only the items with specified visibility. The valid inputs for this argument are:
    • PUB: Public
    • PRI: Private
    • FAM: Family
    • ASM: Assembly
    • FAA: Family and assembly
    • FOA: Family or assembly
    • PSC: Private scope
  • /PUBONLY: Outputs only the items with public visibility (same as /VIS=PUB).
  • /QUOTEALLNAMES: Places single quotes around all names. For example, instead of seeing mscorlib, you’d see 'mscorlib'. In some cases, using this approach makes it easier to see or find specific names in the code.
  • /NOCA: Suppresses the output of custom attributes.
  • /CAVERBAL: Displays all of the Custom Attribute (CA) blobs in verbal form. The default setting outputs the CA blobs in binary form. Using this command line switch can make the code more readable, but also makes it more verbose (larger).
  • /NOBAR: Tells ILDasm not to display the progress bar as it redirects the interop assembly output to another location (such as a file).

ILDasm includes a number of command line switches that affect file and console output only. The following command line switches work for EXE and DLL files.

  • /UTF8: Forces ILDasm to use UTF-8 encoding for output in place of the default ANSI encoding.
  • /UNICODE: Forces ILDasm to use Unicode encoding for output in place of the default ANSI encoding.
  • /NOIL: Suppresses Intermediate Language (IL) assembler code output. Unfortunately, this option isn’t particularly useful because it creates a file that contains just the disassembly comments, not any of the class or method information. You do get the resource (.RES) file containing the resource information for the interop assembly (such as the version number). To use this command line switch, you must include redirection such as ILDasm /OUT=WMPLib.HTML /HTML /NOIL WMPLib.DLL to produce WMPLib.HTML as output.
  • /FORWARD: Forces ILDasm to use forward class declaration. In some cases, this command line switch can reduce the size of the disassembly.
  • /TYPELIST: Outputs a full list of types. Using this command line switch can help preserve type ordering.
  • /HEADERS: Outputs the file header information in the output.
  • /ITEM=Class[::Method[(Signature)]]: Disassembles only the specified item. Using this command line switch can greatly reduce the confusion of looking over an entire interop assembly.
  • /STATS: Provides statistical information about the image. The statistics appear at the beginning of the file in comments. Here’s a small segment of the statistics you might see (telling you about the use of space in the file).
    [code]
    // File size : 331776
    // PE header size : 4096 (496 used) ( 1.23%)
    // PE additional info : 1015 ( 0.31%)
    // Num.of PE sections : 3
    // CLR header size : 72 ( 0.02%)
    // CLR meta-data size : 256668 (77.36%)
    // CLR additional info : 0 ( 0.00%)
    // CLR method headers : 9086 ( 2.74%)
    // Managed code : 51182 (15.43%)
    // Data : 8192 ( 2.47%)
    // Unaccounted : 1465 ( 0.44%)
    [/code]
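To sanity-check the byte counts and percentages ILDasm reports, you can parse these comment lines with a few lines of plain Python. This is a sketch using a sample of the statistics shown above; the regular expression and function name are not part of ILDasm itself.

```python
import re

# A few of the /STATS comment lines from the listing above.
stats_text = '''\
// File size          : 331776
// CLR meta-data size : 256668 (77.36%)
// Managed code       : 51182 (15.43%)
'''

def parse_stats(text):
    # Pull "name : byte-count" pairs out of the comment lines,
    # ignoring any trailing percentage ILDasm prints.
    sizes = {}
    for line in text.splitlines():
        match = re.match(r'//\s*(.+?)\s*:\s*(\d+)', line)
        if match:
            sizes[match.group(1)] = int(match.group(2))
    return sizes

sizes = parse_stats(stats_text)
```

Dividing the metadata size by the file size reproduces the 77.36 percent figure ILDasm prints, which confirms the percentages are simply fractions of the total file size.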
  • /CLASSLIST: Outputs a list of the classes defined in the module. The class list appears as a series of comments at the beginning of the file. Here’s an example of the class list output for WMPLib.DLL:
    [code]
    // Classes defined in this module:
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    // Interface IWMPEvents (public) (abstract) (auto) (ansi) (import)
    // Class WMPPlaylistChangeEventType (public) (auto) (ansi) (sealed)
    // Interface IWMPEvents2 (public) (abstract) (auto) (ansi) (import)
    // Interface IWMPSyncDevice (public) (abstract) (auto) (ansi) (import)
    // Class WMPDeviceStatus (public) (auto) (ansi) (sealed)
    // Class WMPSyncState (public) (auto) (ansi) (sealed)
    // Interface IWMPEvents3 (public) (abstract) (auto) (ansi) (import)
    // Interface IWMPCdromRip (public) (abstract) (auto) (ansi) (import)
    [/code]
  • /ALL: Performs the combination of the /HEADER, /BYTES, /STATS, /CLASSLIST, and /TOKENS command line switches.

This set of command line switches also affects just file and console output. However, you can use them for EXE, DLL, OBJ, and LIB files.

  • /METADATA[=Specifier]: Shows the interop assembly metadata for the elements defined by Specifier. Here are the values you can use for Specifier.
    • MDHEADER: MetaData header information and sizes
    • HEX: More data in hex as well as words
    • CSV: Record counts and heap sizes
    • UNREX: Unresolved externals
    • SCHEMA: MetaData header and schema information
    • RAW: Raw MetaData tables
    • HEAPS: Raw heaps
    • VALIDATE: MetaData consistency validation

The final set of command line switches affects file and console output for LIB files only.

  • /OBJECTFILE=Obj_Filename: Shows the MetaData of a single object file in the library.

Working with ILDasm Symbols

When working with ILDasm, you see a number of special symbols. Unfortunately, the utility often leaves you wondering what the symbols mean. Here are some of the most common symbols you encounter when working with COM components.

  • Interface: Represents an interface with which you can interact.
  • Private Class: Represents an abstract or sealed class in most cases.
  • Enumeration: Contains a list of enumerated items you use to provide values for method calls and other tasks.
  • Attribute: Provides access to the attributes that describe a COM component. Common attributes and attribute containers include:
    • Manifest (and its associated attributes)
    • Extends (defines a class that the class extends)
    • Implements (defines an interface that the class implements)
    • ClassInterface (see http://msdn.microsoft.com/library/system.runtime.interopservices.classinterfaceattribute.aspx for details)
    • GuidAttribute (see http://msdn.microsoft.com/library/system.runtime.interopservices.guidattribute.aspx for details)
    • TypeLibTypeAttribute (see http://msdn.microsoft.com/library/system.runtime.interopservices.typelibtypeattribute.aspx for details)
    • InterfaceTypeAttribute (see http://msdn.microsoft.com/library/system.runtime.interopservices.interfacetypeattribute.aspx for details)
  • Method: Describes a method that you can use within an interface or private class.
  • Property: Describes a property that you can use within an interface or private class.
  • Variable: Defines a variable of some type within an interface or private class. The variable could be an interface, such as IConnectionPoint, or an array, such as ArrayList, or anything else that the developer wanted to include.
  • Event: Specifies an event that occurs within the interface or private class.

Exploring ILDasm entries

It’s important to remember that interop assemblies simply provide a reference to the actual code found in the COM component. Even so, you can use ILDasm to find out all kinds of interesting information about the component. At the top level, you can see a list of all of the interfaces, classes, and enumerations, as shown in Figure 9-8. The next level is to drill down into specific methods and properties, as shown in Figure 9-11.

Figure 9-11: Opening an interface displays all the methods it contains.

The information shown in this figure is actually the most valuable information that ILDasm provides because you can use it to discover the names of methods and properties you want to use in your application. In addition, these entries often provide clues about where to look for additional information in the vendor help files. Sometimes these help files are a little disorganized and you might not understand how methods are related until you see this visual presentation of them.

It’s possible to explore the interop assembly at one more level. Double-click any of the methods, properties, or attributes and you’ll see a dialog box like the one shown in Figure 9-12. The amount of information you receive may seem paltry at first. However, look closer and you’ll discover that this display often tells you about calling requirements. For example, you can discover the data types you need to rely on to work with the COM component (something that COM documentation can’t tell you because the vendor doesn’t know that you’re using the component from .NET).

Figure 9-12: Discover the calling requirements for methods by reviewing the methods’ underlying code.

Using the Windows Media Player Interop DLL

It’s finally time to use early binding to create a connection to the Windows Media Player. This example uses the Windows Media Player as a control. You might find a number of online sources that say it’s impossible to use the Windows Media Player as a control, but it’s actually quite doable. Of course, you need assistance from yet another one of Microsoft’s handy utilities, Resource Generator (ResGen) to do it. The example itself relies on the normal combination of a form file and associated application file. The following sections provide everything needed to create the example.

Working with ResGen

Whenever you drop a control based on a COM component onto a Windows Forms dialog box, the IDE creates an entry for it in the .RESX file for the application. This entry contains binary data that describes the properties for the COM component. You may not know it, but most COM components have a Properties dialog box that you access by right-clicking the control and choosing Properties from the context menu. These properties are normally different from those shown in the Properties window for the managed control. Figure 9-13 shows the Properties dialog box for the Windows Media Player.

Figure 9-13: The COM component has properties that differ from the managed control.

It’s essential to remember that the managed control is separate from the COM component in a Windows Forms application. The COM component properties appear in a separate location and the managed environment works with them differently. If you look in the .RESX file, you see something like this:

[code]
<data name="MP.OcxState" mimetype="application/x-microsoft.net.object.binary.base64">
<value>
AAEAAAD/////AQAAAAAAAAAMAgAAAFdTeXN0ZW0uV2luZG93cy5Gb3JtcywgVmVyc2lvbj00LjAuMC4w
LCBDdWx0dXJlPW5ldXRyYWwsIFB1YmxpY0tleVRva2VuPWI3N2E1YzU2MTkzNGUwODkFAQAAACFTeXN0
ZW0uV2luZG93cy5Gb3Jtcy5BeEhvc3QrU3RhdGUBAAAABERhdGEHAgIAAAAJAwAAAA8DAAAAywAAAAIB
AAAAAQAAAAAAAAAAAAAAALYAAAAAAwAACAAUAAAAQgBlAGwAbABzAC4AdwBhAHYAAAAFAAAAAAAAAPA/
AwAAAAAABQAAAAAAAAAAAAgAAgAAAAAAAwABAAAACwD//wMAAAAAAAsA//8IAAIAAAAAAAMAMgAAAAsA
AAAIAAoAAABmAHUAbABsAAAACwAAAAsAAAALAP//CwD//wsAAAAIAAIAAAAAAAgAAgAAAAAACAACAAAA
AAAIAAIAAAAAAAsAAAAuHgAAfhsAAAs=
</value>
[/code]
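The text inside the <value> element is just base64. Decoding it reveals a serialized System.Windows.Forms.AxHost.State object produced by the .NET binary serializer: the stream opens with record type 0x00 and a root object id of 1, and the owning assembly name appears shortly afterward. You can confirm this in plain Python by decoding the first line of the value shown above:

```python
import base64

# First line of the MP.OcxState <value> element from the .RESX snippet.
first_line = ('AAEAAAD/////AQAAAAAAAAAMAgAAAFdTeXN0ZW0uV2luZG93cy5Gb3Jtcywg'
              'VmVyc2lvbj00LjAuMC4w')
blob = base64.b64decode(first_line)

# A BinaryFormatter stream begins with record 0x00 followed by the
# root object id (1, little-endian).
header = blob[:5]
# The assembly that defines the serialized type is named in the stream.
assembly_visible = b'System.Windows.Forms' in blob
```

This is why you can’t build the value by hand: it isn’t a simple string, it’s a binary object graph that only the serializer can produce.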

This binary data contains the information needed to configure the COM aspects of the component. When the application creates the form, the binary data is added to the component using the OcxState property like this:

[code]
this.MP.OcxState =
((System.Windows.Forms.AxHost.State)(resources.GetObject("MP.OcxState")));
[/code]

Because of the managed code/COM component duality of a Windows Forms application, you can’t simply embed the COM component into an IronPython application using techniques such as the one shown at http://msdn.microsoft.com/library/dd564350.aspx. You must provide the binary data to the COM component using the OcxState property. Unfortunately, IronPython developers have an added twist to consider. The C# code shown previously won’t work because you don’t have access to a ComponentResourceManager for the IronPython form. Instead, you must read the resource from disk using code like this

[code]
self.resources = System.ComponentModel.ComponentResourceManager.CreateFileBasedResourceManager(
    'frmUseWMP', 'C:/0255 - Source Code/Chapter09', None)
[/code]

Now, here’s where the tricky part begins (you might have thought we were there already, but we weren’t). The CreateFileBasedResourceManager() method doesn’t support .RESX files. Instead, it supports .RESOURCES files. The ResGen utility can create .RESOURCES files. You might be tempted to think that you can duplicate the binary data from the .RESX file using .TXT files as suggested by the ResGen documentation. Unfortunately, .TXT files can only help you create string data in .RESOURCES files.

So your first step is to create a Windows Forms application, add the component to it, perform any required COM component configuration (no need to perform the managed part), save the result, and then take the resulting .RESX file for your IronPython application. You can then use ResGen to create the .RESOURCES file using a command line like this:

[code]
ResGen frmUseWMP.RESX
[/code]

ResGen outputs a .RESOURCES file you can use within your application. Of course, like every Microsoft utility, ResGen offers a little more than simple conversion. Here’s the command line syntax for ResGen:

[code]
ResGen inputFile.ext [outputFile.ext] [/str:lang[,namespace[,class[,file]]]]
ResGen [options] /compile inputFile1.ext[,outputFile1.resources] […]
[/code]

Here are the options you can use.

  • /compile: Performs a bulk conversion of files from one format to another format. Typically, you use this feature with a response file where you provide a list of files to convert.
  • /str:language[, namespace[, classname[, filename]]]: Defines a strongly typed resource class using the specified programming language that relies on Code Document Object Model (CodeDOM) (see http://msdn.microsoft.com/library/y2k85ax6.aspx for details). To ensure that the strongly typed resource class works properly, the name of your output file, without the .RESOURCES extension, must match the [namespace.]classname of your strongly typed resource class. You may need to rename your output file before using it or embedding it into an assembly.
  • /useSourcePath: Specifies that ResGen uses each source file’s directory as the current directory for resolving relative file paths.
  • /publicClass: Creates the strongly typed resource class as a public class. You must use this command line switch with the /str command line switch.
  • /r:assembly: Tells ResGen to load types from the assemblies that you specify. A .RESX file automatically uses newer assembly types when you specify this command line switch. You can’t force the .RESX file to rely on older assembly types.
  • /define:A[,B]: Provides a means for performing optional conversions specified by #ifdef structures within a .RESTEXT (text) file.
  • @file: Specifies the name of a response file to use for additional command line options. You can only provide one response file for any given session.
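For the string-only case, what ResGen does with a .RESX file is essentially an XML-to-key/value conversion. The following plain-Python sketch illustrates that idea on a hypothetical minimal .RESX fragment; it is not a replacement for ResGen (in particular it ignores binary entries such as OcxState), just a way to see the shape of the data.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal .RESX content with two string resources.
RESX_SAMPLE = """<?xml version="1.0"?>
<root>
  <data name="Greeting"><value>Hello</value></data>
  <data name="Farewell"><value>Goodbye</value></data>
</root>"""

def read_string_resources(resx_text):
    # Each <data> element maps its name attribute to its <value> text.
    root = ET.fromstring(resx_text)
    return {d.get('name'): d.findtext('value') for d in root.findall('data')}

strings = read_string_resources(RESX_SAMPLE)
```

Real .RESX files carry extra schema headers and mime-typed binary blobs, which is exactly why the binary OcxState entry can’t be recreated from a plain .TXT file.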

Creating the Media Player Form Code

As normal, the example relies on two files to hold the form and the client code. Because we’re using a COM component for this example, the form requires a number of special configuration steps. Listing 9-1 shows the form code.

Listing 9-1: Creating a Windows Forms application with a COM component

[code]
# Set up the path to the .NET Framework.
import sys
sys.path.append(r'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727')
# Make clr accessible.
import clr
# Add any required references.
clr.AddReference('System.Windows.Forms.DLL')
clr.AddReference('System.Drawing.DLL')
clr.AddReference('AxWMPLib.DLL')
# Import the .NET assemblies.
import System
import System.Windows.Forms
import System.Drawing.Point
import AxWMPLib

class frmUseWMP(System.Windows.Forms.Form):
    # This function performs all of the required initialization.
    def InitializeComponent(self):
        # Create a Component Resource Manager.
        self.resources = System.ComponentModel.ComponentResourceManager.CreateFileBasedResourceManager(
            'frmUseWMP', 'C:/0255 - Source Code/Chapter09', None)
        # Configure Windows Media Player.
        self.MP = AxWMPLib.AxWindowsMediaPlayer()
        self.MP.Dock = System.Windows.Forms.DockStyle.Fill
        self.MP.Enabled = True
        self.MP.Location = System.Drawing.Point(0, 0)
        self.MP.Name = 'MP'
        self.MP.Size = System.Drawing.Size(292, 266)
        self.MP.OcxState = self.resources.GetObject('MP.OcxState')
        # Configure the form.
        self.ClientSize = System.Drawing.Size(350, 200)
        self.Text = 'Simple Windows Media Player Example'
        # Add the controls to the form.
        self.Controls.Add(self.MP)
[/code]

The code begins with the normal steps of adding the .NET Framework path, making clr accessible, importing the required DLLs, and importing the required assemblies. Notice that the example uses the AxWMPLib.DLL file and AxWMPLib assembly. Remember that the Ax versions of the files provide wrapping around the ActiveX controls to make them usable as a managed control.

The code begins by creating a ComponentResourceManager from a file, using the CreateFileBasedResourceManager() method. Normally, a managed application would create the ComponentResourceManager directly from the data stored as part of the form. This is a special step for IronPython that could cause you grief later if you forget about it.

Listing 9-1 shows the CreateFileBasedResourceManager() method call wrapped across more than one line. Be careful how you wrap it in your own source: an expression can only continue onto the next line when the break falls inside open parentheses (or after a backslash continuation character). Breaking the line immediately after the dot, with no open bracket to carry the expression forward, produces a syntax error.
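As a side note, Python (and therefore IronPython) happily accepts a call that spans lines when the break falls inside the argument parentheses; only a break directly after the dot fails. Here is a plain-Python sketch using a hypothetical stand-in function (the real method lives on the .NET ComponentResourceManager class):

```python
def create_file_based_resource_manager(name, path, assembly):
    # Hypothetical stand-in for the .NET factory method, used only
    # to demonstrate legal line wrapping.
    return (name, path, assembly)

# Valid: the open parenthesis lets the argument list wrap freely.
mgr = create_file_based_resource_manager(
    'frmUseWMP',
    'C:/Source/Chapter09',  # hypothetical path
    None)
```

Writing `create_file_based_resource_manager(` on one line and the arguments on the next is fine; writing the object name, a trailing dot, and then the method name on a new line is not.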

Media Player (MP) configuration comes next. You must instantiate the control from the AxWMPLib .AxWindowsMediaPlayer() constructor, rather than using the COM component constructor. The Ax constructor provides a wrapper with additional features you need within the Windows Forms environment. Like most controls, you need to specify control position and size on the form. However, because of the nature of the Windows Media Player, you want it to fill the client area of the form, so you set the Dock property to System.Windows.Forms.DockStyle.Fill.

The one configuration item that you must perform correctly is setting the COM component values using the MP.OcxState property. The ComponentResourceManager, resources, contains this value. You simply set the MP.OcxState property to resources.GetObject(“MP.OcxState”) — this technique is also different from what you’d use in a C# or Visual Basic.NET application. The rest of the form code isn’t anything special — you’ve seen it in all of the Windows Forms examples so far.

Creating the Media Player Application Code

Some COM components require a lot of tinkering by the host application, despite being self-contained for the most part. However, the Windows Media Player is an exception to the rule. Normally, you want to tinker with it as little as possible to meet your programming requirements. In some cases, you won’t want to tinker at all, as shown in Listing 9-2.

Listing 9-2: Interacting with the COM component

[code]
# Set up the path to the .NET Framework.
import sys
sys.path.append(r'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727')
# Make clr accessible.
import clr
# Add any required references.
clr.AddReference('System.Windows.Forms.DLL')
# Import the .NET assemblies.
import System
import System.Windows.Forms
# Import the form.
from frmUseWMP import *

# Define the Windows Form and the elements of this specific instance.
WMPForm = frmUseWMP()
WMPForm.InitializeComponent()
# Run the application.
System.Windows.Forms.Application.Run(WMPForm)
[/code]

This code does the minimum possible for a Windows Forms application. It contains no event handlers or anything of that nature. In fact, the code simply displays the form. Believe it or not, the actual settings for the application appear as part of the .RESOURCES file. What you see when you run this application appears in Figure 9-14.

This is a fully functional Windows Media Player. You can adjust the volume, set the starting position, pause the play, or do anything else you normally do with the Windows Media Player. It’s even possible to right-click the Windows Media Player to see the standard context menu. The context menu contains options to do things like slow the play time, see properties, and change options. Play with the example a bit to see just how fully functional it is.

Figure 9-14: The example application shows a form with Windows Media Player on it.

A Quick View of the Windows Media Player Component Form

You may encounter times when you really don’t want to display the Windows Media Player as a control; you simply want it to work in the background. In this case, you can use the Windows Media Player as a component. The following code snippet shows the fastest way to perform this task in IronPython. (The sys.path.append() call is printed on two lines; because the break falls inside the parentheses, it works as shown. You can find the entire source in the MPComponent example supplied with the book’s source code.)

[code]
# Set up the path to the .NET Framework.
import sys
sys.path.append(
    r'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727')
# Make clr accessible.
import clr
# Add any required references.
clr.AddReference('System.Windows.Forms.DLL')
clr.AddReference('WMPLib.DLL')
# Import the .NET assemblies.
import System
import System.Windows.Forms
import WMPLib
# Import the form.
from frmMPComponent import *

# Define the event handlers.
def btnPlay_Click(*args):
    # Create the Media Player object.
    MP = WMPLib.WindowsMediaPlayerClass()
    # Assign the media player event.
    MP.MediaError += PlayerError
    # Assign a sound to the Media Player.
    MP.URL = 'Bells.WAV'
    # Play the sound.
    MP.controls.play()
[/code]

Notice that you start by adding a reference to WMPLib.DLL and importing WMPLib into IronPython, rather than using the Ax versions. The next step appears in the btnPlay_Click() event handler. After the code imports the required support, it instantiates an object (MP) of the WindowsMediaPlayerClass, not WindowsMediaPlayer (an interface) as many of the Microsoft examples show.

Now you can perform various tasks with the resulting component. The example is simple. All it does is assign a filename to the URL property, and then call on controls.play() to play the file. You can find additional information on using this technique at http://msdn.microsoft.com/library/dd562692.aspx.

Performing Late Binding Using Activator.CreateInstance()

The Activator.CreateInstance() method is one of the more powerful ways to work with objects of all kinds. In fact, this particular method can give your IronPython applications the same kind of support as the Windows scripting engines CScript and WScript.

When working with the Activator.CreateInstance() method, you describe the type of object you want to create. The object can be anything. In fact, if you look through the HKEY_CLASSES_ROOT hive of the registry, you’ll find a number of objects to try on your system.

The example in this section does something a bit mundane, but also interesting: it demonstrates how to interact with the Shell objects. You can get a description of the Shell objects at http://msdn.microsoft.com/library/bb774122.aspx. The main reason to look at the Shell objects is that every Windows machine has them and they’re pretty useful for detecting user preferences. Listing 9-3 shows the code used for this example.

Listing 9-3: Working with Shell objects

[code]
# We only need the System assembly for this example.
from System import Activator, Type
# Import the time module to help with a pause.
import time
# Constants used for Shell settings.
from ShellSettings import *

# Create the Shell object.
ShObj = Activator.CreateInstance(Type.GetTypeFromProgID('Shell.Application'))

# Toggle the Desktop.
raw_input('Press Enter to show and then hide the Desktop')
ShObj.ToggleDesktop()
time.sleep(2)
ShObj.ToggleDesktop()

# Show some of the settings.
print '\nThe user wants to show file extensions:',
print ShObj.GetSetting(SSF_SHOWEXTENSIONS)
print 'The user wants to see system files:',
print ShObj.GetSetting(SSF_SHOWSYSFILES)
print 'The user also wants to see operating system files:',
print ShObj.GetSetting(SSF_SHOWSUPERHIDDEN)

# Check Explorer policies.
print '\nThe NoDriveTypeAutoRun policies are:'
# Obtain the bit values. These values are:
# 0 Unknown drives
# 1 No root directory
# 2 Removable drives (Floppy, ZIP)
# 3 Hard disk drives
# 4 Network drives
# 5 CD-ROM drives
# 6 RAM disk drives
# 7 Reserved
MyBits = ShObj.ExplorerPolicy('NoDriveTypeAutoRun')

# Display the results.
if MyBits.__and__(0x01) == 0x01:
    print('\tAutorun Disabled for Unknown Drives')
else:
    print('\tAutorun Enabled for Unknown Drives')
if MyBits.__and__(0x02) == 0x02:
    print('\tAutorun Disabled for No Root Directory')
else:
    print('\tAutorun Enabled for No Root Directory')
if MyBits.__and__(0x04) == 0x04:
    print('\tAutorun Disabled for Removable (Floppy/ZIP) Drives')
else:
    print('\tAutorun Enabled for Removable (Floppy/ZIP) Drives')
if MyBits.__and__(0x08) == 0x08:
    print('\tAutorun Disabled for Hard Disk Drives')
else:
    print('\tAutorun Enabled for Hard Disk Drives')
if MyBits.__and__(0x10) == 0x10:
    print('\tAutorun Disabled for Network Drives')
else:
    print('\tAutorun Enabled for Network Drives')
if MyBits.__and__(0x20) == 0x20:
    print('\tAutorun Disabled for CD-ROM Drives')
else:
    print('\tAutorun Enabled for CD-ROM Drives')
if MyBits.__and__(0x40) == 0x40:
    print('\tAutorun Disabled for RAM Disk Drives')
else:
    print('\tAutorun Enabled for RAM Disk Drives')

# Pause after the debug session.
raw_input('Press any key to continue…')
[/code]

This example starts by showing a different kind of import call. In this case, the import retrieves only the Activator and Type classes from the System assembly. Using this approach reduces environmental clutter. In addition, using this technique reduces the memory requirements for your application and could mean the application runs faster. The example also imports the time module.

The first step in this application can seem a little complicated so it pays to break it down into two pieces. First, you must get the type of a particular object by using its identifier within the registry with the Type.GetTypeFromProgID() method. As previously mentioned, the object used in this example is Shell.Application. After the code obtains the type, it can create an instance of the object using Activator.CreateInstance().
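Conceptually, the two-step sequence works like a dictionary lookup keyed by ProgID: the registry maps the string name to a type, and the activator instantiates whatever type comes back. The following plain-Python sketch mimics the pattern with entirely hypothetical names (the real Type.GetTypeFromProgID() and Activator.CreateInstance() consult the Windows registry and COM runtime, not a Python dict):

```python
# Hypothetical stand-in class representing a registered COM server.
class ShellApplication(object):
    def ToggleDesktop(self):
        return 'desktop toggled'

# Stand-in for the registry's ProgID-to-type mapping.
_PROG_IDS = {'Shell.Application': ShellApplication}

def get_type_from_prog_id(prog_id):
    # Step 1: resolve a string ProgID to a type object.
    return _PROG_IDS[prog_id]

def create_instance(type_object):
    # Step 2: instantiate the resolved type.
    return type_object()

sh_obj = create_instance(get_type_from_prog_id('Shell.Application'))
```

The point of the indirection is late binding: the calling code never names the concrete class at compile time, only the string identifier, so the same code can create any object registered on the machine.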

The Shell.Application object, ShObj, provides several interesting methods and this example works with three of them. The first method, ToggleDesktop(), provides the same service as clicking the Show Desktop icon in the Quick Launch toolbar. Calling ToggleDesktop() the first time shows the desktop, while the second call restores the application windows to their former appearance. Notice the call to time.sleep(2), which provides a 2-second pause between the two calls.

The second method, GetSetting(), accepts a constant value as input. Listing 9-4 shows common settings you can query using GetSetting(). The example shows the results of three queries about Windows Explorer settings for file display. You can see these results (as well as the results for the third method) in Figure 9-15.

Listing 9-4: Queryable information for GetSetting()

[code]
SSF_SHOWALLOBJECTS = 0x00000001
SSF_SHOWEXTENSIONS = 0x00000002
SSF_HIDDENFILEEXTS = 0x00000004
SSF_SERVERADMINUI = 0x00000004
SSF_SHOWCOMPCOLOR = 0x00000008
SSF_SORTCOLUMNS = 0x00000010
SSF_SHOWSYSFILES = 0x00000020
SSF_DOUBLECLICKINWEBVIEW = 0x00000080
SSF_SHOWATTRIBCOL = 0x00000100
SSF_DESKTOPHTML = 0x00000200
SSF_WIN95CLASSIC = 0x00000400
SSF_DONTPRETTYPATH = 0x00000800
SSF_SHOWINFOTIP = 0x00002000
SSF_MAPNETDRVBUTTON = 0x00001000
SSF_NOCONFIRMRECYCLE = 0x00008000
SSF_HIDEICONS = 0x00004000
SSF_FILTER = 0x00010000
SSF_WEBVIEW = 0x00020000
SSF_SHOWSUPERHIDDEN = 0x00040000
SSF_SEPPROCESS = 0x00080000
SSF_NONETCRAWLING = 0x00100000
SSF_STARTPANELON = 0x00200000
SSF_SHOWSTARTPAGE = 0x00400000
[/code]

Figure 9-15: The shell objects provide access to all sorts of useful information.

The third method, ExplorerPolicy(), is a registry-based query that relies on bit positions to define a value. You find these values in the HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer registry key. The two most common policies are NoDriveAutorun and NoDriveTypeAutoRun. When working with the NoDriveAutorun policy, Windows enables or disables autorun on a drive letter basis, where bit 0 is drive A and bit 25 is drive Z. Listing 9-3 shows how to work with the bits for the NoDriveTypeAutoRun policy, while Figure 9-15 shows the results for the host machine.
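The repetitive bit tests in Listing 9-3 can be collapsed into a table-driven loop. This plain-Python sketch uses the same drive-type bits listed in the listing's comments; the 0x95 sample value is only an illustration (on a real machine the value comes from ShObj.ExplorerPolicy('NoDriveTypeAutoRun')):

```python
# Drive-type bits from the comments in Listing 9-3; a set bit means
# autorun is disabled for that drive type.
DRIVE_TYPE_BITS = [
    (0x01, 'Unknown Drives'),
    (0x02, 'No Root Directory'),
    (0x04, 'Removable (Floppy/ZIP) Drives'),
    (0x08, 'Hard Disk Drives'),
    (0x10, 'Network Drives'),
    (0x20, 'CD-ROM Drives'),
    (0x40, 'RAM Disk Drives'),
]

def decode_autorun_policy(policy):
    # Map each drive type to 'Disabled' (bit set) or 'Enabled' (bit clear).
    return [(name, 'Disabled' if policy & bit else 'Enabled')
            for bit, name in DRIVE_TYPE_BITS]

# 0x95 is used here purely as a sample policy value.
sample = decode_autorun_policy(0x95)
```

With the table in place, adding or relabeling a drive type means editing one row instead of another if/else pair.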

You can find a number of other examples of this kind of late binding for IronPython on the Internet. For example, you can see a Word late binding example at http://www.ironpython.info/index.php/Extremely_Late_Binding. This particular example would possibly be the next step for many developers in working with Activator.CreateInstance(). The important thing to remember is that this method is extremely flexible and that you need to think of the impossible, as well as the possible, when using it.

Performing Late Binding Using Marshal.GetActiveObject()

Sometimes you need to interact with an application that’s already running. In this case, you don’t want to create a new object; you want to gain access to an existing object. The technique used to perform this type of late binding is to call Marshal.GetActiveObject() with the type of object you want to access. Typically, you use this technique with application objects, such as a running copy of Word. Listing 9-5 shows an example of how to use Marshal.GetActiveObject() to gain access to a running Word application.

Listing 9-5: Working with a running copy of Word

[code]
# Import only the required classes from System.
from System.Runtime.InteropServices import Marshal

# Obtain a pointer to the running Word application.
# Word must be running or this call will fail.
WordObj = Marshal.GetActiveObject('Word.Application')

# Add a new document to the running copy of Word.
MyDoc = WordObj.Documents.Add()
# Get the Application object.
App = MyDoc.Application

# Type some text in the document.
App.Selection.TypeText('Hello World')
App.Selection.TypeParagraph()
App.Selection.TypeText('Goodbye!')
[/code]

The import statement differs from normal in this example. Notice that you can drill down into the namespace or class you want, and then import just the class you need. In this case, the example requires only the Marshal class from System.Runtime.InteropServices.

The first step is to get the running application. You must have a copy of Word running for this step to work; otherwise, you get an error. The call to Marshal.GetActiveObject() with Word.Application returns a Word object, WordObj. This object is the same object you get when working with Visual Basic for Applications (VBA). In fact, if you can do it with VBA, you can do it with IronPython.

After gaining access to Word, the application adds a new document using WordObj.Documents.Add(). It then creates an Application object, App. Using the App.Selection.TypeText() method, the application types some text into Word, as shown in Figure 9-16. Of course, you can perform any task required — the example does something simple for demonstration purposes.

Figure 9-16: You can control Word using IronPython as easily as you can using VBA.

 

Windows Phone: Entering the Exciting World of 3D Models, Part 2

Creating a terrain with texture mapping

In most modern outdoor games, such as Delta Force and Crysis, you see trees, rivers, mountains, and other features that simulate the real world, because the developer wants to surround you with a realistic environment while you play. The key technique behind this effect is called terrain rendering. In the following recipe, you will learn how to use this technique in your game.

Getting ready

In this recipe, we will build the terrain model from a height map. In computer graphics, a heightmap or heightfield is a raster image used to store values such as surface elevation data. A heightmap contains one channel interpreted as a distance of displacement, or height, from the floor of a surface, and is often visualized as the luma of a grayscale image, with black representing minimum height and white representing maximum height. Before rendering, the terrain application processes the grayscale image to read the gray value of each pixel, each of which becomes a vertex in the terrain model, and then calculates the height from that value: higher values produce greater height, and vice versa. When this process is done, the application has a set of terrain vertices with specified heights (the Y-axis or Z-axis value). Finally, the application reads and processes the vertex set to render the terrain.
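As a quick illustration of the gray-to-height mapping described above, here is a minimal standalone C# sketch (plain C#, outside XNA; the constant and formula mirror the terrainHeightScale and Y calculation used later in this recipe's Process() method):

```csharp
using System;

class HeightMapping
{
    // Assumed scale factor, matching the terrainHeightScale used in this recipe
    const float TerrainHeightScale = 64f;

    // The recipe's mapping: a grayscale sample in [0, 1] becomes a height,
    // with black producing the lowest point and white the highest
    public static float HeightFromGray(float pixelValue)
    {
        return (pixelValue - 1f) * TerrainHeightScale;
    }

    static void Main()
    {
        Console.WriteLine(HeightFromGray(0f));   // black  -> -64
        Console.WriteLine(HeightFromGray(0.5f)); // gray   -> -32
        Console.WriteLine(HeightFromGray(1f));   // white  ->  0
    }
}
```

Note that the whole terrain ends up at or below Y = 0 with this formula; only the relative heights matter for the shape of the mesh.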

How to do it…

Follow the steps below to master the technique for creating a texture-mapped terrain:

  1. Create a Windows Phone Game project named TerrainGeneration and change Game1.cs to TerrainGenerationGame.cs. In the content project, add two images: Grass.dds and HeightMap.png. Then add a Content Pipeline Extension Project named TerrainProcessor to the solution, replacing ContentProcessor1.cs with TerrainProcessor.cs in the content pipeline library.
  2. Implement the TerrainProcessor class for the terrain processor in TerrainProcessor.cs. At the beginning, put the following code into the class field:
    [code]
    // Scale of the terrain
    const float terrainScale = 4;
    // The terrain height scale
    const float terrainHeightScale = 64;
    // The texture coordinate scale
    const float texCoordScale = 0.1f;
    // The texture file name
    const string terrainTexture = "grass.dds";
    [/code]
  3. Next, the Process() method is the main method of the TerrainProcessor class:
    [code]
    // Generate the terrain mesh from the heightmap image
    public override ModelContent Process(Texture2DContent input,
    ContentProcessorContext context)
    {
    // Initialize a MeshBuilder
    MeshBuilder builder = MeshBuilder.StartMesh("terrain");
    // Define the data type of every pixel
    input.ConvertBitmapType(typeof(PixelBitmapContent<float>));
    // Get the bitmap object from the imported image.
    PixelBitmapContent<float> heightmap =
    (PixelBitmapContent<float>)input.Mipmaps[0];
    // Create the terrain vertices.
    for (int y = 0; y < heightmap.Height; y++)
    {
    for (int x = 0; x < heightmap.Width; x++)
    {
    Vector3 position;
    // Put the terrain in the center of game
    //world and scale it to the designated size
    position.X = (x - heightmap.Width / 2) *
    terrainScale;
    position.Z = (y - heightmap.Height / 2) *
    terrainScale;
    // Set the Y factor for the vertex
    position.Y = (heightmap.GetPixel(x, y) - 1) *
    terrainHeightScale;
    // Create the vertex in MeshBuilder
    builder.CreatePosition(position);
    }
    }
    // Create a vertex channel for holding texture coordinates.
    int texCoordId = builder.CreateVertexChannel<Vector2>(
    VertexChannelNames.TextureCoordinate(0));
    // Create a material and map it on the terrain
    // texture.
    BasicMaterialContent material = new BasicMaterialContent();
    // Get the full path of texture file
    string directory =
    Path.GetDirectoryName(input.Identity.SourceFilename);
    string texture = Path.Combine(directory, terrainTexture);
    // Set the texture to the meshbuilder
    material.Texture = new
    ExternalReference<TextureContent>(texture);
    // Set the material of mesh
    builder.SetMaterial(material);
    // Create the individual triangles that make up our terrain.
    for (int y = 0; y < heightmap.Height - 1; y++)
    {
    for (int x = 0; x < heightmap.Width - 1; x++)
    {
    // Draw a rectangle with two triangles, one at
    // top-right, one at bottom-left
    AddVertex(builder, texCoordId, heightmap.Width,
    x, y);
    AddVertex(builder, texCoordId, heightmap.Width,
    x + 1, y);
    AddVertex(builder, texCoordId, heightmap.Width,
    x + 1, y + 1);
    AddVertex(builder, texCoordId, heightmap.Width,
    x, y);
    AddVertex(builder, texCoordId, heightmap.Width,
    x + 1, y + 1);
    AddVertex(builder, texCoordId, heightmap.Width,
    x, y + 1);
    }
    }
    // Finish creating the terrain mesh.
    MeshContent terrainMesh = builder.FinishMesh();
    // Convert the terrain from MeshContent to ModelContent
    return context.Convert<MeshContent,
    ModelContent>(terrainMesh, "ModelProcessor");
    }
    [/code]
  4. From this step, we will render the terrain model to the screen in the game. In this step, we declare the terrain model in the TerrainGenerationGame class field:
    [code]
    // Terrain Model
    Model terrain;
    // Camera view and projection matrices
    Matrix view;
    Matrix projection;
    [/code]
  5. Create the projection matrix in the Initialize() method with the following code:
    [code]
    projection =
    Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
    GraphicsDevice.Viewport.AspectRatio,1, 10000);
    [/code]
  6. Then, load the height map image. Insert the following code into LoadContent():
    [code]
    terrain = Content.Load<Model>("HeightMap");
    [/code]
  7. Rotate the camera around a circle:
    [code]
    float time = (float)gameTime.TotalGameTime.TotalSeconds * 0.2f;
    // Rotate the camera around a circle
    float cameraX = (float)Math.Cos(time) * 64;
    float cameraY = (float)Math.Sin(time) * 64;
    Vector3 cameraPosition = new Vector3(cameraX, 0, cameraY);
    view =
    Matrix.CreateLookAt(cameraPosition,Vector3.Zero,Vector3.Up);
    [/code]
  8. Draw the terrain on-screen. First, we should define the DrawTerrain() method for drawing the terrain model.
    [code]
    void DrawTerrain(Matrix view, Matrix projection)
    {
    foreach (ModelMesh mesh in terrain.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.View = view;
    effect.Projection = projection;
    effect.AmbientLightColor = Color.White.ToVector3();
    effect.EnableDefaultLighting();
    // Set the specular lighting
    effect.SpecularColor = new Vector3(
    0.6f, 0.4f, 0.2f);
    effect.SpecularPower = 8;
    effect.FogEnabled = true;
    effect.FogColor = Color.White.ToVector3();
    effect.FogStart = 100;
    effect.FogEnd = 500;
    }
    mesh.Draw();
    }
    }
    [/code]
  9. Then, add a call to the DrawTerrain() method in the Draw() method:
    [code]
    DrawTerrain(view, projection);
    [/code]
  10. The whole project is complete. Build and run the example. The application should run as shown in the following screenshots:

How it works…

In step 2, terrainScale defines the size of the terrain in the 3D world; terrainHeightScale amplifies the height change when generating the terrain; texCoordScale determines what portion of the texture image is displayed when sampling; terrainTexture is the name of the texture file.

In step 3, since the terrain processor generates a terrain mesh from an image, the input of the content processor is Texture2DContent and the output is ModelContent. The first line in the method body initializes the MeshBuilder. MeshBuilder is a helper class that simplifies creating a mesh object with the internal MeshContent and GeometryContent classes. A general procedure for building a mesh consists of the following steps:

  1. Call the StartMesh() method to instance a MeshBuilder object.
  2. Call the CreatePosition() method to fill the position’s buffer with data.
  3. Call the CreateVertexChannel() method to create a vertex data channel for use by the mesh. Typically, the data channel holds texture coordinates, normals, and other per-vertex data. A vertex channel is a list of arbitrary data with one value for each vertex. The types of vertex channels include:
    1. Binormal
    2. Color
    3. Normal
    4. Tangent
    5. TextureCoordinate
    6. Weights
  4. After building the position and vertex data channel buffers, start creating the triangles. Use the SetMaterial() method to set the material applied to subsequent triangles; the SetVertexChannelData() method sets the individual vertex data of each triangle.
  5. Call AddTriangleVertex() to add a vertex to the index collection, forming a triangle. MeshBuilder supports triangle lists only; therefore, calls to the AddTriangleVertex() method must occur in groups of three. That means the code snippet should look similar to the following:
    [code]
    // Create a Triangle
    AddTriangleVertex(…);
    AddTriangleVertex(…);
    AddTriangleVertex(…);
    [/code]
  6. In addition, MeshBuilder automatically determines which GeometryContent object receives the current triangle based on the state data. This data is set by the last calls to SetMaterial() and SetOpaqueData().
  7. Call the FinishMesh() method to finish the mesh building. All of the vertices in the mesh will be optimized with calls to the MergeDuplicateVertices() method for merging any duplicate vertices and to the CalculateNormals() method for computing the normals from the specified mesh.

So far, you have seen the procedure for creating a mesh using MeshBuilder. Now, let's look into the Process() method. After creating the MeshBuilder object, we use the input.ConvertBitmapType() method to convert the image color information to float, because we want to use the different color values to determine the height of every vertex of the terrain mesh. The following for loop sets the position of every vertex: x - heightmap.Width / 2 and y - heightmap.Height / 2 define the X and Z positions so that the terrain model is centered in the 3D world. The call heightmap.GetPixel(x, y) is the key method for reading the height data from the image pixels; with this value, we can set the Y value of the vertex position. After defining the vertex position, we call MeshBuilder.CreatePosition() to create the vertex position data in the MeshBuilder:

[code]
MeshBuilder.CreateVertexChannel<Vector2>(
VertexChannelNames.TextureCoordinate(0));
[/code]

This code creates a vertex texture coordinate channel for the terrain mesh to use. Then we get the texture file's absolute path, set it on the terrain mesh material, and assign the material to the MeshBuilder. When the material is assigned, we begin building the textured triangles based on the vertices defined earlier. In the following for loop, every iteration creates two triangles, one top-right and one bottom-left. We will discuss the AddVertex() method later. When the mesh triangles are created, we call MeshBuilder.FinishMesh(). Finally, ContentProcessorContext.Convert() converts the MeshContent to ModelContent.
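The centering arithmetic in Process() can be checked in isolation. The following standalone sketch (plain C#, assuming a 64-pixel-wide heightmap and this recipe's terrainScale of 4) shows how (x - width / 2) * scale shifts integer pixel coordinates so the mesh straddles the origin:

```csharp
using System;

class TerrainCentering
{
    // Assumed scale factor, matching the terrainScale constant in this recipe
    const float TerrainScale = 4f;

    // Mirrors the positioning arithmetic in Process(): the pixel coordinate is
    // shifted by half the map size (integer division) and then scaled
    public static float CenteredCoordinate(int pixel, int mapSize)
    {
        return (pixel - mapSize / 2) * TerrainScale;
    }

    static void Main()
    {
        int width = 64; // assumed heightmap width for illustration
        Console.WriteLine(CenteredCoordinate(0, width));         // left edge:  -128
        Console.WriteLine(CenteredCoordinate(width / 2, width)); // center:        0
        Console.WriteLine(CenteredCoordinate(width - 1, width)); // right edge:  124
    }
}
```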

Now it’s time to explain the AddVertex() method:

[code]
// Adding a new triangle vertex to a MeshBuilder,
// along with an associated texture coordinate value.
static void AddVertex(MeshBuilder builder, int texCoordId, int w,
int x, int y)
{
// Set the vertex channel data to tell the MeshBuilder how to
// map the texture
builder.SetVertexChannelData(texCoordId,
new Vector2(x, y) * 0.1f);
// Add the triangle vertices to the indices array.
builder.AddTriangleVertex(x + y * w);
}
[/code]

The MeshBuilder.SetVertexChannelData() method sets the texture coordinate for the vertex about to be added; MeshBuilder.AddTriangleVertex() then adds the triangle vertex to the MeshBuilder indices buffer.
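Because the vertices were created with y as the outer loop and x as the inner loop, the position buffer is laid out row by row, which is why x + y * w recovers a vertex's index in AddVertex(). A small standalone sketch of this row-major indexing (plain C#, with an assumed 4x4 grid for illustration):

```csharp
using System;

class VertexIndexing
{
    // Row-major index: vertex (x, y) sits at position x + y * width
    // in a buffer filled y-outer, x-inner
    public static int Index(int x, int y, int width) => x + y * width;

    static void Main()
    {
        int w = 4; // assumed 4x4 grid
        // The four corners of the quad whose top-left corner is (1, 1):
        Console.WriteLine(Index(1, 1, w)); // 5
        Console.WriteLine(Index(2, 1, w)); // 6
        Console.WriteLine(Index(1, 2, w)); // 9
        Console.WriteLine(Index(2, 2, w)); // 10
    }
}
```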

In step 7, the camera rotation follows the formula below, where the camera position P orbits a circle of the given radius:

P.X = CosA * Radius, P.Y = SinA * Radius

The previous formula is easy to understand: CosA is the cosine of angle A, which multiplied by the Radius gives the horizontal X value; similarly, SinA * Radius gives the vertical Y value. Since the formula is computed from angle A while the radius stays constant during the rotation, it generates a set of points that form a circle around the center.
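A standalone sketch of the orbit formula (plain C#, using this recipe's radius of 64) confirms that every generated point keeps the same distance from the center:

```csharp
using System;

class CameraOrbit
{
    // P.X = cos(A) * radius, P.Y = sin(A) * radius
    public static (double X, double Y) PointOnCircle(double angle, double radius)
        => (Math.Cos(angle) * radius, Math.Sin(angle) * radius);

    static void Main()
    {
        double radius = 64;
        for (int i = 0; i < 4; i++)
        {
            var (x, y) = PointOnCircle(i * Math.PI / 2, radius);
            // The distance from the origin always equals the radius,
            // because cos^2 + sin^2 = 1
            Console.WriteLine(Math.Sqrt(x * x + y * y));
        }
    }
}
```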

Customizing vertex formats

In XNA 4.0, a vertex format describes how data is stored in a vertex, allowing the system to easily locate each piece of data. The XNA framework provides some built-in vertex formats, such as VertexPositionColor and VertexPositionNormalTexture. Sometimes these built-in formats are too limited for special effects, such as particles with a limited lifetime. In that case, you will need to define a custom vertex format. In this recipe, you will learn how to do so.

How to do it…

Now let’s begin to program our sample application:

  1. Create a Windows Phone Game project named CustomVertexFormat and change Game1.cs to CustomVertexFormatGame.cs. Add a new class file, CustomVertexPositionColor.cs, to the project.
  2. Define the CustomVertexPositionColor class in the CustomVertexPositionColor.cs file:
    [code]
    // Define the CustomVertexPositionColor class
    public struct CustomVertexPositionColor : IVertexType
    {
    public Vector3 Position;
    public Color Color;
    public CustomVertexPositionColor(Vector3 Position,
    Color Color)
    {
    this.Position = Position;
    this.Color = Color;
    }
    // Define the vertex declaration
    public static readonly VertexDeclaration
    VertexDeclaration = new
    Microsoft.Xna.Framework.Graphics.VertexDeclaration
    (
    new VertexElement(0, VertexElementFormat.Vector3,
    VertexElementUsage.Position, 0),
    new VertexElement(12, VertexElementFormat.Color,
    VertexElementUsage.Color, 0)
    );
    // Override the VertexDeclaration attribute
    VertexDeclaration IVertexType.VertexDeclaration
    {
    get { return VertexDeclaration; }
    }
    }
    [/code]
  3. From this step, we will begin to use the CustomVertexPositionColor array to create a cubic and render it on the Windows Phone 7 screen. First, declare the variables in the field of the CustomVertexFormatGame class:
    [code]
    // CustomVertexPositionColor array
    CustomVertexPositionColor[] vertices;
    // VertexBuffer stores the custom vectex data
    VertexBuffer vertexBuffer;
    // BasicEffect for rendering the vertex array
    BasicEffect effect;
    // Camera position
    Vector3 cameraPosition;
    // Camera view matrix
    Matrix view;
    // Camera projection matrix
    Matrix projection;
    // The WireFrame render state
    static RasterizerState WireFrame = new RasterizerState
    {
    FillMode = FillMode.Solid,
    CullMode = CullMode.None
    };
    [/code]
  4. Define the faces of cubic and initialize the camera. Add the following code to the Initialize() method:
    [code]
    // Allocate the CustomVertexPositonColor array on memory
    vertices = new CustomVertexPositionColor[24];
    // Initialize the vertices of cubic front, right, left and
    // bottom faces.
    int i = 0;
    // Front Face
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, 0), Color.Blue);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, 10, 0), Color.Blue);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, 10, 0), Color.Blue);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, 10, 0), Color.Blue);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, 0), Color.Blue);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, -10, 0), Color.Blue);
    // Right Face
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, 10, 0), Color.Red);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, -10, 0), Color.Red);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, 10, -20), Color.Red);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, -10, 0), Color.Red);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, -10, -20), Color.Red);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, 10, -20), Color.Red);
    // Left Face
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, 10, 0), Color.Green);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, 0), Color.Green);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, 10, -20), Color.Green);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, 0), Color.Green);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, -20), Color.Green);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, 10, -20), Color.Green);
    // Bottom Face
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, 0), Color.Yellow);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, -10, 0), Color.Yellow);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, -20), Color.Yellow);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(-10, -10, -20), Color.Yellow);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, -10, 0), Color.Yellow);
    vertices[i++] = new CustomVertexPositionColor(
    new Vector3(10, -10, -20), Color.Yellow);
    // Initialze the vertex buffer for loading the vertex array
    vertexBuffer = new VertexBuffer(GraphicsDevice,
    CustomVertexPositionColor.VertexDeclaration,
    vertices.Length, BufferUsage.WriteOnly);
    // Set the vertex array data to vertex buffer
    vertexBuffer.SetData <CustomVertexPositionColor>(vertices);
    // Initialize the camera
    cameraPosition = new Vector3(0, 0, 100);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000.0f);
    // Initialize the basic effect for drawing
    effect = new BasicEffect(GraphicsDevice);
    [/code]
  5. Draw the cubic on the Windows Phone 7 screen. Insert the following code into Draw() method:
    [code]
    GraphicsDevice device = GraphicsDevice;
    // Set the render state
    device.BlendState = BlendState.Opaque;
    device.RasterizerState = WireFrame;
    // Rotate the cubic
    effect.World *=
    Matrix.CreateRotationY(MathHelper.ToRadians(1));
    // Set the basic effect parameters for drawing the cubic
    effect.View = view;
    effect.Projection = projection;
    effect.VertexColorEnabled = true;
    // Set the vertex buffer to device
    device.SetVertexBuffer(vertexBuffer);
    // Drawing the triangles of the cubic from the vertex
    // buffer on screen
    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
    pass.Apply();
    // The count of triangles = vertices.Length / 3 = 24 / 3
    // = 8
    device.DrawPrimitives(PrimitiveType.TriangleList, 0, 8);
    }
    [/code]
  6. Now, build and run the application. It runs as shown in the following screenshots:

How it works…

In step 2, CustomVertexPositionColor implements the IVertexType interface, which declares a VertexDeclaration property that the struct must implement to describe the layout of the custom vertex format data and its usage. The custom vertex format CustomVertexPositionColor is a customized version of the built-in vertex format VertexPositionColor; it also has Position and Color data members. After the constructor CustomVertexPositionColor() comes the key data member, VertexDeclaration. Here, each VertexElement defines the properties of Position and Color, including the offset in memory, the VertexElementFormat, and the vertex usage. Position is a Vector3, which has three float components occupying 12 bytes. Because Color follows Position, the offset of Color begins at the end of Position, at the 12th byte in memory. Finally, the IVertexType.VertexDeclaration property returns the VertexDeclaration data when initializing the VertexBuffer, or you can read it manually.
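The offsets passed to VertexElement can be derived with simple byte arithmetic. A standalone sketch (plain C#; the sizes correspond to the offsets used in step 2):

```csharp
using System;

class VertexLayout
{
    static void Main()
    {
        // Position is a Vector3: three 4-byte floats
        int positionOffset = 0;
        int positionSize = 3 * sizeof(float);
        // Color starts where Position ends
        int colorOffset = positionOffset + positionSize;
        Console.WriteLine(positionSize); // 12
        Console.WriteLine(colorOffset);  // 12: the offset used for Color in step 2
        // A packed 4-byte color would place any further element at offset 16
        Console.WriteLine(colorOffset + 4); // 16
    }
}
```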

In step 3, vertices is a CustomVertexPositionColor array that will store the vertices of the cubic's faces; vertexBuffer stores the CustomVertexPositionColor array data for rendering; effect defines the rendering method. The following three variables, cameraPosition, view, and projection, will be used to initialize the camera. WireFrame specifies the device render state: because every face of the cubic is composed of two triangles, we disable culling so that the triangles can also be seen from the back.

In step 4, as we want to draw four faces of the cubic and every face is made up of two triangles, the number of CustomVertexPositionColor vertices is 4 * 2 * 3 = 24. After initializing the triangle vertices with position and color information, it is time to create the vertex buffer to store the defined vertex array and assign the vertex array to the vertex buffer for rendering. The next part of the code establishes the camera and instantiates the BasicEffect object.

In step 5, the code assigns the WireFrame state defined in the class field to disable culling so that you can see the graphics from any perspective. The effect settings rotate the cubic and color the vertices. After that, the iteration over the EffectPass collection draws the triangles of the cubic on screen using GraphicsDevice.DrawPrimitives(). Since the PrimitiveType is TriangleList, the third parameter of the DrawPrimitives() method is 8, the total count of triangles, which comes from the equation total vertex count / 3 = 24 / 3 = 8.

Calculating the normal vectors from a model vertex

In mathematics, a normal is a vector perpendicular to a plane or a surface. In computer graphics, normals are often used to calculate lighting, tilt angles, and collision detection. In this recipe, you will learn how to calculate normals from vertices.

Getting ready

A 3D model mesh is made up of triangles, and every triangle lies in a plane that has a normal vector, which is stored in the vertex. (You can find more information about normal vectors in any computer graphics or linear algebra book.) Some typical, realistic lighting techniques use the average normal vector of a vertex shared by several triangles. Calculating the normal of a triangle is not hard. Suppose the triangle has three points: A, B, and C. Choose point A as the root; vector AB equals B - A, vector AC equals C - A, and the normal vector N is the cross product of the two vectors AB and AC. Our example will illustrate the actual working code.
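The cross-product construction can be demonstrated without XNA. The following standalone sketch uses a minimal hand-rolled Vec3 struct (an assumption for illustration, standing in for XNA's Vector3) to compute N = AB x AC and verify that it is perpendicular to both edges:

```csharp
using System;

struct Vec3
{
    public float X, Y, Z;
    public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
    public static Vec3 operator -(Vec3 a, Vec3 b)
        => new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
    // Cross product: the result is perpendicular to both inputs
    public static Vec3 Cross(Vec3 a, Vec3 b)
        => new Vec3(a.Y * b.Z - a.Z * b.Y,
                    a.Z * b.X - a.X * b.Z,
                    a.X * b.Y - a.Y * b.X);
    public static float Dot(Vec3 a, Vec3 b)
        => a.X * b.X + a.Y * b.Y + a.Z * b.Z;
}

class TriangleNormal
{
    static void Main()
    {
        // Triangle A, B, C lying in the XZ plane
        var a = new Vec3(0, 0, 0);
        var b = new Vec3(1, 0, 0);
        var c = new Vec3(0, 0, 1);
        Vec3 ab = b - a, ac = c - a;
        Vec3 n = Vec3.Cross(ab, ac);
        // Perpendicularity check: both dot products are zero
        Console.WriteLine(Vec3.Dot(n, ab)); // 0
        Console.WriteLine(Vec3.Dot(n, ac)); // 0
        Console.WriteLine($"{n.X} {n.Y} {n.Z}"); // 0 -1 0
    }
}
```

Note that swapping the argument order of Cross() flips the sign of the normal, which is exactly the winding issue the recipe's IsNormalUp flag compensates for.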

How to do it…

The following steps will show you a handy way to get the normal vectors of a model:

  1. Create a Windows Phone Game project named NormalGeneration and change Game1.cs to NormalGenerationGame.cs.
  2. Add the GenerateNormalsForTriangleStrip()method for normal calculation to the NormalGenerationGame class:
    [code]
    private VertexPositionNormalTexture[]
    GenerateNormalsForTriangleStrip(
    VertexPositionNormalTexture[] vertices, short[] indices)
    {
    // Set the Normal factor of every vertex
    for (int i = 0; i < vertices.Length; i++)
    vertices[i].Normal = new Vector3(0, 0, 0);
    // Compute the length of the indices array
    int indiceLength = indices.Length;
    // The winding sign
    bool IsNormalUp = false;
    // Calculate the normal vector of every triangle
    for (int i = 2; i < indiceLength; i++)
    {
    Vector3 firstVec = vertices[indices[i - 1]].Position -
    vertices[indices[i]].Position;
    Vector3 secondVec = vertices[indices[i - 2]].Position -
    vertices[indices[i]].Position;
    Vector3 normal = Vector3.Cross(firstVec, secondVec);
    normal.Normalize();
    // Let the normal of every triangle face up
    if (IsNormalUp)
    normal *= -1;
    // Validate the normal vector
    if (!float.IsNaN(normal.X))
    {
    // Assign the generated normal vector to the
    // current triangle vertices
    vertices[indices[i]].Normal += normal;
    vertices[indices[i – 1]].Normal += normal;
    vertices[indices[i – 2]].Normal += normal;
    }
    // Swap the winding sign for the next triangle when
    // create mesh as TriangleStrip
    IsNormalUp = !IsNormalUp;
    }
    return vertices;
    }
    [/code]

How it works…

In step 2, this method receives the arrays of mesh vertices and indices. The indices tell the drawing system how to index and draw triangles from the vertices. The for loop starts from the third index; together with the two former indices, i - 1 and i - 2, each index forms a triangle, and the code uses those indices to create two vectors in the same plane, representing two sides of the current triangle.

Then we call the Vector3.Cross() method to compute the normal perpendicular to the triangle's plane. After that, we normalize the normal for accurate computations such as lighting. Since the indices are organized as a TriangleStrip, every newly added index generates a new triangle, but the winding of the new triangle is opposite to the previous one, so we reverse the direction of the new normal by multiplying by -1 when IsNormalUp is true.

Next, we validate the normal using float.IsNaN(), which returns a value indicating whether the specified number is not a number. When the two vectors used to compute the normal have exactly the same direction (a degenerate, zero-area triangle), the cross product is the zero vector, and normalizing it produces a Vector3 with three NaN values; this invalid data must be eliminated. Finally, the method returns the processed vertices with correct normal vectors.
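The degenerate case is easy to reproduce in isolation. In this standalone sketch (plain C#), the two edge vectors are parallel, so the cross product is the zero vector, and normalizing it divides by a zero length, which is exactly the NaN condition the float.IsNaN() check guards against:

```csharp
using System;

class DegenerateTriangle
{
    static void Main()
    {
        // Two parallel edge vectors (a degenerate, zero-area triangle)
        float ax = 1, ay = 2, az = 3;
        float bx = 2, by = 4, bz = 6; // b = 2 * a
        // Cross product of parallel vectors is the zero vector
        float cx = ay * bz - az * by;
        float cy = az * bx - ax * bz;
        float cz = ax * by - ay * bx;
        float length = (float)Math.Sqrt(cx * cx + cy * cy + cz * cz); // 0
        // Normalizing divides by the zero length: 0/0 produces NaN
        float nx = cx / length;
        Console.WriteLine(float.IsNaN(nx)); // True
    }
}
```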

Simulating an ocean on your CPU

Ocean simulation is an interesting and challenging topic in computer graphics rendering that has been covered in many papers and books. On Windows, it is easier to render a decent ocean or water body by relying on GPU shaders written in HLSL or Cg. In Windows Phone 7 XNA, custom HLSL shaders are not yet supported, so the only option is to do the simulation on the Windows Phone 7 CPU. In this recipe, you will learn how to realize an ocean effect on the Windows Phone 7 CPU.

How to do it…

The following steps demonstrate one approach to emulating an ocean on the Windows Phone CPU:

  1. Create a Windows Phone Game project named OceanGenerationCPU and change Game1.cs to OceanGenerationCPUGame.cs. Then add a new file, Ocean.cs, to the project and an image file to the content project.
  2. Define the Ocean class in the Ocean.cs file. Add the following lines to the class field as a data member:
    [code]
    // The graphics device object
    GraphicsDevice device;
    // Ocean width and height
    int PlainWidth = 64;
    int PlainHeight = 64;
    // Random object for randomly generating wave height
    Random random = new Random();
    // BasicEffect for drawing the ocean
    BasicEffect basicEffect;
    // Texture2D object loads the water texture
    Texture2D texWater;
    // Ocean vertex buffer
    VertexBuffer oceanVertexBuffer;
    // Ocean vertices
    VertexPositionNormalTexture[] oceanVertices;
    // The index array of the ocean vertices
    short[] oceanIndices;
    // Ocean index buffer
    IndexBuffer oceanIndexBuffer;
    // The max height of wave
    int MaxHeight = 2;
    // The wind speed
    float Speed = 0.02f;
    // Wave directions
    protected int[] directions;
    [/code]
  3. Next, implement the Ocean constructor as follows:
    [code]
    public Ocean(Texture2D texWater, GraphicsDevice device)
    {
    this.device = device;
    this.texWater = texWater;
    basicEffect = new BasicEffect(device);
    // Create the ocean vertices
    oceanVertices = CreateOceanVertices();
    // Create the ocean indices
    oceanIndices = CreateOceanIndices();
    // Generate the normals of ocean vertices for lighting
    oceanVertices =
    GenerateNormalsForTriangleStrip(oceanVertices,
    oceanIndices);
    // Create the vertex buffer and index buffer to load the
    // ocean vertices and indices
    CreateBuffers(oceanVertices, oceanIndices);
    }
    [/code]
  4. Define the Update() method of the Ocean class.
    [code]
    // Update the ocean height for the waving effect
    public void Update(GameTime gameTime)
    {
    for (int i = 0; i < oceanVertices.Length; i++)
    {
    oceanVertices[i].Position.Y += directions[i] * Speed;
    // Change direction if Y component has exceeded the
    // limit
    if (Math.Abs(oceanVertices[i].Position.Y) > MaxHeight)
    {
    oceanVertices[i].Position.Y =
    Math.Sign(oceanVertices[i].Position.Y) *
    MaxHeight;
    directions[i] *= -1;
    }
    }
    oceanVertices =
    GenerateNormalsForTriangleStrip(oceanVertices,
    oceanIndices);
    }
    [/code]
  5. Implement the Draw() method of the Ocean class:
    [code]
    public void Draw(Matrix view, Matrix projection)
    {
    // Draw Ocean
    basicEffect.World = Matrix.Identity;
    basicEffect.View = view;
    basicEffect.Projection = projection;
    basicEffect.Texture = texWater;
    basicEffect.TextureEnabled = true;
    basicEffect.EnableDefaultLighting();
    basicEffect.AmbientLightColor = Color.Blue.ToVector3();
    basicEffect.SpecularColor = Color.White.ToVector3();
    foreach (EffectPass pass in
    basicEffect.CurrentTechnique.Passes)
    {
    pass.Apply();
    oceanVertexBuffer.SetData<VertexPositionNormalTexture>
    (oceanVertices);
    device.SetVertexBuffer(oceanVertexBuffer, 0);
    device.Indices = oceanIndexBuffer;
    device.DrawIndexedPrimitives(
    PrimitiveType.TriangleStrip, 0, 0, PlainWidth *
    PlainHeight, 0,
    PlainWidth * 2 * (PlainHeight - 1) - 2);
    // This is important, because you need to update the
    // vertices
    device.SetVertexBuffer(null);
    }
    }
    [/code]
  6. From this step, we will use the Ocean class to actually draw the ocean on the Windows Phone 7 CPU. Please add the following code to the OceanGenerationCPUGame class field:
    [code]
    // Ocean water texture
    Texture2D texWater;
    // Ocean object
    Ocean ocean;
    // Camera view and projection matrices
    Matrix view;
    Matrix projection;
    [/code]
  7. Initialize the camera in the Initialize() method:
    [code]
    Vector3 camPosition = new Vector3(80, 20, -100);
    view = Matrix.CreateLookAt(camPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000);
    [/code]
  8. Load the ocean water texture and initialize the ocean object. Insert the code into the LoadContent() method:
    [code]
    texWater = Content.Load<Texture2D>("Water");
    ocean = new Ocean(texWater, GraphicsDevice);
    [/code]
  9. Update the ocean state. Add the following line to the Update() method:
    [code]
    ocean.Update(gameTime);
    [/code]
  10. Draw the ocean on the Windows Phone 7 screen.
    [code]
    ocean.Draw(view, projection);
    [/code]
  11. Now, build and run the application. You will see the ocean as shown in the following screenshot:

How it works…

In step 2, PlainWidth and PlainHeight define the dimensions of the ocean; the random object will be used to generate a random height for every ocean vertex; texWater loads the ocean texture; oceanVertexBuffer will store the ocean vertices; oceanVertices is the VertexPositionNormalTexture array holding all the ocean vertices; oceanIndices holds the indices of the ocean vertices. It is a short array because XNA only supports the 16-bit index format. oceanIndexBuffer is the IndexBuffer that stores the ocean indices; directions indicates the waving direction of every vertex.
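The choice of short for the index array can be sanity-checked with simple arithmetic. In this standalone sketch (plain C#; note that the graphics hardware treats 16-bit indices as unsigned), the 64 x 64 ocean grid fits well within the 16-bit index range:

```csharp
using System;

class IndexFormat
{
    static void Main()
    {
        // A 16-bit index can address at most 65,536 distinct vertices
        int maxIndexable = ushort.MaxValue + 1;
        // The ocean grid in this recipe
        int oceanVertexCount = 64 * 64;
        Console.WriteLine(maxIndexable);                     // 65536
        Console.WriteLine(oceanVertexCount);                 // 4096
        Console.WriteLine(oceanVertexCount <= maxIndexable); // True
    }
}
```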

In step 3, for the constructor, we will discuss the CreateOceanVertices(), CreateOceanIndices(), GenerateNormalsForTriangleStrip(), and the CreateBuffers() method.

  1. Define the CreateOceanVertices() method:
    [code]
    // Create the ocean vertices
    private VertexPositionNormalTexture[] CreateOceanVertices()
    {
    // Create the local ocean vertices
    VertexPositionNormalTexture[] oceanVertices =
    new VertexPositionNormalTexture[PlainWidth *
    PlainHeight];
    directions = new int[PlainHeight * PlainWidth];
    // Initialize the ocean vertices and wave direction array
    int i = 0;
    for (int z = 0; z < PlainHeight; z++)
    {
    for (int x = 0; x < PlainWidth; x++)
    {
    // Generate the vertex position with random
    // height
    Vector3 position = new
    Vector3(x, random.Next(0, 4), -z);
    Vector3 normal = new Vector3(0, 0, 0);
    Vector2 texCoord =
    new Vector2((float)x / PlainWidth,
    (float)z / PlainWidth);
    // Randomly set the direction of the vertex up or
    // down
    directions[i] = position.Y > 2 ? -1 : 1;
    // Set the position, normal and texCoord to every
    // element of ocean vertex array
    oceanVertices[i++] = new
    VertexPositionNormalTexture(position, normal,
    texCoord);
    }
    }
    return oceanVertices;
    }
    [/code]
    First, the code creates an array that stores all the vertices for the ocean, PlainWidth * PlainHeight in total. Two nested for loops then initialize the information for each ocean vertex. The height of each vertex is randomly generated; the normal starts at zero and is recalculated in the Update() method; texCoord specifies how the texture maps onto the ocean vertices, so that the water texture repeats every PlainWidth vertices. The test on position.Y determines the initial wave direction of each vertex. Finally, the last line of the loop initializes the vertices one by one.
  2. The CreateOceanIndices() method: when the ocean vertices are ready, it is time to define the indices array to build the ocean mesh in triangle strip mode.
    [code]
    // Create the ocean indices
    private short[] CreateOceanIndices()
    {
    // Define the resolution of ocean indices
    short width = (short)PlainWidth;
    short height = (short)PlainHeight;
    short[] oceanIndices =
    new short[(width) * 2 * (height - 1)];
    short i = 0;
    short z = 0;
    // Create the indices row by row
    while (z < height - 1)
    {
    for (int x = 0; x < width; x++)
    {
    oceanIndices[i++] = (short)(x + z * width);
    oceanIndices[i++] = (short)(x + (z + 1) * width);
    }
    z++;
    if (z < height - 1)
    {
    for (short x = (short)(width - 1); x >= 0; x--)
    {
    oceanIndices[i++] = (short)
    (x + (z + 1) * width);
    oceanIndices[i++] = (short)(x + z * width);
    }
    }
    z++;
    }
    return oceanIndices;
    }
    [/code]
  3. In this code, we store all the indices of the ocean vertices; each row of the strip uses PlainWidth * 2 indices. With PlainHeight rows of vertices there are PlainHeight - 1 rows of triangles, so the total index count is PlainWidth * 2 * (PlainHeight - 1). In TriangleStrip drawing mode, each index after the first two defines a new triangle together with its previous two indices.

The z variable tracks the current row, starting from 0; the first row is created from left to right. Then z is incremented to move to the next row, which is created from right to left. The process repeats, alternating direction, until all rows are built, when z reaches PlainHeight - 1.
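
As a worked illustration (not from the book), here is the index order this loop produces for a tiny 3 x 3 grid, that is, PlainWidth = PlainHeight = 3:

[code]
// width = 3, height = 3, so oceanIndices holds 3 * 2 * (3 - 1) = 12 entries
// z = 0, built left to right:  0,3  1,4  2,5
// z = 1, built right to left:  8,5  7,4  6,3
// Final strip order: 0 3 1 4 2 5 8 5 7 4 6 3
[/code]

Note how the reversed rows keep the strip connected, so the whole grid can be drawn as a single triangle strip.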

  1. GenerateNormalsForTriangleStrip() method: we use this method to calculate the normal of each vertex of the ocean mesh triangles. For a more detailed explanation, please refer to the Calculating the normal vectors from a model vertex recipe.
  2. CreateBuffers() method: this method creates the vertex buffer for the ocean vertices and the index buffer for the ocean indices, used to render the ocean on the Windows Phone 7 GPU. The code is as follows:
    [code]
    // Create the vertex buffer and index buffer for ocean
    // vertices and indices
    private void CreateBuffers(VertexPositionNormalTexture[]
    vertices, short[] indices)
    {
    oceanVertexBuffer = new VertexBuffer(device,
    VertexPositionNormalTexture.VertexDeclaration,
    vertices.Length, BufferUsage.WriteOnly);
    oceanVertexBuffer.SetData(vertices);
    oceanIndexBuffer = new IndexBuffer(device, typeof(short),
    indices.Length, BufferUsage.WriteOnly);
    oceanIndexBuffer.SetData(indices);
    }
    [/code]

In step 4, the code iterates over all the ocean vertices and changes the height of every vertex. Once the absolute value of the height is greater than MaxHeight, the direction of that vertex is reversed to simulate the wave effect. After the ocean vertices are updated, we need to recompute the vertex normals, since the vertex positions have changed.
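
The Ocean.Update() method itself does not appear in this excerpt. A minimal sketch of the logic just described might look like the following; the MaxHeight constant and the 0.01f step size are illustrative assumptions, not the book's actual values:

[code]
// Sketch only: the recipe's real Update() may differ in detail
public void Update(GameTime gameTime)
{
    for (int i = 0; i < oceanVertices.Length; i++)
    {
        // Move each vertex up or down along its wave direction
        oceanVertices[i].Position.Y += directions[i] * 0.01f;
        // Reverse the direction once the height limit is passed
        if (Math.Abs(oceanVertices[i].Position.Y) > MaxHeight)
        {
            directions[i] *= -1;
        }
    }
    // Recompute the normals because the positions changed
    GenerateNormalsForTriangleStrip(oceanVertices, oceanIndices);
}
[/code]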

In step 5, when rendering the 3D object manually with a mapped texture, basicEffect.TextureEnabled should be true and the Texture2D object should be assigned to the BasicEffect.Texture property. Then, we enable the lighting to highlight the ocean. Finally, the foreach loop is used to draw the ocean on the Windows Phone 7 GPU. Here, we should set the updated ocean vertices to the vertex buffer in every frame.
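
The Ocean.Draw() method referred to in step 5 is also not repeated here. A sketch consistent with the description above, assuming a BasicEffect field named basicEffect alongside the device field used in CreateBuffers(), could be:

[code]
// Sketch only: assumes basicEffect and device fields exist
public void Draw(Matrix view, Matrix projection)
{
    basicEffect.World = Matrix.Identity;
    basicEffect.View = view;
    basicEffect.Projection = projection;
    // Map the water texture onto the ocean
    basicEffect.TextureEnabled = true;
    basicEffect.Texture = texWater;
    // Turn on the light to highlight the ocean
    basicEffect.EnableDefaultLighting();
    // Set the updated ocean vertices to the vertex buffer every frame
    device.SetVertexBuffer(null);
    oceanVertexBuffer.SetData(oceanVertices);
    device.SetVertexBuffer(oceanVertexBuffer);
    device.Indices = oceanIndexBuffer;
    foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
    {
        pass.Apply();
        device.DrawIndexedPrimitives(PrimitiveType.TriangleStrip,
            0, 0, oceanVertices.Length, 0, oceanIndices.Length - 2);
    }
}
[/code]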

Windows Phone Entering the Exciting World of 3D Models #part1

Controlling a model with the help of trackball rotation

Rotating a model in any direction on Windows Phone 7 gives the game player extra ways to view it. For programmers, a trackball viewer helps check whether the model exported from the modeling software works well. In this recipe, you will learn how to control a model with trackball rotation.

How to do it…

Follow these steps to control a model in trackball rotation:

  1. Create a Windows Phone Game named ModelTrackBall, change Game1.cs to ModelTrackBallGame.cs. Then add the tree.fbx model file to the content project.
  2. Declare the variables for rotating and rendering the model in the ModelTrackBallGame class fields:
    [code]
    // Tree model
    Model modelTree;
    // Tree model world position
    Matrix worldTree = Matrix.Identity;
    // Camera Position
    Vector3 cameraPosition;
    // Camera look at target
    Vector3 cameraTarget;
    // Camera view matrix
    Matrix view;
    // Camera projection matrix
    Matrix projection;
    // Angle for trackball rotation
    Vector2 angle;
    [/code]
  3. Initialize the camera and enable the GestureType.FreeDrag. Add the code into the Initialize() method:
    [code]
    // Initialize the camera
    cameraPosition = new Vector3(0, 40, 40);
    cameraTarget = Vector3.Zero + new Vector3(0, 10, 0);
    view = Matrix.CreateLookAt(cameraPosition, cameraTarget,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000.0f);
    // Instance the angle
    angle = new Vector2();
    // Enable the FreeDrag gesture
    TouchPanel.EnabledGestures = GestureType.FreeDrag;
    [/code]
  4. Rotate the tree model. Please insert the code into the Update() method:
    [code]
    // Check if the gesture is enabled or not
    if (TouchPanel.IsGestureAvailable)
    {
    // Read the on-going gesture
    GestureSample gesture = TouchPanel.ReadGesture();
    if (gesture.GestureType == GestureType.FreeDrag)
    {
    // If the gesture is FreeDrag, read the delta value
    // for model rotation
    angle.Y = gesture.Delta.X * 0.001f;
    angle.X = gesture.Delta.Y * 0.001f;
    }
    }
    // Rotate the tree model around axis Y
    worldTree *= Matrix.CreateRotationY(angle.Y);
    // Rotate the tree model around axis X
    worldTree *= Matrix.CreateRotationX(angle.X);
    [/code]
  5. Render the rotating tree model to the screen. First, we define the DrawModel() method:
    [code]
    // Draw the model on screen
    public void DrawModel(Model model, Matrix world, Matrix view,
    Matrix projection)
    {
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);
    foreach (ModelMesh mesh in model.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.EnableDefaultLighting();
    effect.World = transforms[mesh.ParentBone.Index] *
    world;
    effect.View = view;
    effect.Projection = projection;
    }
    mesh.Draw();
    }
    }
    [/code]
  6. Then add the reference to the Draw() method:
    [code]
    DrawModel(modelTree, worldTree, view, projection);
    [/code]
  7. Build and run the application. It should run as shown in the following screenshots:
    trackball rotation

How it works…

In step 2, modelTree will load and store the tree model for rendering; worldTree represents the world position of the model tree. The following four variables, cameraPosition, cameraTarget, view, and projection are responsible for initializing and manipulating the camera; the last variable angle specifies the angle value when GestureType.FreeDrag takes place.

In step 3, we define the camera world position and look-at target in the first two lines. Then we create the view and projection matrices for the camera. After that, we initialize the angle object and enable the FreeDrag gesture using TouchPanel.EnabledGestures.

In step 4, the first part of the code, before the rotation, reads the delta value of the FreeDrag gesture. We use TouchPanel.IsGestureAvailable to check whether a gesture is available, then call TouchPanel.ReadGesture() to get the ongoing gesture. After that, we determine whether the gesture is FreeDrag; if so, we assign Delta.X to angle.Y for rotating the model around the Y-axis and Delta.Y to angle.X for rotating it around the X-axis. Once the latest angle values are known, it is time to rotate the tree model: we use Matrix.CreateRotationY and Matrix.CreateRotationX to rotate it around the Y- and X-axes.
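
One optional tweak, which is not part of the recipe: because angle keeps its last delta after the finger lifts, the model continues to spin between drags. Clearing the angle at the top of the Update() method makes the rotation track the drag only:

[code]
// Optional (assumption, not in the recipe): clear the angle each
// frame so the model rotates only while a drag is in progress
angle = Vector2.Zero;
[/code]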

Translating the model in world coordinates

Translating a model in the 3D world is a basic operation of Windows Phone 7 games; you can move a game object from one place to another. Jumping, running, and crawling are all based on translation. In this recipe, you will learn how to do this.

How to do it…

The following steps will show you how to perform a basic and useful operation on 3D models, translation:

  1. Create a Windows Phone Game project named TranslateModel, change Game1.cs to TranslateModelGame.cs. Next, add the model file ball.fbx and font file gameFont.spritefont to the content project.
  2. Declare the variables for ball translation. Add the following lines to the TranslateModelGame class:
    [code]
    // Sprite font for showing the notice message
    SpriteFont font;
    // The beginning offset at axis X
    float begin;
    // The ending offset at axis X
    float end;
    // the translation value at axis X
    float translation;
    // Ball model
    Model modelBall;
    // Ball model position
    Matrix worldBall = Matrix.Identity;
    // Camera position
    Vector3 cameraPosition;
    // Camera view and projection matrix
    Matrix view;
    Matrix projection;
    // Indicate the screen tapping state
    bool Tapped;
    [/code]
  3. Initialize the camera, and define the start and end position for the ball. Insert the following code to the Initialize() method:
    [code]
    // Initialize the camera position
    cameraPosition = new Vector3(0, 5, 10);
    // Initialize the camera view and projection matrices
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000.0f);
    // Define the offset of beginning position from Vector.Zero at
    // axis X.
    begin = -5;
    // Define the offset of ending position from Vector.Zero at
    // axis X.
    end = 5;
    // Translate the ball to the beginning position
    worldBall *= Matrix.CreateTranslation(begin, 0, 0);
    [/code]
  4. In this step, you will translate the model smoothly when you touch the phone screen. Add the following code into the Update() method:
    [code]
    // Check the screen is tapped
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    if (GraphicsDevice.Viewport.Bounds.Contains
    ((int)touches[0].Position.X, (int)touches[0].
    Position.Y))
    {
    Tapped = true;
    }
    }
    // If the screen is tapped, move the ball in a straight line
    // along the axis X
    if (Tapped)
    {
    begin = MathHelper.SmoothStep(begin, end, 0.1f);
    translation = begin;
    worldBall = Matrix.CreateTranslation(translation, 0, 0);
    }
    [/code]
  5. Draw the ball model and display the instructions on screen. Paste the following code to the Draw() method:
    [code]
    // Draw the ball model
    DrawModel(modelBall, worldBall, view, projection);
    // Draw the text
    spriteBatch.Begin();
    spriteBatch.DrawString(font, "Please Tap the Screen",
    new Vector2(0, 0), Color.White);
    spriteBatch.End();
    [/code]
  6. We still need to add the DrawModel() method to the TranslateModelGame class:
    [code]
    public void DrawModel(Model model, Matrix world, Matrix view,
    Matrix projection)
    {
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);
    foreach (ModelMesh mesh in model.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.EnableDefaultLighting();
    effect.World = transforms[mesh.ParentBone.Index] *
    world;
    effect.View = view;
    effect.Projection = projection;
    }
    mesh.Draw();
    }
    }
    [/code]
  7. Now build and run the application. It will look similar to the following screenshots:
    world coordinates

How it works…

In step 2, the SpriteFont is used to render the text on screen; begin and end specify the offsets on the X-axis; translation is the actual value for the ball's translation along the X-axis; modelBall loads and stores the ball model; worldBall represents the ball's world position in 3D; the following three variables, cameraPosition, view, and projection, are used to initialize the camera. The bool value Tapped indicates whether the screen was tapped.

In step 4, the first part, before if (Tapped), checks whether the tapped position lies inside the screen bounds; if it does, Tapped is set to true. Once the screen is tapped, MathHelper.SmoothStep() moves the begin value toward the end value defined previously, frame by frame, using cubic interpolation, and the latest value is assigned to the translation variable. Matrix.CreateTranslation() then generates a translation matrix to move the ball model in the 3D world.
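
MathHelper.SmoothStep(begin, end, 0.1f) behaves roughly like a clamped Lerp with a cubic ease applied to the interpolation amount. The following sketch shows the idea; it is an approximation for illustration, not the framework source:

[code]
// Approximately what MathHelper.SmoothStep(v1, v2, amount) computes
float t = MathHelper.Clamp(0.1f, 0f, 1f);
t = t * t * (3f - 2f * t);          // cubic ease-in/ease-out
float result = MathHelper.Lerp(begin, end, t);
[/code]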

Scaling a model

In order to change the scale of a model, you can adjust the model to fit the scene size or construct special effects, such as when a little sprite drinks magical water and suddenly becomes much stronger and bigger. In this recipe, you will learn how to change the model size at runtime.

How to do it…

Follow these steps to scale a 3D model:

  1. Create a Windows Phone Game project named ScaleModel, change Game1.cs to ScaleModelGame.cs. Then add the model file ball.fbx and font file gameFont.spritefont to the content project.
  2. Declare the necessary variables. Add the following lines to the ScaleModel class field:
    [code]
    // SpriteFont for showing the scale value on screen
    SpriteFont font;
    // Ball model
    Model modelBall;
    // Ball model world position
    Matrix worldBall = Matrix.Identity;
    // Camera Position
    Vector3 cameraPosition;
    // Camera view matrix
    Matrix view;
    // Camera projection matrix
    Matrix projection;
    // Scale factor
    float scale = 1;
    // The size the model will scale to
    float NewSize = 5;
    [/code]
  3. Initialize the camera. Insert the following code into the Initialize() method:
    [code]
    // Initialize the camera
    cameraPosition = new Vector3(0, 5, 10);
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000.0f);
    [/code]
  4. Load the ball model and game font. Paste the following code into the LoadContent() method:
    [code]
    // Load the ball model
    modelBall = Content.Load<Model>("ball");
    font = Content.Load<SpriteFont>("gameFont");
    [/code]
  5. This step will change the scale value of the ball model to the designated size. Add the following lines to the Update() method:
    [code]
    scale = MathHelper.SmoothStep(scale, NewSize, 0.1f);
    worldBall = Matrix.Identity;
    worldBall *= Matrix.CreateScale(scale);
    [/code]
  6. Draw the ball and font on the Windows Phone 7 screen. Add the following code to the Draw() method:
    [code]
    // Draw the ball
    DrawModel(modelBall, worldBall, view, projection);
    // Draw the scale value
    spriteBatch.Begin();
    spriteBatch.DrawString(font, "scale: " + scale.ToString(), new
    Vector2(0, 0), Color.White);
    spriteBatch.End();
    [/code]
  7. The DrawModel() method should be as follows:
    [code]
    // Draw the model on screen
    public void DrawModel(Model model, Matrix world, Matrix view,
    Matrix projection)
    {
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);
    foreach (ModelMesh mesh in model.Meshes)
    {
    foreach (BasicEffect effect in mesh.Effects)
    {
    effect.EnableDefaultLighting();
    effect.World = transforms[mesh.ParentBone.Index] *
    world;
    effect.View = view;
    effect.Projection = projection;
    }
    mesh.Draw();
    }
    }
    [/code]
  8. Now, build and run the application. It will run similar to the following screenshots:
    Scaling a model

How it works…

In step 2, the font variable is responsible for drawing the scale value on screen; modelBall loads the ball model; worldBall is the key matrix that specifies the world position and scale of the ball model; scale stands for the size of the ball model, and its initial value of 1 means the ball is at its original size; NewSize indicates the new size the ball model will scale to.

In step 5, the MathHelper.SmoothStep() method uses cubic interpolation to change the current scale to the new value smoothly. Before calling the Matrix.CreateScale() method to create the scale matrix and multiplying it into the worldBall matrix, we must reset worldBall to Matrix.Identity; otherwise, the new scale would compound on the scale applied in the previous frame.
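
The effect of the reset is easy to see by writing out two consecutive frames:

[code]
// With the reset in step 5, each frame applies only the latest scale:
//   Frame 1: worldBall = Identity * CreateScale(s1)
//   Frame 2: worldBall = Identity * CreateScale(s2)
// Without the reset, the scales would compound:
//   Frame 2: worldBall = CreateScale(s1) * CreateScale(s2)   // overall s1 * s2
[/code]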

Viewing the model hierarchy information

In Windows Phone 7 3D game programming, models come from modeling software such as 3DS MAX or Maya, which are used frequently. Sometimes you do not want to control a complete animation, just part of it; at that moment, you need to know where the subdivisions are. The models are organized as a tree, and you can find a specified mesh or bone by searching that tree; however, there is no need to write the algorithm, as the XNA framework has done it for you. As a handy reference, you should know the hierarchy of the model and the name of every part. In this recipe, you will learn how to get the model hierarchy information.

How to do it…

  1. Create a Windows Phone Game project named ModelHierarchy, and change Game1.cs to ModelHierarchyGame.cs. Then, add the tank.fbx model file from the XNA APP sample to the content project. After that, create a content pipeline extension library named ModelHierarchyProcessor and rename ContentProcessor1.cs to ModelHierarchyProcessor.cs.
  2. Create the ModelHierarchyProcessor class in the ModelHierarchyProcessor.cs file.
    [code]
    [ContentProcessor(DisplayName = "ModelHierarchyProcessor")]
    public class ModelHierarchyProcessor : ModelProcessor
    {
    public override ModelContent Process(NodeContent input,
    ContentProcessorContext context)
    {
    context.Logger.LogImportantMessage(
    "---- Model Bone Hierarchy ----");
    // Show the model hierarchy
    DemonstrateNodeTree(input, context, “”);
    return base.Process(input, context);
    }
    private void DemonstrateNodeTree(NodeContent input,
    ContentProcessorContext context, string start)
    {
    // Output the name and type of current model part
    context.Logger.LogImportantMessage(
    start + "- Name: [{0}] - {1}", input.Name,
    input.GetType().Name);
    // Iterate all of the sub content of current
    //NodeContent
    foreach (NodeContent node in input.Children)
    DemonstrateNodeTree(node, context, start + "- ");
    }
    }
    [/code]
  3. Add ModelHierarchyProcessor.dll to the content project's reference list and set the Content Processor of the tank model to ModelHierarchyProcessor, as shown in the following screenshot:
    ModelHierarchyProcessor
  4. Build the ModelHierarchy. In the Output window, the model hierarchy information will show up as follows:
    [code]
    ---- Model Bone Hierarchy ----
    - Name: [tank_geo] - MeshContent
    - - Name: [r_engine_geo] - MeshContent
    - - - Name: [r_back_wheel_geo] - MeshContent
    - - - Name: [r_steer_geo] - MeshContent
    - - - - Name: [r_front_wheel_geo] - MeshContent
    - - Name: [l_engine_geo] - MeshContent
    - - - Name: [l_back_wheel_geo] - MeshContent
    - - - Name: [l_steer_geo] - MeshContent
    - - - - Name: [l_front_wheel_geo] - MeshContent
    - - Name: [turret_geo] - MeshContent
    - - - Name: [canon_geo] - MeshContent
    - - - Name: [hatch_geo] - MeshContent
    Compare it to the model information in 3DS MAX, as shown in the following screenshot; they should match completely. To look up this information in 3DS MAX, click Tools | Open Container Explorer.
    Tools | Open Container Explorer

How it works…

In step 2, ModelHierarchyProcessor inherits directly from ModelProcessor because we just need to print out the model hierarchy. In the DemonstrateNodeTree() method, which is the key method showing the model mesh and bone tree, context.Logger.LogImportantMessage() shows the name and type of the current NodeContent. The NodeContent is mostly MeshContent or BoneContent during the model processing phase when building the main project. The recursion then checks whether the current NodeContent has child node contents; if so, we process the children one by one at a lower level. The Process() method calls DemonstrateNodeTree() before returning the processed ModelContent.

Highlighting individual meshes of a model

A 3D game model is made up of different meshes. In real 3D game development, sometimes you want to locate a moving mesh and see its bounding wireframe. This will help you control the designated mesh more accurately. In this recipe, you will learn how to draw and highlight the meshes of a model individually.

How to do it…

The following steps will help you understand how to highlight different parts of a model for better comprehension of model vertex structure:

  1. Create a Windows Phone Game project named HighlightModelMesh and change Game1.cs to HighlightModelMeshGame.cs. Then, add a new MeshInfo.cs file to the project. Next, add the model file tank.fbx and font file gameFont.spritefont to the content project. After that, create a Content Pipeline Extension Library named MeshVerticesProcessor and replace ContentProcessor1.cs with MeshVerticesProcessor.cs.
  2. Define the MeshVerticesProcessor in MeshVerticesProcessor.cs of MeshVerticesProcessor Content Pipeline Extension Library project. The processor is the extension of ModelProcessor:
    [code]
    // This custom processor attaches the vertex position data of
    // every mesh to a model's Tag property.
    [ContentProcessor]
    public class MeshVerticesProcessor : ModelProcessor
    [/code]
  3. In the MeshVerticesProcessor class, we add a tagData dictionary in the class field:
    [code]
    Dictionary<string, List<Vector3>> tagData =
    new Dictionary<string, List<Vector3>>();
    [/code]
  4. Next, we define the Process() method:
    [code]
    // The main method in charge of processing the content.
    public override ModelContent Process(NodeContent input,
    ContentProcessorContext context)
    {
    FindVertices(input);
    ModelContent model = base.Process(input, context);
    model.Tag = tagData;
    return model;
    }
    [/code]
  5. Build the MeshVerticesProcessor project. Add a reference to MeshVerticesProcessor.dll in the content project and change the Content Processor of tank.fbx, as shown in the following screenshot:
    Content Processor
  6. Define the MeshInfo class in MeshInfo.cs.
    [code]
    public class MeshInfo
    {
    public string MeshName;
    public List<Vector3> Positions;
    public MeshInfo(string name, List<Vector3> positions)
    {
    this.MeshName = name;
    this.Positions = positions;
    }
    }
    [/code]
  7. From this step, we will start to render the individual wireframe mesh and the whole tank object on the Windows Phone 7 screen. First, declare the necessary variable in the HighlightModelMeshGame class fields:
    [code]
    // SpriteFont for showing the model mesh name
    SpriteFont font;
    // Tank model
    Model modelTank;
    // Tank model world position
    Matrix worldTank = Matrix.Identity;
    // Camera position
    Vector3 cameraPosition;
    // Camera view and projection matrix
    Matrix view;
    Matrix projection;
    // Indicate the screen tapping state
    bool Tapped;
    // The model mesh index in MeshInfo list
    int Index = 0;
    // Dictionary for mesh name and vertices
    Dictionary<string, List<Vector3>> meshVerticesDictionary;
    // Store the current mesh vertices
    List<Vector3> meshVertices;
    // Mesh Info list
    List<MeshInfo> MeshInfoList;
    // Vertex array for drawing the mesh vertices on screen
    VertexPositionColor[] vertices;
    // Vertex buffer store the vertex buffer
    VertexBuffer vertexBuffer;
    // The WireFrame render state
    static RasterizerState WireFrame = new RasterizerState
    {
    FillMode = FillMode.WireFrame,
    CullMode = CullMode.None
    };
    // The normal render state
    static RasterizerState Normal = new RasterizerState
    {
    FillMode = FillMode.Solid,
    CullMode = CullMode.None
    };
    [/code]
  8. Initialize the camera. Insert the code into the Initialize() method:
    [code]
    cameraPosition = new Vector3(35, 15, 35);
    // Initialize the camera view and projection matrices
    view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero,
    Vector3.Up);
    projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4, GraphicsDevice.Viewport.AspectRatio,
    0.1f, 1000.0f);
    meshVertices = new List<Vector3>();
    [/code]
  9. Load the tank model and font in the game. Then, map the model Tag dictionary data with mesh info to MeshInfo list. Insert the following code to the LoadContent() method:
    [code]
    // Create a new SpriteBatch, which can be used to draw
    // textures.
    spriteBatch = new SpriteBatch(GraphicsDevice);
    // Load the font
    font = Content.Load<SpriteFont>("gameFont");
    // Load the tank model
    modelTank = Content.Load<Model>("tank");
    // Get the dictionary data with mesh name and its vertices
    meshVerticesDictionary = (Dictionary<string, List<Vector3>>)
    modelTank.Tag;
    // Get the mapped MeshInfo list
    MeshInfoList = MapMeshDictionaryToList(meshVerticesDictionary);
    // Set the mesh for rendering
    SetMeshVerticesToVertexBuffer(Index);
    [/code]
  10. Change the mesh for rendering. Add the following code to the Update() method:
    [code]
    // Check the screen is tapped and change the rendering mesh
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 && touches[0].State ==
    TouchLocationState.Pressed)
    {
    if (GraphicsDevice.Viewport.Bounds.Contains(
    (int)touches[0].Position.X, (int)touches[0].
    Position.Y))
    {
    // Clamp the Index value within the number of model
    // meshes
    Index = ++Index % MeshInfoList.Count;
    // Set the mesh index for rendering
    SetMeshVerticesToVertexBuffer(Index);
    }
    }
    [/code]
  11. Draw the tank mode, current mesh, and its name on the Windows Phone 7 screen. Paste the following code into the Draw() method:
    [code]
    GraphicsDevice device = graphics.GraphicsDevice;
    device.Clear(Color.CornflowerBlue);
    // Set the render state for drawing the tank model
    device.BlendState = BlendState.Opaque;
    device.RasterizerState = Normal;
    device.DepthStencilState = DepthStencilState.Default;
    DrawModel(modelTank, worldTank, view, projection);
    // Set the render state for drawing the current mesh
    device.RasterizerState = WireFrame;
    device.DepthStencilState = DepthStencilState.Default;
    // Declare a BasicEffect object to draw the mesh wireframe
    BasicEffect effect = new BasicEffect(device);
    effect.View = view;
    effect.Projection = projection;
    // Enable the vertex color
    effect.VertexColorEnabled = true;
    // Begin to draw
    effect.CurrentTechnique.Passes[0].Apply();
    // Set the VertexBuffer to GraphicDevice
    device.SetVertexBuffer(vertexBuffer);
    // Draw the mesh in TriangleList mode
    device.DrawPrimitives(PrimitiveType.TriangleList, 0,
    meshVertices.Count / 3);
    // Draw the mesh name on screen
    spriteBatch.Begin();
    spriteBatch.DrawString(font, "Current Mesh Name: " +
    MeshInfoList[Index].MeshName, new Vector2(0, 0),
    Color.White);
    spriteBatch.End();
    [/code]
  12. Now, build and run the application. It should run as shown in the following screenshots. When you tap the screen, the current mesh changes to another.
    Highlighting individual meshes

How it works…

In step 3, the tagData receives the mesh name as the key and the corresponding mesh vertices as the value.

In step 4, the input, a NodeContent object, represents the root NodeContent of the input model. The key method called is FindVertices(), which iterates over the meshes in the input model and stores each mesh's vertices in tagData under the mesh name. The method should be as follows:

[code]
// Extracting a list of all the vertex positions in
// a model.
void FindVertices(NodeContent node)
{
// Transform the current NodeContent to MeshContent
MeshContent mesh = node as MeshContent;
if (mesh != null)
{
string meshName = mesh.Name;
List<Vector3> meshVertices = new List<Vector3>();
// Look up the absolute transform of the mesh.
Matrix absoluteTransform = mesh.AbsoluteTransform;
// Loop over all the pieces of geometry in the mesh.
foreach (GeometryContent geometry in mesh.Geometry)
{
// Loop over all the indices in this piece of
// geometry. Every group of three indices
// represents one triangle.
foreach (int index in geometry.Indices)
{
// Look up the position of this vertex.
Vector3 vertex =
geometry.Vertices.Positions[index];
// Transform from local into world space.
vertex = Vector3.Transform(vertex,
absoluteTransform);
// Store this vertex.
meshVertices.Add(vertex);
}
}
tagData.Add(meshName, meshVertices);
}
// Recursively scan over the children of this node.
foreach (NodeContent child in node.Children)
{
FindVertices(child);
}
}
[/code]

The first line casts the current NodeContent to MeshContent so that we can get the mesh vertices. If the current NodeContent is a MeshContent, we declare the meshName variable to hold the current mesh name and meshVertices to save the mesh vertices, and store the absolute world transformation matrix in the absoluteTransform matrix using MeshContent.AbsoluteTransform. The foreach loop then iterates over every indexed vertex of the mesh geometries, transforms it from object coordinates to world coordinates, and stores it in meshVertices. When all the vertices of the current mesh are processed, we add meshVertices to the tagData dictionary with meshName as the key. The last part recursively processes the children of the current node.

In step 6, the MeshInfo class assembles the mesh name and its vertices.
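The MeshInfo class itself is not listed in this step; a minimal sketch, consistent with how it is used later in the recipe (a mesh name plus its vertex positions, exposed through a Positions property), might look like the following. Treat the exact member layout as an assumption based on the surrounding code:

[code]
// Sketch of the MeshInfo class assumed by this recipe: it pairs a
// mesh name with the mesh's world-space vertex positions. Name and
// Positions match how MeshInfoList is used in step 9.
public class MeshInfo
{
    // Name of the mesh, used for on-screen display
    public string Name { get; private set; }
    // World-space vertex positions of the mesh
    public List<Vector3> Positions { get; private set; }

    public MeshInfo(string name, List<Vector3> positions)
    {
        Name = name;
        Positions = positions;
    }
}
[/code]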

In step 7, font will be used to render the current mesh name on screen; modelTank loads the tank model; worldTank indicates the tank's world position; Index determines which mesh will be rendered; meshVerticesDictionary stores the model's Tag information, which maps mesh names to mesh vertices; meshVertices saves the vertices of the current mesh for rendering; MeshInfoList will hold the mesh information mapped from meshVerticesDictionary; vertices is the VertexPositionColor array for rendering the current mesh vertices on screen; vertexBuffer will allocate the space for the current mesh vertex array. WireFrame and Normal specify the render states for the individual mesh and for the tank model.

In step 9, we call two other methods: MapMeshDictionaryToList() and SetMeshVerticesToVertexBuffer().

The MapMeshDictionaryToList() method maps the mesh info from the dictionary to the MeshInfo list, as follows:
[code]
// Map the mesh info dictionary to a MeshInfo list
public List<MeshInfo> MapMeshDictionaryToList(
    Dictionary<string, List<Vector3>> meshVerticesDictionary)
{
    List<MeshInfo> list = new List<MeshInfo>();
    // Iterate over the items in the dictionary
    foreach (KeyValuePair<string, List<Vector3>> item in
        meshVerticesDictionary)
    {
        // Initialize a MeshInfo object with the mesh name and
        // vertices, then add it to the list
        MeshInfo meshInfo = new MeshInfo(item.Key, item.Value);
        list.Add(meshInfo);
    }
    return list;
}
[/code]

We iterate over the items of meshVerticesDictionary, build a meshInfo object from each mesh name and vertex list, and add it to the returned MeshInfo list.

The SetMeshVerticesToVertexBuffer() method sets the current mesh vertices into the vertex buffer. The code is as follows:

[code]
// Set the vertices of the indexed mesh into the vertex buffer
private void SetMeshVerticesToVertexBuffer(int MeshIndex)
{
    if (MeshInfoList.Count > 0)
    {
        // Get the mesh vertices
        meshVertices = MeshInfoList[MeshIndex].Positions;
        // Declare the VertexPositionColor array
        vertices = new VertexPositionColor[meshVertices.Count];
        // Initialize the VertexPositionColor array with the
        // mesh vertices data
        for (int i = 0; i < meshVertices.Count; i++)
        {
            vertices[i].Position = meshVertices[i];
            vertices[i].Color = Color.Red;
        }
        // Initialize the VertexBuffer for the
        // VertexPositionColor array
        vertexBuffer = new VertexBuffer(GraphicsDevice,
            VertexPositionColor.VertexDeclaration,
            meshVertices.Count, BufferUsage.WriteOnly);
        // Set the VertexPositionColor array to the VertexBuffer
        vertexBuffer.SetData(vertices);
    }
}
[/code]

We use MeshIndex to get the current vertices from MeshInfoList. Then we allocate the vertices array, a VertexPositionColor array, and initialize its data using meshVertices. After that, we initialize vertexBuffer to store the VertexPositionColor array for drawing the current mesh on screen.

In step 10, this code reacts to a valid tap gesture and changes the mesh index to choose a different mesh to show.
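The gesture-handling code is not repeated here; a minimal sketch using XNA's TouchPanel gesture API, assuming the Index field and MeshInfoList from step 7 and a wrap-around index, could look like this:

[code]
// In Initialize(): enable tap gestures (assumption: only taps are
// needed for this recipe)
TouchPanel.EnabledGestures = GestureType.Tap;

// In Update(): advance the mesh index on each tap and rebuild the
// vertex buffer for the newly selected mesh
while (TouchPanel.IsGestureAvailable)
{
    GestureSample gesture = TouchPanel.ReadGesture();
    if (gesture.GestureType == GestureType.Tap)
    {
        // Wrap around when the last mesh is reached
        Index = (Index + 1) % MeshInfoList.Count;
        SetMeshVerticesToVertexBuffer(Index);
    }
}
[/code]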

In step 11, the first part of the code is to draw the tank model in Normal render state defined in the class field. The second part is responsible for rendering the current mesh in WireFrame render state. For rendering the current mesh, we declare a new BasicEffect object and enable the VertexColorEnabled attribute to highlight the selected mesh. The following is the code snippet for the DrawModel() method:

[code]
// Draw the model
public void DrawModel(Model model, Matrix world, Matrix view,
    Matrix projection)
{
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);
    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.EnableDefaultLighting();
            effect.World = transforms[mesh.ParentBone.Index] * world;
            effect.View = view;
            effect.Projection = projection;
        }
        mesh.Draw();
    }
}
[/code]
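The second part of step 11, rendering the selected mesh as a wireframe overlay, is not listed above. A sketch of how it might look, using the WireFrame render state and a BasicEffect with VertexColorEnabled as described, is given below; the basicEffect field and the exact setup are assumptions based on the text:

[code]
// Render the currently selected mesh as a red wireframe overlay.
// basicEffect is assumed to be a BasicEffect field created in
// LoadContent() with VertexColorEnabled = true.
GraphicsDevice.RasterizerState = WireFrame;
GraphicsDevice.SetVertexBuffer(vertexBuffer);
basicEffect.World = worldTank;
basicEffect.View = view;
basicEffect.Projection = projection;
foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    // Every three vertices form one triangle
    GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList,
        0, vertices.Length / 3);
}
// Restore the normal render state for the rest of the scene
GraphicsDevice.RasterizerState = Normal;
[/code]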

Implementing a rigid model animation

3D animation has made modern games more engaging and has given them more possibilities; in role-playing games, for example, it lets characters perform many different actions and makes the game world feel more realistic. In this recipe, you will learn how to process and play a rigid model animation on Windows Phone 7.

How to do it…

The following steps will walk you through implementing a rigid model animation:

  1. Create a Windows Phone Game project named RigidModelAnimationGame, change Game1.cs to RigidAnimationGame.cs, and add the 3D animated model file Fan.FBX to the content project. Then create a Windows Phone Class Library project called RigidModelAnimationLibrary to define the animation data, and add the class files ModelAnimationClip.cs, AnimationClip.cs, AnimationPlayerBase.cs, ModelData.cs, Keyframe.cs, RigidAnimationPlayer.cs, and RootAnimationPlayer.cs to this project.
  2. Next, build a new Content Pipeline Extension Library project named RigidAnimationModelProcessor to process the animated model and return the model animation data to the Model object when initializing the game.
  3. Define the Keyframe class in Keyframe.cs of the RigidModelAnimationLibrary project. The Keyframe class is responsible for storing an animation frame for a bone in the model. An animation frame must refer to a corresponding bone. If you have not created a bone, or there is no bone in the mesh, the XNA Framework will automatically create one for such a mesh so that the system can locate it. The class should be as follows:
    [code]
    // Indicates the position of a bone of a model mesh
    public class Keyframe
    {
        public Keyframe() { }
        // The index of the target bone that is animated by
        // this keyframe.
        [ContentSerializer]
        public int Bone;
        // The time offset from the start of the animation to
        // this keyframe.
        [ContentSerializer]
        public TimeSpan Time;
        // The bone transform for this keyframe.
        [ContentSerializer]
        public Matrix Transform;
        // Constructs a new Keyframe object.
        public Keyframe(int bone, TimeSpan time, Matrix transform)
        {
            Bone = bone;
            Time = time;
            Transform = transform;
        }
    }
    [/code]
  4. Implement the AnimationClip class in AnimationClip.cs of the RigidModelAnimationLibrary project. The AnimationClip class is the runtime equivalent of the Microsoft.Xna.Framework.Content.Pipeline.Graphics.AnimationContent type, which holds all the key frames needed to describe a single model animation. The class is as follows:
    [code]
    public class AnimationClip
    {
        private AnimationClip() { }
        // The total length of the model animation
        [ContentSerializer]
        public TimeSpan Duration;
        // The collection of key frames, sorted by time, for all
        // bones
        [ContentSerializer]
        public List<Keyframe> Keyframes;
        // Animation clip constructor
        public AnimationClip(TimeSpan duration,
            List<Keyframe> keyframes)
        {
            Duration = duration;
            Keyframes = keyframes;
        }
    }
    [/code]
  5. Implement the AnimationPlayerBase class in AnimationPlayerBase.cs of RigidModelAnimationLibrary project. This class is the base class for rigid animation players. It deals with a clip, playing it back at speed, notifying clients of completion, and so on. We add the following lines to the class field:
    [code]
    // Clip currently being played
    AnimationClip currentClip;
    // Current timeindex and keyframe in the clip
    TimeSpan currentTime;
    int currentKeyframe;
    // Speed of playback
    float playbackRate = 1.0f;
    // The amount of time for which the animation will play.
    // TimeSpan.MaxValue will loop forever. TimeSpan.Zero will
    // play once.
    TimeSpan duration = TimeSpan.MaxValue;
    // Amount of time elapsed while playing
    TimeSpan elapsedPlaybackTime = TimeSpan.Zero;
    // Whether or not playback is paused
    bool paused;
    // Invoked when playback has completed.
    public event EventHandler Completed;
    [/code]
  6. We define the properties of the AnimationPlayerBase class:
    [code]
    // Gets the current clip
    public AnimationClip CurrentClip
    {
        get { return currentClip; }
    }
    // Current key frame index
    public int CurrentKeyFrame
    {
        get { return currentKeyframe; }
        set
        {
            IList<Keyframe> keyframes = currentClip.Keyframes;
            TimeSpan time = keyframes[value].Time;
            CurrentTime = time;
        }
    }
    // Gets and sets the current playing position.
    public TimeSpan CurrentTime
    {
        get { return currentTime; }
        set
        {
            TimeSpan time = value;
            // If the position moved backwards, reset the
            // keyframe index.
            if (time < currentTime)
            {
                currentKeyframe = 0;
                InitClip();
            }
            currentTime = time;
            // Read keyframe matrices.
            IList<Keyframe> keyframes = currentClip.Keyframes;
            while (currentKeyframe < keyframes.Count)
            {
                Keyframe keyframe = keyframes[currentKeyframe];
                // Stop when we've read up to the current time
                // position.
                if (keyframe.Time > currentTime)
                    break;
                // Use this keyframe
                SetKeyframe(keyframe);
                currentKeyframe++;
            }
        }
    }
    [/code]
  7. Give the definition of the StartClip() method to the AnimationPlayerBase class:
    [code]
    // Starts the specified animation clip.
    public void StartClip(AnimationClip clip)
    {
        StartClip(clip, 1.0f, TimeSpan.MaxValue);
    }
    // Starts playing a clip at a given rate for a given duration
    // (TimeSpan.MaxValue loops, TimeSpan.Zero plays once)
    public void StartClip(AnimationClip clip, float playbackRate,
        TimeSpan duration)
    {
        if (clip == null)
            throw new ArgumentNullException("clip");
        // Store the clip and reset the playing data
        currentClip = clip;
        currentKeyframe = 0;
        CurrentTime = TimeSpan.Zero;
        elapsedPlaybackTime = TimeSpan.Zero;
        // Store the data about how we want to play back
        this.playbackRate = playbackRate;
        this.duration = duration;
        // Call the virtual method to allow initialization of
        // the clip
        InitClip();
    }
    [/code]
  8. Add the implementation of Update() to the AnimationPlayerBase class:
    [code]
    // Called during the update loop to move the animation forward
    public virtual void Update(GameTime gameTime)
    {
        if (currentClip == null)
            return;
        TimeSpan time = gameTime.ElapsedGameTime;
        // Adjust for the playback rate
        if (playbackRate != 1.0f)
            time = TimeSpan.FromMilliseconds(
                time.TotalMilliseconds * playbackRate);
        elapsedPlaybackTime += time;
        // Check whether the animation has ended
        if ((elapsedPlaybackTime > duration &&
            duration != TimeSpan.Zero) ||
            (elapsedPlaybackTime > currentClip.Duration &&
            duration == TimeSpan.Zero))
        {
            if (Completed != null)
                Completed(this, EventArgs.Empty);
            currentClip = null;
            return;
        }
        // Update the animation position.
        time += currentTime;
        CurrentTime = time;
    }
    [/code]
  9. Implement two virtual methods that subclasses can override to customize their behavior:
    [code]
    // Subclass initialization when the clip is
    // initialized.
    protected virtual void InitClip()
    {
    }
    // For subclasses to set the associated data of a particular
    // keyframe.
    protected virtual void SetKeyframe(Keyframe keyframe)
    {
    }
    [/code]
  10. Define the RigidAnimationPlayer class in RigidAnimationPlayer.cs of the RigidModelAnimationLibrary project. This animation player knows how to play an animation on a rigid model, applying transformations to each of the objects in the model over time. The class is as follows:
    [code]
    public class RigidAnimationPlayer : AnimationPlayerBase
    {
        // This is an array of the transforms to each object in
        // the model
        Matrix[] boneTransforms;
        // Creates a new rigid animation player, receiving the
        // count of bones
        public RigidAnimationPlayer(int count)
        {
            if (count <= 0)
                throw new Exception(
                    "Bad arguments to model animation player");
            this.boneTransforms = new Matrix[count];
        }
        // Initializes all the bone transforms to the identity
        protected override void InitClip()
        {
            int boneCount = boneTransforms.Length;
            for (int i = 0; i < boneCount; i++)
                this.boneTransforms[i] = Matrix.Identity;
        }
        // Sets the key frame for a bone to a transform
        protected override void SetKeyframe(Keyframe keyframe)
        {
            this.boneTransforms[keyframe.Bone] =
                keyframe.Transform;
        }
        // Gets the current bone transform matrices for the
        // animation
        public Matrix[] GetBoneTransforms()
        {
            return boneTransforms;
        }
    }
    [/code]
  11. Define the RootAnimationPlayer class in RootAnimationPlayer.cs of the RigidModelAnimationLibrary project. The root animation player contains a single transformation matrix to control the entire model. The class should be as follows:
    [code]
    public class RootAnimationPlayer : AnimationPlayerBase
    {
        Matrix currentTransform;
        // Initializes the transformation to the identity
        protected override void InitClip()
        {
            this.currentTransform = Matrix.Identity;
        }
        // Sets the key frame by storing the current transform
        protected override void SetKeyframe(Keyframe keyframe)
        {
            this.currentTransform = keyframe.Transform;
        }
        // Gets the current transformation being applied
        public Matrix GetCurrentTransform()
        {
            return this.currentTransform;
        }
    }
    [/code]
  12. Define the ModelData class in ModelData.cs of the RigidModelAnimationLibrary project. The ModelData class combines all the data needed to render an animated rigid model; a ModelData object will be stored in the model's Tag property to carry the animation data. The class looks similar to the following:
    [code]
    public class ModelData
    {
        [ContentSerializer]
        public Dictionary<string, AnimationClip> RootAnimationClips;
        [ContentSerializer]
        public Dictionary<string, AnimationClip> ModelAnimationClips;
        public ModelData(
            Dictionary<string, AnimationClip> modelAnimationClips,
            Dictionary<string, AnimationClip> rootAnimationClips)
        {
            ModelAnimationClips = modelAnimationClips;
            RootAnimationClips = rootAnimationClips;
        }
        private ModelData() { }
    }
    [/code]
  13. Now, build the RigidModelAnimationLibrary project and you will get RigidModelAnimationLibrary.dll.
  14. From this step, we will begin to create the RigidModelAnimationProcessor. The RigidModelAnimationProcessor derives from ModelProcessor because we only want to extract the model animation data:
    [code]
    [ContentProcessor(DisplayName =
        "Rigid Model Animation Processor")]
    public class RigidModelAnimationProcessor : ModelProcessor
    [/code]
  15. Define the maximum number of bones. Add the following line to the class field:
    [code]
    const int MaxBones = 59;
    [/code]
  16. Define the Process() method:
    [code]
    // The main Process method converts an intermediate format
    // content pipeline NodeContent tree to a ModelContent object
    // with embedded animation data.
    public override ModelContent Process(NodeContent input,
        ContentProcessorContext context)
    {
        ValidateMesh(input, context, null);
        // Chain to the base ModelProcessor class so it can
        // convert the model data.
        ModelContent model = base.Process(input, context);
        // Animation clips inside the object (mesh)
        Dictionary<string, AnimationClip> animationClips =
            new Dictionary<string, AnimationClip>();
        // Animation clips at the root of the object
        Dictionary<string, AnimationClip> rootClips =
            new Dictionary<string, AnimationClip>();
        // Process the animations
        ProcessAnimations(input, model, animationClips, rootClips);
        // Store the animation data in the model's Tag property
        model.Tag = new ModelData(animationClips, rootClips);
        return model;
    }
    [/code]
  17. Define the ProcessAnimations() method:
    [code]
    // Converts an intermediate format content pipeline
    // AnimationContentDictionary object to our runtime
    // AnimationClip format.
    static void ProcessAnimations(
        NodeContent input,
        ModelContent model,
        Dictionary<string, AnimationClip> animationClips,
        Dictionary<string, AnimationClip> rootClips)
    {
        // Build up a table mapping bone names to indices.
        Dictionary<string, int> boneMap =
            new Dictionary<string, int>();
        for (int i = 0; i < model.Bones.Count; i++)
        {
            string boneName = model.Bones[i].Name;
            if (!string.IsNullOrEmpty(boneName))
                boneMap.Add(boneName, i);
        }
        // Convert each animation in the root of the object
        foreach (KeyValuePair<string, AnimationContent> animation
            in input.Animations)
        {
            AnimationClip processed = ProcessRootAnimation(
                animation.Value, model.Bones[0].Name);
            rootClips.Add(animation.Key, processed);
        }
        // Get the unique names of the animations on the mesh
        // children
        List<string> animationNames = new List<string>();
        AddAnimationNames(animationNames, input);
        // Now create those animations
        foreach (string key in animationNames)
        {
            AnimationClip processed = ProcessAnimation(key,
                boneMap, input);
            animationClips.Add(key, processed);
        }
    }
    [/code]
  18. Define the ProcessRootAnimation() method, in the RigidModelAnimationProcessor class, to convert an intermediate format content pipeline AnimationContent object to the runtime AnimationClip format. The code is as follows:
    [code]
    public static AnimationClip ProcessRootAnimation(
        AnimationContent animation, string name)
    {
        List<Keyframe> keyframes = new List<Keyframe>();
        // The root animation is controlling the root of the bones
        AnimationChannel channel = animation.Channels[name];
        // Add the transformations on the root of the model
        foreach (AnimationKeyframe keyframe in channel)
        {
            keyframes.Add(new Keyframe(0, keyframe.Time,
                keyframe.Transform));
        }
        // Sort the merged keyframes by time.
        keyframes.Sort(CompareKeyframeTimes);
        if (keyframes.Count == 0)
            throw new InvalidContentException(
                "Animation has no keyframes.");
        if (animation.Duration <= TimeSpan.Zero)
            throw new InvalidContentException(
                "Animation has a zero duration.");
        return new AnimationClip(animation.Duration, keyframes);
    }
    [/code]
  19. Define the AddAnimationNames() static method, in the RigidModelAnimationProcessor class, which collects the animation names used to locate the different animations. It is as follows:
    [code]
    static void AddAnimationNames(List<string> animationNames,
        NodeContent node)
    {
        foreach (NodeContent childNode in node.Children)
        {
            // Collect any animation names on this child that we
            // have not seen yet
            foreach (string key in childNode.Animations.Keys)
            {
                if (!animationNames.Contains(key))
                    animationNames.Add(key);
            }
            AddAnimationNames(animationNames, childNode);
        }
    }
    [/code]
  20. Define the ProcessAnimation() method, in the RigidModelAnimationProcessor class, to process the animations of individual model meshes. The method definition should be as follows:
    [code]
    // Converts an intermediate format content pipeline
    // AnimationContent object to the AnimationClip format.
    static AnimationClip ProcessAnimation(
        string animationName,
        Dictionary<string, int> boneMap,
        NodeContent input)
    {
        List<Keyframe> keyframes = new List<Keyframe>();
        TimeSpan duration = TimeSpan.Zero;
        // Get all of the key frames and the duration of the
        // input animated model
        GetAnimationKeyframes(animationName, boneMap, input,
            ref keyframes, ref duration);
        // Sort the merged keyframes by time.
        keyframes.Sort(CompareKeyframeTimes);
        if (keyframes.Count == 0)
            throw new InvalidContentException(
                "Animation has no keyframes.");
        if (duration <= TimeSpan.Zero)
            throw new InvalidContentException(
                "Animation has a zero duration.");
        return new AnimationClip(duration, keyframes);
    }
    [/code]
  21. Define the GetAnimationKeyframes() method referenced by ProcessAnimation() in the RigidModelAnimationProcessor class. It is mainly responsible for processing the input animated model and getting all of its key frames and duration. The complete implementation of the method is as follows:
    [code]
    // Get all of the key frames and the duration of the input
    // animated model
    static void GetAnimationKeyframes(
        string animationName,
        Dictionary<string, int> boneMap,
        NodeContent input,
        ref List<Keyframe> keyframes,
        ref TimeSpan duration)
    {
        // Add the transformation on each of the meshes from the
        // animation key frames
        foreach (NodeContent childNode in input.Children)
        {
            // If this node doesn't have keyframes for this
            // animation, we should just skip it
            if (childNode.Animations.ContainsKey(animationName))
            {
                AnimationChannel childChannel =
                    childNode.Animations[animationName].Channels[
                        childNode.Name];
                // Keep the longest duration found so far
                if (duration <
                    childNode.Animations[animationName].Duration)
                {
                    duration = childNode.Animations[
                        animationName].Duration;
                }
                int boneIndex;
                if (!boneMap.TryGetValue(childNode.Name,
                    out boneIndex))
                {
                    throw new InvalidContentException(
                        string.Format("Found animation for bone "
                        + "'{0}', which is not part of the "
                        + "model.", childNode.Name));
                }
                foreach (AnimationKeyframe keyframe in
                    childChannel)
                {
                    keyframes.Add(new Keyframe(boneIndex,
                        keyframe.Time, keyframe.Transform));
                }
            }
            // Recursively get the key frames from the children
            // of the current NodeContent
            GetAnimationKeyframes(animationName, boneMap,
                childNode, ref keyframes, ref duration);
        }
    }
    [/code]
  22. Define the CompareKeyframeTimes() method for sorting the animation key frames along the animation running sequence.
    [code]
    // Comparison function for sorting keyframes into ascending
    // time order.
    static int CompareKeyframeTimes(Keyframe a, Keyframe b)
    {
    return a.Time.CompareTo(b.Time);
    }
    [/code]
  23. Now that the RigidModelAnimationProcessor class is complete, build the RigidModelAnimationProcessor project to get the RigidModelAnimationProcessor.dll library file. Add RigidModelAnimationProcessor.dll to the content project's reference list and change the content processor of Fan.FBX to RigidModelAnimationProcessor, as shown in the following screenshot:
    RigidModelAnimationProcessor
  24. From this step, you will begin to draw the animated model on the Windows Phone 7 screen. Add the code to the RigidAnimationGame class:
    [code]
    // Rigid model, animation players, clips
    Model rigidModel;
    Matrix rigidWorld;
    bool playingRigid;
    RootAnimationPlayer rigidRootPlayer;
    AnimationClip rigidRootClip;
    RigidAnimationPlayer rigidPlayer;
    AnimationClip rigidClip;
    // View and projection matrices used for rendering
    Matrix view;
    Matrix projection;
    RasterizerState Solid = new RasterizerState()
    {
        FillMode = FillMode.Solid,
        CullMode = CullMode.None
    };
    [/code]
  25. Now, build and run the application. It runs as shown in the following screenshots:
    rigid model animation
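The loading and playback code behind steps 24 and 25 is not listed; a hedged sketch of how the animation data stored in Model.Tag might be wired to the two players in LoadContent(), Update(), and Draw() follows. The clip name "Take001" and the exact wiring are assumptions based on the text:

[code]
// In LoadContent(): read the ModelData stored in the model's Tag
// by RigidModelAnimationProcessor and start both players.
rigidModel = Content.Load<Model>("Fan");
ModelData modelData = (ModelData)rigidModel.Tag;
// "Take001" is the animation name typically exported from 3ds Max;
// adjust it to match your model (assumption).
rigidRootClip = modelData.RootAnimationClips["Take001"];
rigidClip = modelData.ModelAnimationClips["Take001"];
rigidRootPlayer = new RootAnimationPlayer();
rigidRootPlayer.StartClip(rigidRootClip);
rigidPlayer = new RigidAnimationPlayer(rigidModel.Bones.Count);
rigidPlayer.StartClip(rigidClip);
playingRigid = true;

// In Update(): advance both players every frame while playing.
if (playingRigid)
{
    rigidRootPlayer.Update(gameTime);
    rigidPlayer.Update(gameTime);
}

// In Draw(): combine the animated bone transforms with the root
// transform before rendering each mesh.
Matrix[] boneTransforms = rigidPlayer.GetBoneTransforms();
Matrix rootTransform = rigidRootPlayer.GetCurrentTransform();
[/code]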

How it works…

In step 3, we use key frames to change a bone from its original transformation to a new one. The transformation is saved in the Transform variable as a Matrix, and the Time variable, a TimeSpan, stores the time offset at which the key frame is played. ContentSerializer is an attribute that marks a field or property to indicate how it is serialized, or that it should be included in serialization.

In step 5, currentClip represents the current animation that will be played; currentTime indicates the current playback time; currentKeyframe is the index of the key frame currently being played; playbackRate stands for how fast the animation will be played; duration represents the total length of time for which the current animation will play; elapsedPlaybackTime shows the amount of time the current animation has been playing.

In step 6, the CurrentKeyFrame property returns the index of the current key frame. When the property is set to an integer value, it reads the Time value of the indexed key frame and assigns it to the CurrentTime property, which applies the bone transformations up to that time.

In step 7, the StartClip() method is to initialize the necessary data for playing the current clip.

In step 8, the Update() method accumulates the elapsed playing time of the animation. It then checks whether the animation has ended: either elapsedPlaybackTime has exceeded a non-zero duration, or, when duration is TimeSpan.Zero (play once), it has exceeded currentClip.Duration. If so, the Completed event is raised if any handlers are attached, and the clip is cleared. Otherwise, we advance the animation position by adding the elapsed time to currentTime and assigning the result to the CurrentTime property.

In step 10, the boneTransforms will store all of the transformation matrices of model bones. The SetKeyframe() method is responsible for assigning the transformation matrix of the designated key frame of the corresponding element in boneTransforms based on the bone index of the current key frame. The GetBoneTransforms() method returns the processed boneTransforms for actual transformation computation on the model.

In step 11, based on this class, we can transform the entire model rather than an individual mesh that has its own animation. Notice the difference between the RootAnimationPlayer class and the RigidAnimationPlayer class: the RigidAnimationPlayer constructor receives a count parameter, but the other one does not. The reason is that RootAnimationPlayer controls the transformation of the entire model and only needs the root bone information, which is passed to it at runtime. RigidAnimationPlayer, on the other hand, is responsible for playing the animation of every individual mesh, so it must know how many bones the meshes have in order to allocate enough space to store the transformation matrices.

In step 16, the Process() method is the root method that reads the animations from an animated model, including the root animation for the whole-model transformation and the animations of every single model mesh. Finally, it assigns the generated ModelData, holding both root animations and mesh animations, to the Model.Tag property so the model can be animated at runtime.

In step 17, the code first creates a boneMap dictionary that maps each bone name to its index in the model. Then, ProcessAnimations() processes the animated model into two sets of animation clips: rootClips, built by ProcessRootAnimation() from the root bone of the input, and animationClips, built by ProcessAnimation().

In step 18, the first line declares a Keyframe collection, which stores all of the key frames of the animation. Then, the code gets an AnimationChannel from AnimationContent.Channels by the root bone name; the animation channel holds the transformation data needed to transform the children of the root bone. Since XNA generates a corresponding bone for every model to locate its position, the mesh is transformed whenever the root bone is transformed. After getting the animation channel data, the foreach loop reads every content pipeline AnimationKeyframe object from the current AnimationChannel and stores its information in the runtime Keyframe type defined in RigidModelAnimationLibrary. Notice that the digit 0 in the Keyframe constructor parameter list stands for the root bone index. Next, keyframes.Sort() sorts the keyframes collection into the actual animation running sequence; CompareKeyframeTimes is the key frame time comparison method, which will be discussed later on. Two checks then validate the keyframes and the animation duration. Finally, a new AnimationClip object with the duration and keyframes is returned to the caller.

In step 19, the first foreach loop iterates over the child NodeContent objects of the input, and the second foreach loop looks at every animation key of the current NodeContent. The key is the animation name; for example, Take001 will be the animation name when you export an FBX format 3D model from Autodesk 3ds Max. Then, the animationNames.Contains() method checks whether the animation name is already in the collection. If not, the new animation name is added to the animationNames collection.

In step 20, the two variables keyframes and duration hold the total key frames and the running time of the current model mesh animation. We call the GetAnimationKeyframes() method to get all the key frames and the duration of the current animation by animation name. After that, the keyframes.Sort() method sorts the key frames into the animation running order. The following two lines check whether the current animation is valid. At last, this method returns a new AnimationClip object corresponding to the input animation name.

In step 21, the foreach loop iterates over every child NodeContent of the input. In the loop body, the first job is to check whether the current NodeContent has an animation with the input animation name, using the NodeContent.Animations.ContainsKey() method. If it does, childNode.Animations[animationName].Channels[childNode.Name] finds the animation channel, which stores all the key frames of a mesh or bone, such as Plane001, for the specified animation of the current NodeContent. The next lines track the duration of the animation, keeping the longest duration found. So far, we have collected the AnimationChannel data and the duration needed to create the runtime Keyframe collection. Before generating the runtime Keyframe set, we must get the bone index the set will attach to: the boneMap.TryGetValue() method returns the boneIndex according to the current NodeContent name. After that, the following foreach loop goes through every AnimationKeyframe in childChannel, the AnimationChannel we retrieved earlier, and adds a new Keyframe object to keyframes with the bone index, key frame time, and the related transformation matrix. The last line recursively gets the animation key frames and duration of the children of the current node.

In step 22, TimeSpan.CompareTo() compares the Time of Keyframe object a with the Time of Keyframe object b and returns an integer that indicates whether a's time is earlier than, equal to, or later than b's. This comparison lets the keyframes.Sort() calls in ProcessAnimation() and ProcessRootAnimation() know how to order the key frames.

New CA Driving Law

Next week, select counties in California will take part in the pilot testing of California’s new ignition interlock law. Because of the number of DUI arrests in the state, many CA drivers will be affected. Legal Brand Marketing decided to interview a representative from Smart Start Ignition Interlock to answer some questions and get more information about the upcoming law.

What is an interlock exactly?

An ignition interlock is a device connected electronically to your car through your steering column. It looks sort of like a cellphone. It tests whether the driver has alcohol in their system; if the device detects alcohol, the driver’s car won’t start.

How does it work?

First, you blow into the interlock. Using fuel cell technology, the device calculates your blood alcohol level. There’s also a computer chip in the interlock that records your numbers and sends them to the authorities so they can monitor a driver.

Can people just get someone else to blow into it?

No. First off, it is a misdemeanor to have another person blow into it. Secondly, many interlocks are now being equipped with photo ID cameras to ensure the correct person is blowing into the device. I know Smart Start’s SSI-20/20 has one, and I predict a camera will be standard in most interlocks in the future.

Can you explain the new CA law?

Starting July 1, CA is requiring anyone convicted of DUI to have an ignition interlock. For a first conviction, a person is required to have an interlock for six months. The law is initially being tested in a few CA counties: Los Angeles, Sacramento, Tulare, and Alameda. After a few years, in 2015, the DMV will research how effective the law is, and there’s a chance the rest of CA will adopt it.

How do most people feel about ignition interlocks?

Well, I can tell you that ignition interlocks are not people’s favorite things. I mean, they’re getting these things because of a DUI conviction, which is not something people want. Sure, it’s annoying to blow into it every time you start your car, and interlocks require you to retest while you’re on the road. However, DUI laws are harsh, and many people cannot drive at all because of their conviction. Those people can’t get to work or pick their kids up from school, and, to me, that seems like a bigger hassle than having an ignition interlock.

What is Home Equity Loan Modification?

A home equity loan modification is a change that gives you an option to modify your mortgage if you are behind on your payments or having difficulty making them. A home equity loan is a loan in which the borrower uses the equity in their home as collateral. It can be a useful way to pay for major home repairs, a college education, or medical bills.

This type of loan creates a lien, or security interest, against the borrower’s house, and the actual home equity is reduced. Such loans are usually referred to as mortgages because the value of the property secures them, just as with a traditional mortgage. The interest on a home equity loan may also be deductible on your income tax.

The government gives you options to avoid foreclosure, and home equity loan modification is one of them. The first test is whether your mortgage payment is more than 31% of your gross income, which includes your taxes, insurance, and any homeowner dues you may be paying. Exceeding that threshold shows that you are really struggling with your payments. Second, a loan modification can put your mortgage in much better shape than you might imagine: it gives you payments you can afford and helps make sure you never head into foreclosure, which in turn restores your credit and saves your home. The last thing to do is go online and start consulting. You fill out some forms about yourself and your situation, including information for your home equity loan modification, and later on they will call you and give you details to help you save your home.
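The 31% payment-to-income test described above amounts to a one-line check. This is only a sketch: the function name, default threshold, and sample figures below are illustrative, not part of any official program’s code.

```python
def payment_exceeds_threshold(monthly_payment, gross_monthly_income, threshold=0.31):
    """True when the mortgage payment takes more than 31% of gross monthly
    income, the rough struggling-homeowner cutoff described above."""
    return monthly_payment > threshold * gross_monthly_income

# A $1,400 payment against $4,000 gross monthly income is 35% -- over the line.
print(payment_exceeds_threshold(1400, 4000))  # → True
```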

How Black Friday Affects the Market

Every investor I know has the same dream: predicting the direction of the market, knowing whether we’re heading higher or lower in the coming weeks or months. Having that kind of knowledge would be very powerful, and very profitable.

There are two ways to judge the direction of the markets. The first is to use some form of technical analysis. The second is to look at the fundamentals of the market. I’ve been looking closely at the market, and I think we’re near a bottom.

Why? A few things are pushing me to that conclusion. Here’s the point: there was a big drop in early October, and we had another big drop in November. That could be an early sign of a bottom in the market. It’s called a double bottom. Technical analysts will tell you it’s a signal that a change in direction is expected.

Here’s the other key that’s got me thinking we’re near a bottom: fundamental information. When I talk about fundamentals, I’m referring to how the market reacts to news and other information. Just think about the news over the last year. Most of it’s been bad, and the market traded lower. The first news about the credit crunch. The hedge funds imploding because of CDOs. The Bear Stearns collapse. The Lehman Brothers bankruptcy. Don’t forget AIG, Fannie Mae, Freddie Mac. In every case the market traded lower.

But last week something changed. The federal government stepped in to save Citigroup (C), one of the largest banks in the world. This wasn’t a friendly helping hand. Here the government invested $20 billion and agreed to guarantee another $306 billion worth of loans and commitments. Citigroup gave up warrants and stock and agreed to cut its quarterly dividend. And that’s just the beginning. So what happened to the markets? They traded higher four days in a row. I’d expect this news to push markets lower, yet we’re rallying on news of a government rescue? It’s not logical, and it’s a great sign we might be at a bottom.

Now, about Black Friday. Today is Black Friday. It sounds ominous, but it’s the number one shopping day of the year. The day after Thanksgiving, everyone heads to the malls to buy gifts for the holidays, and a huge portion of holiday-season sales occurs in this short 24-hour period. This is the day every retailer dreams of, but this year it’s a nightmare.

Black Friday sales are going to be weak. Consumer confidence is hovering around record lows, the economy’s in recession, and people are losing their jobs. Not exactly the time to be spending money. I’m expecting the retailers to announce horrible results: sales down, comments on how bad the shopping environment is, bigger discounts, and eventually shrinking profits. It’s not good.

That’s why today is going to be so important. We know the news is bad; the question is how the market will react. If the market crumbles on the news, watch out, we’re heading lower. However, if the market is flat or trades up on the horrible news, it’s another signal the market is at a bottom. This “signal” isn’t always perfect, but this year I’m watching it very closely. Today’s news and the reaction of the market next week will determine the trend in the market. I’m expecting a rally in the next few months.

Increase in Permanent Home Modifications in October Report of Bank of America

Bank of America’s October home loan modification report shows an increase in permanent home modifications. The important fact to highlight here is that on many previous occasions, homeowners were critical of Bank of America and other servicers because they were in doubt about whether they qualified for the program. Until recently, many homeowners were looking for in-house or other home loan assistance as a remedy while waiting on modification efforts from servicers like Bank of America. The October Bank of America loan modification report, however, tells a different story: it shows an increase in permanent mortgage modifications. The September report listed around 78,905 applications, while the latest October 31 report shows 79,339 applications under active permanent home loan modification.

Many homeowners are struggling hard to make the mortgage payments on their Bank of America home loans, and the same is true with many other mortgage servicers. Other banks acting as servicers are also trying their best to address the concerns of homeowners struggling to pay their monthly mortgages.

Read through the October Bank of America report to build your understanding of Obama’s mortgage program. The program is designed for homeowners who are not able to pay back their mortgage on time. If you are in a better position to pay back the loan, you will have an edge over homeowners who are unaware or only half informed about the program. Bank of America has taken the lead in providing millions of homeowners with the mortgage modification process as it processed their loan applications, and the bank has also played a crucial role in alternative home loan assistance.

Make sure you discuss the loan modification terms and conditions with the servicer before you take the next step. Discussing the mortgage modification with the servicer will give you a fair idea of how well you can proceed with the loan. As for the Bank of America loan modification requirements, there are many points a homeowner has to consider before applying. Moreover, servicers like Bank of America are doing very little to help homeowners understand the loan terms and conditions.

Credit Card Debt Consolidation Loan – Get Control Of Your Finances

A credit card debt consolidation loan has emerged as an effective financial tool that can help you manage and pay off nasty piles of credit card debt. That is why a massive number of credit card users who face a poor credit score and are caught in a debt trap are looking for an effective debt consolidation program. If you are among those drowning in a deep ocean of debt, a credit card debt consolidation loan can be a great help, letting you reorganize your finances in a better way.

Get Rid of the Rising Debts

Many people have a misconception that if they already carry a huge burden of debt, no financial institution will offer them a loan or any program to pay off their existing debts while regaining control of their finances. Well, as I said, this is just a misconception. You will be happy to know that credit card debt consolidation loans are now readily available in the financial market to help you get rid of heavy piles of credit card debt. Various credit card debt reduction programs have been designed specifically for those who find themselves unable to get out of the credit card debt trap, and these loans and programs come in handy for those who have a bad credit rating.

Credit card debt consolidation companies not only help you manage your debts, but also make sure to put you in a position where you can comfortably pay them all off. Besides offering a consolidation loan, some debt consolidation companies also offer credit-counseling services. These services educate you on how to manage your finances while paying off all your debts, and on how to avoid getting into debt in the future. An important aspect of counseling is helping you reduce your dependence on credit cards.

Overall, if you have caught yourself under a huge burden of debt, a credit card debt consolidation loan may be an ideal answer for you. It can help you get out of the deep ocean of debt. Do thorough research to find a reliable credit card debt management company, and do not forget to ask for a free debt consolidation quote before you proceed with their services. Once you find that the quote and the program offered suit your debt and financial requirements, you can go ahead, entrust your case to them, take a sigh of relief, and get ready to become debt free.

What is Student Loan Consolidation?

What does it mean to consolidate your student loans? To consolidate your student loans means to take all of your various loans and move them to one company. One benefit of this is that you can often get a lower monthly payment.

Benefits:
Consolidating your student loans has other benefits. Listed below are a couple more of them:
-Only one payment: Consolidating simplifies your finances by allowing you to write only one check instead of several checks.
-Spend less: Find a lender that will charge you a lower interest rate and let them consolidate your loans.
-Helps build credit: When you consolidate, your new lender repays your previous loans and merges them into one new loan. Because your former loans were fully repaid, your history looks better, which improves your credit rating.
-Lock in interest rate: When you consolidate, your interest rate is locked in. This protects you from future interest rate increases.

Consolidation loans are great because they are easy to get. You don’t need to be employed or have collateral or a cosigner to consolidate your loans. You don’t even need good credit!

The Problem With Consolidation:
There are disadvantages to every decision, so check out all the pros and cons of consolidation before you do it. Here are a few disadvantages you may want to consider:
-Interest rates fall every now and then. If you have consolidated, you are stuck with one rate.
-Once you consolidate you cannot “unconsolidate”.
-The consolidated loan will have new terms and conditions that your other loans didn’t.
-If you choose to extend the life of the loan, you will wind up spending more in interest than you would have.

Now that we have reviewed the benefits and disadvantages of consolidating your student loans, you should find out whether you are eligible. To be eligible for federal student loan consolidation, you must meet a few requirements: you must have at least ten thousand dollars in student loans, you must have graduated, and you must resolve any defaulted federal loans before you will be allowed to consolidate.
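That last disadvantage, paying more total interest when you stretch out the term, is easy to see with the standard fixed-rate amortization formula. The balance and rate below are made-up example figures, not quotes from any lender:

```python
def monthly_payment(principal, annual_rate, years):
    """Fixed-rate amortized payment: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

def total_interest(principal, annual_rate, years):
    """Total interest paid over the life of the loan."""
    return monthly_payment(principal, annual_rate, years) * years * 12 - principal

# Same $20,000 balance at 6%: the 20-year term has a smaller monthly
# payment, but more than double the total interest of the 10-year term.
for years in (10, 20):
    print(years, round(monthly_payment(20000, 0.06, years), 2),
          round(total_interest(20000, 0.06, years), 2))
```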

Why is FAFSA So Important to College Students Looking For Student Loans?

FAFSA Is Your Ticket to Student Loans

FAFSA is your only entry into the world of federal student loans. It can also make you the recipient of a federal grant and work-study programs to boot. Without a Student Aid Report (SAR), your chances of getting the best student loans are almost nil.

FAFSA stands for Free Application for Federal Student Aid. The feds are your best place to get a college loan because they have the money to lend at a decent interest rate, and on some of the loans the government will actually make your interest payments.

The application is usually completed online (a Spanish version is available as well), but we recommend downloading a copy and printing it out, because it will take you time to research and find the answers to the questions.

First you need to create an account, with a PIN, at the FAFSA site. The PIN allows you to proceed with filling out the application, and you will continue to use it as long as you are applying for federal student loans. Basically, it is a virtual account system.

CRITICAL NEWSFLASH
Many institutions use the report to help them determine what type of student loan you are eligible for. It is not strictly used for federal purposes.
Money or loans are awarded on a first come first served basis.
January is the earliest you can apply through FAFSA for student loans for the following fall term.
Last year we completed our FAFSA in February and still received our student loans for 2 family members.
Seek help from your high school counselor or a college admissions advisor like we did.
When you receive your SAR, you will find a lot of information but the two most important aspects you need to be concerned with are:
EFC
Eligible Loans
EFC stands for Expected Family Contribution. The report tells you how much you and your family are expected to contribute to your college education; the remainder of the money needed for college will come from your student loans.

Eligible loans are basically that: you’ll receive a list of the federal unsubsidized and subsidized student loans that you qualify for. These will usually be Stafford loans, Perkins loans, or Pell Grants.

Keep a couple of things in mind:

1. The colleges you chose on your application will also receive your SAR, and if you receive federal monies, they will take the tuition out of the loan and then send you the rest of the money to use on school-related expenses.

2. You can choose to accept the total loan, a partial loan, or no loan at all, and you can choose which lending institutions you want to go through.

The first time you fill out the FAFSA you will probably say to yourself, “This is the last time I’m doing this.” But trust me, it is easier the second time around. Just don’t forget to fill it out early in the calendar year so there will be money for you.
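The EFC arithmetic above comes down to a simple subtraction. The figures in this sketch are invented for illustration only:

```python
def remaining_need(cost_of_attendance, efc):
    """Portion of college costs not covered by the Expected Family
    Contribution -- typically the gap loans and grants must fill."""
    return max(cost_of_attendance - efc, 0)

# A $24,000 cost of attendance with a $6,000 EFC leaves $18,000 to cover.
print(remaining_need(24000, 6000))  # → 18000
```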