Playing with Windows Phone Touch and Sensors

Creating your first touch application/game

For Windows Phone 7, the touchscreen is the most natural way for the user to interact with your game. The screen, where all taps and gestures take place, is 800 x 480 pixels in landscape mode and 480 x 800 in portrait mode. Through this hardware, Windows Phone 7 lets the player literally touch the game as it unfolds, bringing it to life. In this recipe, you will discover how the Windows Phone 7 touchscreen works and how to take advantage of this functionality.

Getting ready

For Windows Phone touchscreen interaction, the most common behavior is the tap. When your finger touches the screen, a tap event is triggered. In XNA, touchscreen input is exposed through the static TouchPanel class and its GetState() method, which provide the raw input and gesture information. We will begin with the basic concepts and properties.

The most important methods for touch input are TouchPanel.GetState() and TouchLocation.TryGetPreviousLocation().

The GetState() method

The GetState() method returns a TouchCollection, a list-based data structure of TouchLocation elements. Each TouchLocation has a State of type TouchLocationState, which represents the state of that touch point. The enum is defined as follows:

[code]

public enum TouchLocationState
{
    Invalid = 0,
    Released = 1,
    Pressed = 2,
    Moved = 3,
}

[/code]

The most frequently used state is TouchLocationState.Pressed. With a TouchCollection object, you can use an integer index to look up a specific finger and its TouchLocation. If no finger is touching the screen, TouchCollection.Count will be zero. When a finger first touches the screen, the collection contains a single TouchLocation object with its State set to Pressed. If the finger then moves, TouchLocation.State changes to TouchLocationState.Moved. When the finger is lifted from the screen, TouchLocation.State changes to TouchLocationState.Released. After that, the TouchCollection will be empty again until the next screen operation. The following table is generated when a finger taps the screen:

TouchLocationState.Pressed

The following table is generated when a finger touches the screen, moves across the screen, and is then lifted:


The previous description covers how one finger interacts with the touchscreen, but what about more fingers? With Multi-Touch, fingers touch, move, and lift from the screen independently of each other. You can track a particular finger using the Id property, whose value stays the same across that finger's Pressed, Moved, and Released states. The following table is generated when two fingers simultaneously touch the screen, move across the screen, and are then lifted individually:


Dictionary objects keyed on Id are a common way of retrieving the information for a designated finger.
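For example, the bookkeeping could be sketched as follows inside Update(). This is an illustrative fragment, not code from the book projects; it assumes the usual using directives for System.Collections.Generic and Microsoft.Xna.Framework.Input.Touch:

[code]

// Hypothetical sketch: track each finger across frames by its Id
Dictionary<int, TouchLocation> activeTouches =
    new Dictionary<int, TouchLocation>();

TouchCollection touches = TouchPanel.GetState();
foreach (TouchLocation touch in touches)
{
    if (touch.State == TouchLocationState.Released)
    {
        // The finger was lifted; stop tracking it
        activeTouches.Remove(touch.Id);
    }
    else
    {
        // Pressed or Moved: the Id stays the same for this finger,
        // so store (or update) its latest TouchLocation
        activeTouches[touch.Id] = touch;
    }
}

[/code]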

The TryGetPreviousLocation() method

This method, defined on the TouchLocation structure, allows you to calculate the difference between positions while a touch is in the Moved state. If the current state is TouchLocationState.Pressed, the method returns false and the previous location it outputs is invalid.
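A minimal sketch of its use, assuming touch is a TouchLocation taken from the current TouchPanel.GetState() collection (this fragment is illustrative, not code from the recipes):

[code]

TouchLocation previous;
if (touch.State == TouchLocationState.Moved &&
    touch.TryGetPreviousLocation(out previous))
{
    // The difference between the two positions is the
    // movement since the last frame
    Vector2 delta = touch.Position - previous.Position;
}

[/code]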

The GetCapabilities() method

This method returns a data structure, TouchPanelCapabilities, which provides access to information about the capabilities of the input:

[code]

public struct TouchPanelCapabilities
{
public bool IsConnected { get; }
public int MaximumTouchCount { get; }
}

[/code]

The IsConnected property indicates whether the touch panel is available for use; on Windows Phone 7 it is always true. The MaximumTouchCount property returns the maximum number of touch locations the touch panel device can track; for Windows Phone 7 the number is at least 4.
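For instance, the capabilities could be queried once at startup; this is a small illustrative sketch, not code from the recipes:

[code]

TouchPanelCapabilities caps = TouchPanel.GetCapabilities();
if (caps.IsConnected)
{
    // At least 4 on Windows Phone 7
    int maxFingers = caps.MaximumTouchCount;
}

[/code]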

Although a little theoretical, this material will serve you well in your future Windows Phone XNA game programming. Now, let's begin with a simple but clear example that will help you understand the main concepts of the XNA touch technique.

How to do it…

This application presents a white ball in the center of the landscape mode screen. When your finger taps it, the ball color will change to red:

  1. First, you will have to create a new Windows Phone 7 Game project: go to File | New Project | Windows Phone Game and name it TouchEventTap. Then, add the following lines as fields of the game class:
    [code]
    Texture2D texRound;
    Rectangle HitRegion;
    bool isSelected = false;
    [/code]
  2. The next step is to insert the code to the LoadContent() method:
    [code]
    texRound = Content.Load<Texture2D>("Round");
    HitRegion = new Rectangle(400 - texRound.Width / 2,
        240 - texRound.Height / 2, texRound.Width, texRound.Height);
    [/code]
  3. Create an UpdateInput() method. This is the most important method in this example. Your application could actually interact with your fingers with the help of this method:
    [code]
    private void UpdateInput()
    {
        // Get the touch panel state as a TouchCollection
        TouchCollection touches = TouchPanel.GetState();
        // Check the first finger touching the screen
        if (touches.Count > 0 &&
            touches[0].State == TouchLocationState.Pressed)
        {
            // Examine whether the tapped position is in the
            // HitRegion
            Point touchPoint = new Point((int)touches[0].Position.X,
                (int)touches[0].Position.Y);
            if (HitRegion.Contains(touchPoint))
            {
                isSelected = true;
            }
            else
            {
                isSelected = false;
            }
        }
    }
    [/code]
  4. When you have finished creating the UpdateInput() method, you should call UpdateInput() in Update():
    [code]
    protected override void Update(GameTime gameTime)
    {
        . . .
        UpdateInput();
        . . .
    }
    [/code]
  5. The final step is drawing, so add the following code to Draw() before base.Draw(gameTime). It will look similar to the following code:
    [code]
    spriteBatch.Begin();
    if (isSelected)
    {
        spriteBatch.Draw(texRound, HitRegion, Color.Red);
    }
    else
    {
        spriteBatch.Draw(texRound, HitRegion, Color.White);
    }
    spriteBatch.End();
    [/code]
  6. Build and run the application. The game will run as shown in the screenshot. When you tap the ball, its color will change to red. Once you tap outside the ball, the color will go back to white:

How it works…

In step 1, you declared a Texture2D object for the ball image, a Rectangle for the HitRegion, and a bool value isSelected for the ball's hit status.

In step 2, make sure the Round.png file has been added to the associated content project before loading the round image in code. The initialization of the HitRegion rectangle centers it on the screen in landscape mode; 400 is half of the screen width, and 240 is half of the screen height.

In step 3, as in a typical Windows Phone XNA program, TouchPanel.GetState() should be the first line of any touch handling. Then check that touches.Count is greater than 0 and that the first touch's State is Pressed, to make sure a finger has tapped the screen. Next, we examine whether the tapped position lies inside the HitRegion; HitRegion.Contains() does the job for us. Since HitRegion is a Rectangle object, the Rectangle.Contains() method tests whether the point falls within the rectangle's four-sided area. If it does, the bool value isSelected is set to true; otherwise, it is set to false. Update() and Draw() then use the isSelected value to do the corresponding work.

In step 5, the isSelected value determines whether we draw a red ball or a white ball. When your finger taps inside the ball, its color turns red; otherwise, it reverts to white. That's all. You can review the complete project in the book's code files.

Taking your touch application to the next level

Now we have finished our first touch-tap sample; very easy, huh? Maybe it does not feel exciting enough yet, so the next example will be more interesting. The ball will show up at random positions anywhere on the touchscreen, and your job is to tap it as quickly as possible; every valid tap is counted and shown as a score at the top-left of the screen. The new sample's name is TouchTapRandomBall.

How to do it…

Create a new Windows Phone 7 Game project in Visual Studio 2010 named TouchTapRandomBall. Change the name from Game1.cs to TouchTapRandomBallGame.cs. Then add Round.jpg and gamefont.spritefont to the associated content project:

  1. The first operation is to add the lines as field variables:
    [code]
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    Texture2D texRound;
    Rectangle HitRegion;
    bool isSelected = false;
    //start position of round, in the center of screen
    int positionX = 400;
    int positionY = 240;
    //random number Axis X and Y
    Random randomX;
    Random randomY;
    //the range for random number of start and end of X, Y
    int startX, endX;
    int startY, endY;
    //total time
    float milliseconds = 0f;
    //score count
    int count = 0;
    //game font
    SpriteFont font;
    [/code]
  2. Based on the variables, the next step is to add the lines into the LoadContent() method:
    [code]
    spriteBatch = new SpriteBatch(GraphicsDevice);
    texRound = Content.Load<Texture2D>("Round");
    randomX = new Random();
    randomY = new Random();
    // The X axis bound range of touch for ball
    startX = texRound.Width;
    endX = GraphicsDevice.Viewport.Width - texRound.Width;
    // The Y axis bound range of touch for ball
    startY = texRound.Height;
    endY = GraphicsDevice.Viewport.Height - texRound.Height;
    // Define the HitRegion of ball in the middle of touchscreen
    HitRegion = new Rectangle(positionX - texRound.Width / 2,
        positionY - texRound.Height / 2, texRound.Width,
        texRound.Height);
    // Load the font definition file
    font = Content.Load<SpriteFont>("gamefont");
    [/code]
  3. The next block of code is for the Update() method:
    [code]
    // Accumulate the elapsed milliseconds every frame
    milliseconds +=
        (float)gameTime.ElapsedGameTime.TotalMilliseconds;
    if (milliseconds > 1000)
    {
        // When more than 1000 milliseconds have passed,
        // randomly locate a new position for the ball
        HitRegion.X = randomX.Next(startX, endX + 1);
        HitRegion.Y = randomY.Next(startY, endY + 1);
        // Reset the milliseconds to zero for a new count
        // and mark the ball as not selected
        milliseconds = 0f;
        if (isSelected)
            isSelected = false;
    }
    [/code]
  4. Besides the previous code, we still want to count how many times we tap the ball. The following code achieves that; insert the highlighted line into UpdateInput():
    [code]
    Point touchPoint = new Point((int)touches[0].Position.X,
        (int)touches[0].Position.Y);
    if (HitRegion.Contains(touchPoint))
    {
        isSelected = true;
        count++;
    }
    else
    {
        isSelected = false;
    }
    [/code]
  5. Add the following lines to the Draw() method:
    [code]
    spriteBatch.Begin();
    if (isSelected)
    {
        spriteBatch.Draw(texRound, HitRegion, Color.Red);
    }
    else
    {
        spriteBatch.Draw(texRound, HitRegion, Color.White);
    }
    spriteBatch.DrawString(font, "Score:" + count.ToString(),
        new Vector2(0f, 0f), Color.White);
    spriteBatch.End();
    [/code]
  6. Now you’ve done all the necessary code work, let’s build and run the application. You will discover a jumping ball in the touchscreen. Each time you tap the ball successfully, the score on the top-left will increase by one, as shown in the following screenshots:

How it works…

In step 1, the randomX and randomY objects generate the random location of the ball: startX and endX define the range on the X axis for randomX, and startY and endY define the range for randomY. Time will be accumulated in milliseconds, and the count variable is responsible for the score.

In step 2, the calculations for startX, endX, startY, and endY keep the randomly positioned ball inside the screen. Then we place the HitRegion in the middle of the screen. The last line loads and initializes the font.

In step 3, ElapsedGameTime, a TimeSpan object, represents the amount of game time elapsed since the last update (that is, the last frame). In every Update(), we add the elapsed time to the milliseconds variable. Once the value exceeds 1000 milliseconds, the code generates two random numbers for the X and Y of the HitRegion position, which will be used by the Draw() method in the same frame. After the new values are generated, we reset the milliseconds variable to 0 and deselect the ball.

In step 4, when your finger taps the ball, the UpdateInput() method handles it and adds one to the count variable; this number appears at the top-left of the touchscreen.

Creating a Touch Directional Pad (D-pad)

In a Windows game, we have the arrow keys to control direction. On Xbox, we have the gamepad controller to set different orientations. On Windows Phone 7, we only have the touchscreen for the same job, so we need another tool to accomplish this aim. The solution is the Touch Directional Pad (D-pad), which gives you a comfortable way of controlling direction while playing. In this recipe, you will learn how to create your own Touch Directional Pad on Windows Phone 7.

How to do it…

  1. The first step is to create a new Windows Phone 7 Game project named WindowsPhoneDpad, and change Game1.cs to DpadGame.cs. Then add field variables in the DpadGame class, as follows:
    [code]
    //Font for direction status display
    SpriteFont font;
    //Texture Image for Thumbstick
    Texture2D texDpad;
    //4 direction rectangles
    Rectangle recUp;
    Rectangle recDown;
    Rectangle recLeft;
    Rectangle recRight;
    //Bounding Rectangle for the 4 direction rectangles
    Rectangle recBounding;
    //Corner Margin, 1/4 of Width or Height of square
    int cornerWidth;
    int cornerHeight;
    //Direction String
    string directionString;
    [/code]
  2. The next step is to add the following lines to the LoadContent() method:
    [code]
    //Load the Texture image from content
    texDpad = Content.Load<Texture2D>("Dpad");
    recBounding = new Rectangle(0,
        this.GraphicsDevice.Viewport.Height - texDpad.Height,
        texDpad.Width, texDpad.Height);
    //Load the game font file
    font = Content.Load<SpriteFont>("gamefont");
    //Calculate the corner height and width
    cornerWidth = texDpad.Width / 4;
    cornerHeight = texDpad.Height / 4;
    //Calculate the Up rectangle
    recUp = new Rectangle(recBounding.X + cornerWidth,
        recBounding.Y,
        cornerWidth * 2, cornerHeight);
    //Calculate the Down rectangle
    recDown = new Rectangle(recBounding.X + cornerWidth,
        recBounding.Y + recBounding.Height - cornerHeight,
        cornerWidth * 2, cornerHeight);
    //Calculate the Left rectangle
    recLeft = new Rectangle(recBounding.X,
        recBounding.Y + cornerHeight, cornerWidth,
        cornerHeight * 2);
    //Calculate the Right rectangle
    recRight = new Rectangle(
        recBounding.X + recBounding.Width - cornerWidth,
        recBounding.Y + cornerHeight,
        cornerWidth, cornerHeight * 2);
    [/code]
  3. The third step is to insert the code to the Update() method:
    [code]
    // Check whether the tap point is inside one of the 4
    // direction rectangles
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 &&
        touches[0].State == TouchLocationState.Pressed)
    {
        Point point = new Point((int)touches[0].Position.X,
            (int)touches[0].Position.Y);
        // Check which direction rectangle contains the
        // tap point
        if (recUp.Contains(point))
        {
            directionString = "Up";
        }
        else if (recDown.Contains(point))
        {
            directionString = "Down";
        }
        else if (recLeft.Contains(point))
        {
            directionString = "Left";
        }
        else if (recRight.Contains(point))
        {
            directionString = "Right";
        }
    }
    [/code]
  4. In the Draw() method, we add the following lines:
    [code]
    //Draw the font and Touch Directional Pad texture
    spriteBatch.Begin();
    spriteBatch.DrawString(font, "Direction : " + directionString,
        new Vector2(0, 0), Color.White);
    spriteBatch.Draw(texDpad, recBounding, Color.White);
    spriteBatch.End();
    [/code]
  5. Finally, build and run the application. Click the Right and the Up part on the thumbstick, and you will see something similar to the following screenshots:

How it works…

In step 1, the four Rectangle objects recUp, recDown, recLeft, and recRight represent the Up, Down, Left, and Right directions respectively. recBounding is a rectangle that surrounds the D-pad image; it makes it convenient to position the D-pad and to locate the four direction rectangles relative to it. The cornerWidth and cornerHeight variables are used to calculate the square gap at the corners; for a square thumbstick image, the two variables actually have the same value. The last variable, directionString, shows the direction when you tap the different parts of the thumbstick.

In step 2, the code is mainly responsible for loading the D-pad image and for calculating and defining the four clickable direction rectangles. The main idea is easy; you can understand the logic and layout from the following figure:

D-pad image

The initialization of recBounding places the bounding rectangle at the bottom-left of the screen. Since the thumbstick image is a square, cornerWidth and cornerHeight are each a quarter of the side. You can see in the previous figure that the width of recUp and recDown is half of the side and their height is a quarter; for recLeft and recRight, the width is a quarter of the side and the height is half. For every direction rectangle, the shorter side is inset by cornerWidth or cornerHeight from the corresponding side of the thumbstick's bounding rectangle.

In step 3, when you get the tapped position, you should check whether the position is within one of the four direction rectangles. If yes, then according to the rectangle, the direction will receive the appropriate value, that is: Up, Down, Left, or Right. Once the updating is done, the last step will present the image and string on the screen.

In step 4, the method renders the direction string on the top-left of the screen and the thumbstick image at the bottom-left.

Dragging and swiping objects

For Windows Phone 7, XNA 4.0 provides two main ways to get touchscreen input. The first is tap based: with the TouchPanel.GetState() method, you can look up a particular finger by its Id for raw access to the touch points. The gesture system is a more advanced input approach that provides a number of pre-defined touch gestures, so you don't have to work out how to recognize common gestures from the raw data. The TouchPanel.ReadGesture() method offers you a chance to interact with the touchscreen in this second way. In this recipe, you will get close to two of the most exciting touchscreen gestures: dragging and swiping.

Getting ready

For Windows Phone XNA programming, the TouchPanel class has an important companion structure, GestureSample, and a corresponding method, ReadGesture(). Through the GestureType enum, Windows Phone 7 supports the following gestures:

  • Tap: You touch the screen at a single point and lift your finger straight away.
  • DoubleTap: You touch on the screen two times in a short time.
  • Hold: You touch and hold the screen at one point for more than one second.
  • FreeDrag: Touch and freely move your finger on the screen.
  • HorizontalDrag: Move your finger around the X axis of the screen, either in landscape mode or portrait mode.
  • VerticalDrag: Move your finger along the Y axis of the screen, either in landscape mode or portrait mode.
  • DragComplete: Lift your finger from the screen.
  • Flick: A quick swipe on the screen. The velocity of flick can be retrieved by reading the Delta member of GestureSample.
  • Pinch: Pinch behaves like a two-finger drag. Two fingers concurrently moving towards each other or apart.
  • PinchComplete: Lift your fingers from the screen.

If you want to use some of the gestures in your Windows Phone 7 XNA game, the best way is to enable them in the Initialize() method as follows:

[code]

TouchPanel.EnabledGestures =
GestureType.Tap |
GestureType.FreeDrag |
GestureType.Flick;

[/code]

Then in the Update() method, you could interact with the gestures as follows:

[code]

while (TouchPanel.IsGestureAvailable)
{
    GestureSample gesture = TouchPanel.ReadGesture();
    switch (gesture.GestureType)
    {
        case GestureType.Tap:
            . . .
            break;
        case GestureType.FreeDrag:
            . . .
            break;
        case GestureType.Flick:
            . . .
            break;
    }
}

[/code]

The while loop drains the gesture queue. If you have set TouchPanel.EnabledGestures in the Initialize() method and at least one gesture is waiting, IsGestureAvailable will be true. The TouchPanel.ReadGesture() method then retrieves the next gesture taking place on the screen, and you can write your own logic to react to the different gesture types.

Now you know the basic skeleton code for manipulating Windows Phone 7 gestures. In addition, I will explain the GestureSample class, which defines four properties of type Vector2:

  • Delta: Holds delta information about the first touchpoint in a multitouch gesture
  • Delta2: Holds delta information about the second touchpoint in a multitouch gesture
  • Position: Holds the current position of the first touchpoint in this gesture sample
  • Position2: Holds the current position of the second touchpoint in this gesture sample

The Position property indicates the current position of the finger relative to the screen. The Delta property presents the finger movements since the last position. The Delta is zero when the finger touches on the screen and remains there.

Furthermore, we should thank Charles Petzold, who reminds us of the following:

  • Position is valid for all gestures except Flick. Flick is positionless; only the Delta value can be tracked.
  • Delta is valid for all Drag gestures, Pinch and Flick.
  • Position2 and Delta2 are valid only for Pinch.
  • None of these properties are valid for the DragComplete and PinchComplete types.
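As a hedged illustration of these rules, a gesture handler might read only the properties that are valid for each type (this fragment is a sketch, not code from the recipes):

[code]

while (TouchPanel.IsGestureAvailable)
{
    GestureSample g = TouchPanel.ReadGesture();
    switch (g.GestureType)
    {
        case GestureType.Flick:
            // Flick is positionless; only Delta is meaningful
            Vector2 flickVelocity = g.Delta;
            break;
        case GestureType.Pinch:
            // Pinch exposes both touchpoints
            Vector2 first = g.Position;
            Vector2 second = g.Position2;
            break;
        case GestureType.DragComplete:
            // No position or delta values are valid here
            break;
    }
}

[/code]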

Now that we’ve covered the basic ideas of gestures, let’s look at a simple example in which you can drag the ball to the middle of the screen. If you swipe it, the ball will fly away and will come back when it collides with the screen bounds.

How to do it…

  1. In Visual Studio 2010, click File | New | Project | Windows Phone Game and create a Windows Phone Game project named DragSwipe. Change Game1.cs to DragSwipe.cs and then add the following lines as fields:
    [code]
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    Texture2D texBall;
    Rectangle HitRegion;
    // Ball position
    Vector2 positionBall;
    bool isSelected;
    // Ball velocity
    Vector2 velocity;
    // This is the percentage of velocity lost each second as
    // the sprite moves around.
    const float Friction = 0.9f;
    // Margin for screen bound
    const int Margin = 5;
    // Viewport bound for ball
    float boundLeft = 0f;
    float boundRight = 0f;
    float boundTop = 0f;
    float boundBottom = 0f;
    [/code]
  2. The second step is to add the following lines in the Initialize() method:
    [code]
    // Enable gestures
    TouchPanel.EnabledGestures =
    GestureType.Tap |
    GestureType.FreeDrag |
    GestureType.Flick;
    [/code]
  3. After the gestures are enabled, you need to insert the following code to LoadContent():
    [code]
    spriteBatch = new SpriteBatch(GraphicsDevice);
    texBall = Content.Load<Texture2D>("Round");
    // Set the HitRegion of ball in the center
    HitRegion = new Rectangle(
        GraphicsDevice.Viewport.Width / 2 - texBall.Width / 2,
        GraphicsDevice.Viewport.Height / 2 - texBall.Height / 2,
        texBall.Width, texBall.Height);
    // Set the ball position to the center
    positionBall = new Vector2(
        GraphicsDevice.Viewport.Width / 2 - texBall.Width / 2,
        GraphicsDevice.Viewport.Height / 2 - texBall.Height / 2);
    // Define the bound for ball moving
    boundLeft = 0;
    boundRight = GraphicsDevice.Viewport.Width - texBall.Width;
    boundTop = 0;
    boundBottom = GraphicsDevice.Viewport.Height - texBall.Height;
    [/code]
  4. Add the following code to the Update() method:
    [code]
    TouchCollection touches = TouchPanel.GetState();
    // Check the first finger touching the screen
    if (touches.Count > 0 &&
        touches[0].State == TouchLocationState.Pressed)
    {
        // Examine whether the tapped position is in the HitRegion
        Point point = new Point((int)touches[0].Position.X,
            (int)touches[0].Position.Y);
        if (HitRegion.Contains(point))
        {
            isSelected = true;
        }
        else
        {
            isSelected = false;
        }
    }
    // Check the available gestures
    while (TouchPanel.IsGestureAvailable)
    {
        // Read the ongoing gestures
        GestureSample gesture = TouchPanel.ReadGesture();
        // Process the different gestures
        switch (gesture.GestureType)
        {
            case GestureType.FreeDrag:
                if (isSelected)
                {
                    // When the ball is being dragged, update
                    // the position of the ball and HitRegion
                    positionBall += gesture.Delta;
                    HitRegion.X += (int)gesture.Delta.X;
                    HitRegion.Y += (int)gesture.Delta.Y;
                }
                break;
            case GestureType.Flick:
                if (isSelected)
                {
                    // When the ball is swiped, update its
                    // velocity
                    velocity = gesture.Delta *
                        (float)gameTime.ElapsedGameTime.TotalSeconds;
                }
                break;
        }
    }
    // Apply the velocity every frame
    positionBall += velocity;
    // Reduce the velocity every frame
    velocity *= 1f - (Friction *
        (float)gameTime.ElapsedGameTime.TotalSeconds);
    // Check bounds; once the ball collides with a bound,
    // reverse its direction
    if (positionBall.X < boundLeft)
    {
        positionBall.X = boundLeft + Margin;
        velocity.X *= -1;
    }
    else if (positionBall.X > boundRight)
    {
        positionBall.X = boundRight - Margin;
        velocity.X *= -1;
    }
    else if (positionBall.Y < boundTop)
    {
        positionBall.Y = boundTop + Margin;
        velocity.Y *= -1;
    }
    else if (positionBall.Y > boundBottom)
    {
        positionBall.Y = boundBottom - Margin;
        velocity.Y *= -1;
    }
    // Update the position of HitRegion
    HitRegion.X = (int)positionBall.X;
    HitRegion.Y = (int)positionBall.Y;
    [/code]
  5. The final Draw() method is very simple:
    [code]
    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        spriteBatch.Begin();
        spriteBatch.Draw(texBall, positionBall, Color.White);
        spriteBatch.End();
        base.Draw(gameTime);
    }
    [/code]
  6. Build and run the project. The application will look similar to the following screenshot:

How it works…

In step 1, the first four variables are the objects needed for rendering the texture on the screen, and positionBall indicates the ball position. The Friction constant is used for reducing the ball's velocity each frame. The Margin constant defines the distance the ball is pushed back from a bound after a collision. The four floats boundLeft, boundRight, boundTop, and boundBottom are the bound values that keep the ball moving inside the screen.

In step 2, the code enables the Tap, FreeDrag, and Flick gestures in this application. This means that you could track these actions and perform the corresponding logic.

In step 3, we set HitRegion and positionBall to the center of the screen, and compute the movement bounds so that the ball always stays at least its own width away from the right side of the screen and its own height away from the bottom.

In step 4, the first section of code checks whether the tapped point is inside the ball; if it is, isSelected becomes true. The code in the while loop reads and processes the different gestures. When the GestureType is FreeDrag, we update the ball position, and likewise the HitRegion, by gesture.Delta. When the GestureType is Flick, you are swiping the ball quickly, and we use the Delta value together with the elapsed game time to update the ball's velocity. After the while loop, you can use the latest values to perform your own logic; here we apply the velocity and then damp it with the Friction factor each frame, a simple approximation of inertia. The next section is the bounds check for the moving ball: as soon as the ball collides with a bound, its direction is reversed.

Controlling images with Multi-Touch control

Windows Phone 7 supports Multi-Touch control. With this technique, developers can implement all kinds of creative finger operations, such as steering a car in a racing game, rotating and zooming in and out of images, or hitting balls that come up from different positions concurrently. In this recipe, you will learn how to control images using Multi-Touch on Windows Phone 7.

Getting ready

In the movie Minority Report, Tom Cruise wears gloves with flashing lights on every finger, grasping and zooming the images on the glass wall without touching them. I am sure the incredible actions and effects impressed you at that time. Actually, I am not Tom Cruise, I am not the director Steven Spielberg, and the glass wall is not in front of me. However, I have Windows Phone 7, and although the effect can be as cool as the movie, I still want to present you with the amazing Multi-Touch on image controlling. Now, let’s begin our journey.

How to do it…

  1. The first step is to create a Windows Phone Game project named MultiTouchImage. Change Game1.cs to ImageControl.cs and then add a new class file Mountain.cs to the project. The complete class will be as follows:
    [code]
    public class Mountain
    {
        // The minimum and maximum scale values for the sprite
        public const float MinScale = 0.5f;
        public const float MaxScale = 3f;
        // The texture object
        private Texture2D texture;
        // The scale factor for pinch
        private float scale = 1f;
        // The center position of the object
        public Vector2 Center;
        // The Scale property of the object
        public float Scale
        {
            get { return scale; }
            // Clamp the scale value between min and max
            set
            {
                scale = MathHelper.Clamp(value, MinScale,
                    MaxScale);
            }
        }
        // The HitRegion property
        public Rectangle HitRegion
        {
            get
            {
                // Create a rectangle based on the texture center
                // and scale
                Rectangle r = new Rectangle(
                    (int)(Center.X - (texture.Width / 2 * Scale)),
                    (int)(Center.Y - (texture.Height / 2 * Scale)),
                    (int)(texture.Width * Scale),
                    (int)(texture.Height * Scale));
                return r;
            }
        }
        public Mountain(Texture2D texture)
        {
            this.texture = texture;
        }
        public void Draw(SpriteBatch spriteBatch)
        {
            spriteBatch.Draw(
                texture,
                Center,
                null,
                Color.White,
                0,
                new Vector2(texture.Width / 2,
                    texture.Height / 2),
                Scale,
                SpriteEffects.None,
                0);
        }
    }
    [/code]
  2. Now you have completed the Mountain class. The second step is to add the fields to your ImageControl main game class for interacting with the mountain texture:
    [code]
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    // Texture file for mountain
    Texture2D texMountain;
    // Mountain object
    Mountain mountain;
    // Bool value tracks the mountain object selection
    bool isSelected;
    [/code]
  3. The next step is to insert the following code into the LoadContent() method:
    [code]
    // Create a new SpriteBatch, which can be used to draw textures.
    spriteBatch = new SpriteBatch(GraphicsDevice);
    texMountain = Content.Load<Texture2D>("Mountain");
    // Initialize the mountain object
    mountain = new Mountain(texMountain);
    // Define the center of the mountain
    mountain.Center = new Vector2(
        GraphicsDevice.Viewport.Width / 2,
        GraphicsDevice.Viewport.Height / 2);
    [/code]
  4. Next, you should enable the Pinch and Drag gestures in the Initialize method:
    [code]
    TouchPanel.EnabledGestures =
    GestureType.Pinch | GestureType.FreeDrag;
    [/code]
  5. Then add the following core logic code for Multi-Touch in Windows Phone 7 to the Update() method:
    [code]
    // Allows the game to exit
    if (GamePad.GetState(PlayerIndex.One).Buttons.Back ==
        ButtonState.Pressed)
        this.Exit();
    // Get the touch position and check whether it is in HitRegion
    TouchCollection touches = TouchPanel.GetState();
    if (touches.Count > 0 &&
        touches[0].State == TouchLocationState.Pressed)
    {
        Point point = new Point((int)touches[0].Position.X,
                                (int)touches[0].Position.Y);
        if (mountain.HitRegion.Contains(point))
        {
            isSelected = true;
        }
        else
        {
            isSelected = false;
        }
    }
    // Check whether any gestures are available
    while (TouchPanel.IsGestureAvailable)
    {
        // Read the gestures
        GestureSample gestures = TouchPanel.ReadGesture();
        // Determine which gesture takes place
        switch (gestures.GestureType)
        {
            // When the gesture type is Pinch
            case GestureType.Pinch:
                // When the mountain texture is selected
                if (isSelected)
                {
                    // Get the current touch positions
                    // and calculate their previous positions
                    // according to the Delta values from the gesture
                    Vector2 vec1 = gestures.Position;
                    Vector2 oldvec1 = gestures.Position - gestures.Delta;
                    Vector2 vec2 = gestures.Position2;
                    Vector2 oldvec2 = gestures.Position2 - gestures.Delta2;
                    // Figure out the distance between the current
                    // and previous locations
                    float distance = Vector2.Distance(vec1, vec2);
                    float oldDistance = Vector2.Distance(oldvec1, oldvec2);
                    // Calculate the difference between the two
                    // and use that to alter the scale
                    float scaleChanged = (distance - oldDistance) * 0.01f;
                    mountain.Scale += scaleChanged;
                }
                break;
            // When the gesture is FreeDrag
            case GestureType.FreeDrag:
                // When the mountain texture is selected
                if (isSelected)
                {
                    mountain.Center += gestures.Delta;
                }
                break;
        }
    }
    [/code]
  6. The code in the Draw() method is simple:
    [code]
    spriteBatch.Begin();
    mountain.Draw(spriteBatch);
    spriteBatch.End();
    [/code]
  7. Now, build and run the project. If you do not have a Windows Phone 7 device, start the Multi-Touch simulator instead. When the application runs, you can experience the amazing Multi-Touch effect, as shown in the following screenshots. When your fingers move outward, the application behaves as shown in the screenshot on the right:
    Multi-Touch simulator

How it works…

In step 1, in the Mountain class, we defined MinScale and MaxScale to limit the range of the scale value. In the setter of the Scale property, the MathHelper.Clamp() method keeps the scale within these limits. The HitRegion property returns the position and size of the texture bounding box, computed from the texture center, its size, and the current scale. In the Draw() method, we use another overload of SpriteBatch.Draw() because we want to change the scale value of the texture. The complete parameter specification is as follows:

[code]
public void Draw (
Texture2D texture,
Vector2 position,
Nullable<Rectangle> sourceRectangle,
Color color,
float rotation,
Vector2 origin,
Vector2 scale,
SpriteEffects effects,
float layerDepth
)
[/code]

The texture, position, and color parameters are easy to understand. The sourceRectangle determines which part of the texture is rendered; in this example, we set it to null to draw the entire texture. The rotation parameter rotates the texture by the given angle, in radians. The origin parameter defines the point around which rotation and scaling take place. The effects parameter applies SpriteEffects to the texture; here, we set it to SpriteEffects.None. The last parameter, layerDepth, gives the layer order for drawing.
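For instance (a sketch only, reusing this recipe's spriteBatch and assuming some already loaded Texture2D named texture; the position, angle, and scale values are invented for illustration), a call to this overload might look like the following. Because origin is set to the texture center, both the rotation and the scale pivot around that center:

[code]
spriteBatch.Begin();
spriteBatch.Draw(
    texture,                          // the Texture2D to render
    new Vector2(240, 400),            // where the origin lands on screen
    null,                             // null sourceRectangle: whole texture
    Color.White,                      // no tinting
    MathHelper.ToRadians(45),         // rotation, in radians
    new Vector2(texture.Width / 2f,   // origin at the texture center, so
                texture.Height / 2f), // rotation and scale pivot there
    1.5f,                             // uniform scale factor
    SpriteEffects.None,               // no flipping
    0f);                              // layerDepth
spriteBatch.End();
[/code]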

In step 3, the code snippet loads the Mountain image and initializes the mountain position to the center of the screen.
In step 5, the code tests whether the touch point is inside the mountain region. Then, in the while loop, when TouchPanel.IsGestureAvailable is true, you get into the core code of the Multi-Touch Pinch gesture. Once two of your fingers tap the Windows Phone 7 touchscreen, the code reads the two current positions and, using the Delta values from the gesture, computes where the fingers were before they moved. It then calculates the distance between the current positions and the distance between the previous positions. The difference between these two distances gives the scale change factor used to resize the mountain texture, which is then rendered in Draw().
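To see the arithmetic in isolation, here is a self-contained sketch of the same pinch computation with XNA's Vector2 replaced by plain floats; the 0.01f factor is the recipe's own damping constant, and the finger coordinates are made-up test values:

```csharp
using System;

class PinchMathDemo
{
    // Euclidean distance, standing in for Vector2.Distance
    static float Distance(float x1, float y1, float x2, float y2)
    {
        float dx = x2 - x1, dy = y2 - y1;
        return (float)Math.Sqrt(dx * dx + dy * dy);
    }

    // Current positions of two fingers plus their per-gesture deltas;
    // returns the amount to add to the sprite's Scale
    public static float ScaleChange(
        float x1, float y1, float dx1, float dy1,
        float x2, float y2, float dx2, float dy2)
    {
        // previous positions = current positions minus the deltas
        float distance = Distance(x1, y1, x2, y2);
        float oldDistance = Distance(x1 - dx1, y1 - dy1,
                                     x2 - dx2, y2 - dy2);
        // fingers moving apart yields a positive change (zoom in)
        return (distance - oldDistance) * 0.01f;
    }

    static void Main()
    {
        // finger 1 stays at (0,0); finger 2 moved from (90,0) to (100,0)
        float change = ScaleChange(0, 0, 0, 0, 100, 0, 10, 0);
        Console.WriteLine(change); // about 0.1: distance grew by 10 pixels
    }
}
```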
Using accelerometer to make a ball move on your phone

Accelerometers, which are useful for game programming, are becoming more common nowadays. An accelerometer is a sensor built into Windows Phone 7 that reports the device’s current orientation; in other words, it can tell whether the device is lying on a horizontal plane or rotated to a vertical position. This data presents Windows Phone 7 game programmers with the opportunity to work with gravity, orientation, and so on. Instead of using the touchscreen or pressing a button to move objects on the screen, the accelerometer makes it possible for players to shake or tilt the Windows Phone 7 device in whichever direction they want, and the game play takes the corresponding actions. This feature creates a lot of possibilities for a game programmer. In this chapter, you will learn how to use it.

Getting ready

When you have a Windows Phone 7 in your hand, you will enjoy the accelerometer in the device. Just imagine playing a racing game, such as Need for Speed or GTA, without any controller or keyboard, using only your hands. It is very exciting! As a Windows Phone programmer, understanding and mastering the accelerometer technique will make your games more attractive and creative. The two most common uses of the accelerometer are:

  • Orientation adjustment
  • Movement track from initial position

The accelerometer sensor in Windows Phone 7 can tell you the direction of the earth relative to the phone, because when the phone is still, the accelerometer reacts to the force of gravity. Besides this, responding to a sudden action, such as a shake of the device, is another capability that can inspire your creativity.

Representing the output of the Windows Phone 7 accelerometer as a 3D vector is straightforward for developers. A vector has direction and length; the (x, y, z) of a 3D vector describes the direction from the origin (0, 0, 0) to the point (x, y, z). You can review the basic concepts and calculations in a computer graphics or linear algebra book.
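As a quick illustration (a self-contained sketch, not part of the recipe), the length of a reading (x, y, z) tells you the total acceleration acting on the device; for a phone at rest, the magnitude is roughly 1, one unit of gravity:

```csharp
using System;

class AccelLengthDemo
{
    // Length (magnitude) of an accelerometer reading (x, y, z)
    public static double Length(double x, double y, double z)
    {
        return Math.Sqrt(x * x + y * y + z * z);
    }

    static void Main()
    {
        // A phone lying flat and still typically reads about (0, 0, -1):
        // gravity alone, along the Z axis
        Console.WriteLine(Length(0, 0, -1)); // 1
    }
}
```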

While programming on a Windows Phone 7, you should be clear about the 3D coordinate system, no matter how the phone is oriented. The 3D coordinate system is completely different from the 2D coordinate system, in which the origin is located at the top-left of the Windows Phone 7 screen. The 3D coordinate system in the phone is a right-handed system. Here, you just need to know that the positive Z axis always points to the front, as shown in the following screenshots. The screenshot on the left is for landscape mode and the one on the right is for portrait mode:

3D coordinate system

In landscape mode, increasing Y points toward the top side, parallel to the control pad, and increasing X is perpendicular to the control pad, pointing toward the right side. In portrait mode, increasing Y toward the top side is perpendicular to the control pad, and increasing X toward the right side is parallel to the control pad. In addition, the 3D coordinate system remains fixed relative to the phone, regardless of how you hold it or what the orientation is; it is the accelerometer that allows Windows Phone 7 to detect orientation changes. In the next section, I will introduce the basic programming skeleton for the Windows Phone 7 accelerometer.

For a typical Windows Phone 7 XNA accelerometer application, the first step is to add a Microsoft.Devices.Sensors reference to your project and then add data members to the game to hold the accelerometer data:

[code]
Accelerometer accelSensor;
YourClass substance;
bool accelActive;
Vector3 accelReading = new Vector3();
const float ACCELFACTOR = 2.0f;
[/code]

The Vector3 variable accelReading will be used to read the position data from AccelerometerReadingEventArgs. The second step is to add an event handler for the ReadingChanged event of the Accelerometer object:

[code]
public void AccelerometerReadingChanged(object sender,
    AccelerometerReadingEventArgs e)
{
    accelReading.X = (float)e.X;
    accelReading.Y = (float)e.Y;
    accelReading.Z = (float)e.Z;
}
[/code]

This method returns void and takes two parameters: the sender, and the AccelerometerReadingEventArgs from which the accelerometer reading is obtained. After this, you should wire everything up in the Initialize() method:

[code]
accelSensor = new Accelerometer();
substance = new YourClass();
// Add the accelerometer event handler to the accelerometer sensor.
accelSensor.ReadingChanged += new EventHandler
    <AccelerometerReadingEventArgs>(AccelerometerReadingChanged);
[/code]

When you are done with the preparation, you need to start the accelerometer:

[code]
// Start the accelerometer
try
{
    accelSensor.Start();
    accelActive = true;
}
catch (AccelerometerFailedException e)
{
    // the accelerometer couldn't be started. No fun!
    accelActive = false;
}
catch (UnauthorizedAccessException e)
{
    // This exception is thrown in the emulator, which doesn't
    // support an accelerometer.
    accelActive = false;
}
[/code]

After it is started, the accelerometer calls your event handler whenever the ReadingChanged event is raised. The handler stores the latest reading (as shown in the event handler code earlier), and you then use its data in your Update() method:

[code]
if (accelActive)
{
    // accelerate the substance speed depending on
    // accelerometer action.
    substance.speed.X += accelReading.X * ACCELFACTOR;
    substance.speed.Y += -accelReading.Y * ACCELFACTOR;
}
[/code]

The final code is a skeleton snippet used to stop the accelerometer sensor. To avoid having your event handler called repeatedly when your game is not actually using the accelerometer data, you can stop the accelerometer when the game is paused, when menus are being shown, or at any other time, by calling the Stop() method. Like the Start() method, this method can throw an exception, so allow your code to handle the AccelerometerFailedException:

[code]
// Stop the accelerometer if it's active.
if (accelActive)
{
    try
    {
        accelSensor.Stop();
    }
    catch (AccelerometerFailedException e)
    {
        // the accelerometer couldn't be stopped now.
    }
}
[/code]

The complete accelerometer skeleton snippet will be as follows:

[code]
using Microsoft.Devices.Sensors;
. . .
public class Game : Microsoft.Xna.Framework.Game
{
    Accelerometer accelSensor;
    YourClass substance;
    bool accelActive;
    Vector3 accelReading = new Vector3();
    const float ACCELFACTOR = 2.0f;

    protected override void Initialize()
    {
        base.Initialize();
        accelSensor = new Accelerometer();
        substance = new YourClass();
        // Add the accelerometer event handler to the
        // accelerometer sensor.
        accelSensor.ReadingChanged += new EventHandler
            <AccelerometerReadingEventArgs>
            (AccelerometerReadingChanged);
        // Start the accelerometer
        try
        {
            accelSensor.Start();
            accelActive = true;
        }
        catch (AccelerometerFailedException e)
        {
            // the accelerometer couldn't be started. No fun!
            accelActive = false;
        }
        catch (UnauthorizedAccessException e)
        {
            // This exception is thrown in the emulator, which
            // doesn't support an accelerometer.
            accelActive = false;
        }
    }

    protected override void LoadContent()
    {
        . . .;
    }

    protected override void Update(GameTime gameTime)
    {
        if (accelActive)
        {
            // accelerate the substance speed depending on
            // accelerometer action.
            substance.speed.X += accelReading.X * ACCELFACTOR;
            substance.speed.Y += -accelReading.Y * ACCELFACTOR;
        }
    }

    protected override void UnloadContent()
    {
        // Unload any non ContentManager content here
        // Stop the accelerometer if it's active.
        if (accelActive)
        {
            try
            {
                accelSensor.Stop();
            }
            catch (AccelerometerFailedException e)
            {
                // the accelerometer couldn't be stopped now.
            }
        }
    }
}
[/code]

By now, you should be familiar with the basic code for the Windows Phone 7 accelerometer. Now, let’s make a new accelerometer project: a white ball that moves around the Windows Phone screen depending on your hand movement.

How to do it…

  1. First, create a Windows Phone Game project in Visual Studio 2010 named AccelerometerFallingBall, and then change Game1.cs to FallingBallGame.cs. For the accelerometer, add a Microsoft.Devices.Sensors reference to the project’s references. Then, add Ball.cs.
  2. After the preparation work, you should insert the following lines to the FallingBallGame class field:
    [code]
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    // Ball object
    Ball ball;
    Texture2D texBall;
    Rectangle recBound;
    Vector2 position;
    //Accelerometer object
    Accelerometer accelSensor;
    // The accelActive bool value indicates whether the accelerometer
    // is turned on or off.
    bool accelActive;
    // Demonstrate the direction of accelerometer
    Vector3 accelReading = new Vector3();
    // Amplify the variegation of ball velocity
    const float ACCELFACTOR = 2.0f;
    // Viewport bound
    int boundLeft = 0;
    int boundRight = 0;
    int boundTop = 0;
    int boundBottom = 0;
    [/code]
  3. Add the following lines to the Initialize() method of the FallingBallGame class:
    [code]
    accelSensor = new Accelerometer();
    accelSensor.ReadingChanged += new EventHandler
        <AccelerometerReadingEventArgs>
        (AccelerometerReadingChanged);
    // Start the accelerometer
    try
    {
        accelSensor.Start();
        accelActive = true;
    }
    catch (AccelerometerFailedException e)
    {
        // the accelerometer couldn't be started. No fun!
        accelActive = false;
    }
    catch (UnauthorizedAccessException e)
    {
        // This exception is thrown in the emulator, which
        // doesn't support an accelerometer.
        accelActive = false;
    }
    [/code]
  4. Then, add the AccelerometerReadingChanged() method in the FallingBallGame class:
    [code]
    public void AccelerometerReadingChanged(object sender,
        AccelerometerReadingEventArgs e)
    {
        accelReading.X = (float)e.X;
        accelReading.Y = (float)e.Y;
        accelReading.Z = (float)e.Z;
    }
    [/code]
  5. The fifth step is to insert the code into the LoadContent() method of the FallingBallGame class:
    [code]
    // Create a new SpriteBatch, which can be used to draw textures.
    spriteBatch = new SpriteBatch(GraphicsDevice);
    texBall = Content.Load<Texture2D>("Round");
    recBound = new Rectangle(
        GraphicsDevice.Viewport.Width / 2 - texBall.Width / 2,
        GraphicsDevice.Viewport.Height / 2 - texBall.Height / 2,
        texBall.Width, texBall.Height);
    // The center position
    position = new Vector2(
        GraphicsDevice.Viewport.Width / 2 - texBall.Width / 2,
        GraphicsDevice.Viewport.Height / 2 - texBall.Height / 2);
    // Bound calculation
    boundLeft = 0;
    boundRight = GraphicsDevice.Viewport.Width - texBall.Width;
    boundTop = 0;
    boundBottom = GraphicsDevice.Viewport.Height - texBall.Height;
    // Initialize ViewPortBound
    ViewPortBound viewPortBound = new ViewPortBound(boundTop,
        boundLeft, boundRight, boundBottom);
    // Initialize Ball
    ball = new Ball(this, spriteBatch, texBall, recBound,
        position, viewPortBound);
    [/code]
  6. This step makes the ball interact with the accelerometer’s latest value. Add the following lines to the Update() method of the FallingBallGame class:
    [code]
    if (Microsoft.Devices.Environment.DeviceType ==
        DeviceType.Device)
    {
        if (accelActive)
        {
            // Accelerate the ball depending on
            // accelerometer action.
            ball.Velocity.X += accelReading.X * ACCELFACTOR;
            ball.Velocity.Y += -accelReading.Y * ACCELFACTOR;
        }
    }
    else if (Microsoft.Devices.Environment.DeviceType ==
        DeviceType.Emulator)
    {
        // Simulate the accelerometer with the keyboard when
        // running on the emulator
        KeyboardState keyboardCurrentState = Keyboard.GetState();
        if (keyboardCurrentState.IsKeyDown(Keys.Left))
            ball.Velocity.X -= 5f;
        if (keyboardCurrentState.IsKeyDown(Keys.Right))
            ball.Velocity.X += 5f;
        if (keyboardCurrentState.IsKeyDown(Keys.Up))
            ball.Velocity.Y -= 5f;
        if (keyboardCurrentState.IsKeyDown(Keys.Down))
            ball.Velocity.Y += 5f;
    }
    ball.Update(gameTime);
    [/code]
  7. Build and run the project. The ball will move with the shake of your hand, as shown in the next screenshot:

How it works…

In step 2, in the first section, we declare the Ball object along with the texture, bound, and position of the ball. Then we declare the Accelerometer object. The bool value indicates whether the accelerometer is active. The Vector3 variable accelReading holds the accelerometer reading, and ACCELFACTOR amplifies the effect of accelerometer changes. The last section covers the variables used for bounds checking.

In step 3, the code initializes the accelSensor object and associates the EventHandler with the ReadingChanged event for the accelerometer object. It then enables the accelerometer.

In step 4, the AccelerometerReadingChanged method is responsible for updating accelReading with every accelerometer direction change.

In step 6, notice that we use Microsoft.Devices.Environment.DeviceType to check which kind of device the code is running on. This is a challenge of Windows Phone 7 development: the emulator cannot provide real accelerometer data, because that requires specific hardware, so you can simulate it through the keyboard instead. When your application runs on a real Windows Phone 7 device, the code reads the actual accelerometer data to update the ball velocity. Otherwise, the keyboard simulation changes the ball velocity by 5 units per valid key press.

Musical Robot (Multi-Touch)

Musical Robot is a quirky musical instrument app that can play two octaves’ worth of robotic sounds based on where you place your fingers. Touching toward the left produces lower notes, and touching toward the right produces higher notes. You can slide your fingers around to produce interesting effects. You can use multiple fingers, as many as your phone supports simultaneously, to play chords (multiple notes at once). You’re more likely to use this app to annoy your friends than to play actual compositions, but it’s fun nevertheless!

The User Interface

Musical Robot’s main page, pictured in Figure 38.1 in its initial state, contains a few visual elements that have nothing to do with the core functionality of this app, but provide some visual flair and simple instructions. Listing 38.1 contains the XAML.

FIGURE 38.1 The main page contains a robot image and instructions.

LISTING 38.1 MainPage.xaml—The User Interface for Musical Robot’s Main Page

[code]

<phone:PhoneApplicationPage x:Class="WindowsPhoneApp.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
    SupportedOrientations="Landscape" Orientation="Landscape">
  <Canvas>
    <!-- The dynamic mouth that is visible through the image's mouth hole -->
    <Rectangle Canvas.Left="168" Canvas.Top="127" Width="114" Height="23"
               RadiusX="10" RadiusY="10" RenderTransformOrigin=".5,.5"
               Fill="{StaticResource PhoneForegroundBrush}">
      <Rectangle.RenderTransform>
        <!-- The scale is continually changed from code-behind -->
        <ScaleTransform x:Name="MouthScale" ScaleX="0"/>
      </Rectangle.RenderTransform>
    </Rectangle>
    <!-- 5 lights representing up to 5 simultaneous fingers -->
    <Ellipse x:Name="Light1" Visibility="Collapsed" Canvas.Left="137"
             Canvas.Top="284" Width="23" Height="23" Fill="Red"/>
    <Ellipse x:Name="Light2" Visibility="Collapsed" Canvas.Left="174"
             Canvas.Top="294" Width="23" Height="23" Fill="Red"/>
    <Ellipse x:Name="Light3" Visibility="Collapsed" Canvas.Left="213"
             Canvas.Top="298" Width="23" Height="23" Fill="Red"/>
    <Ellipse x:Name="Light4" Visibility="Collapsed" Canvas.Left="252"
             Canvas.Top="294" Width="23" Height="23" Fill="Red"/>
    <Ellipse x:Name="Light5" Visibility="Collapsed" Canvas.Left="290"
             Canvas.Top="284" Width="23" Height="23" Fill="Red"/>
    <!-- The accent-colored robot -->
    <Rectangle Width="453" Height="480" Fill="{StaticResource PhoneAccentBrush}">
      <Rectangle.OpacityMask>
        <ImageBrush ImageSource="Images/robot.png"/>
      </Rectangle.OpacityMask>
    </Rectangle>
    <!-- Instructions -->
    <TextBlock Canvas.Left="350" Canvas.Top="40" FontFamily="Segoe WP Black"
               FontSize="40" Foreground="{StaticResource PhoneAccentBrush}">
      <TextBlock.RenderTransform>
        <RotateTransform Angle="-10"/>
      </TextBlock.RenderTransform>
      TAP &amp; DRAG.
      <LineBreak/>
      USE MANY FINGERS!
    </TextBlock>
  </Canvas>
</phone:PhoneApplicationPage>

[/code]

  • MouthScale’s ScaleX value is randomly set anywhere from 0 to 1 whenever a finger makes contact with or moves across the screen. This provides the illusion that the robot is singing as it makes its noises.
  • The circles are filled with red to indicate how many fingers are simultaneously in contact with the screen (up to 5). The limit of 5 is simply due to space constraints in the artwork. It does not reflect on multi-touch limitations of the operating system or any particular device.

How many simultaneous touch points does Windows Phone support?

All Windows phones are guaranteed to support at least four simultaneous touch points. (Current models support exactly four.) The operating system can support up to 10, in case an ambitious device wants to support that many.

The Code-Behind

Listing 38.2 contains the code-behind for the main page.

LISTING 38.2 MainPage.xaml.cs—The Code-Behind for Musical Robot’s Main Page

[code]

using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows;
using System.Windows.Input;
using System.Windows.Navigation;
using Microsoft.Phone.Controls;
using Microsoft.Xna.Framework.Audio;

namespace WindowsPhoneApp
{
    public partial class MainPage : PhoneApplicationPage
    {
        // Store a separate sound effect instance for each unique finger
        Dictionary<int, SoundEffectInstance> fingerSounds =
            new Dictionary<int, SoundEffectInstance>();

        // For the random mouth movement
        Random random = new Random();

        public MainPage()
        {
            InitializeComponent();
            SoundEffects.Initialize();
        }

        protected override void OnNavigatedTo(NavigationEventArgs e)
        {
            base.OnNavigatedTo(e);
            // Subscribe to the touch/multi-touch event.
            // This is application-wide, so only do this when on this page.
            Touch.FrameReported += Touch_FrameReported;
        }

        protected override void OnNavigatedFrom(NavigationEventArgs e)
        {
            base.OnNavigatedFrom(e);
            // Unsubscribe from this application-wide event
            Touch.FrameReported -= Touch_FrameReported;
        }

        void Touch_FrameReported(object sender, TouchFrameEventArgs e)
        {
            // Get all touch points
            TouchPointCollection points = e.GetTouchPoints(this);

            // Filter out the "up" touch points because those fingers are
            // no longer in contact with the screen
            int numPoints =
                (from p in points where p.Action != TouchAction.Up select p).Count();

            // Update up to 5 robot lights to indicate how many fingers
            // are in contact
            this.Light1.Visibility =
                (numPoints >= 1 ? Visibility.Visible : Visibility.Collapsed);
            this.Light2.Visibility =
                (numPoints >= 2 ? Visibility.Visible : Visibility.Collapsed);
            this.Light3.Visibility =
                (numPoints >= 3 ? Visibility.Visible : Visibility.Collapsed);
            this.Light4.Visibility =
                (numPoints >= 4 ? Visibility.Visible : Visibility.Collapsed);
            this.Light5.Visibility =
                (numPoints >= 5 ? Visibility.Visible : Visibility.Collapsed);

            // If any fingers are in contact, stretch the inner mouth
            // anywhere from 0 to 100%
            if (numPoints == 0)
                this.MouthScale.ScaleX = 0;
            else
                this.MouthScale.ScaleX = this.random.NextDouble(); // a # from 0-1

            // Process each touch point individually
            foreach (TouchPoint point in points)
            {
                // The "touch device" is each finger, and it has a unique ID
                int fingerId = point.TouchDevice.Id;
                if (point.Action == TouchAction.Up)
                {
                    // Stop the sound corresponding to this just-lifted finger
                    if (this.fingerSounds.ContainsKey(fingerId))
                        this.fingerSounds[fingerId].Stop();
                    // Remove the sound from the dictionary
                    this.fingerSounds.Remove(fingerId);
                }
                else
                {
                    // Turn the horizontal position into a pitch from -1 to 1.
                    // -1 represents 1 octave lower, 1 represents 1 octave higher.
                    float pitch =
                        (float)(2 * point.Position.X / this.ActualWidth) - 1;
                    if (!this.fingerSounds.ContainsKey(fingerId))
                    {
                        // We haven't yet created the sound effect for this
                        // finger, so do it
                        this.fingerSounds.Add(fingerId,
                            SoundEffects.Sound.CreateInstance());
                        this.fingerSounds[fingerId].IsLooped = true;
                    }
                    // Start playing the looped sound at the correct pitch
                    this.fingerSounds[fingerId].Pitch = pitch;
                    this.fingerSounds[fingerId].Play();
                }
            }

            // Work around the fact that we sometimes don't get Up
            // actions reported
            CheckForStuckSounds(points);
        }

        void CheckForStuckSounds(TouchPointCollection points)
        {
            List<int> soundsToRemove = new List<int>();
            // Inspect each active sound
            foreach (var sound in this.fingerSounds)
            {
                bool found = false;
                // See if this sound corresponds to an active finger
                foreach (TouchPoint point in points)
                {
                    if (point.TouchDevice.Id == sound.Key)
                    {
                        found = true;
                        break;
                    }
                }
                // It doesn't, so stop the sound and mark it for removal
                if (!found)
                {
                    sound.Value.Stop();
                    soundsToRemove.Add(sound.Key);
                }
            }
            // Remove each orphaned sound
            foreach (int id in soundsToRemove)
                this.fingerSounds.Remove(id);
        }
    }
}

[/code]

  • Because this app contains only a single page, unsubscribing from the FrameReported event in OnNavigatedFrom is not necessary, but this is done to keep good hygiene in case another page is ever added (or this code is used in a different app).
  • Inside the FrameReported event handler (Touch_FrameReported), GetTouchPoints is called to get the entire collection of touch points. On most devices, this collection will always contain 1–4 items. It is never zero-length or null.
  • Because each touch point can be in the returned collection for one of three reasons (making initial contact with the screen, moving on the screen, or being released from the screen), simply counting its items does not tell us how many fingers are currently in contact with the screen. To do this, we must filter out any touch points whose Action property is set to Up. This could be done by manually enumerating the points collection, but this code instead opts for a LINQ query to set the value of numPoints.
  • The code responsible for starting and stopping each sound makes use of a property on TouchPoint not mentioned in the preceding chapter: TouchDevice. This odd-sounding property represents the user’s finger responsible for the touch point. Each finger is assigned an integer ID, exposed as a property on TouchDevice, which can be used to track each finger individually. This would otherwise be impossible when multiple fingers are triggering events simultaneously. The ID assigned to any finger is guaranteed to remain unique during the lifetime of a down/move/up action cycle.
  • The fingerSounds dictionary leverages the unique finger IDs for its keys. This listing starts playing a looped sound as each new finger makes contact with the screen, it adjusts the pitch of the sound as the finger is moved, and it stops the sound as the finger is released.
  • The SoundEffects class used by this listing is just like the same-named class used by many of this book’s apps, but customized to expose a single sound through its Sound property. The included sound is so short that it is barely audible when played by itself, but the IsLooped property on SoundEffectInstance is leveraged to produce a smooth and very audible sound. The pitch of each sound is varied based on the horizontal position of each touch point.
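The pitch mapping itself is plain linear arithmetic. The following self-contained sketch (the 800-pixel width is just an example landscape screen width, not taken from the listing) shows how a horizontal position maps onto XNA's -1 to 1 Pitch range:

```csharp
using System;

class PitchDemo
{
    // Map a horizontal touch position to XNA's Pitch range:
    // -1 (one octave down) at the left edge, +1 (one octave up) at the right
    public static float PositionToPitch(double x, double screenWidth)
    {
        return (float)(2 * x / screenWidth - 1);
    }

    static void Main()
    {
        Console.WriteLine(PositionToPitch(0, 800));   // -1 (left edge)
        Console.WriteLine(PositionToPitch(400, 800)); // 0  (center: base pitch)
        Console.WriteLine(PositionToPitch(800, 800)); // 1  (right edge)
    }
}
```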

Occasionally, a finger-up action might not be reported!

Due to a bug in Silverlight (or perhaps in some touch drivers), a finger that has reported touching down and moving around might never report that it has been lifted up. Instead, the corresponding touch point simply vanishes from the collection returned by GetTouchPoints. In Musical Robot, this would manifest as sounds that never stop playing.

To prevent such “stuck” sounds, the CheckForStuckSounds method in Listing 38.2 looks at every active sound and attempts to find a current touch point that corresponds to it. If a sound is not associated with an active touch point, it is stopped and removed from the dictionary, just like what happens when a finger properly reports being lifted up.
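The same cleanup pattern can be sketched independently of the touch APIs. In this hypothetical stand-in, strings play the role of SoundEffectInstance objects and a set of integers plays the role of the currently reported touch point IDs:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class StuckCleanupDemo
{
    // Sketch of the CheckForStuckSounds pattern: drop every dictionary
    // entry whose key no longer appears among the active IDs.
    public static void RemoveOrphans(Dictionary<int, string> fingerSounds,
                                     HashSet<int> activeIds)
    {
        // Collect orphaned keys first; a dictionary cannot be modified
        // while it is being enumerated
        List<int> toRemove = fingerSounds.Keys
            .Where(id => !activeIds.Contains(id))
            .ToList();
        foreach (int id in toRemove)
            fingerSounds.Remove(id);
    }

    static void Main()
    {
        var sounds = new Dictionary<int, string>
        {
            { 1, "soundA" }, { 2, "soundB" }, { 3, "soundC" }
        };
        // Finger 2 vanished without an Up action being reported
        RemoveOrphans(sounds, new HashSet<int> { 1, 3 });
        Console.WriteLine(sounds.Count);          // 2
        Console.WriteLine(sounds.ContainsKey(2)); // False
    }
}
```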

Can a finger ID continue to identify a specific finger even if it temporarily leaves the screen?

No, the phone cannot continue to track a specific finger once it has broken contact with the screen. The unique ID, assigned during each new initial contact, is only valid until the FrameReported event reports an Up action for that touch point.

The Finished Product


Flash Builder ViewNavigator

Flash Builder and its Hero APIs provide functionality to handle view management.

Create a new project under File→New→Flex Mobile Project. On the Mobile Settings panel, choose Application template→Mobile Application, and then select Finish. Look at the main default application file. A firstView attribute was added to the MobileApplication tag. It points to an MXML file inside a views directory:

[code]

<s:MobileApplication
    xmlns:fx="http://ns.adobe.com/mxml/2009"
    xmlns:s="library://ns.adobe.com/flex/spark"
    firstView="views.OpeningView">
</s:MobileApplication>

[/code]

Notice a new directory called views in your project. It contains an MXML file named after your project, with a HomeView suffix, which was given a title of HomeView. This is the default first view, which appears when the application starts:

[code]

<s:View
    xmlns:fx="http://ns.adobe.com/mxml/2009"
    xmlns:s="library://ns.adobe.com/flex/spark"
    title="HomeView">
    <fx:Declarations> </fx:Declarations>
</s:View>

[/code]

A few new components were created especially for mobile development. The spark.components.ViewNavigatorApplication container automatically creates a ViewNavigator object. It is a container that consists of a collection of views, and is used to control the navigation between views, their addition or removal, and the navigation history.

The View is also a container; it extends the Group component. Note its navigator attribute, which references the ViewNavigator container, and its data attribute, which is used to store an Object, whether the view is currently visible or was previously visited.

The View dispatches three events of type FlexEvent. FlexEvent.VIEW_ACTIVATE and FlexEvent.VIEW_DEACTIVATE are self-explanatory. FlexEvent.REMOVING is dispatched when the view is about to be removed.

Views are destroyed to keep a small memory footprint. If a particular view is complicated and may take some time to display, you can keep it in memory by setting its destructionPolicy attribute to none.

Let’s add two buttons to our default view to navigate to another view:

[code]

<s:View
xmlns:fx="http://ns.adobe.com/mxml/2009"
xmlns:s="library://ns.adobe.com/flex/spark"
title="HomeView">
<fx:Declarations> </fx:Declarations>
<fx:Script>
<![CDATA[
private function onClick(event:MouseEvent):void {
navigator.pushView(ContextView);
}
]]>
</fx:Script>
<s:Button id="Apples" click="onClick(event)" y="100" />
<s:Button id="Oranges" click="onClick(event)" y="200" />
</s:View>

[/code]

Clicking on the buttons calls the onClick function which, in turn, calls the navigator's pushView method. This method navigates to a view we will call ContextView. The pushView method takes three arguments: the name of the view, a data object, and a transition animation of type ViewTransition. Only the first parameter is required.

Other navigation methods are popView to go back to the previous view, popToFirstView to jump to the default first view, and the self-explanatory popAll and replaceView. navigator.activeView returns the current view.
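As a sketch, back navigation could then reuse these methods; the handler names here are placeholders:

[code]

// go back one view in the stack
private function onBack(event:MouseEvent):void {
navigator.popView();
}
// jump directly back to the first view
private function onHome(event:MouseEvent):void {
navigator.popToFirstView();
}

[/code]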

Change the onClick function to pass the button ID as an argument:

[code]

private function onClick(event:MouseEvent):void {
navigator.pushView(ContextView, {fruit:event.currentTarget.id});
}

[/code]

Now create the second view to navigate to. Right-click on the views folder, select New→MXML Component, and enter ContextView in the Name field. Note that the file is based on spark.components.View.

Add one text field to populate the view with the data received and one button to navigate back to the first view. Note the add tag, which handles the add event dispatched when a view is added to the container:

[code]

<s:View
xmlns:fx="http://ns.adobe.com/mxml/2009"
xmlns:s="library://ns.adobe.com/flex/spark"
title="ContextView">
<fx:Declarations> </fx:Declarations>
<s:add>
context.text = data.fruit;
</s:add>
<fx:Script>
<![CDATA[
private function onClick(event:MouseEvent):void {
navigator.pushView(HomeView);
}
]]>
</fx:Script>
<s:Button click="onClick(event)" />
<s:TextArea id="context" />
</s:View>

[/code]

Run this example and test the navigation. The context text area is populated with the data passed.

By default, a ViewTransition animates between screens, where one view slides out and the other slides in, defined as SlideViewTransition. You can change it to one of the following transitions: CrossFadeViewTransition, FlipViewTransition, or ZoomViewTransition.

The default transition can be changed in navigator.defaultPopTransition and navigator.defaultPushTransition:

[code]

import spark.transitions.CrossFadeViewTransition;
import spark.transitions.FlipViewTransition;
var pushTransition:FlipViewTransition = new FlipViewTransition();
navigator.defaultPushTransition = pushTransition;
var popTransition:CrossFadeViewTransition = new CrossFadeViewTransition();
navigator.defaultPopTransition = popTransition;
// OR pass the transition per call
private function onClick(event:MouseEvent):void {
navigator.pushView(HomeView, {}, new FlipViewTransition());
}

[/code]

To suppress the transition altogether, enter the following in the Default Application file:

[code]

navigator.transitionEnabled = false;
// OR
navigator.defaultPushTransition = null;
navigator.defaultPopTransition = null;

[/code]

A default ActionBar control is placed at the top of the screen. It functions as a control and navigation menu and provides contextual information, such as the current active view. It has a navigation area, a control area, and an action area. To modify it, use the navigationContent, titleContent, and actionContent tags.
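For instance, the three areas can be populated in MXML inside a view. This is only a sketch; the labels and the onSearch handler name are placeholders:

[code]

<s:navigationContent>
<s:Button label="Back" click="navigator.popView()"/>
</s:navigationContent>
<s:titleContent>
<s:Label text="My Application"/>
</s:titleContent>
<s:actionContent>
<s:Button label="Search" click="onSearch()"/>
</s:actionContent>

[/code]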

By default, the name of the active view shows in the titleContent. To add a button to the navigationContent tag to go back home, use:

[code]

private function goHome():void {
navigator.popToFirstView();
}
<s:navigationContent>
<s:Button label="Home" click="goHome()"/>
</s:navigationContent>

[/code]

If you use the ActionBar, move all navigation functionality from the individual views into it. Alternatively, you can choose not to use it at all. To hide it at the view level, use the following:

[code]

<s:creationComplete>
actionBarVisible = false;
</s:creationComplete>

[/code]

To hide it at the application level, use this code:

[code]

navigator.actionBar.visible = false;
navigator.actionBar.includeInLayout = false;

[/code]

Pressing the back button automatically goes back to the previous view. To overwrite this functionality, declare an empty navigationContent tag as follows:

[code]

<s:navigationContent/>

[/code]

If the user leaves the application on a specific view, you can start the application again on the same view with the same data by using the sessionCachingEnabled attribute:

[code]

<s:MobileApplication
xmlns:fx="http://ns.adobe.com/mxml/2009"
xmlns:s="library://ns.adobe.com/flex/spark"
sessionCachingEnabled="true"
firstView="views.OpeningView">
</s:MobileApplication>

[/code]

 

Navigation

Unlike desktop applications, mobile applications only display one screen, or view, at a time. Each screen should be designed for a single task with clear and focused content. Intuitive single-click navigation to move to previous and next steps is key.

We will explore two approaches in this chapter. The first is a navigation system I wrote for my mobile applications using pure ActionScript. The second approach takes advantage of the ViewNavigator in the Flex Hero framework for mobile applications. It includes a few additional features such as transitions and a global ActionBar.

Navigation

You want your audience to be able to move forward between screens without re-creating the same steps over and over. You may need to provide the logic to navigate back to previous screens.

Google discourages the use of the physical back button for anything but going back to the previous application. However, the functionality exists, and some applications, as well as Flash Builder Mobile, use it to go through the stack of views. In my example, I create a back button within the application instead.
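For reference, an AIR for Android application can also intercept the physical back key itself. This sketch assumes you want to handle navigation manually and suppress the default behavior of leaving the application:

[code]

import flash.desktop.NativeApplication;
import flash.events.KeyboardEvent;
import flash.ui.Keyboard;

NativeApplication.nativeApplication.addEventListener(
KeyboardEvent.KEY_DOWN, onKey);

function onKey(event:KeyboardEvent):void {
if (event.keyCode == Keyboard.BACK) {
// stop the default behavior of leaving the application
event.preventDefault();
// navigate back within the application instead
}
}

[/code]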

ViewManager

I originally developed this code for a conference scheduler application.

Attendees can carry the scheduler in their pocket and organize and save their own schedules. The application’s business logic is fairly simple, but all the screens are interconnected and can be accessed from several different points. For instance, from the session view, the user can access the session’s speaker(s), and from the speaker view, the user can access one of the sessions he is speaking at.

Creating views

The ViewManager creates the different views and manages them during the life of the application.

The document class creates an instance of the ViewManager. It calls its init function and passes a reference of the timeline:

[code]

import view.ViewManager;
// member variable
private var viewManager:ViewManager;
viewManager = new ViewManager();
viewManager.init(this);

[/code]

The ViewManager class stores a reference to the timeline to add and remove views from the display list:

[code]

private var timeline:MovieClip;
public function init(timeline:MovieClip):void {
this.timeline = timeline;
}

[/code]

The ViewManager creates an instance of each view and stores them in a viewList object. The following code assumes the MenuView, SessionsView, and SessionView classes exist.

The initialization process for each view, creation of the view’s member variables and references to other objects, only needs to happen once.

Note that the views are of data type BaseView.

[code]

private var currentView:BaseView;
private var viewList:Object = {};
public function init(timeline:MovieClip):void {
this.timeline = timeline;
createView("menu", new MenuView());
createView("sessions", new SessionsView());
createView("session", new SessionView());
}
private function createView(name:String, instance:BaseView):void {
viewList[name] = instance;
}

[/code]

The initial view display

When the application first starts, the document class loads an XML document that contains all the data regarding the conference, such as the list of sessions and speakers. While this is taking place, the ViewManager displays an introductory view without any interactivity. Let’s modify the init method to add this functionality. The setCurrentView method will be discussed in the next paragraph:

[code]

public function init(timeline:MovieClip):void {
this.timeline = timeline;
createView("intro", new IntroView());
createView("menu", new MenuView());
createView("sessions", new SessionsView());
createView("session", new SessionView());
setCurrentView({view:"intro"});
}

[/code]

The current view display

Once the data is loaded, parsed, and stored in the model part of the application, the document class calls the onDataReady method on the ViewManager:

[code]

// set up application, model and get data from external xml
viewManager.onDataReady();

[/code]

In turn, the ViewManager defines the new view by calling the setCurrentView method and passes an object with the property view to define the view to display:

[code]

public function onDataReady():void {
setCurrentView({view:"menu"});
}

[/code]

The setCurrentView method removes the previous view if there is one. It then stores the view in the currentView variable and adds it to the display list. Two methods, onHide and onShow, are called via the IView interface, discussed next. Each view uses these methods to add to or clear the display list and destroy objects.

The method also registers the view for a custom ClickEvent.NAV_EVENT with the goTo method of the ViewManager as the listener, which in turn calls setCurrentView.

[code]

import view.ClickEvent;
private var currentView:BaseView;
private function setCurrentView(object:Object):void {
// remove current view
if (currentView) {
currentView.removeEventListener(ClickEvent.NAV_EVENT, goTo);
IView(currentView).onHide();
timeline.removeChild(currentView);
currentView = null;
}
// add new view
currentView = viewList[object.view];
if (object.id != undefined) {
currentView.setID(object.id);
}
currentView.addEventListener(ClickEvent.NAV_EVENT, goTo, false, 0, true);
IView(currentView).onShow();
timeline.addChild(currentView);
}
// pass event data object
private function goTo(event:ClickEvent):void {
setCurrentView(event.data);
}

[/code]

The IView interface

It is imperative that all views have the two methods, onHide and onShow, so we use an IView interface. Each view also needs a method—here it is clickAway—to navigate to the next view. In our application, this always occurs upon user interaction. We will therefore use a MouseEvent:

[code]

package view {
import flash.events.MouseEvent;
public interface IView
{
function onShow():void;
function onHide():void;
function clickAway(event:MouseEvent):void;
}
}

[/code]

Creating a custom event

A custom event is used to pass the destination view and additional data, if needed, from the current view. For example, if the user is looking at the screen displaying all the conference sessions and clicks on a specific session, we use the event object to pass the session ID to the Session view, as illustrated in Figure 16-1:

[code]

{view:”session”, id:5}

[/code]

Figure 16-1. The mechanism to send destination and data from the current view to the destination view via the ViewManager

The custom class is as follows. Its data property is an object, so we can add additional parameters as needed:

[code]

import flash.events.Event;
final public class ClickEvent extends Event {
public static const NAV_EVENT:String = "NavEvent";
public var data:Object;
public function ClickEvent(type:String, data:Object = null) {
super(type, true, true);
this.data = data;
}
public override function clone():Event {
return new ClickEvent(this.type, this.data);
}
}

[/code]

Individual Views

Let’s examine some of the views.

Inheritance

Some functionality is identical for all views. For this reason, they all inherit from the same super class, called BaseView. Here we declare two member variables, id and container, and a function, setID. It is a simple class initially. We will develop it further in this chapter to add a button and interactivity to navigate in reverse:

[code]

package view {
import flash.display.Sprite;
import flash.events.Event;
import flash.events.MouseEvent;
public class BaseView extends Sprite
{
protected var id:int;
protected var container:Sprite;
public function setID(id:int):void {
this.id = id;
}
}
}

[/code]

The following code is for the MenuView class. It adds three buttons—for sessions, speakers, and a custom schedule—and their listeners on onShow. It clears listeners and empties the display list on onHide. Figure 16-2 shows the Menu view of the AIR scheduler application for the 2010 GoogleIO conference:

[code]

package view {
import flash.events.MouseEvent;
import view.ClickEvent;
final public class MenuView extends BaseView implements IView {
public function MenuView(){}
public function onShow():void {
var sessions:sessionsBut = new sessionsBut();
sessions.view = "sessions";
sessions.addEventListener(MouseEvent.CLICK, clickAway);
var speakers:speakersBut = new speakersBut();
speakers.view = "speakers";
speakers.addEventListener(MouseEvent.CLICK, clickAway);
var schedule:scheduleBut = new scheduleBut();
schedule.view = "schedule";
schedule.addEventListener(MouseEvent.CLICK, clickAway);
addChild(sessions);
addChild(speakers);
addChild(schedule);
}
public function onHide():void {
while(numChildren > 0) {
getChildAt(0).removeEventListener(MouseEvent.CLICK, clickAway);
removeChildAt(0);
}
}
public function clickAway(event:MouseEvent):void {
dispatchEvent(new ClickEvent(ClickEvent.NAV_EVENT,
{view:event.currentTarget.view}));
}
}
}

[/code]

Figure 16-2. The Menu view of the AIR scheduler application for the 2010 GoogleIO conference

The next example shows the SessionsView code. Note that the code is minimal to keep the focus on navigation. For instance, the scrolling mechanism was left out.

As mentioned before, the initialization process for each view only needs to happen once for the life of the application. Here the SessionsView class acquires the data for the sessions just once if its list variable is null.

The sessions data is stored in a Vector object called sessionList of the static class Sessions (not covered here) and is populated with objects of data type Session. It is already sorted and organized by day and time:

[code]

package view {
import events.ClickEvent;
import flash.events.MouseEvent;
import model.Sessions; // static class holds list of all Sessions
import model.Session; // class holds individual session data
final public class SessionsView extends BaseView implements IView {
private var list:Vector.<Session>;
public function SessionsView(){}
public function onShow():void {
container = new Sprite();
// request list of sessions if not acquired yet
if (list == null) {
list = Sessions.sessionList;
}
// display sessions
showSessions();
}
}

[/code]

We traverse the list of sessions and display all the sessions with the hardcoded date of 19. Again, this is to keep this example simple. The conference took place over two days and would, in a full example, require a UI to choose between one of the two dates:

[code]

private function showSessions():void {
var timeKeeper:String = "0";
var bounds:int = list.length;
var ypos:int = 50;
for (var i:int = 0; i < bounds; i++) {
var session:Session = list[i];
// display a blue time indicator if it is a new time
if (session.time > timeKeeper) {
timeKeeper = session.time;
ypos += 15;
// TimeBar is a movieclip
var bar:TimeBar = new TimeBar();
bar.y = ypos;
bar.timeInfo.text = timeKeeper;
container.addChild(bar);
ypos += 60;
}
// load the individual session
// it returns its height to position the next element below it
var newPos:int = loadSession(session, ypos);
ypos += newPos + 10;
}
addChild(container);
}
private function loadSession(session:Session, ypos:int):int {
// SessionSmall is a movieclip
var mc:SessionSmall = new SessionSmall();
mc.y = ypos;
mc.id = session.id;
mc.what.autoSize = TextFieldAutoSize.LEFT;
mc.what.text = "+ " + session.title;
mc.back.height = mc.what.height;
mc.addEventListener(MouseEvent.CLICK, clickAway, false, 0, true);
container.addChild(mc);
// return the session movie clip height
return mc.what.height;
}

[/code]

When the user chooses a session, its ID is passed along with the destination view:

[code]

public function clickAway(event:MouseEvent):void {
dispatchEvent(new ClickEvent(ClickEvent.NAV_EVENT,
{view:"session", id:event.currentTarget.id}));
}

[/code]

In the onHide method, all the children of the Sprite container are removed as well as their listeners if they have one. Then the container itself is removed. Figure 16-3 shows the sessions broken down by day and time:

[code]

public function onHide():void {
while (container.numChildren > 0) {
var child:MovieClip = container.getChildAt(0) as MovieClip;
if (child.id != null) {
child.removeEventListener(MouseEvent.CLICK, clickAway);
}
container.removeChild(child);
}
removeChild(container);
container = null;
}

[/code]

Here is the SessionView code. The method displays all the data related to a session. This includes the session title, a description, the speakers involved, and the room, category, and rank:

[code]

package view {
import events.ClickEvent;
import flash.events.MouseEvent;
import model.Sessions;
import model.Speakers; // static class that holds Speakers data
final public class SessionView extends BaseView implements IView {
public function SessionView(){}
public function onShow():void {
// search Sessions by id
var data:Object = Sessions.getItemByID(id);
container = new Sprite();
addChild(container);
// display title and description
// SessionMovie is a movieclip
var session:SessionMovie = new SessionMovie();
session.title.autoSize = TextFieldAutoSize.LEFT;
session.title.text = data.title;
session.body.text = data.description;
container.addChild(session);
// display list of speakers
for (var i:int = 0; i < data.speakers.length; i++) {
var bio:Bio = new Bio();
bio.id = data.speakers[i];
// search list of speakers by id
var bioData:Object = Speakers.getItemByID(bio.id);
bio.speaker.text = bioData.first + " " + bioData.last;
bio.addEventListener(MouseEvent.CLICK,
clickAway, false, 0, true);
container.addChild(bio);
}
// display category, level, rank and room number
// Border is a movieClip
var border:Border = new Border();
// categories is a movieclip with frame labels matching category
border.categories.gotoAndStop(data.tag);
// rank is a movieclip with a text field
border.rank.text = String(data.type);
// room is a movieclip with a text field
border.room.text = String(data.room);
container.addChild(border);
}

[/code]

Figure 16-3. The sessions broken down by day and time

Clicking on one of the speakers takes the user to a new speaker destination view defined by an ID:

[code]

public function clickAway(event:MouseEvent):void {
dispatchEvent(new ClickEvent(ClickEvent.NAV_EVENT,
{view:"speaker", id:event.currentTarget.id}));
}
}

[/code]

In the onHide method, all the children of the Sprite container are removed as well as their listeners if they have one. Then the container itself is removed:

[code]

public function onHide():void {
while(container.numChildren > 0) {
var child:MovieClip = container.getChildAt(0) as MovieClip;
if (child.id != null) {
child.removeEventListener(MouseEvent.CLICK, clickAway);
}
container.removeChild(child);
}
removeChild(container);
container = null;
}

[/code]

Figure 16-4 shows a subset of information for a session.

Figure 16-4. A subset of information for a session, including the speaker(s) name, category, level of expertise, and room number

P2P Over a Remote Network

To use networking remotely, you need an RTMFP-capable server, such as Flash Media Server.

If you do not have access to such a server, Adobe provides a beta developer key to use its Cirrus service. Sign up to instantly receive a developer key and a URL, at http://labs.adobe.com/technologies/cirrus/.

The traditional streaming model requires clients to receive all data from a centralized server cluster. Scaling is achieved by adding more servers. Figure 15-2 shows traditional streaming/communication with the Unicast model and RTMFP in Flash Player/Cirrus.

Figure 15-2. Traditional streaming/communication with the Unicast model (left) and RTMFP in Flash Player 10.1/Cirrus 2 (right)

RTMFP, now in its second generation, supports application-level multicast. Multicasting is the process of sending messages as a single transmission from one source to the group where each peer acts as a relay to dispatch the data to the next peer. It reduces the load on the server and there is no need for a streaming server.

The Cirrus service is only for clients communicating directly. It has low latency and good security. It does not support shared objects or custom server-side programming. You could still use shared objects with Flash Media Server, but via the traditional client-server conduit.

The NetGroup uses a ring topology. Its neighborCount property stores the number of peers. Each peer is assigned a peerID, which can be mapped to a group address using group.convertPeerIDToGroupAddress(connection.nearID). An algorithm is run every few seconds to update ring positions for the group.
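As a sketch, assuming a connected group and connection as in the surrounding examples, these properties can be inspected like this:

[code]

// number of connected neighbors in the ring
trace(group.neighborCount);
// map this client's peerID to its address within the group
var address:String = group.convertPeerIDToGroupAddress(connection.nearID);
trace(address);

[/code]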

When the group is connected, you can obtain statistics such as Quality of Service in bytes per second from NetGroup’s info property:

[code]

function onStatus(event:NetStatusEvent):void {
if (event.info.code == "NetGroup.Connect.Success") {
trace(event.info.group);
// NetGroupInfo object with Quality of Service statistics
}
}

[/code]

The NetStream object is now equipped with new multicast properties. For instance, multicastWindowDuration specifies the duration in seconds of the peer-to-peer multicast reassembly window. A short value reduces latency but also reduces quality.
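As an illustration, this trade-off can be tuned on the receiving stream; the values here are arbitrary:

[code]

// shorter window: lower latency, but less time to reassemble data
inStream.multicastWindowDuration = 2;
// longer window: smoother playback at the cost of delay
// inStream.multicastWindowDuration = 8;

[/code]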

NetGroup is best used for an application with a many-to-many spectrum. NetStream is for a one-to-many or few-to-many spectrum.

Communication can be done in different ways:

  • Posting is for lots of peers sending small messages.
  • Multicasting is for any size group, but with a small number of the peers being senders, and for continuous/live data which is large over time.
  • Direct routing is for sending messages to specific peers in the group using methods such as sendToAllNeighbors, sendToNeighbor, and sendToNearest.
  • Object replication is for more reliable data delivery whereby information is sent in packets between clients and reassembled.

Matthew Kaufman explains this technology in depth in his MAX 2009 presentation, at http://tv.adobe.com/watch/max-2009-develop/p2p-on-the-flash-platform-with-rtmfp.

Simple Text Chat

This example is very similar to the one we created for P2P over a local network, except for a few minor, yet important, changes.

The connection is made to a remote server using the NetConnection object and RTMFP. If you have the Adobe URL and developer key, use them as demonstrated in the following code:

[code]

const SERVER:String = "rtmfp://" + YOUR_SERVER_ADDRESS;
const KEY:String = YOUR_DEVELOPER_KEY;
var connection:NetConnection = new NetConnection();
connection.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
connection.connect(SERVER, KEY);

[/code]

Connecting to a traditional streaming server would still use the URI construct as "rtmfp://server/application/instance" and additional optional parameters to connect, such as a login and password.

The GroupSpecifier now needs serverChannelEnabled set to true to use the Cirrus server, which helps in peer discovery. postingEnabled is still on to send messages. The IPMulticastAddress property is optional but can help optimize the group topology if the group is large:

[code]

function onStatus(event:NetStatusEvent):void {
if (event.info.code == "NetConnection.Connect.Success") {
var groupSpec:GroupSpecifier = new GroupSpecifier("chatGroup");
groupSpec.postingEnabled = true;
groupSpec.serverChannelEnabled = true;
group = new NetGroup(connection,
groupSpec.groupspecWithAuthorizations());
group.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
}
}

[/code]

The exchange of messages is very similar to the local example. Note that a post method is well suited for many peers sending small messages, as in a chat application that is not time-critical:

[code]

function sendMessage():void {
var object:Object = new Object();
object.user = "Véronique";
object.message = "This is a chat message";
object.time = new Date().time;
group.post(object);
}
function onStatus(event:NetStatusEvent):void {
if (event.info.code == "NetGroup.Posting.Notify") {
trace(event.info.message);
}
}

[/code]

Multicast Streaming

This example demonstrates a video chat between one publisher and many receivers who help redistribute the stream to other receivers.

The application connects in the same way as in the previous example, but instead of a NetGroup, we create a NetStream to transfer video, audio, and messages.

Publisher

This is the code for the publisher sending the stream.

To access the camera, add the permission in your descriptor file:

[code]<uses-permission android:name="android.permission.CAMERA"/>[/code]

Set the GroupSpecifier and the NetStream. The GroupSpecifier needs to have multicastEnabled set to true to support streaming:

[code]

import flash.net.NetStream;
var outStream:NetStream;
function onStatus(event:NetStatusEvent):void {
if (event.info.code == "NetConnection.Connect.Success") {
var groupSpec:GroupSpecifier = new GroupSpecifier("videoGroup");
groupSpec.serverChannelEnabled = true;
groupSpec.multicastEnabled = true;
outStream = new NetStream(connection,
groupSpec.groupspecWithAuthorizations());
outStream.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
}
}

[/code]

Once the NetStream is connected, add a reference to the camera and the microphone and attach them to the stream. A Video object displays the camera feed. Finally, call the publish method and pass the name of your choice for the video session:

[code]

function onStatus(event:NetStatusEvent):void {
if (event.info.code == "NetStream.Connect.Success") {
var camera:Camera = Camera.getCamera();
var video:Video = new Video();
video.attachCamera(camera);
addChild(video);
outStream.attachAudio(Microphone.getMicrophone());
outStream.attachCamera(camera);
outStream.publish("remote video");
}
}

[/code]

Recipients

The code for the peers receiving the video is similar, except for the few changes described next.

The incoming NetStream, used by the peers receiving the stream, must be created with the same GroupSpecifier as the publisher's stream. The same stream cannot be used for sending and receiving:

[code]

var inStream:NetStream = new NetStream(connection,
groupSpec.groupspecWithAuthorizations());
inStream.addEventListener(NetStatusEvent.NET_STATUS, onStatus);

[/code]

The recipient needs a Video object but no reference to the microphone and the camera. The play method is used to stream the video in:

[code]

var video:Video = new Video();
addChild(video);
inStream.play("remote video");

[/code]

Sending and receiving data

Along with streams, NetStream can be used to send data. It is only an option for the publisher:

[code]

var object:Object = new Object();
object.type = "chat";
object.message = "hello";
outStream.send("onReceiveData", object);

[/code]

To receive data, the incoming stream must assign a NetStream.client for callbacks. Note that the onReceiveData function matches the first parameter passed in the publisher send call:

[code]

inStream.client = this;
function onReceiveData(object:Object):void {
trace(object.type, object.message); // chat, hello
}

[/code]

Closing a stream

Do not forget to remove the stream and its listener after it closes:

[code]

function onStatus(event:NetStatusEvent):void {
switch(event.info.code) {
case "NetStream.Connect.Closed" :
case "NetStream.Connect.Failed" :
onDisconnect();
break;
}
}
function onDisconnect():void {
stream.removeEventListener(NetStatusEvent.NET_STATUS, onStatus);
stream = null;
}

[/code]

End-to-End Stream

Another approach is for the publisher to send a separate stream to each receiver. This limits the number of users, but is the most efficient transmission with the lowest latency. No GroupSpecifier is needed for this mode of communication. In fact, this is no longer a group, but a one-to-one transfer or unidirectional NetStream channel.

Sending a peer-assisted stream

Set the connection parameter to NetStream.DIRECT_CONNECTIONS, and set the stream's bufferTime property to 0 for maximum speed:

[code]

var outStream:NetStream =
new NetStream(connection, NetStream.DIRECT_CONNECTIONS);
outStream.bufferTime = 0;
outStream.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
var video:Video = new Video();
var camera:Camera = Camera.getCamera();
video.attachCamera(camera);
addChild(video);
outStream.attachAudio(Microphone.getMicrophone());
outStream.attachCamera(camera);
outStream.publish("privateVideo");

[/code]

When first connected, every peer is assigned a unique 256-bit peerID. Cirrus uses it to match it to your IP address and port number when other peers want to communicate with you, as in this example. nearID represents you:

[code]

var myPeerID:String;
function onStatus(event:NetStatusEvent):void {
if (event.info.code == "NetConnection.Connect.Success") {
myPeerID = connection.nearID;
trace(myPeerID);
// 02024ab55a7284ad9d9d4586dd2dc8d2fa1b207e53118d93a34abc946836fa4
}
}
}

[/code]

The receivers need the peerID of the publisher to subscribe. The publisher needs a way to communicate the ID to others. In a professional application, you would use a web service or a remote sharedObject, but for web development, or if you know the people you want to communicate with, you can send your peerID in the body of an email:

[code]

var myPeerID:String;
function onStatus(event:NetStatusEvent):void {
if (event.info.code == "NetConnection.Connect.Success") {
myPeerID = connection.nearID;
navigateToURL(new URLRequest("mailto:FRIEND_EMAIL?subject=id&body=" +
myPeerID));
}
}

[/code]

The streams are not sent until another endpoint subscribes to the publisher’s stream.

Receiving a stream

In this example, the subscribers get the ID via email and copy its content into the system clipboard. Then they press the giveMe button:

[code]

var giveMe:Sprite = new Sprite();
giveMe.y = 100;
var g:Graphics = giveMe.graphics;
g.beginFill(0x0000FF);
g.drawRect(20, 20, 100, 75);
g.endFill();
addChild(giveMe);
giveMe.addEventListener(MouseEvent.CLICK, startStream);

[/code]

The startStream method gets the content of the clipboard and uses it to create the stream. The ID needs to be passed as the second parameter in the stream constructor:

[code]

function startStream(event:MouseEvent):void {
var id:String =
Clipboard.generalClipboard.getData(ClipboardFormats.TEXT_FORMAT) as String;
var inStream:NetStream = new NetStream(connection, id);
inStream.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
var video:Video = new Video();
addChild(video);
inStream.play(“privateVideo”);
video.attachNetStream(inStream);
}

[/code]

The publisher has control, if needed, over accepting or rejecting subscribers. When a subscriber attempts to receive the stream, the onPeerConnect method is invoked. Create an object to capture the call. The way to monitor whom to accept (or not) is completely a function of your application:

[code]

var farPeerID:String;
var outClient:Object = new Object();
outClient.onPeerConnect = onConnect;
outStream.client = outClient;
function onConnect(stream:NetStream):Boolean {
farPeerID = stream.farID;
return true; // return true to accept, false to reject
}

[/code]

The publisher stream has a peerStreams property that holds all the subscribers for the publishing stream. Use NetStream.send() to send messages to all the recipients, or NetStream.peerStreams[0].send() for an individual user, here the first one in the list.
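As a sketch of the two approaches, assuming outStream is the publishing stream from the earlier example and that each subscriber's stream client defines a matching onMessage handler (both names are illustrative):

[code]

// broadcast to every subscriber of the publishing stream
outStream.send("onMessage", "hello everyone");
// target only the first subscriber in the list
if (outStream.peerStreams.length > 0) {
NetStream(outStream.peerStreams[0]).send("onMessage", "hello you");
}

[/code]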

NetConnection.maxPeerConnections returns the limit of peer streams, typically set to a maximum of eight.

Directed Routing

Directed routing is for sending data to a specific peer in a group. Peers can send messages to one another if they know their counterpart's PeerID. This feature only works in a group, via NetGroup; it is not available via NetStream.

Sending a message

Individual messages can be sent from one neighbor to another using the NetGroup.sendToNearest() method:

[code]

var groupSpec:GroupSpecifier = new GroupSpecifier("videoGroup");
groupSpec.postingEnabled = true;
groupSpec.serverChannelEnabled = true;
groupSpec.routingEnabled = true;
var group:NetGroup = new NetGroup(connection,
groupSpec.groupspecWithAuthorizations());
group.addEventListener(NetStatusEvent.NET_STATUS, onStatus);

[/code]

The message is an Object. It needs a destination, the peer receiving the message; here, the PeerID is converted to a group address. It also needs the message itself. Here, we added the time to make each message unique and a type to filter the conversation:

[code]

var message:Object = new Object();
var now:Date = new Date();
message.time = now.getHours() + "" + now.getMinutes() + "" + now.getSeconds();
message.destination = group.convertPeerIDToGroupAddress(peerID);
message.value = "south";
message.type = "direction";
group.sendToNearest(message, message.destination);

[/code]

Receiving a message

The recipient must be in the same group. The message is received at an event with an info.code value of NetGroup.SendTo.Notify. The recipient checks to see if the message is for her by checking if event.info.fromLocal is true, and if it is not, sends it to the next neighbor until its destination is reached:

[code]

function onStatus(event:NetStatusEvent):void {
switch(event.info.code) {
case "NetGroup.SendTo.Notify" :
trace(event.info.fromLocal);
// if true, recipient is the intended destination
var message:Object = event.info.message;
if (message.type == "direction") {
trace(message.value); // south
}
break;
}
}

[/code]

Relay

A simple message relay service was introduced in January 2011. It is not intended for ongoing communication, but rather for a few introductory messages, and is a feature for the Cirrus service only. It requires that the sender knows the PeerID of the recipient.

The sender requests a relay:

[code]

connection.call("relay", null, "RECIPIENT_ID", "hello");

[/code]

The recipient receives and responds to the relay:

[code]

connection.client = this;
function onRelay(senderID:String, message:String):void {
trace(senderID); // ID of the sender
trace(message); // "hello"
}

[/code]

Treasure Hunt

This treasure hunt game illustrates various aspects of this technology.

Referring to Figure 15-3, imagine the first user on the left walking outdoors looking for a treasure without knowing where it is. She streams a live video as she walks to indicate her progress. The second user from the left knows where the treasure is but is off-site. She guides the first user by pressing keys, representing the cardinal points, to send directions. Other peers (in the two screens toward the right) can watch the live stream and chat among themselves.

Figure 15-3. The Treasure Hunt activity; the panels shown here are (left to right) for the hunter walking, for the guide, and for users viewing the video and chatting over text

Review the sample code provided in this chapter to build such an application. We covered a one-to-many streaming example. We discussed chat in an earlier example. And we just went over sending direct messages.

As a final exercise, you can put all the pieces together to build a treasure hunt application. Good luck, and please post your results.

Other Multiuser Services

If you want to expand your application beyond what this service offers, several other options are available to set up communication between parties remotely, such as Adobe Media Server, Electrotank’s ElectroServer, and gotoAndPlay()’s SmartFox. All of them require server setup and some financial investment.

ElectroServer was developed for multiplayer games and provides tools to build a multiplayer lobby system. One installation scales up to tens of thousands of connected game players with message rates of more than 100,000 messages per second. You can try a free 25-user license (see http://www.electrotank.com/). Server-side code requires Java or ActionScript 1. It supports AIR and Android.

SmartFox is a platform for developing massive multiuser games and was designed with simplicity in mind. It is fast and reliable and can handle tens of thousands of concurrent clients with low CPU and memory usage. It is well documented. You can try a full version with a free 100-user license (see http://www.smartfoxserver.com/). Server-side code requires Java. It supports AIR and Android.

 

 

Tuesday: Setting Up Google Analytics

Google Analytics is a robust analytics program that can help you gain valuable insight into your PPC performance. You can see where visitors are dropping out of your shopping process; you can see how long they stay on your landing page; and you can learn on which pages visitors convert most highly. The best part is, Google Analytics is free!

When you analyze it properly, you can gain numerous invaluable insights from Google Analytics. We could write an entire book on installing and analyzing web analytics, but luckily we don’t have to! As we mentioned last week, a great resource for learning everything you need to know about analytics is Web Analytics: An Hour a Day, which shows you how to slice and dice your information so that you become a master of actionable analysis.

For now, here are the basic steps for getting Google Analytics installed on your website and landing pages:

  1. Open a Google Analytics account. You can do this by going to www.Google.com/Analytics.
  2. After you’re in your account, click Add Website Profile.
  3. Enter the URL of your website, select your country, and choose a time zone.
  4. The next page displays the universal tracking code that needs to be inserted on every page of your website. Copy the code.
  5. Insert the snippet of Google Analytics code on every page of your website.
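At the time of writing, the asynchronous tracking snippet Google generates looks similar to the following. Always copy the exact code from your own account; the UA-XXXXX-Y value here is only a placeholder for your profile’s ID:

<script type="text/javascript">
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXX-Y']);
  _gaq.push(['_trackPageview']);
  (function() {
    var ga = document.createElement('script');
    ga.type = 'text/javascript';
    ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(ga, s);
  })();
</script>

Paste the snippet just before the closing </head> tag of each page.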

After you complete this process, you should link your Google AdWords account to your Google Analytics account. This way, your PPC traffic is tracked correctly in Google Analytics, and you can access your Google Analytics account directly through AdWords. To link the accounts, follow these steps:

  1. Click the Reporting tab within AdWords.
  2. Choose Google Analytics from the drop-down menu.
  3. Enter your Google Analytics account ID to link the two accounts.

 

 

Windows Marketplace

Windows Phone 7 devices obtain applications through the Windows Marketplace hub. This is available from the Start screen, but you can
invoke the hub in your code in a variety of ways. You can open the hub to show specific types of content such as applications, music, or podcasts. You can also open the hub showing a filtered list of items from one of these categories using a search string, or you can just show a specific item by specifying its GUID content identifier. Finally, you can open the Reviews screen.

The following code examples show how you can use the Windows Marketplace task launchers in your application code.

C#
// Open the Windows Marketplace hub to show all applications.
MarketplaceHubTask marketPlace = new MarketplaceHubTask();
marketPlace.ContentType = MarketplaceContentType.Applications;
// Other options are MarketplaceContentType.Music
// and MarketplaceContentType.Podcasts.
marketPlace.Show();
// -----------------------------------------------------
// Open the Windows Marketplace hub to search for
// applications using a specified search string.
MarketplaceSearchTask marketSearch = new MarketplaceSearchTask();
marketSearch.ContentType = MarketplaceContentType.Applications;
marketSearch.SearchTerms = "Tailspin Surveys";
marketSearch.Show();
// -----------------------------------------------------
// Open the Windows Marketplace hub with a specific
// application selected for download and installation.
MarketplaceDetailTask marketDetail = new MarketplaceDetailTask();
marketDetail.ContentType = MarketplaceContentType.Applications;
marketDetail.ContentIdentifier
    = "{12345678-1234-1234-1234-123456789abc}";
marketDetail.Show();
// -----------------------------------------------------
// Open the Windows Marketplace hub Review page.
MarketplaceReviewTask marketReview = new MarketplaceReviewTask();
marketReview.Show();

If you call the Show method of the MarketplaceDetailTask class without specifying a value for the ContentIdentifier property, the
task will show the Windows Marketplace page for the current application.

For more information about the tasks described in this section, and the other types of tasks available on Windows Phone 7 devices, see “Microsoft.Phone.Tasks Namespace” on MSDN (http://msdn.microsoft.com/en-us/library/microsoft.phone.tasks(VS.92).aspx).

If you want to provide a link to a specific product on Windows Marketplace within a Web page or another application, you can do so
by specifying the appropriate product ID in the link. The following shows the format of a link to an application on Windows Marketplace.

http://social.zune.net/redirect?type=phoneApp&id=12345678-1234-1234-1234-123456789abc&source=MSDN
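From within another application, one way to open such a link is with the WebBrowserTask; this is only a sketch, reusing the placeholder GUID from the earlier examples:

C#
// Launch the browser with a Windows Marketplace deep link.
WebBrowserTask browserTask = new WebBrowserTask();
browserTask.URL = "http://social.zune.net/redirect?type=phoneApp"
    + "&id=12345678-1234-1234-1234-123456789abc&source=MSDN";
browserTask.Show();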

 

Reverse Geocoding

Unless you are displaying a map, providing an address or nearby points of interest is more tangible than latitude and longitude.

Reverse geocoding is the process of mapping a point location back to a readable address and place name. It is widely used in combination with location-based services (LBS) to retrieve local weather data or business information, as well as by public safety services such as Enhanced 911. Such information is not immediately available on the device, but there are free services that provide it.

The Yahoo! Geocoding API is well documented and reliable. You need to apply for an application ID that is required as an argument in your requests. It is not exclusive to mobile use and can be used in the browser, assuming it has a way to detect location. Go to http://developer.yahoo.com/geo/placefinder/ and read the “How Do I Get Started” section to request an ID. You need to log in with a Yahoo! account and fill out the form. Provide the URL of a website, even though it will not be used on the device.

First, add the permission to go out to the Internet:

<uses-permission android:name="android.permission.INTERNET" />

In this example, I am receiving the latitude and longitude data from GeolocationEvent and passing it along with my required applicationID and gflags = R for reverse geocoding:

import flash.events.GeolocationEvent;
import flash.events.Event;
import flash.events.IOErrorEvent;
import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.net.URLRequestMethod;
import flash.net.URLVariables;
import flash.sensors.Geolocation;
var geolocation:Geolocation;
const YAHOO_URL:String = "http://where.yahooapis.com/geocode";
const applicationID:String = "GET_YOUR_ID_ON_YAHOO_SITE";
var loader:URLLoader;

Set the geolocation listener:

if (Geolocation.isSupported) {
geolocation = new Geolocation();
geolocation.addEventListener(GeolocationEvent.UPDATE, onTravel);
}

Request reverse geolocation data from the Yahoo! service passing the coordinates:

function onTravel(event:GeolocationEvent):void {
var request:URLRequest = new URLRequest(YAHOO_URL);
var variables:URLVariables = new URLVariables();
variables.q = event.latitude.toString() + "\n"
+ event.longitude.toString();
variables.gflags = "R";
variables.appid = applicationID;
request.data = variables;
request.method = URLRequestMethod.GET;
loader = new URLLoader();
loader.addEventListener(Event.COMPLETE, onLocationLoaded);
loader.addEventListener(IOErrorEvent.IO_ERROR, onError);
loader.load(request);
}
function onError(event:IOErrorEvent):void {
trace("error", event);
}

Parse the XML received from the service to get city and country information:

function onLocationLoaded(event:Event):void {
loader.removeEventListener(Event.COMPLETE, onLocationLoaded);
geolocation.removeEventListener(GeolocationEvent.UPDATE, onTravel);
var xml:XML = new XML(event.target.data);
var city:String = xml.Result.city.text();
var country:String = xml.Result.country.text();
trace(city, country);
trace(xml);
}

The XML comes back with a ResultSet.Result node, which includes a street address, city, and country. For example, this is the result for my office building located in New York City’s Times Square:

<Result>
<quality>99</quality>
<latitude>40.757630</latitude>
<longitude>-73.987167</longitude>
<offsetlat>40.757630</offsetlat>
<offsetlon>-73.987167</offsetlon>
<radius>500</radius>
<name>40.7576303 -73.98716655000001</name>
<line1>230 Rodgers &amp; Hammerstein Row</line1>
<line2>New York, NY 10036-3906</line2>
<line3/>
<line4>United States</line4>
<house>230</house>
<street>Rodgers &amp; Hammerstein Row</street>
<xstreet/>
<unittype/>
<unit/>
<postal>10036-3906</postal>
<neighborhood/>
<city>New York</city>
<county>New York County</county>
<state>New York</state>
<country>United States</country>
<countrycode>US</countrycode>
<statecode>NY</statecode>
<countycode>NY</countycode>
<hash/>
<woeid>12761367</woeid>
<woetype>11</woetype>
<uzip>10036</uzip>
</Result>

This address is actually correct for this particular location. It is not my postal address, but it is where I am currently sitting on the west side of the building.

Other interesting data in the XML is woeid and woetype (“woe” is short for Where On Earth). GeoPlanet (http://developer.yahoo.com/geo/geoplanet/guide/) was developed by Yahoo! as a way to identify some features on Earth in a unique, language-neutral manner. This is particularly important because many places in the world have the same name, such as Paris, France, and Paris, Texas.

woeid is a 32-bit unique and nonrepetitive identifier. Numbers follow a hierarchy such as country, state, and city. woetype is the type used to identify a place; in this case, 11 refers to the postal code.
Twitter and Flickr are currently using the GeoPlanet system.

Locating a Device Using Global Positioning System and Network/WiFi Technology

The Android platform uses both GPS and network/WiFi to locate a device. There is no direct way to specify a unique location source, but a workaround is to only add the relevant permission, fine or coarse as defined earlier, in the manifest file.

Using GPS

GPS is a manmade navigation system. Satellites in space constantly broadcast messages with their ephemeris, or position in the sky, and the time. Your mobile device, as a GPS receiver, uses the messages received to determine the distance to each satellite by measuring the transit time of the message. Using triangulation, the device is able to determine its own position as latitude and longitude or on a map. Other information derived from this is direction and speed, calculated from changes in position. Visit the TomTom site for a simple explanation and good graphics on how GPS works (http://www.tomtom.com/howdoesitwork/page.php?ID=8&CID=2&Language=1).

GPS is the most accurate positioning technology, and it provides the most frequent updates. However, it is not always available, particularly indoors. It also consumes the most battery power.

Position acquisition is often not very accurate initially, but it improves over time. The GPS sensor can take several minutes to get a proper position from satellites. Avoid making your application full screen: if the status bar is hidden, the user cannot see network activity and signal strength. If a GPS connection is established, a GPS icon is visible at the top right. If the device is trying to establish or fix a lost GPS connection, the icon blinks.

Signal-to-noise ratio

The signal-to-noise ratio (SNR) is used to compensate for inaccurate or ambiguous results due to interference and multipath errors. Interference can be caused by weather conditions or by signals bouncing off mountains or large buildings. A multipath error occurs when signals from multiple satellites arrive out of sync because one of them experienced interference.

Assisted GPS

Assisted GPS, also called A-GPS, is used to improve the startup performance or Time To First Fix (TTFF). In this mode, your device uses cell tower positions and triangulates from there to get an initial location quickly. It then switches to GPS when it is available and accurate enough. Figure 10-2 shows GPS Test, an application developed by Chartcross Ltd. It displays the position and signal strength of satellites within view as obtained by the device’s GPS receiver.

Using the Cellular Network and WiFi

If GPS is not available, your application will switch over to cell/WiFi. The network system is a combination of cell towers, WiFi hotspots, and IP-based geolocation. Cell tower signals, although not affected by architecture or bad weather, are less accurate and are dependent on the infrastructure. WiFi hotspots are very precise with enough data points but are only available in urban areas.

Figure 10-2. GPS Test application developed by Chartcross Ltd.

How to Know if GPS or WiFi Is Active

The flash.net.NetworkInfo class, accessed through its static networkInfo property, provides information about the network interfaces on computers and on some mobile devices. Your application needs permission to access network information. Add it to your application manifest as follows:

<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />

Check that it is supported on your device:

if (NetworkInfo.isSupported) {
// network information supported
}

NetworkInfo stores a list of possible network interfaces that you can get by calling:

import flash.net.NetworkInfo;
var network:NetworkInfo = NetworkInfo.networkInfo;
for each (var object:NetworkInterface in network.findInterfaces()) {
trace(object.name);
}

For example, the list of interfaces for the Nexus One, Droid 2, and Samsung Galaxy Tab is:

mobile, WIFI, mobile_mms, mobile_supl, mobile_dun, mobile_hipri

You can find out which method is active, and its name, using the following code:

import flash.net.NetworkInterface;
var network:NetworkInfo = NetworkInfo.networkInfo;
for each (var object:NetworkInterface in network.findInterfaces()) {
trace(object.name);
if (object.active) {
if (object.name == "WIFI") {
// you are using wifi
} else {
// you are using the mobile network
}
}
}

 

Device Information

Windows Phone 7 includes the DeviceExtendedProperties class that you can use to obtain information about the physical device. The information you can retrieve includes the following:

  • The device manufacturer, the device name, and its unique device ID
  • The version of the phone hardware and the version of the firmware running on the phone
  • The total physical memory (RAM) installed in the device, the amount of memory currently in use, and the peak memory usage of the current application

However, you must be aware that the phone will alert the user when some device information is retrieved and the user can refuse to allow the application to access it. You should access device information only if it is essential to your application. Typically, you will use device information to generate statistics or usage data, and to monitor memory usage of your application. You can use this data to adjust the behavior of your application to minimize impact on the device and other applications.

You retrieve device information using the GetValue or the TryGetValue method, as shown in the following code example.

C#
using Microsoft.Phone.Info;

string manufacturer = DeviceExtendedProperties.GetValue(
"DeviceManufacturer").ToString();
string deviceName = DeviceExtendedProperties.GetValue(
"DeviceName").ToString();
string firmwareVersion = DeviceExtendedProperties.GetValue(
"DeviceFirmwareVersion").ToString();
string hardwareVersion = DeviceExtendedProperties.GetValue(
"DeviceHardwareVersion").ToString();
long totalMemory = Convert.ToInt64(
DeviceExtendedProperties.GetValue("DeviceTotalMemory"));
long memoryUsage = Convert.ToInt64(
DeviceExtendedProperties.GetValue(
"ApplicationCurrentMemoryUsage"));
object tryValue;
long peakMemoryUsage = -1;
if (DeviceExtendedProperties.TryGetValue(
"ApplicationPeakMemoryUsage", out tryValue))
{
peakMemoryUsage = Convert.ToInt64(tryValue);
}
// The following returns a byte array of length 20.
object deviceID = DeviceExtendedProperties.GetValue(
"DeviceUniqueId");

The device ID is a hash represented as an array of 20 bytes, and is unique to each device. It does not change when applications are installed or the firmware is updated.
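If you need the device ID as a string key (for example, to correlate usage statistics), one approach is to encode the byte array; this sketch assumes the DeviceUniqueId retrieval shown earlier:

C#
// Encode the 20-byte unique device ID as a Base64 string.
byte[] idBytes = (byte[])DeviceExtendedProperties.GetValue("DeviceUniqueId");
string deviceIdString = Convert.ToBase64String(idBytes);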

When running in the emulator, the manufacturer name returns “Microsoft,” the device name returns “XDeviceEmulator,” and (in the initial release version) the hardware and firmware versions return 0.0.0.0.

For more information, and a list of properties for the DeviceExtendedProperties class, see “Device Information for Windows Phone” on MSDN (http://msdn.microsoft.com/en-us/library/ff941122(VS.92).aspx).