Where to Find Help

You can get help and information from a variety of areas.

Documentation

Launch the language reference help from within your editor for language-specific information. I do not recommend using the search capability at http://help.adobe.com directly, as it directs you to the Adobe Support page, which is not specific enough for our purposes.

Type in a class name in the IDE text editor, and then select it. In Flash Professional, click the question mark on the top right. In Flash Builder, press Shift-F2. As shown in Figure 19-1, the information is presented in ASDoc style as HTML frames. The upper left frame lists packages, language elements, and appendixes. The lower left frame lists classes in the package in context. The right frame displays the class you are searching for. The content comprises an introductory paragraph and the list of properties, methods, and events.

Figure 19-1. Language reference help

The Internet

Use the Google search engine to find undocumented material starting with “as3” or “AIR”, especially now that you know the syntax of the class or API you are interested in. The Flash community is vibrant and posts solutions and examples, often before Adobe makes them official.

Read blogs for up-to-date information and code snippets. Visit websites for in-depth articles and application examples.

The Community

Post questions on the Adobe forums. This is a good place to ask beginner- to intermediate-level questions and gain access to the Adobe engineering team.

Attend conferences. Many sessions cover the latest in technology. In fact, it is often the arena chosen by software companies to make announcements and give sneak peeks.

Be active. Share your work. If you demonstrate interest and knowledge, you may be invited to be part of the prerelease lists. This will put you in the privileged position of testing and making suggestions on beta products. Be aware of Adobe bugs so that you can find workarounds for them, and if you witness new bugs, report them at http://bugs.adobe.com/.

Find a user group in your area. If one does not exist, create it.

Text

Text should be a particular concern. The absence of a physical keyboard introduces a new interface and user experience. Embedding and rendering fonts affects size and performance.

The Virtual Keyboard

On most devices, pressing on an input text field brings up the virtual keyboard. AIR for Android only uses the Android default alphabet keyboard.

Be aware of the space the keyboard occupies. The stage position is automatically adjusted to keep the text field visible. If the text field is toward the bottom, the application moves the stage up. To dismiss the virtual keyboard, the user usually needs to tap on the stage. Make sure you leave a noninteractive area for the user to tap on.

If you want to overwrite the default behavior, set the softKeyboardBehavior tag of the application descriptor to none.

[code]

<softKeyboardBehavior>none</softKeyboardBehavior>

[/code]

To control how the application moves, set a listener on the softKeyboardActivating event, which is dispatched when the keyboard opens. Use the softKeyboardRect property of the stage, which contains the dimensions of the area covered by the keyboard:

[code]

import flash.events.SoftKeyboardEvent;
import flash.text.TextField;
import flash.text.TextFieldType;

var textField:TextField = new TextField();
textField.type = TextFieldType.INPUT;
textField.width = 400;
textField.height = 200;
addChild(textField);

textField.addEventListener(SoftKeyboardEvent.SOFT_KEYBOARD_ACTIVATE, onKeyboard);
textField.addEventListener(SoftKeyboardEvent.SOFT_KEYBOARD_DEACTIVATE, onKeyboard);

function onKeyboard(event:SoftKeyboardEvent):void {
    // softKeyboardRect holds the dimensions of the area covered by the keyboard
    trace(stage.softKeyboardRect.y);
    trace(stage.softKeyboardRect);
}

[/code]

For fullscreen mode, use the keyboard dimensions as an approximation. The values returned may not be perfectly exact.
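If you set softKeyboardBehavior to none, you can reposition the UI yourself from the same events. Here is a minimal sketch, building on the textField from the previous example and assuming a container sprite holding your form; the name and offset logic are illustrative, not part of the original example:

[code]

import flash.display.Sprite;
import flash.events.SoftKeyboardEvent;

var container:Sprite = new Sprite(); // hypothetical holder for your form UI
addChild(container);

textField.addEventListener(SoftKeyboardEvent.SOFT_KEYBOARD_ACTIVATE, onOpen);
textField.addEventListener(SoftKeyboardEvent.SOFT_KEYBOARD_DEACTIVATE, onClose);

function onOpen(event:SoftKeyboardEvent):void {
    // shift the UI up by however much the keyboard overlaps it
    var overlap:Number = container.y + container.height
        - stage.softKeyboardRect.y;
    if (overlap > 0) {
        container.y -= overlap;
    }
}

function onClose(event:SoftKeyboardEvent):void {
    container.y = 0; // restore the original position
}

[/code]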

Fonts

Try to use device fonts for input text fields, as they render faster than embedded fonts. The Font Embedding dialog box in Flash Professional CS5 and later monitors font usage in your application. You can also generate a size report that lists all the assets, including fonts. Only include the font families you use.

The choice of Android font families is limited but well targeted to the Android style. Figure 18-1 shows the Droid Serif font, created by Steve Matteson of Ascender Corporation.

With AIR 2.6 and up, support is provided for scrolling text; text selection for cut, copy, and paste; and context menus.

Consider using an alternative to input text fields, such as already populated fields or plus and minus buttons for digits. The recommended font size is at least 14 pixels, so that text is readable on high-density devices.

The Flash Text Engine

An application with impeccable typography stands out. The Text Layout Framework (TLF) provides the tooling for text quality but is heavy and not yet ready for mobile.

Figure 18-1. The Droid Serif font

The Flash Text Engine (FTE) is the low-level API below the TLF. It is light and renders exquisite script with great precision. It is not as immediately accessible as other tools, however. For simplicity, use it for read-only text and keep the classic TextField object for input text if needed.

Here is a “Hello world” example:

[code]

import flash.text.engine.*;

var fd:FontDescription = new FontDescription();
var ef:ElementFormat = new ElementFormat(fd);
var te:TextElement = new TextElement("Hello world", ef);
var tb:TextBlock = new TextBlock();
tb.content = te;
// create a line up to 200 pixels wide
var tl:TextLine = tb.createTextLine(null, 200);
addChild(tl);

[/code]

FontDescription handles the font family. ElementFormat handles styling and layout. TextElement holds the content as text and inline graphics. TextBlock is the factory that creates one block of text. Finally, TextLine is the display object for a single line of text. Figure 18-2 depicts the classes needed to create text using the Flash Text Engine.

Figure 18-2. The various classes needed to create text using the Flash Text Engine

This is a lot of classes for such a simple example, but it introduces the benefit of using this engine. It gives you access to a vast range of typographic settings, bidirectional layout, and support for most scripts. Please refer to the article I wrote on FTE to learn more (see http://www.developria.com/2009/03/flash-text-engine.html).


The Display List

The structure of your display list is fundamental to rendering performance for three reasons: memory consumption, tree traversal, and node hierarchy.

Memory Consumption

Memory consumption is the trade-off for better performance in GPU rendering, because every off-screen bitmap uses memory. At the same time, mobile devices have comparatively little RAM and GPU memory available for caching.

To get a sense of the memory allocated, you can use the following formula:

[code]

// 4 bytes are required to store a single 32-bit pixel
// width and height: dimensions of the tile created
// anti-aliasing defaults to high, or 4, on Android
4 * width * height * antiAliasFactor

// a 10 x 10 image therefore represents 4 * 10 * 10 * 4 = 1,600 bytes

[/code]

Be vigilant about saving memory in other areas.

Favor the DisplayObject types that need less memory. If the functionality is sufficient for your application, use a Shape or a Sprite instead of a MovieClip. To determine the size of an object, use the following:

[code]

import flash.display.MovieClip;
import flash.display.Shape;
import flash.display.Sprite;
import flash.sampler.*;

var shape:Shape = new Shape();
var sprite:Sprite = new Sprite();
var mc:MovieClip = new MovieClip();
trace(getSize(shape), getSize(sprite), getSize(mc));
// 224, 412, and 448 bytes respectively in the AIR runtime

[/code]

The process of creating and removing objects has an impact on performance. For display objects, use object pooling, a method whereby you create a defined number of objects up front and recycle them as needed. Instead of deleting them, make them invisible or remove them from the display list until you need to use them again.
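As a minimal sketch of the pooling idea (the names are illustrative):

[code]

import flash.display.Sprite;

// create once, reuse often: a simple pool of Sprite instances
var pool:Vector.<Sprite> = new Vector.<Sprite>();

function getSprite():Sprite {
    // reuse an idle sprite if one is available; otherwise create one
    return (pool.length > 0) ? pool.pop() : new Sprite();
}

function recycleSprite(sprite:Sprite):void {
    // instead of deleting, remove from the display list and store for reuse
    if (sprite.parent) {
        sprite.parent.removeChild(sprite);
    }
    pool.push(sprite);
}

[/code]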

You should give the same attention to other types of objects. If you need to remove objects, remove listeners and references so that they can be garbage-collected and free up precious memory.

Tree Structure

Keep your display list fairly shallow and narrow.

The renderer needs to traverse the display list and compute the rendering output for every vector-based object. Matrices on the same branch get concatenated. This is the expected management of nested objects: if a Sprite contains another Sprite, the child position is set in relation to its parent.
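As a quick illustration of this concatenation, a child's coordinates are expressed relative to its parent (a minimal sketch):

[code]

import flash.display.Sprite;
import flash.geom.Point;

var parentSprite:Sprite = new Sprite();
var childSprite:Sprite = new Sprite();
parentSprite.addChild(childSprite);
addChild(parentSprite);

parentSprite.x = 100;
childSprite.x = 50;
// the child's transform is concatenated with its parent's
trace(childSprite.localToGlobal(new Point(0, 0)).x); // 150

[/code]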

Node Relationship

This is the most important point for successful use of caching. Caching the wrong objects may result in confusingly slow performance.

The cacheAsBitmapMatrix property must be set on the moving object, not on its container. If you set it on the parent, you create an unnecessarily larger bitmap. Most importantly, if the container has other children that change, the bitmap needs to be redrawn and the caching benefit is lost.

Let’s use an example. The parent node, the black box shown in the following figures, has two children, a green circle and a red square. They are all vector graphics as indicated by the points.

In the first scenario (depicted in Figure 14-2), cacheAsBitmapMatrix is set on the parent node. The texture includes its children. A bitmap is created and used for any transformation, like the rotation in the figure, without having to perform expensive vector rasterization. This is a good caching practice:

[code]

import flash.display.Shape;
import flash.display.Sprite;
import flash.geom.Matrix;

var box:Sprite = new Sprite();
var square:Shape = new Shape();
var circle:Shape = new Shape();
// draw all three items using the drawing API
box.cacheAsBitmap = true;
box.cacheAsBitmapMatrix = new Matrix();
box.rotation = 15;

[/code]

Figure 14-2. Caching and transformation on the parent only

In the second scenario (depicted in Figure 14-3), cacheAsBitmapMatrix is still on the parent node. Let’s add some interactivity to make the circle larger when clicked. This is a bad use of caching because the circle needs to be rerasterized along with its parent and sibling because they share the same texture:

[code]

import flash.events.MouseEvent;

// change the datatype so the display object can receive a mouse event
var circle:Sprite = new Sprite();
// draw items using the drawing API
circle.addEventListener(MouseEvent.CLICK, bigMe);

function bigMe(event:MouseEvent):void {
    var leaf:Sprite = event.currentTarget as Sprite;
    leaf.scaleX += .20;
    leaf.scaleY += .20;
}

[/code]

Figure 14-3. Caching on the parent, but transformation on the children

In the third scenario (depicted in Figure 14-4), cacheAsBitmapMatrix is set, not on the parent, but on the children. When the circle is rescaled, its bitmap copy can be used instead of rasterization. In fact, both children can be cached for future animation. This is a good use of caching:

[code]

// change the datatype so they can receive mouse events
var square:Sprite = new Sprite();
var circle:Sprite = new Sprite();
// draw items using the drawing API
square.addEventListener(MouseEvent.CLICK, bigMe);
circle.addEventListener(MouseEvent.CLICK, bigMe);

var myMatrix:Matrix = new Matrix();
square.cacheAsBitmap = true;
square.cacheAsBitmapMatrix = myMatrix;
circle.cacheAsBitmap = true;
circle.cacheAsBitmapMatrix = myMatrix;

function bigMe(event:MouseEvent):void {
    var leaf:Sprite = event.currentTarget as Sprite;
    leaf.scaleX += .20;
    leaf.scaleY += .20;
}

[/code]

Figure 14-4. Caching and transformation on each individual child

The limitation with using GPU rendering occurs when a parent and its children need to have independent animations, as demonstrated earlier. If you cannot break the parent-child structure, stay with vector rendering.

MovieClip with Multiple Frames

Neither cacheAsBitmap nor cacheAsBitmapMatrix works for a MovieClip with multiple frames. If you cache the art on the first frame, as the play head moves, the old bitmap is discarded and the new frame needs to be rasterized again. This is the case even if the animation is a rotation or a position change.

GPU rendering is not the technique for such situations. Instead, load your MovieClip without adding it to the display list. Traverse through its timeline and copy each frame to a bitmap using the BitmapData.draw method. Then display one frame at a time using the BitmapData.copyPixels method.
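Here is a minimal sketch of that technique, assuming a myClip MovieClip already loaded and not on the display list; the names and the single-canvas display are illustrative:

[code]

import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.geom.Point;

// pre-rasterize each frame once, up front
var frames:Vector.<BitmapData> = new Vector.<BitmapData>();
for (var i:int = 1; i <= myClip.totalFrames; i++) {
    myClip.gotoAndStop(i);
    // ignores frame registration offsets for simplicity
    var frame:BitmapData = new BitmapData(myClip.width, myClip.height, true, 0);
    frame.draw(myClip);
    frames.push(frame);
}

// display one frame at a time by copying pixels into an on-screen bitmap
var canvas:BitmapData = new BitmapData(frames[0].width, frames[0].height, true, 0);
addChild(new Bitmap(canvas));
var current:int = 0;

function showNextFrame():void {
    canvas.copyPixels(frames[current], frames[current].rect, new Point(0, 0));
    current = (current + 1) % frames.length;
}

[/code]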

Interactivity

Setting cacheAsBitmapMatrix does not affect the object’s interactivity. The object still functions as it would in the traditional rendering model, both for events and for function calls.

Multiple Rendering Techniques

On Android devices, you could use traditional rendering along with cacheAsBitmap and/or cacheAsBitmapMatrix. Another technique is to convert your vector assets to bitmaps, in which case no caching is needed. The technique you use may vary from one application to the next.

Remember that caching is meant to be a solution for demanding rendering. It is helpful for games and certain types of animations (not for traditional timeline animation). If there is no display list conflict, as described earlier, caching all assets makes sense. There is no need to use caching for screen-based applications with fairly static UIs.

At the time of this writing, there seems to be a bug using filters on a noncached object while the GPU mode is set in the application descriptor (as in the example below). It should be fixed in a later release:

[code]

import flash.display.Sprite;
import flash.filters.DropShadowFilter;

var sprite:Sprite = new Sprite();
sprite.graphics.beginFill(0xFF6600, 1);
sprite.graphics.drawRect(0, 0, 200, 100);
sprite.graphics.endFill();
sprite.filters = [new DropShadowFilter(2, 45, 0x000000, 0.5, 6, 6, 1, 3)];
addChild(sprite);

[/code]

Maximum Texture Memory and Texture Size

The maximum texture size supported is 1,024×1,024 (2,048×2,048 on iPhone and iPad). This dimension represents the size after transformation. The texture memory is separate from the memory consumed by the application, and therefore is not accessible to it.

2.5D Objects

A 2.5D object is an object with an additional z property that allows for different types of transformation.

If an object has cacheAsBitmapMatrix on and a z property is added, the caching is lost. A 2.5D shape does not need cacheAsBitmapMatrix because it is always cached for motion, scaling, and rotation without any additional coding. But if its visibility is changed to false, it will no longer be cached.
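A minimal sketch of the pitfall:

[code]

import flash.display.Sprite;
import flash.geom.Matrix;

var sprite:Sprite = new Sprite();
sprite.cacheAsBitmap = true;
sprite.cacheAsBitmapMatrix = new Matrix(); // cached for GPU rendering

sprite.z = 0; // adding a z value makes it 2.5D: the explicit cache is lost
// 2.5D objects are cached automatically, but only while they stay visible
sprite.visible = false; // this drops the automatic caching

[/code]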

How to Test the Efficiency of GPU Rendering

There are various ways to test application performance beyond the human eye and perception. Testing your frame rate is your best benchmark.
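A simple way to measure the actual frame rate is to count ENTER_FRAME events against elapsed time; a minimal sketch:

[code]

import flash.events.Event;
import flash.utils.getTimer;

var frames:int = 0;
var lastTime:int = getTimer();

stage.addEventListener(Event.ENTER_FRAME, onFrame);

function onFrame(event:Event):void {
    frames++;
    var now:int = getTimer();
    // report the average frame rate once per second
    if (now - lastTime >= 1000) {
        trace("fps: " + frames * 1000 / (now - lastTime));
        frames = 0;
        lastTime = now;
    }
}

[/code]

Compare the measured rate against the frame rate you set for the application to see how much headroom your rendering leaves.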

Raw Data and the Sound Spectrum

With the arrival of digital sound, a new art form quickly followed: the visualization of sound.

A sound waveform is the shape of the graph representing the amplitude of a sound over time; this time-based representation is also called the time domain. The amplitude is the distance of a point on the waveform from the equilibrium line. The peak is the highest point in a waveform.

You can read a digital signal to represent sound in real time using amplitude values.

Making Pictures of Music is a project run by mathematics and music academics that analyzes and visualizes music pieces. It uses Unsquare Dance, a complex multi-instrumental piece created by Dave Brubeck. For more information, go to http://www.uwec.edu/walkerjs/PicturesOfMusic/MultiInstrumental%20Complex%20Rhythm.htm.

In AIR, you can draw a sound waveform using the computeSpectrum method of the SoundMixer class. This method takes a snapshot of the current sound wave and stores the data in a ByteArray:

[code]SoundMixer.computeSpectrum(bytes, false, 0);[/code]

The method takes three parameters. The first is the container ByteArray. The second optional parameter is FFTMode (the fast Fourier transform); false, the default, returns a waveform, and true returns a frequency spectrum. The third optional parameter is the stretch factor; 0 is the default and represents 44.1 kHz. Resampling at a lower rate results in a smoother waveform and a less detailed frequency. Figure 11-1 shows the drawing generated from this data.

Figure 11-1. A waveform (top) and a frequency spectrum (bottom), both generated from the same piece of audio but setting the fast Fourier transform value to false and then to true

A waveform snapshot contains 512 floating-point values: 256 for the left channel and 256 for the right channel. Each value lies between -1 and 1 and represents the amplitude of a point in the sound waveform.

If you trace the length of the ByteArray, it returns a value of 2,048. This is because a floating-point value is made of four bytes: 512 * 4 = 2,048.
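You can verify this yourself:

[code]

import flash.media.SoundMixer;
import flash.utils.ByteArray;

var bytes:ByteArray = new ByteArray();
SoundMixer.computeSpectrum(bytes, false, 0);
trace(bytes.length);     // 2048 bytes
trace(bytes.length / 4); // 512 floating-point values

[/code]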

Our first approach is to use the drawing API. Drawing a vector is appropriate for a relatively simple sound like a microphone audio recording. For a longer, more complex track, we will look at a different approach after this example.

We use two loops to read the values, one at a time. The loop for the left channel goes from 0 to 255. The loop for the right channel starts at 256 and goes back down to 0. Each value, between -1 and 1, is multiplied by a constant to obtain a value large enough to see. Finally, we draw a line using the loop counter for the x coordinate, and we subtract the value from the vertical position of the equilibrium line for the y coordinate.

The same process is repeated on every ENTER_FRAME event until the music stops. Don’t forget to remove the listener to stop calling the drawMusic function:

[code]

import flash.display.Sprite;
import flash.events.Event;
import flash.media.Sound;
import flash.media.SoundChannel;
import flash.media.SoundMixer;
import flash.net.URLRequest;
import flash.utils.ByteArray;

const CHANNEL_LENGTH:int = 256; // channel division
// equilibrium line y position and value multiplier
const PEAK:int = 100;

var bytes:ByteArray = new ByteArray();
var sprite:Sprite = new Sprite();
var soundChannel:SoundChannel;

var sound:Sound = new Sound();
sound.addEventListener(Event.COMPLETE, onLoaded);
sound.load(new URLRequest("mySound.mp3"));
addChild(sprite);

function onLoaded(event:Event):void {
    soundChannel = event.target.play();
    soundChannel.addEventListener(Event.SOUND_COMPLETE, onPlayComplete);
    sprite.addEventListener(Event.ENTER_FRAME, drawMusic);
}

function drawMusic(event:Event):void {
    var value:Number;
    var i:int;
    SoundMixer.computeSpectrum(bytes, false, 0);
    // erase the previous drawing
    sprite.graphics.clear();
    // move to the far left
    sprite.graphics.moveTo(0, PEAK);
    // left channel in red
    sprite.graphics.lineStyle(0, 0xFF0000);
    for (i = 0; i < CHANNEL_LENGTH; i++) {
        value = bytes.readFloat()*PEAK;
        // increase the x position by 2 pixels
        sprite.graphics.lineTo(i*2, PEAK - value);
    }
    // move to the far right
    sprite.graphics.lineTo(CHANNEL_LENGTH*2, PEAK);
    // right channel in blue
    sprite.graphics.lineStyle(0, 0x0000FF);
    for (i = CHANNEL_LENGTH; i > 0; i--) {
        sprite.graphics.lineTo(i*2, PEAK - bytes.readFloat()*PEAK);
    }
}

function onPlayComplete(event:Event):void {
    soundChannel.removeEventListener(Event.SOUND_COMPLETE, onPlayComplete);
    sprite.removeEventListener(Event.ENTER_FRAME, drawMusic);
}

[/code]

On most Android phones, which have a width of 480 pixels, the waveform will draw off-screen on the right to pixel 512 (256 * 2). Consider presenting your application in landscape mode and positioning the sprite container centered on the screen.

For better performance, let’s draw the vector into a bitmap. As a general rule, on mobile devices, you should avoid the drawing API, which is redrawn every frame and degrades performance.

The Sprite is not added to the display list, and therefore is not rendered to the screen. Instead, we create a BitmapData and draw the sprite inside its rectangle:

[code]

import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.display.Sprite;
import flash.events.Event;
import flash.media.SoundMixer;

var sprite:Sprite = new Sprite();

// create a BitmapData to draw the waveform into
var bitmapData:BitmapData = new BitmapData(480, PEAK*2, true, 0x00000000);
// store it in a Bitmap
var bitmap:Bitmap = new Bitmap(bitmapData);
// position the Bitmap and add it to the displayList
bitmap.y = 200;
addChild(bitmap);

function drawMusic(event:Event):void {
    var value:Number;
    var i:int;
    SoundMixer.computeSpectrum(bytes, false, 0);
    // use sprite.graphics as before,
    // but do not render the sprite to the screen
    sprite.graphics.clear();
    sprite.graphics.moveTo(0, PEAK);
    sprite.graphics.lineStyle(0, 0xFF0000);
    for (i = 0; i < CHANNEL_LENGTH; i++) {
        value = bytes.readFloat()*PEAK;
        sprite.graphics.lineTo(i*2, PEAK - value);
    }
    sprite.graphics.lineTo(CHANNEL_LENGTH*2, PEAK);
    sprite.graphics.lineStyle(0, 0x0000FF);
    for (i = CHANNEL_LENGTH; i > 0; i--) {
        value = bytes.readFloat()*PEAK;
        sprite.graphics.lineTo(i*2, PEAK - value);
    }
    // empty the bitmap, then draw the sprite into it
    bitmapData.fillRect(bitmapData.rect, 0);
    bitmapData.draw(sprite);
}

[/code]

Maps

Several geocoding systems and companies offer web services for the consumer market. They all provide similar features. A map is returned, drawn either as vectors or as a composite of satellite pictures or street tiles, the latter being more common on mobile devices. It can pan and zoom. Geographical locations or points of interest are represented in the form of markers. Additional features include custom itineraries, the display of specific areas in color, driving and biking directions, and business searches.

Some of the better-known geocoding systems are Google Maps, Yahoo! Maps, Bing Maps, GeoNames, and USC Geocoder. As the technology is rapidly growing, this list may soon expand or change. A lot of map services get their information from NAVTEQ and Tele Atlas, companies that sell databases of geodata, or from MaxMind, which sells IP geolocation data. Google now has its own full set of geodata, gathered by its Street View cars.

Launching Google Maps

As we previously discussed, you can collect a point location (latitude, longitude) using the Geolocation class, and pass it to the device using a URI handler. It then presents the user with the option of using the native Maps application or launching Google Maps in the browser:

[code]

<uses-permission android:name="android.permission.INTERNET" />

import flash.events.GeolocationEvent;
import flash.net.navigateToURL;
import flash.net.URLRequest;
import flash.sensors.Geolocation;

var geolocation:Geolocation = new Geolocation();
geolocation.addEventListener(GeolocationEvent.UPDATE, onTravel);

function onTravel(event:GeolocationEvent):void {
    geolocation.removeEventListener(GeolocationEvent.UPDATE, onTravel);
    var long:String = event.longitude.toString();
    var lat:String = event.latitude.toString();
    navigateToURL(
        new URLRequest("http://maps.google.com/?q=" + lat + "," + long));
}

[/code]

Note that if you navigate to http://maps.yahoo.com instead, launching the native Google Maps is not an option.

The major hurdle with this approach is that your application is now running in the background, and there is no direct way to return to it unless the user presses the device’s native back button.

The Android SDK has a library for embedding maps into native applications with interactivity. AIR doesn’t support this feature at the time of this writing, but there are many other ways to offer a map experience to your audience. To demonstrate some of the map features within AIR, we will use the Yahoo! Maps Web Services (http://developer.yahoo.com/maps) and Google Maps API family (http://code.google.com/apis/maps/).

Static Maps

A static map may be sufficient for your needs. It provides a snapshot of a location; although it doesn’t offer pan or zoom, it does load relatively quickly, even over a slow data connection.

The Yahoo! Map Image API

The Yahoo! Map Image API from Yahoo! Maps (http://developer.yahoo.com/maps/rest/V1/) provides a reference to a static map image based on user-specified parameters. This API requires an applicationID. It doesn’t restrict how you use the service, it serves images up to 1,024×1,024, and it has a few customizable options.

To use the API, send a URLRequest with your parameters. In return, you receive the path to the image, which you then load using a Loader object. The next example uses the point location from geolocation, the stage dimensions for the image size, and 1 for street level (zoom goes up to 12 for country level):

[code]

import flash.display.Loader;
import flash.events.Event;
import flash.events.GeolocationEvent;
import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.sensors.Geolocation;

const YAHOO_URL:String =
    "http://local.yahooapis.com/MapsService/V1/mapImage";
const applicationID:String = "YOUR_YAHOO_APP_ID";

var geolocation:Geolocation;
var urlLoader:URLLoader;
var loader:Loader;

findLocation();

function findLocation():void {
    if (Geolocation.isSupported) {
        geolocation = new Geolocation();
        geolocation.addEventListener(GeolocationEvent.UPDATE, onTravel);
    }
}

function onTravel(event:GeolocationEvent):void {
    var request:String = "?appid=" + applicationID
        + "&latitude=" + event.latitude
        + "&longitude=" + event.longitude
        + "&zoom=1"
        + "&image_height=" + stage.stageHeight
        + "&image_width=" + stage.stageWidth;
    urlLoader = new URLLoader();
    urlLoader.addEventListener(Event.COMPLETE, onXMLReceived);
    urlLoader.load(new URLRequest(YAHOO_URL + request));
}

function onXMLReceived(event:Event):void {
    urlLoader.removeEventListener(Event.COMPLETE, onXMLReceived);
    geolocation.removeEventListener(GeolocationEvent.UPDATE, onTravel);
    // the response is an XML Result element containing the image URL
    var xml:XML = XML(event.currentTarget.data);
    loader = new Loader();
    loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onLoaded);
    loader.load(new URLRequest(xml.toString()));
}

function onLoaded(event:Event):void {
    event.currentTarget.removeEventListener(Event.COMPLETE, onLoaded);
    this.addChild(event.currentTarget.content);
}

[/code]

You should see a map of where you are currently located.

The Google Static Maps API

The Google Static Maps API (http://code.google.com/apis/maps/documentation/staticmaps/) offers more features than the Yahoo! product, but at the time of this writing, it enforces a rule whereby static maps can only be displayed as browser content unless you purchase a Google Maps API Premier license. An AIR application is not considered browser content. Read the terms carefully before developing a commercial product using this API.

With this service, a standard HTTP request returns an image with the settings of your choice.

The required parameters are as follows:

  • center for location as an address or latitude/longitude (not required with marker)
  • zoom from 0 for the Earth to 21 for a building (not required with marker)
  • size (up to 640×640 pixels)
  • sensor (with or without use of GPS locator)

The maximum image size is 640×640. Unless you scale the image up in size, it will not fill the screen on most Android devices. Optional parameters are mobile, format, map type, language, markers, visible, and path.

The following example requests a 480×640 image of Paris centered on the Eiffel Tower:

[code]

<uses-permission android:name="android.permission.INTERNET" />

import flash.display.Loader;
import flash.events.Event;
import flash.net.URLRequest;

const GOOGLE_URL:String = "http://maps.google.com/maps/api/staticmap?";

loadStaticImage();

function loadStaticImage():void {
    var request:String = "center=Eiffel+Tower,Paris,France"
        + "&zoom=16&size=480x640&sensor=false";
    var loader:Loader = new Loader();
    loader.contentLoaderInfo.addEventListener(Event.COMPLETE, imageLoaded);
    loader.load(new URLRequest(GOOGLE_URL + request));
}

function imageLoaded(event:Event):void {
    event.currentTarget.removeEventListener(Event.COMPLETE, imageLoaded);
    addChild(event.currentTarget.content);
}

[/code]

You should see on your screen the map of Paris, as shown in Figure 10-3.

Let’s make another request using a dynamic location. The sensor parameter is now set to true and we are adding the mobile parameter. The image size is also dynamic, choosing whichever value is smaller between Google’s maximum and our stage size. And we now use the hybrid maptype:

[code]

import flash.display.Loader;
import flash.events.Event;
import flash.events.GeolocationEvent;
import flash.net.URLRequest;
import flash.sensors.Geolocation;

const GOOGLE_URL:String = "http://maps.google.com/maps/api/staticmap?";

var geolocation:Geolocation = new Geolocation();
geolocation.addEventListener(GeolocationEvent.UPDATE, onTravel);

function onTravel(event:GeolocationEvent):void {
    geolocation.removeEventListener(GeolocationEvent.UPDATE, onTravel);
    loadStaticImage(event.latitude, event.longitude);
}

function loadStaticImage(lat:Number, long:Number):void {
    var width:int = Math.min(640, stage.stageWidth);
    var height:int = Math.min(640, stage.stageHeight);
    var request:String = "center=" + lat + "," + long
        + "&zoom=15"
        + "&size=" + width + "x" + height
        + "&maptype=hybrid&mobile=true&sensor=true";
    var loader:Loader = new Loader();
    loader.contentLoaderInfo.addEventListener(Event.COMPLETE, imageLoaded);
    loader.load(new URLRequest(GOOGLE_URL + request));
}

function imageLoaded(event:Event):void {
    event.currentTarget.removeEventListener(Event.COMPLETE, imageLoaded);
    addChild(event.currentTarget.content);
}

[/code]

Figure 10-3. Map of Paris centered on the Eiffel Tower

The center parameter can be replaced with one or more markers. A marker can be given a color and a label, or it can be customized. This time we use the roadmap maptype:

[code]

function loadStaticImage(lat:Number, long:Number):void {
    var width:int = Math.min(640, stage.stageWidth);
    var height:int = Math.min(640, stage.stageHeight);
    var request:String = "markers=size:large|color:blue|label:L|"
        + lat + "," + long
        + "&zoom=16"
        + "&size=" + width + "x" + height
        + "&maptype=roadmap&mobile=true&sensor=true";
    var loader:Loader = new Loader();
    addChild(loader);
    loader.load(new URLRequest(GOOGLE_URL + request));
}

[/code]

This is again a low-impact but limited solution. The user only sees an image that cannot be scaled and is only given basic information. Let’s now look at using actual maps.

Dynamic Maps

Yahoo! and Google both provide well-documented AS3 libraries for use with their map APIs. The only restriction for both is a maximum of 50,000 uses per day.

Maps are slow to initialize. Create placeholder art to display instead of the default gray rectangle, and display an attractive loading animation. Do not forget to remove the art after the map appears. Any art under the map would slow down its performance.

The Google Maps API for Flash

With the Google Maps API for Flash (http://code.google.com/apis/maps/documentation/flash/), Flex developers can embed Google maps in Flash applications. Sign up for a Google Maps API key and download the Flash SDK. Use version 20 or later (map_1_20.swc or map_flex_1_20.swc), as version 19 has a known issue with ResizeEvent. Add the path to the .swc file in the library path and set Default Linkage to “Merged into code”. As you will see, this API offers a wealth of options that are easy to implement.

To set the library path in Flash Professional, go to File→Publish Settings. Click the tool icon next to Script. Select the Library Path tab and click the Flash icon to navigate to the .swc file. Once you’ve imported the file, change Default Linkage to “Merged into code”.

In Flash Builder, right-click your project, go to Properties→ActionScript Build Path, and click Add SWC to navigate to the .swc file. “Merged into code” is the default setting.

Create a Map object as well as key and url properties. Entering both your API key and the site URL you submitted when you applied for the key is required even though you are not displaying the map in your website but rather as a standalone Android application.

The sensor parameter, also required, states whether you use a GPS sensor. It needs to be a string, not a boolean. The map size, defined by setSize, is set dynamically to the dimensions of the stage.

When the map is ready, the geolocation listener is set up. After the first update is received, the setCenter function is called with location, zoom level, and the type of map to use. Finally, the zoom control is added. Figure 10-4 shows the result:

Figure 10-4. My current location

[code]

<uses-permission android:name="android.permission.INTERNET" />

import com.google.maps.LatLng;
import com.google.maps.Map;
import com.google.maps.MapEvent;
import com.google.maps.MapType;
import com.google.maps.controls.ZoomControl;
import flash.events.GeolocationEvent;
import flash.geom.Point;
import flash.sensors.Geolocation;

const KEY:String = "YOUR_API_KEY";
const SITE:String = "YOUR_SITE";

var map:Map;
var geolocation:Geolocation;

map = new Map();
map.key = KEY;
map.url = SITE;
map.sensor = "true";
map.setSize(new Point(stage.stageWidth, stage.stageHeight));
map.addEventListener(MapEvent.MAP_READY, onMapReady);

function onMapReady(event:MapEvent):void {
    geolocation = new Geolocation();
    geolocation.addEventListener(GeolocationEvent.UPDATE, onTravel);
    addChild(map);
}

function onTravel(event:GeolocationEvent):void {
    geolocation.removeEventListener(GeolocationEvent.UPDATE, onTravel);
    map.setCenter(new LatLng(event.latitude, event.longitude),
        18, MapType.NORMAL_MAP_TYPE);
    map.addControl(new ZoomControl());
}

[/code]

Add markers as landmarks and navigation controls. Here the marker is customized to have a shadow, a blue color, and a defined radius. When you click on it, an information window opens with text content:

[code]

import com.google.maps.InfoWindowOptions;
import com.google.maps.MapMouseEvent;
import com.google.maps.overlays.Marker;
import com.google.maps.overlays.MarkerOptions;
import com.google.maps.styles.FillStyle;

var options:Object = {hasShadow:true,
    fillStyle: new FillStyle({color:0x0099FF, alpha:0.75}),
    radius:12
};
var marker:Marker =
    new Marker(new LatLng(45.7924, 15.9696), new MarkerOptions(options));
marker.addEventListener(MapMouseEvent.CLICK, markerClicked);
map.addOverlay(marker);

function markerClicked(event:MapMouseEvent):void {
    event.currentTarget.openInfoWindow(
        new InfoWindowOptions({content:"hello"}));
}

[/code]

Styled Maps support

In October 2010, Google announced support for Styled Maps on Flash, included in Flash SDK version 20 and up (see http://code.google.com/apis/maps/documentation/flash/maptypes.html#StyledMaps). This addition gives you control over color scheme and customization of markers and controls. It makes your map look more unique or match your brand and design. You can also write or draw over the map. The Google Geo Developers Blog (http://googlegeodevelopers.blogspot.com/2010/10/five-great-styled-maps-examples.html) shows some examples of how Styled Maps has been used.

Google Maps 5

Google Maps 5 was released in December 2010 (see http://www.mobilecrunch.com/2010/12/16/google-maps-5-with-3d-buildings-now-available-for-android/). It provides 3D building rendering, dynamic vector-based map drawing, and offline reliability. If you would like to see it supported in AIR, file a feature request on the Adobe site.


Geolocation Classes

The flash.events.GeolocationEvent class is a new Event object that contains updated geolocation information. The new flash.sensors.Geolocation class is a subclass of the EventDispatcher class. It listens for and receives information from the device’s location sensor in the form of a GeolocationEvent.

To use the geolocation classes, first you must add the necessary permissions. In Flash Professional, enable the ACCESS_FINE_LOCATION and ACCESS_COARSE_LOCATION device permissions under File→AIR Android Settings→Permissions. In Flash Builder, select ACCESS_NETWORK_STATE and ACCESS_WIFI_STATE under Mobile Settings→Permissions. Alternatively, add the permissions to your application manifest file as follows:

[code]

<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />

[/code]

Fine location refers to GPS communication, while coarse location refers to network communication.

During development and testing, you must select the checkboxes on your device in Android Settings→Location and Security→Use GPS satellites and Android Settings→Location and Security→Use wireless networks to enable both sensors, as shown in Figure 10-1.

Figure 10-1. Enabling sensors for wireless networks and GPS satellites

Next, verify that the device running your application supports geolocation:

[code]

import flash.sensors.Geolocation;

if (Geolocation.isSupported) {
    // geolocation supported
}

[/code]

Now let’s write a simple application to listen to geolocation event updates. Make geolocation a class variable, not a local variable, to guarantee that it does not go out of scope:

[code]

import flash.sensors.Geolocation;
import flash.events.GeolocationEvent;

var geolocation:Geolocation;

if (Geolocation.isSupported) {
    geolocation = new Geolocation();
    geolocation.addEventListener(GeolocationEvent.UPDATE, onTravel);
}

function onTravel(event:GeolocationEvent):void {
    trace(event.latitude);
    trace(event.longitude);
}

[/code]

You should see your current latitude and longitude in degrees as floating-point values. My home location, for instance, is latitude 40.74382781982420 and longitude -74.00146007537840.

The user has the ability to enable or disable access to the location sensor on the device. Check the geolocation muted boolean property when you first run your application to see its value. You should also create a listener to receive status updates in case the property changes while the application is running:

[code]

import flash.events.StatusEvent;

if (!geolocation.muted) {
    geolocation.addEventListener(StatusEvent.STATUS, onStatusChange);
} else {
    // inform the user to turn on the location sensor
}

function onStatusChange(event:StatusEvent):void {
    trace("status:" + event.code);
    if (event.code == "Geolocation.Muted") {
        // inform the user to turn on the location sensor
    }
}

[/code]

If muted is true, or if event.code is equal to Geolocation.Muted, display a message to the user explaining that the application needs location data.

The GeolocationEvent Class

A GeolocationEvent.UPDATE event is delivered when the listener is first created. Then, it is delivered when a new location update is received from the device/platform. An event is also delivered if the application wakes up after being in the background.

Using the geolocation API drains the battery very quickly. In an effort to save battery life, control the frequency of updates by setting an update interval on the geolocation object. Unless you are moving very quickly and want to check your speed, you don’t need to check your location more than once every few seconds or minutes:

[code]geolocation.setRequestedUpdateInterval(10000);[/code]

If not specified by you, the updates are based on the device/platform default interval. Look up the hardware documentation if you want to know the device default interval; this is not something you can get in the Android or AIR API.

The Earth is divided using a grid system with latitude from the equator toward the North and South Poles and longitude from Greenwich, England, to the international date line in the Pacific Ocean. Values are positive from the equator going north and from Greenwich going east. Values are negative from the equator going south and from Greenwich going west.

The GeolocationEvent properties are as follows:

  • event.latitude ranges from -90 to 90 degrees and event.longitude ranges from -180 to 180 degrees. They are both of data type Number for greater precision.
  • event.horizontalAccuracy and event.verticalAccuracy are in meters. This value comes back from the location service and represents how accurate the data is. A small number represents a better reading. Less than 60 meters is usually considered GPS accurate. This measurement may change as the technology improves.
  • event.timeStamp is in milliseconds and starts counting from the moment the application initializes. If you need to get a regular update, use a timer instead of GeolocationEvent because it may not fire at regular intervals.
  • event.altitude is in meters and event.speed is in meters/second.
  • event.heading, moving toward true north, is an integer. It is not supported on Android devices at the time of this writing, and it returns a value of NaN (Not a Number). However, you can calculate it by comparing longitude and latitude over time to determine a direction, assuming your device moves fast enough (see the sketch following this list).
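Here is a minimal sketch of such a heading calculation, using the standard bearing formula between two consecutive readings (the function name is illustrative):

[code]

// approximate heading in degrees (0 = north) from two consecutive updates
function computeHeading(lat1:Number, lon1:Number,
                        lat2:Number, lon2:Number):Number {
    const RAD:Number = Math.PI / 180;
    var dLon:Number = (lon2 - lon1) * RAD;
    var y:Number = Math.sin(dLon) * Math.cos(lat2 * RAD);
    var x:Number = Math.cos(lat1 * RAD) * Math.sin(lat2 * RAD)
        - Math.sin(lat1 * RAD) * Math.cos(lat2 * RAD) * Math.cos(dLon);
    // convert back to degrees and normalize to the 0-360 range
    return (Math.atan2(y, x) / RAD + 360) % 360;
}

[/code]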

When your application moves to the background, the geolocation sensor remains active. If you want to override the default behavior, listen to NativeApplication’s Event.DEACTIVATE to remove the event listener and Event.ACTIVATE to set it again, as shown below.
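A minimal sketch, reusing the geolocation object and onTravel handler from the earlier examples:

[code]

import flash.desktop.NativeApplication;
import flash.events.Event;
import flash.events.GeolocationEvent;

NativeApplication.nativeApplication.addEventListener(Event.DEACTIVATE, onSleep);
NativeApplication.nativeApplication.addEventListener(Event.ACTIVATE, onWake);

function onSleep(event:Event):void {
    // stop consuming location updates while in the background
    geolocation.removeEventListener(GeolocationEvent.UPDATE, onTravel);
}

function onWake(event:Event):void {
    geolocation.addEventListener(GeolocationEvent.UPDATE, onTravel);
}

[/code]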

When you first register for geolocation updates, AIR sets location listeners. It also queries for the LastKnownLocation and sends it as the first update. Because it is cached, the initial data may be incorrect until a first real location update is received.


Unit Testing Windows Phone 7 Applications

Unit testing is the process of exercising individual business objects, state, and validation for your applications. A comprehensive set of unit tests, written before you develop the code and executed regularly during the entire development cycle, helps to ensure that individual components and services are performing correctly and providing valid results.

You can use a special version of the Silverlight Unit Testing Framework adapted for use with Silverlight 3 to run unit tests for Windows Phone 7 applications. You can obtain this from Jeff Wilcox’s site (http://www.jeff.wilcox.name/2010/05/sl3-utf-bits/). Jeff is a Senior Software Development Engineer at Microsoft on the Silverlight for Windows Phone team.

The Silverlight Unit Testing Framework adds the Silverlight Unit Test Application templates to the New Project dialog box in Visual Studio and Visual Studio Express, and you can use this to add a test project to your application. The framework allows you to execute tests that carry the standard Visual Studio test attributes and include the standard test assertions such as IsTrue and IsNotNull. It can be used to run all the unit tests in a solution or selected tests; it reports the results on the phone, showing full details of each test that passed or failed.

You can also run tests on the desktop using a traditional test framework such as the Microsoft Test Framework (MSTest), instead of deploying the application to the phone or the emulator and running it there under the test framework. This is generally faster, but you must be aware that there are some differences in the execution environment, and tests may behave differently on the desktop when compared to running on the emulator or a physical device. When performing integration testing, you should run the tests on the emulator or a physical device.

There are some tests that you cannot run in an emulator. For example, you cannot test the Global Positioning System (GPS) or the accelerometer. In these cases, you may prefer to use alternative testing techniques, such as creating mock objects that represent the actual service or component that your application uses and substituting these for the physical service or component.

To view a video presentation about unit testing using the Silverlight Unit Testing Framework, see Unit Testing Silverlight and Windows Phone Applications on the Mix 10 website (http://live.visitmix.com/MIX10/Sessions/CL59).

Automated Unit Testing

It is possible to automate testing on the Windows Phone 7 emulator if you need to implement a continuous integration process and build Windows Phone 7 projects on a separate build server that does not have the Windows Phone Developer Tools installed. This requires setting up a folder for the external dependencies and editing the project files. For more information, see the post, “Windows Phone 7 Continuous Integration,” on Justin Angel’s blog (http://justinangel.net/#BlogPost=TFS2010WP7ContinuousIntegration).

It is also possible to automate building and deploying an application to a phone or emulator without using Visual Studio. This approach uses the Smart Device Connectivity API implemented in the assembly Microsoft.SmartDevice.Connectivity.dll. For more information about using this API, see “Smart Device Connectivity API Reference” on MSDN (http://msdn.microsoft.com/en-us/library/bb545992(VS.90).aspx) and “Windows Phone 7 Emulator Automation” on Justin Angel’s blog (http://justinangel.net/#BlogPost=WindowsPhone7EmulatorAutomation).


The TouchEvent Class

A touch event is similar to a mouse event, except that you can have multiple inputs at once. Because this event uses more power, you should only use it if you need to capture more than one point. If you only need to track one point, the mouse event will work well even though your mouse is now a finger.

Touches are also called raw touch data because you receive them as is. If you want to interpret them as gestures, you need to write the logic yourself or use a third-party library.
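Before choosing an input mode, you can check at runtime what the device supports:

[code]

import flash.ui.Multitouch;

trace(Multitouch.supportsTouchEvents); // true if raw touch data is available
trace(Multitouch.maxTouchPoints);      // often only 2 on Android devices

[/code]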

First, set the input mode to TOUCH_POINT:

[code]

import flash.ui.Multitouch;
import flash.ui.MultitouchInputMode;
import flash.events.TouchEvent;

Multitouch.inputMode = MultitouchInputMode.TOUCH_POINT;

[/code]

TOUCH_TAP is similar to a mouse up event. The following code creates a simple application where every touch creates a new circle:

[code]

import flash.display.Sprite;
import flash.events.TouchEvent;
import flash.ui.Multitouch;
import flash.ui.MultitouchInputMode;

Multitouch.inputMode = MultitouchInputMode.TOUCH_POINT;
stage.addEventListener(TouchEvent.TOUCH_TAP, onTouchTap);

function onTouchTap(event:TouchEvent):void {
    var sprite:Sprite = new Sprite();
    sprite.graphics.lineStyle(25, Math.random()*0xFFFFFF);
    sprite.graphics.drawCircle(0, 0, 80);
    sprite.x = event.stageX;
    sprite.y = event.stageY;
    addChild(sprite);
}

[/code]

The touchPointID is a new and important event property. Each new touch has a unique ID associated with it, from TOUCH_BEGIN to TOUCH_END. touchPointID gives you a way to identify and store every point and associate data with it if needed.

In this example, we use an Object to store the ID as a property and associate it with a sprite. To drag and drop the sprites, we use the startTouchDrag and stopTouchDrag methods:

[code]

var touches:Object = {};
stage.addEventListener(TouchEvent.TOUCH_BEGIN, onTouchBegin);
stage.addEventListener(TouchEvent.TOUCH_END, onTouchEnd);

function onTouchBegin(event:TouchEvent):void {
    var sprite:Sprite = createCircle(event.stageX, event.stageY);
    addChild(sprite);
    // store the touchPointID and the sprite
    touches[event.touchPointID] = sprite;
    // drag the sprite
    sprite.startTouchDrag(event.touchPointID, true);
}

function onTouchEnd(event:TouchEvent):void {
    // retrieve the sprite using the touchPointID
    var sprite:Sprite = touches[event.touchPointID];
    // stop the drag and destroy the sprite
    sprite.stopTouchDrag(event.touchPointID);
    sprite.graphics.clear();
    removeChild(sprite);
    touches[event.touchPointID] = null;
}

function createCircle(x:int, y:int):Sprite {
    var sprite:Sprite = new Sprite();
    sprite.graphics.lineStyle(25, Math.random()*0xFFFFFF);
    sprite.graphics.drawCircle(0, 0, 100);
    sprite.x = x;
    sprite.y = y;
    return sprite;
}

[/code]

As we discussed earlier, Multitouch.maxTouchPoints determines how many touches a device can support. Many Android devices only support the detection of two simultaneous touch points.

There is no built-in mechanism to prevent a new touch if the maximum has been reached. In fact, expect unpredictable behavior such as the oldest touch no longer functioning. To prevent this, keep a count of how many points are present and stop the code from executing if you have reached the limit:

[code]

var pointCount:int = 0;

function onTouchBegin(event:TouchEvent):void {
    if (pointCount == Multitouch.maxTouchPoints) {
        return;
    }
    pointCount++;
    // create new sprite
}

function onTouchEnd(event:TouchEvent):void {
    pointCount--;
    // remove old sprite
}

[/code]

Let’s create an application using touch events and the drawing API. On TouchBegin, a new sprite is created and associated with a touch ID. It draws on TouchMove and is removed on TouchEnd. Draw using two fingers or two separate users:

[code]

Multitouch.inputMode = MultitouchInputMode.TOUCH_POINT;

var touches:Object = {};
stage.addEventListener(TouchEvent.TOUCH_BEGIN, onTouchBegin);
stage.addEventListener(TouchEvent.TOUCH_MOVE, onTouchMove);
stage.addEventListener(TouchEvent.TOUCH_END, onTouchEnd);

function onTouchBegin(event:TouchEvent):void {
    var sprite:Sprite = new Sprite();
    addChild(sprite);
    sprite.graphics.lineStyle(3, Math.random()*0xFFFFFF);
    sprite.graphics.moveTo(event.stageX, event.stageY);
    touches[event.touchPointID] = sprite;
}

function onTouchMove(event:TouchEvent):void {
    var sprite:Sprite = touches[event.touchPointID];
    sprite.graphics.lineTo(event.stageX, event.stageY);
}

function onTouchEnd(event:TouchEvent):void {
    var sprite:Sprite = touches[event.touchPointID];
    sprite.graphics.clear();
    removeChild(sprite);
    touches[event.touchPointID] = null;
}

[/code]

Other available events are TOUCH_OUT, TOUCH_OVER, TOUCH_ROLL_OUT, and TOUCH_ROLL_OVER.


The GestureEvent Class

A GestureEvent is the interpretation of multiple points as a recognizable pattern. The Flash platform offers three gesture classes: GestureEvent, TransformGestureEvent, and PressAndTapGestureEvent. Gestures cannot be detected in sequence. The user must finish the first gesture, lift her fingers, and then start the next gesture.
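Before registering listeners, you can verify at runtime that gestures are available:

[code]

import flash.ui.Multitouch;
import flash.ui.MultitouchInputMode;

if (Multitouch.supportsGestureEvents) {
    // lists the gestures the device recognizes, e.g. gestureZoom, gesturePan
    trace(Multitouch.supportedGestures);
    Multitouch.inputMode = MultitouchInputMode.GESTURE;
}

[/code]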

Here is how to set a listener for the event type you want to receive:

[code]

Multitouch.inputMode = MultitouchInputMode.GESTURE;
stage.addEventListener(TransformGestureEvent.GESTURE_ZOOM, onZoom);

[/code]

Gesture events have a phase property that is used to indicate the progress of the gesture. Its value is BEGIN when the finger is first pressed down; UPDATE while the finger is moving; and END when the finger leaves the screen. Another phase, ALL, is for events such as swipes or two-finger taps, which only return one phase.

A typical use for the phase property is to play one sound when the gesture begins and another sound when it ends:

[code]

import flash.events.GesturePhase;
import flash.events.TransformGestureEvent;
import flash.ui.MultitouchInputMode;

function onZoom(event:TransformGestureEvent):void {
    if (event.phase == GesturePhase.BEGIN) {
        // play hello sound
    } else if (event.phase == GesturePhase.END) {
        // play goodbye sound
    }
}

[/code]

Gesture events have other properties related to position, as well as some that are relevant to their particular type. One gesture event, TransformGestureEvent, has many types, which we will discuss in the following subsections.

The Zoom Gesture

The zoom gesture is also referred to as pinching. With this gesture, the user places two fingers on the object, increasing and decreasing the distance between the fingers to scale the object up and down in size (see Figure 7-1).

Figure 7-1. The zoom gesture

The following code creates a sprite and scales it according to the movement being performed:

[code]

import flash.display.Graphics;
import flash.display.Sprite;
import flash.events.TransformGestureEvent;
import flash.ui.Multitouch;
import flash.ui.MultitouchInputMode;

Multitouch.inputMode = MultitouchInputMode.GESTURE;

var sprite:Sprite = new Sprite();
sprite.x = stage.stageWidth * 0.5;
sprite.y = stage.stageHeight * 0.5;
addChild(sprite);

var g:Graphics = sprite.graphics;
g.beginFill(0xFF6600);
g.drawCircle(0, 0, 150);
g.endFill();

sprite.addEventListener(TransformGestureEvent.GESTURE_ZOOM, onZoom);

function onZoom(event:TransformGestureEvent):void {
    sprite.scaleX *= event.scaleX;
    sprite.scaleY *= event.scaleY;
}

[/code]

The event.scaleX and event.scaleY values reflect the relative change in distance between the two fingers.

The Rotate Gesture

You can rotate an object using two different gestures. With the first gesture, you place one finger on the object and move the second finger around it. With the second gesture, you spread the two fingers apart and rotate one clockwise and the other counterclockwise (see Figure 7-2). The latter seems to work better on small devices.

Figure 7-2. The rotate gesture

Here we use the drawing API to create a sprite with a rectangle shape:

[code]

import flash.display.Graphics;
import flash.display.Sprite;
import flash.events.TransformGestureEvent;
import flash.ui.Multitouch;
import flash.ui.MultitouchInputMode;

Multitouch.inputMode = MultitouchInputMode.GESTURE;

var sprite:Sprite = new Sprite();
sprite.x = stage.stageWidth * 0.5;
sprite.y = stage.stageHeight * 0.5;
addChild(sprite);

var g:Graphics = sprite.graphics;
g.beginFill(0xFF6600);
g.drawRect(-150, -150, 300, 300);
g.endFill();

sprite.addEventListener(TransformGestureEvent.GESTURE_ROTATE, onRotate);

function onRotate(event:TransformGestureEvent):void {
    event.currentTarget.rotation += event.rotation;
}

[/code]

The event.rotation value is the cumulative rotation change, in degrees, since the last gesture event. Notice how I drew the rectangle so that it is centered in the middle of the sprite. The default registration point is the top left, so offset your art to place its registration point at its center.

The following code moves the child sprite to be offset by half its dimension:

[code]

someParent.x = 0;
someParent.y = 0;
someChild.x = -someChild.width * 0.5;
someChild.y = -someChild.height * 0.5;

[/code]

Bitmaps are also offset according to the dimension of their bitmapData:

[code]

// BitmapData requires dimensions; 200 x 200 is an arbitrary example
var bitmapData:BitmapData = new BitmapData(200, 200);
var bitmap:Bitmap = new Bitmap(bitmapData);
bitmap.x = -bitmapData.width * 0.5;
bitmap.y = -bitmapData.height * 0.5;

[/code]

The Pan Gesture

You use a pan gesture to reveal an object that is off-screen if it is larger than the screen. The use of two fingers is not immediately intuitive. This gesture seems to work best when using a light touch. Figure 7-3 shows an example.

Figure 7-3. The pan gesture

In this example, we draw a 1,000-pixel-long rectangle with a sine wave on it. The wave is there so that you can see the sprite move when you pan:

[code]

import flash.display.Graphics;
import flash.display.Sprite;
import flash.events.TransformGestureEvent;
import flash.ui.Multitouch;
import flash.ui.MultitouchInputMode;

Multitouch.inputMode = MultitouchInputMode.GESTURE;

var sprite:Sprite;
createArt();

function createArt():void {
    sprite = new Sprite();
    addChild(sprite);
    var g:Graphics = sprite.graphics;
    g.beginFill(0xFFCCFF);
    g.drawRect(0, 550, 1000, 200);
    g.endFill();
    g.lineStyle(3, 0xFF0000);
    // 650 is an arbitrary pixel position
    g.moveTo(2, 650);
    // draw a sine wave
    var xpos:Number = 0;
    var ypos:Number = 0;
    var angle:Number = 0;
    for (var i:int = 0; i < 200; i++) {
        xpos += 5;
        ypos = Math.sin(angle)*100 + 650;
        angle += 0.20;
        g.lineTo(xpos, ypos);
    }
    stage.addEventListener(TransformGestureEvent.GESTURE_PAN, onPan);
}

function onPan(event:TransformGestureEvent):void {
    // move the sprite along with the motion
    sprite.x += event.offsetX;
}

[/code]

offsetX is the horizontal magnitude of change reported by the gesture as your fingers move across the screen.

The Swipe Gesture

A swipe gesture is often used as a way to dismiss an element as though you are pushing it off-screen. The direction of the swipe is reported as a single integer: 1 for left to right or top to bottom, and -1 for right to left or bottom to top. Figure 7-4 shows an example of the swipe gesture.

Figure 7-4. The swipe gesture

The following code simulates the act of reading a book. Swiping from right to left (an offsetX of -1) brings you further into the book. Swiping in the other direction returns you toward the beginning of the book:

[code]

import flash.events.TransformGestureEvent;
import flash.text.TextField;
import flash.text.TextFieldAutoSize;
import flash.text.TextFormat;
import flash.ui.Multitouch;
import flash.ui.MultitouchInputMode;

Multitouch.inputMode = MultitouchInputMode.GESTURE;

var pageText:TextField;
var counter:int = 1;
createArt();

function createArt():void {
    var textFormat:TextFormat = new TextFormat();
    textFormat.size = 90;
    textFormat.color = 0xFF6600;
    // create a text field to display the page number
    pageText = new TextField();
    pageText.x = 100;
    pageText.y = 200;
    pageText.autoSize = TextFieldAutoSize.LEFT;
    pageText.defaultTextFormat = textFormat;
    pageText.text = "Page " + counter;
    addChild(pageText);
    // create a listener for a swipe gesture
    stage.addEventListener(TransformGestureEvent.GESTURE_SWIPE, onSwipe);
}

function onSwipe(event:TransformGestureEvent):void {
    counter -= event.offsetX;
    if (counter < 1) counter = 1;
    pageText.text = "Page " + counter;
}

[/code]

The offsetX value is used to decrement or increment the page number of the book.

The Press and Tap Gesture

The press and tap gesture, PressAndTapGestureEvent, only has one type: GESTURE_PRESS_AND_TAP. This gesture is more complicated than the others, and users may not figure it out without instructions. Unlike the previous gestures, this gesture happens in two steps: first one finger is pressed and then another finger taps (see Figure 7-5). It is really two events synthesized as one.

Figure 7-5. The press and tap gesture

The following code creates an elegant UI for selecting a menu and then tapping to access its submenu, as in a context menu:

[code]

import flash.events.PressAndTapGestureEvent;

stage.addEventListener(PressAndTapGestureEvent.GESTURE_PRESS_AND_TAP,
    onPressAndTap);

function onPressAndTap(event:PressAndTapGestureEvent):void {
    trace(event.tapLocalX);
    trace(event.tapLocalY);
    trace(event.tapStageX);
    trace(event.tapStageY);
}

[/code]

The Two-Finger Tap Gesture

GestureEvent only has one type, GESTURE_TWO_FINGER_TAP, and it is not supported by Android at the time of this writing. A tap is similar to a mouse click, but it requires making contact on a limited spatial area, with the two fingers close together, in a short time period. Figure 7-6 shows an example of a two-finger tap.

Figure 7-6. The two-finger tap gesture

The two-finger tap is a good gesture for a strong statement, as when you want the user to make a decision. You could use it, for instance, to pause and play a video:

[code]

import flash.events.GestureEvent;

sprite.addEventListener(GestureEvent.GESTURE_TWO_FINGER_TAP, onTwoFinger);

function onTwoFinger(event:GestureEvent):void {
    // play or pause video
}

[/code]


Accessing Windows Marketplace within an Application

It is possible to access Windows Marketplace using code within a Windows Phone 7 application. This is a useful way to offer users of your application other applications that you publish, or to help them find upgrades and new versions of the application.

The Windows Phone 7 operating system includes an API that allows your application to open the Windows Marketplace hub on the phone to show specific types of content, such as applications, music, or podcasts. You can also open the hub with a filtered list of items from one of these categories using a search string, or show just a specific item by specifying its GUID content identifier. Finally, you can open the Reviews screen. You can also include a direct link to a specific product on Windows Marketplace in non-Windows Phone 7 applications (such as websites and desktop applications).