Diagnostics Tools

You can monitor performance and memory using diagnostics tools. Let’s look at a few of them.

Hi-Res-Stats

The Hi-Res-Stats class, from mrdoob, calculates the frame rate, the time to render each frame, the amount of memory used per frame, and the maximum frame rate and memory consumption. Import the library and add a new instance of Stats as a displayObject. This is simple and convenient to use (see https://github.com/bigfish):

[code]

import net.hires.debug.*;
var myStats:Stats = new Stats();
addChild(myStats);

[/code]

Because it needs to be added to the displayList and draws its progress visually, as shown in Figure 19-4, this tool may impact rendering slightly, or get in the way of other graphics. A trick I use is to toggle its visibility when pressing the native search key on my device:

[code]

import flash.ui.Keyboard;
import flash.events.KeyboardEvent;
stage.addEventListener(KeyboardEvent.KEY_DOWN, onKey);
function onKey(e:KeyboardEvent):void {
    switch (e.keyCode) {
        case Keyboard.SEARCH:
            e.preventDefault();
            myStats.visible = !myStats.visible;
            break;
    }
}

[/code]

Figure 19-4. Hi-Res-Stats display

Flash Builder Profiler

The premium version of Flash Builder comes with Flash Builder Profiler, which watches live data and samples your application at small, regular intervals and over time. It is well documented. Figure 19-5 shows the Configure Profiler screen.

Figure 19-5. The Configure Profiler screen in Flash Builder Profiler

When “Enable memory profiling” is selected, the profiler collects memory usage data. This is helpful for detecting memory leaks or the creation of large objects. It also shows how many instances of an object are in use.

When “Watch live memory data” is selected, the profiler displays memory usage data for live objects. When “Generate object allocation stack traces” is selected, every new creation of an object is recorded.

When “Enable performance profiling” is selected, the profiler collects stack trace data at time intervals. You can use this information to determine where your application spends its execution time. It shows how much time is spent on a function or a process.

You can also take memory snapshots and performance profiles on demand and compare them to previous ones. When doing so, the garbage collector is first run implicitly. Garbage collection can also be monitored.
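
If you do not own the premium version, you can still get a rough view of memory behavior with a few lines of code. This is only a sketch built on the documented System.totalMemory property, not a substitute for the profiler:

[code]

import flash.events.Event;
import flash.system.System;
import flash.utils.getTimer;

// Trace memory once per frame so spikes can be correlated
// with application events during testing.
stage.addEventListener(Event.ENTER_FRAME, sampleMemory);

function sampleMemory(event:Event):void {
    // System.totalMemory is expressed in bytes; convert to KB for readability.
    trace(getTimer() + " ms: " + uint(System.totalMemory / 1024) + " KB");
}

[/code]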

Flash Preload Profiler

The Flash Preload Profiler is an open source multipurpose profiler created by Jean-Philippe Auclair. This tool features a simple interface and provides data regarding frame rate history and memory history, both current and maximum.

Other more unusual and helpful features are the overdraw graph, mouse listener graph, internal events graph, displayObject life cycle graph, full sampler recording dump, memory allocation/collection dump, function performance dump, and the ability to run on debug and release SWFs. More information on the Flash Preload Profiler is available at http://jpauclair.net/2010/12/23/complete-flash-profiler-its-getting-serious/.

Grant Skinner’s PerformanceTest

Grant’s PerformanceTest class is a tool for doing unit testing and formal test suites. Some of its core features are the ability to track time and memory usage for functions, and the ability to test rendering time for display objects. The class performs multiple iterations to get minimum, maximum, and deviation values, and it runs tests synchronously or queued asynchronously.

The class returns a MethodTest report as a text document or XML file. It can perform comparisons between different versions of Flash Player and different versions of the same code base. More information on this class is available at http://gskinner.com/blog/archives/2010/02/performancetest.html.

Native Tools

The Android Debug Bridge (ADB) logcat command grabs information from the device and dumps it onto a log screen via USB. A lot of information is provided. Some basic knowledge of the Android framework will help you understand it better.

Components

If you use the Flex framework, try the Tour de Flex application (see Figure 18-4). It is a good starting point for examples using components to develop AIR for Android applications. You can get it from the Android Market.

Figure 18-4. The Tour de Flex application

Flash Builder was initially not suited for mobile development. Components, such as the DataGrid or the Chart, were too complex and too large for the memory footprint.

Some work has been done from the ground up to optimize the framework with Flex Hero. Some components were rewritten to be mobile-optimized. The mobile theme, used when the MobileApplication tag is detected, has larger, touch-friendly controls, including for scroll bars.

The ViewNavigator helps in the development and management of screens and offers transition animations. The TabNavigator is used for subnavigation. The ActionBar is used for global navigation and messaging.

Using ActionScript and bitmaps is recommended over MXML and FXG at runtime.

If you like the convenience of components but prefer pure ActionScript development, Keith Peters has created lightweight and easy-to-use components (see http://www.minimalcomps.com/ and http://www.bit-101.com/blog/?p=2979).

 

Case Study

The Album Application

In the mobile version of the AIR application, the user takes a picture or pulls one from the camera roll. She can save it to a dedicated database, along with an audio caption and geolocation information. The group of saved images is viewable in a scrollable menu. Images can be sent from the device to the desktop via a wireless network.

In the desktop version, the user can see an image at full resolution on a large screen. It can be saved to the desktop, and it can also be edited and uploaded to a photo service.

Please download the two applications from this book’s website at http://oreilly.com/catalog/9781449394820.

Design

The design is simple, using primary colors and crisp type. The project was not developed for flexible layout. It was created at 800×480 resolution with auto-orientation turned off; you can use it as a base from which to experiment developing for other resolutions. The art is provided in a Flash movie to use in Flash Professional or as an .swc file to import into Flash Builder by selecting Properties→ActionScript Build Path→Library Path and clicking on “Add swc.”

Architecture

The source code consists of the Main document class and the model, view, and events packages (see Figure 17-1).

The model package contains the AudioManager, the SQLManager, the GeoService, and the PeerService. The view package contains the NavigationManager and the various views. The events package contains the various custom events.

The SQLManager class is static, so it can be accessed from anywhere in the application without instantiation. The other model classes are passed by reference.
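
As a point of reference, here is a minimal sketch of the static-class pattern; the member names are illustrative, not the actual SQLManager API:

[code]

// Minimal sketch of the static-class pattern; member names are
// illustrative, not the actual SQLManager API.
package model {
    public class SQLManager {
        public static var currentPhoto:Object = {photo:"", audio:"", geo:""};

        public static function saveItem():void {
            // open a connection, INSERT currentPhoto, close it
            // (see the SQLite section later in this chapter)
        }
    }
}

// Usage, from anywhere in the application, without instantiation:
SQLManager.currentPhoto.geo = "Paris, France";
SQLManager.saveItem();

[/code]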

Flow

The flow of the application is straightforward. The user goes through a series of simple tasks, one step at a time. In the opening screen, the OpeningView, the user can select a new picture or go to the group menu of images, as shown in Figure 17-2.

From the AddView page, the user can open the Media Gallery or launch the camera, as shown in Figure 17-3. Both choices go to the same CameraView view. An id parameter is passed to the new view to determine the mode.

The image data received from either source is resized to fit the dimensions of the stage. The user can take another picture or accept the current photograph if satisfied. The image URL is sent to the SQLManager, which stores it in its class variable currentPhoto of type Object, and the application goes to the CaptionView.

In the CaptionView, the user can skip the caption-recording step or launch the AudioManager. The recording is limited to four seconds, and the caption sound is played back automatically. The user can record again or choose to keep the audio. The AudioManager compresses the recording as a WAV file and saves it on the SD card. Its URL is saved in the SQLManager’s currentPhoto object. The next step is to add geographic information.

Figure 17-1. Packages and classes for the Album application
Figure 17-2. The OpeningView

In the GeoView, the user can skip this step or launch the GeoService. This service creates an instance of the Geolocation class, fetches coordinates, and then requests the corresponding city and country from the Yahoo! API. As in the previous steps, the geodata is saved in the SQLManager’s currentPhoto object. These three steps are shown in Figure 17-4.

Figure 17-3. The AddView, native camera application, and Media Gallery
Figure 17-4. The CameraView, CaptionView, and GeoView

In the SavingView, the user can skip the save step or commit the data. In the latter case, the SQLManager opens a SQL connection, saves the data, then closes the connection. The application then returns to the OpeningView.

Back at our starting point, the other navigation choice is the Group menu. The MenuView page requests the number of images saved from the SQLManager and displays them as a list of items. If the list is taller than the screen, it becomes scrollable. Selecting one of the items takes the user to the PhotoView screen. The SavingView and MenuView pages are shown in Figure 17-5.

The PhotoView displays the image selected in the MenuView. Choosing to connect calls the PeerService to set up a P2P connection over the WiFi network. Once it is established, the data is requested from the SQLManager using the item ID, and then sent. It includes the ByteArray of the image, the WAV file for the audio, and the city and country as text. These steps are displayed in Figure 17-6.

Figure 17-5. The SavingView and MenuView
Figure 17-6. The PhotoView and the steps to send the picture information using a P2P connection

Permissions

This application needs the following permissions to access the Internet, write to the SD card, and access GPS sensors, the camera, and the microphone:

[code]

<android>
    <manifestAdditions>
        <![CDATA[
            <manifest>
                <uses-permission
                    android:name="android.permission.INTERNET"/>
                <uses-permission
                    android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
                <uses-permission
                    android:name="android.permission.ACCESS_FINE_LOCATION"/>
                <uses-permission
                    android:name="android.permission.ACCESS_COARSE_LOCATION"/>
                <uses-permission
                    android:name="android.permission.CAMERA"/>
                <uses-permission
                    android:name="android.permission.RECORD_AUDIO"/>
            </manifest>
        ]]>
    </manifestAdditions>
</android>

[/code]

Navigation

The ViewManager class used here is almost identical to the one presented earlier in the book. The flow is a step-by-step process whereby the user can choose to skip the steps that are optional.

Images

The CameraView is used to get an image, either by using the media library or by taking one using the camera. The choice is based on a parameter passed from the previous screen. The process of receiving the bytes, scaling, and displaying the image is the same regardless of the image source. It is done by a utility class called BitmapDataSizing and is based on the dimensions of the screen.

To improve this application, check if an image is already saved when the user selects it again to avoid duplicates.

Audio

The audio caption is a novel way to save a comment along with the image. There is no image service that provides the ability to package an audio commentary, but you could build such an application.

Recordings are saved as WAV files using the Adobe class WAVWriter, and read back using a third-party library. Here, we create an Album directory on the SD card and a mySounds directory inside it to store the WAV files.
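
The recording step can be sketched with the Microphone and SampleDataEvent APIs; the rate, silence level, and the WAVWriter hand-off below are illustrative choices, not the application’s exact settings:

[code]

import flash.events.SampleDataEvent;
import flash.media.Microphone;
import flash.utils.ByteArray;

var mic:Microphone = Microphone.getMicrophone();
mic.rate = 44;            // sample at 44 kHz
mic.setSilenceLevel(0);   // capture everything, even near-silence

var soundBytes:ByteArray = new ByteArray();
mic.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);

function onSampleData(event:SampleDataEvent):void {
    // copy the microphone samples as they arrive
    while (event.data.bytesAvailable) {
        soundBytes.writeFloat(event.data.readFloat());
    }
}

// Once recording stops, soundBytes can be handed to WAVWriter and
// written to Album/mySounds on the SD card with a FileStream.

[/code]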

Reverse Geolocation

Reverse geolocation is the process of using geographical coordinates to obtain an address, such as a city and a street address.

In this application, we are only interested in the city name and country. Therefore, coarse data is sufficient. We do not need to wait for the GPS data to stabilize. As soon as we get a response for the Yahoo! service, we move on to the next step.
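
The service logic might be sketched as follows; the reverse-geocoding URL is a placeholder, not the actual Yahoo! endpoint:

[code]

import flash.events.Event;
import flash.events.GeolocationEvent;
import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.sensors.Geolocation;

var geo:Geolocation;
if (Geolocation.isSupported) {
    geo = new Geolocation();
    geo.addEventListener(GeolocationEvent.UPDATE, onTravel);
}

function onTravel(event:GeolocationEvent):void {
    // one coarse reading is enough; stop listening immediately
    geo.removeEventListener(GeolocationEvent.UPDATE, onTravel);
    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, onCity);
    loader.load(new URLRequest("http://example.com/reverse?lat=" +
        event.latitude + "&long=" + event.longitude));
}

function onCity(event:Event):void {
    trace(URLLoader(event.target).data); // parse city and country here
}

[/code]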

SQLite

SQLManager is a static class, so it can be accessed from anywhere in the application. The Main class holds an object which stores information related to a photo until it is complete and ready to be saved:

[code]

var currentPhoto:Object = {photo:"", audio:"", geo:""};

[/code]

The photo property stores the path to where the image is saved in the Gallery. The audio property stores the path to where the WAV file is located and the geo property stores a string with city and country information.

From the SavingView view, the object is saved in the myAlbum.db file on the SD card.
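
The save step might look like the following sketch; the table and column names are illustrative, and the table is assumed to exist already:

[code]

import flash.data.SQLConnection;
import flash.data.SQLStatement;
import flash.filesystem.File;

var connection:SQLConnection = new SQLConnection();
connection.open(File.documentsDirectory.resolvePath("myAlbum.db"));

var statement:SQLStatement = new SQLStatement();
statement.sqlConnection = connection;
statement.text =
    "INSERT INTO album (photo, audio, geo) VALUES (:photo, :audio, :geo)";
statement.parameters[":photo"] = currentPhoto.photo;
statement.parameters[":audio"] = currentPhoto.audio;
statement.parameters[":geo"] = currentPhoto.geo;
statement.execute();

connection.close();

[/code]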

P2P Connection

The peer-to-peer connection is used to send the image, audio caption, and location over a LAN. This example demonstrates the technology’s potential more than a proper use case, because the transfer is slow unless the information is split into packets and reassembled. The approach is feasible for fairly small amounts of data and has a lot of potential for gaming and social applications.

Once the user has selected an image, she can transfer it to a companion desktop application from the SavingView view. The PeerService class handles the communication to the LAN and the posting of data.
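
The heart of such a PeerService is a serverless RTMFP connection joined to a multicast group. A minimal sketch, with an illustrative group name and multicast address, follows:

[code]

import flash.events.NetStatusEvent;
import flash.net.GroupSpecifier;
import flash.net.NetConnection;
import flash.net.NetGroup;

var connection:NetConnection = new NetConnection();
connection.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
connection.connect("rtmfp:"); // serverless, LAN only

var group:NetGroup;

function onStatus(event:NetStatusEvent):void {
    switch (event.info.code) {
        case "NetConnection.Connect.Success":
            var specifier:GroupSpecifier = new GroupSpecifier("albumGroup");
            specifier.postingEnabled = true;
            specifier.ipMulticastMemberUpdatesEnabled = true;
            specifier.addIPMulticastAddress("225.225.0.1:30303");
            group = new NetGroup(connection,
                specifier.groupspecWithAuthorizations());
            group.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
            break;
        case "NetGroup.Connect.Success":
            // post a small message; larger data should be split into packets
            group.post({city:"Paris, France"});
            break;
    }
}

[/code]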

Scrolling Navigation

The MenuView that displays the images saved in the database has scrolling capability if the content is larger than the height of the screen.

There are two challenges to address. The first is the performance of a potentially long list to scroll. The second is overlapping interactive objects: the scrollable list contains elements that also respond to mouse events, and both behaviors need to work without conflicting.

We only need to scroll the view if the container is taller than the device height, so there is no need to add unnecessary code. Let’s check its dimensions in the onShow method:

[code]

var deviceHeight:Number;
var container:Sprite;

function onShow():void {
    deviceHeight = stage.stageHeight;
    container = new Sprite();
    addChild(container);
    // populate container
    if (container.height > deviceHeight) {
        trace("we need to add scrolling functionality");
    }
}

[/code]

If the container sprite is taller than the screen, let’s add the functionality to scroll. Touch events do not perform as well as mouse events. Because we only need one touch point, we will use a mouse event to detect the user interaction. Note that we set cacheAsBitmap to true on the container to improve rendering:

[code]

function onShow():void {
    if (container.height > deviceHeight) {
        container.cacheAsBitmap = true;
        stage.addEventListener(MouseEvent.MOUSE_DOWN, touchBegin, false, 0, true);
        stage.addEventListener(MouseEvent.MOUSE_UP, touchEnd, false, 0, true);
    }
}

[/code]

To determine if the mode is to scroll or to click an element that is part of the container, we start a timeout. We will see later why we need this timer in relation to the elements:

[code]

import flash.utils.setTimeout;
import flash.utils.clearTimeout;

var oldY:Number = 0.0;
var newY:Number = 0.0;
var timeout:int;

function touchBegin(event:MouseEvent):void {
    oldY = event.stageY;
    newY = event.stageY;
    timeout = setTimeout(startMove, 400);
}

[/code]

When the time expires, we set the mode to scrollable by calling the startMove method. We want to capture the position change on MOUSE_MOVE but only need to render the change to the screen on ENTER_FRAME. This guarantees a smoother and more consistent motion. updateAfterEvent should never be used in mobile development because it is too demanding for devices:

[code]

function startMove():void {
    stage.addEventListener(MouseEvent.MOUSE_MOVE, touchMove, false, 0, true);
    stage.addEventListener(Event.ENTER_FRAME, frameEvent, false, 0, true);
}

[/code]

When the finger moves, we update the value of the newY coordinate:

[code]

function touchMove(event:MouseEvent):void {
newY = event.stageY;
}

[/code]

On the enterFrame event, we render the screen using the new position: the container is moved by the distance traveled since the last frame. To improve performance, we show and hide the elements that are not in view using predefined bounds:

[code]

var totalChildren:int = container.numChildren;
var topBounds:int = -30;

function frameEvent(event:Event):void {
    if (newY != oldY) {
        var newPos:Number = newY - oldY;
        oldY = newY;
        container.y += newPos;
        for (var i:int = 0; i < totalChildren; i++) {
            var mc:MovieClip = container.getChildAt(i) as MovieClip;
            var pos:Number = container.y + mc.y;
            mc.visible = (pos > topBounds && pos < deviceHeight);
        }
    }
}

[/code]

On touchEnd, the listeners are removed:

[code]

function touchEnd(event:MouseEvent):void {
    stage.removeEventListener(MouseEvent.MOUSE_MOVE, touchMove);
    stage.removeEventListener(Event.ENTER_FRAME, frameEvent);
}

[/code]

As mentioned before, elements in the container have their own mouse event listeners:

[code]

element.addEventListener(MouseEvent.MOUSE_DOWN, timeMe, false, 0, true);
element.addEventListener(MouseEvent.MOUSE_UP, clickAway, false, 0, true);

[/code]

On mouse down, the Boolean variable isMoving is set to false and a visual cue indicates that the element was selected:

[code]

var isMoving:Boolean = false;
var selected:MovieClip;

function timeMe(event:MouseEvent):void {
    isMoving = false;
    selected = event.currentTarget as MovieClip;
    selected.what.textColor = 0x336699;
}

[/code]

On mouse up and within the time allowed, the stage listeners and the timeout are removed. If the boolean isMoving is still set to false and the target is the selected item, the application navigates to the next view:

[code]

function clickAway(event:MouseEvent):void {
    touchEnd(event);
    clearTimeout(timeout);
    if (selected == event.currentTarget && isMoving == false) {
        dispatchEvent(new ClickEvent(ClickEvent.NAV_EVENT,
            {view:"speaker", id:selected.id}));
    }
}

[/code]

Now let’s add to the frameEvent code to handle deactivating the element when scrolling. To confirm that an element was pressed, check that the selected variable holds a value and that the motion is more than two pixels; the two-pixel threshold accounts for screens that are very responsive. If both conditions are met, change the Boolean value, reset the look of the element, and set the selected variable to null:

[code]

function frameEvent(event:Event):void {
    if (newY != oldY) {
        var newPos:Number = newY - oldY;
        oldY = newY;
        container.y += newPos;
        for (var i:int = 0; i < totalChildren; i++) {
            var mc:MovieClip = container.getChildAt(i) as MovieClip;
            var pos:Number = container.y + mc.y;
            mc.visible = (pos > topBounds && pos < deviceHeight);
        }
        if (selected != null && Math.abs(newPos) > 2) {
            isMoving = true;
            selected.what.textColor = 0x000000;
            selected = null;
        }
    }
}

[/code]

There are various approaches to handle scrolling. For a large number of elements, the optimal way is to only create as many element containers as are visible on the screen and populate their content on the fly. Instead of moving a large list, move the containers as in a carousel animation and update their content by pulling the data from a Vector or other form of data content.
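
Here is a sketch of that recycling idea; the row height, pool, and the “what” text field are illustrative:

[code]

// Sketch of the recycling approach: a fixed pool of rows is reused and
// re-populated from a data Vector as the list scrolls.
var data:Vector.<String> = new Vector.<String>();
var pool:Vector.<MovieClip> = new Vector.<MovieClip>();
var rowHeight:int = 80;
var visibleRows:int = Math.ceil(deviceHeight / rowHeight) + 1;

function renderAt(scrollOffset:Number):void {
    var firstIndex:int = Math.floor(scrollOffset / rowHeight);
    for (var i:int = 0; i < visibleRows; i++) {
        var index:int = firstIndex + i;
        var row:MovieClip = pool[i];
        row.y = index * rowHeight - scrollOffset;
        row.visible = (index >= 0 && index < data.length);
        if (row.visible) {
            row.what.text = data[index]; // reuse the row, swap its content
        }
    }
}

[/code]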

If you are using Flash Builder and components, look at the Adobe Lighthouse package (http://www.adobe.com/devnet/devices/fpmobile.html). It contains DraggableVerticalContainer for display objects and DraggableVerticalList for items.

Desktop Functionality

The AIR desktop application, as shown in Figure 17-7, is set to receive the data and display it. Seeing a high resolution on a large screen demonstrates how good the camera quality of some devices can be. The image can be saved on the desktop as a JPEG.

Figure 17-7. AIR desktop companion application to display images received from the device

Another technology, not demonstrated here, is Pixel Bender, used for image manipulation. It is not available for AIR for Android, but it is for AIR on the desktop. So this would be another good use case where devices and the desktop can complement one another.

Audio Assets

As with visual assets, there are different methods for using audio assets in your application. We will go over the available options next.

Embedding Files

You can embed sounds in your application by adding them to your Flash library or your Flash Builder project. Embedded files should be small, like the ones used for sound effects or user interface audio feedback.

Your application will not display until all of its assets are loaded. Test it. If it sits on a black screen for too long, you may want to group the sounds in an external .swf file that you load as a separate process.
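
Loading such a sounds-only .swf might look like this sketch; the file and class names are placeholders:

import flash.display.Loader;
import flash.events.Event;
import flash.media.Sound;
import flash.net.URLRequest;

var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onSoundsLoaded);
loader.load(new URLRequest("sounds.swf"));

function onSoundsLoaded(event:Event):void {
    // look up an exported sound class by name and play it
    var SoundClass:Class = Class(loader.contentLoaderInfo
        .applicationDomain.getDefinition("MySound"));
    var sound:Sound = new SoundClass() as Sound;
    sound.play();
}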

Using Flash Professional

Unless you place audio on the timeline, you need to give it a linkage name. Go to Library→Properties→Linkage and click on Export for ActionScript. Change the name so that it doesn’t include an extension, and add a capital letter to conform to class naming conventions. For instance, “mySound.mp3” should be “MySound”. Note that the Base class becomes flash.media.Sound:

var mySound:MySound = new MySound();
mySound.play();

Using Flash Builder

Place your audio file in your project folder. Embed it and assign it to a class so that you can create an instance of it:

import flash.media.Sound;

[Embed(source="mySound.mp3")]
public var Simple:Class;

var mySound:Sound = new Simple() as Sound;
mySound.play();

Using External Files

Using external files is best for long sounds or if you want the flexibility to replace the files without recompiling your application.

import flash.media.Sound;
import flash.net.URLRequest;

var urlRequest:URLRequest = new URLRequest("mySound.mp3");
var sound:Sound = new Sound();
sound.load(urlRequest);
sound.play();

This example works for a small file, which loads quickly. We will cover how to handle larger files in the section “Loading Sounds.”
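
For a larger file, a sketch of waiting for the load to complete (and catching load errors) looks like this; the filename is a placeholder:

import flash.events.Event;
import flash.events.IOErrorEvent;
import flash.media.Sound;
import flash.net.URLRequest;

var sound:Sound = new Sound();
sound.addEventListener(IOErrorEvent.IO_ERROR, onError);
sound.addEventListener(Event.COMPLETE, onLoaded);
sound.load(new URLRequest("myLongSound.mp3"));

function onLoaded(event:Event):void {
    Sound(event.target).play();
}

function onError(event:IOErrorEvent):void {
    trace("sound failed to load: " + event.text);
}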

Settings and the Audio Codec

The Flash Authoring tool offers the option to modify audio files directly in the library. You can change compression, convert from stereo to mono, and choose a sample rate without requiring an external audio tool. Settings are chosen globally in the Publish Settings panel and can be overridden for individual files in the library.

If you own Soundbooth, or another external audio application, you can launch it for an individual sound from within the development tools and make changes, which will be applied to the sound in your project. You can, for instance, change the track from stereo to mono or adjust the volume.

In Flash Professional, select the track in the library, click the top pull-down menu, and select “Edit with” to launch the audio editing application. In Flash Builder, single-click the asset, right-click, and select “Open with” to launch the sound application.

The most professional approach, of course, is to work in an audio application directly, as you have more control over your sound design: all files can be opened together and you can set uniform settings such as volume. Prepare your audio carefully beforehand to remove any unnecessary bytes. For background music, use a small file that loops rather than a long track.

Compression

Supported compressed formats are MP3 (MPEG-1 Audio Layer 3), AAC (Advanced Audio Coding), WAV (Waveform Audio File Format), and AIFF (Audio Interchange File Format).

MP3 can be imported dynamically using the Sound object. MP3 adds a problematic small silence at the beginning of the track. MP3 encodes incoming audio data in blocks. If the data does not fill up a complete block, the encoder adds padding at the beginning and the end of the track. Read André Michelle’s blog on the issue, and a potential solution, at http://blog.andre-michelle.com/2010/playback-mp3-loop-gapless/.

AAC audio can also be loaded dynamically using the NetStream class. AAC is considered the successor of the MP3 format. It is particularly interesting to us because it is hardware-decoded in AIR for Android:

import flash.net.NetConnection;
import flash.net.NetStream;

var connection:NetConnection = new NetConnection();
connection.connect(null);

var stream:NetStream = new NetStream(connection);
var client:Object = new Object();
client.onMetaData = onMetaData;
stream.client = client;
stream.play("someAudio.m4a");

function onMetaData(info:Object):void {
    // required so the metadata callback does not throw a reference error
}

To control or manipulate an AAC file, refer to the section “Playing Sounds.” Here is some sample code:

import flash.media.SoundTransform;

var mySound:SoundTransform;
stream.play("someAudio.m4a");
mySound = stream.soundTransform;
// change volume
mySound.volume = 0.75;
stream.soundTransform = mySound;

You can embed WAV or AIFF files in your project or library, or you can use one of the third-party tools mentioned earlier.

Supported uncompressed settings are Adaptive Differential Pulse Code Modulation (ADPCM) and Raw, which uses no compression at all. Uncompressed formats must be embedded.

Bit rate

The bit rate represents the amount of data encoded for one second of a sound file. The higher the bit rate, the better the audio fidelity, but the bigger the file size. For mobile applications, consider reducing the bit rate you would normally choose for desktop applications. As a point of reference, one second of audio at 16 kbps occupies about 2 KB, so a 30-second loop is roughly 60 KB; the same loop at 128 kbps is closer to 480 KB.

Bit rate is represented in kilobits per second (kbps), and ranges from 8 to 160 kbps. The default audio publish setting in Flash Professional is 16 kbps Mono.

Sampling rate

The sampling rate is the number of samples taken from an analog audio signal to make a digital signal; 44.1 kHz represents 44,100 samples per second. The most common rates are 11.025, 22.05, and 44.1 kHz; 44.1 kHz/16-bit is referred to as CD quality and is the sampling rate Flash Player always assumes is used.

Stereo or mono

The external speaker on Android devices is monophonic. The headphones are usually stereo, although the output may not be true stereo.

Publish to Android Installer

Now that you have created your new application, it is time to publish it to an Android installer file, which is an archive file with an .apk extension. Flash Builder provides all of the tools to accomplish this task.

To demonstrate how to compile an application to an Android installer, let’s walk through this process with the following steps:

  1. First, click on File→Export within Flash Builder’s main menu (see Figure 7-1).
  2. Next, select Flash Builder→Release Build (see Figure 7-2).
  3. Within the Export Release Build window, select the Project and Application that you would like to compile (see Figure 7-3).
  4. If you already have a certificate compiled, select that certificate, enter its password, and click the Finish button to compile the Android installer file (.apk). If you do not yet have a certificate, click the Create button (see Figure 7-4).

    To create a new certificate, complete the Create Self-Signed Digital Certificate form and click on the OK button (see Figure 7-5).

  5. To compile the Android installer file (.apk), click on the Finish button (see Figure 7-6).

Congratulations: you have just compiled your first Android application. To publish your new application to the Android Market, just visit https://market.android.com/publish.

Figure 7-1. Selecting File→Export

Figure 7-2. Selecting Flash Builder→Release Build

Figure 7-3. The Export Release Build screen

Figure 7-4. Selecting or creating a certificate

Figure 7-5. Creating a new certificate

Figure 7-6. Completing the export

The GestureWorks Library

Ideum, a design company specializing in museum work, developed and sells a library for detecting multitouch and gestures for Flash Professional CS5 and Flash Builder 4.0. Called GestureWorks (see http://gestureworks.com), the library supports all the gestures we have discussed thus far.

GestureWorks provides unique gestures such as flip/flick, which calculates acceleration, 3D tilt, and a multitouch gesture scroll. It also supports continuous transitional and concurrent gesturing, which means you can use multiple gestures, such as move, rotate, and scale, simultaneously.

GestureWorks also includes a simulator for testing touch-based interactions within your application if you do not have ready access to touch-based screens. This should facilitate a quicker and smoother development process.

Lastly, GestureWorks comes with many examples, including a Google Maps example which demonstrates the expected gestures when manipulating a map. This will give you a head start if you are interested in applications using geocoding.

 

Internal or External Storage?

Let’s consider where to save data first. Data can be saved internally or externally.

Internally, File.applicationDirectory is the directory where your application and its assets are installed. AIR made this directory read-only because it is not writable on all systems. Instead, you can write to File.applicationStorageDirectory, the storage directory allocated to your application. Use this location to save fairly small amounts of data, such as preferences and user settings; as a rule of thumb, the data you store here should stay small relative to the size of the application itself.

If your application is removed, the data saved in storage is deleted along with it.

Users can erase the data by selecting Settings→Applications→Manage Applications→Application→Clear Data. They will be alerted to the consequences with a warning that reads, “All of this application’s data will be deleted permanently. This includes all files, settings, accounts, databases and so on.” Android provides the option to set allowClearUserData to false to prevent users from clearing data. At the time of this writing, this feature is not available in the Flash Professional and Flash Builder permissions panel.

Data can also be saved internally in the memory cache allocated to your application. To use this approach, create a temporary file or folder and save data in it. This is a good place to save noncritical information such as downloaded data files that may not have a lasting value, or temporary saved files. If your application is removed, the data saved in the cache is also deleted.
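
A sketch of caching noncritical data in a temporary file follows; the payload here is a placeholder:

import flash.filesystem.File;
import flash.filesystem.FileMode;
import flash.filesystem.FileStream;

// write a downloaded payload to the application cache
var temp:File = File.createTempFile();
var stream:FileStream = new FileStream();
stream.open(temp, FileMode.WRITE);
stream.writeUTFBytes("noncritical cached data");
stream.close();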

Users can erase the cache under Settings→Applications→Manage Applications→Application→Clear Cache. Android does not provide the option to prevent clearing the cache.

Externally, data can be saved on the device’s SD card under the File.documentsDirectory directory, also referred to as File.userDirectory or File.desktopDirectory. Use this approach for relatively large amounts of data, such as images, video, or temporary files. Create a directory with your application name to keep its data distinct from that of other applications.

Writing to the card requires a permission, which needs to be added to the descriptor file. If you don’t have this permission, AIR will throw a runtime error:

<uses-permission android:name=
    "android.permission.WRITE_EXTERNAL_STORAGE" />

Before installing any data, make sure the user’s phone has an SD card:

if (File.userDirectory) {
    // proceed with saving data
}

You can use this approach as a way for one application to write data and another application to access that data.

If your application is deleted, the data is not deleted automatically. However, the data is visible, and can therefore be removed by the user even if your application is not. If the user removes the SD card, the data becomes unavailable.

A word of warning: during development, if you are using Flash Professional to install your application on the device, every uninstall/reinstall deletes previously saved data. In Flash Builder, you can prevent this behavior by unchecking the “Clear application data on each launch” option when you first create your project. If the user installs an update of your application, previously saved data is preserved.

It is better to use filenames than full paths to guarantee consistency across devices and platforms, and to use the resolvePath method to build the path, as shown below.
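
For example, using names already seen in this book (the preferences filename is illustrative):

import flash.filesystem.File;

// the same filename resolves to the correct location on any device
var db:File = File.documentsDirectory.resolvePath("myAlbum.db");
var prefs:File = File.applicationStorageDirectory.resolvePath("preferences.xml");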

Here is the list of directories and their equivalent paths:

File.applicationDirectory (app:/)
/data/data/app.appId/app/assets

File.applicationStorageDirectory (app-storage:/)
/data/data/app.appID/appID/Local Store

File.documentsDirectory, File.userDirectory, File.desktopDirectory
/sdcard

File.createTempDirectory(), File.createTempFile()
/data/data/app.appId/cache

There are several ways to save persistent application data on your device. The amount of space your data requires, and the complexity of the data, will determine which approach to take. We will discuss these options next.

 

Permissions

The AIR 2.6 release includes the permission options outlined below, which can be selected within the new Flex Mobile project interface of Flash Builder 4.5. This is shown in Figure 3-1. Figure 3-2 shows the warning the user will see when installing an application with permission requests. The permissions are:

INTERNET
Allows applications to open sockets and embed HTML content.
WRITE_EXTERNAL_STORAGE
Allows an application to write to external storage.
READ_PHONE_STATE
Allows the AIR runtime to mute audio from the application in the case of an incoming call.
ACCESS_FINE_LOCATION
Allows an application to access GPS location.
DISABLE_KEYGUARD, WAKE_LOCK
Allows applications to control the screen-dimming provision (AIR’s SystemIdleMode APIs).
CAMERA
Allows applications to access the device camera.
RECORD_AUDIO
Allows applications to access the device microphone.
ACCESS_NETWORK_STATE, ACCESS_WIFI_STATE
Allows applications to access information about network interfaces associated with the device.

Figure 3-1. Permission selections

Figure 3-2. Installer permission warnings

These permissions are also editable within the application’s XML configuration file.

Here is a sample of what that looks like:

<android>
    <manifestAdditions><![CDATA[
        <manifest installLocation="auto">
            <!--See the Adobe AIR documentation for more information about
                setting Google Android permissions-->
            <!--Removing the permission android.permission.INTERNET will have
                the side effect of preventing you from debugging your
                application on your device-->
            <uses-permission name="android.permission.INTERNET"/>
            <!--<uses-permission name="android.permission.WRITE_EXTERNAL_STORAGE"/>-->
            <!--<uses-permission name="android.permission.READ_PHONE_STATE"/>-->
            <!--<uses-permission name="android.permission.ACCESS_FINE_LOCATION"/>-->
            <!--The DISABLE_KEYGUARD and WAKE_LOCK permissions should be toggled
                together in order to access AIR's SystemIdleMode APIs-->
            <!--<uses-permission name="android.permission.DISABLE_KEYGUARD"/>-->
            <!--<uses-permission name="android.permission.WAKE_LOCK"/>-->
            <!--<uses-permission name="android.permission.CAMERA"/>-->
            <!--<uses-permission name="android.permission.RECORD_AUDIO"/>-->
            <!--The ACCESS_NETWORK_STATE and ACCESS_WIFI_STATE permissions should
                be toggled together in order to use AIR's NetworkInfo APIs-->
            <!--<uses-permission name="android.permission.ACCESS_NETWORK_STATE"/>-->
            <!--<uses-permission name="android.permission.ACCESS_WIFI_STATE"/>-->
        </manifest>
    ]]></manifestAdditions>
</android>

Tabbed Application

The final option for application type is the Tabbed Application. Selecting Tabbed Application when creating a new Flex Mobile project will prompt Flash Builder to provide some additional functionality. As you can see in Figure 2-8, choosing Tabbed Application allows you to define your tabs right within the New Flex Mobile Project interface. In this example, I have added “My Application” and “My Preferences” tabs. After clicking Finish, Flash Builder will create my new Tabbed Application, as well as views for the tabs I defined. The code example below shows the contents of my main application file, named Tabbed.mxml. It is important to note that each of the views I defined (My Application and My Preferences) is included as a ViewNavigator object. This means that they will have their own navigator objects and can include their own independent navigation, just as within the View-Based Application we previously discussed. Figure 2-9 shows the running Tabbed Application. Figure 2-10 shows the View-Based Application views we previously created, within the My Application tab of the Tabbed Application:

<?xml version="1.0" encoding="utf-8"?>
<s:TabbedViewNavigatorApplication xmlns:fx="http://ns.adobe.com/mxml/2009"
    xmlns:s="library://ns.adobe.com/flex/spark">
    <s:ViewNavigator label="My Application" width="100%" height="100%"
        firstView="views._MyApplicationView"/>
    <s:ViewNavigator label="My Preferences" width="100%" height="100%"
        firstView="views._MyPreferencesView"/>
    <fx:Declarations>
        <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
</s:TabbedViewNavigatorApplication>

Figure 2-8. Creating a new Tabbed Application

Figure 2-9. A Tabbed Application
Figure 2-10. A Tabbed Application with navigators

 

View-Based Application

The View-Based Application adds the concept of a navigator, which is a built-in navigation framework specifically built for use within mobile applications. The navigator will manage the screens within your application. Creating a new View-Based Application within Flash Builder 4.5 will result in the generation of two files. These files are the main application file, as well as the default view that will be shown within your application. Unlike the Blank Application, where the main application file was created with the <s:Application> as the parent, a View-Based Application uses the new <s:ViewNavigatorApplication> as its parent, as shown below:

<?xml version="1.0" encoding="utf-8"?>
<s:ViewNavigatorApplication xmlns:fx="http://ns.adobe.com/mxml/2009"
    xmlns:s="library://ns.adobe.com/flex/spark"
    firstView="views.ViewBasedHomeView">
    <fx:Declarations>
        <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
</s:ViewNavigatorApplication>

The second file that is created is the default view, which is automatically placed in a package named views. In this case, it was named ViewBasedHomeView, and was automatically set as the firstView property of ViewNavigatorApplication. The autogenerated code for this file is shown below:

<?xml version="1.0" encoding="utf-8"?>
<s:View xmlns:fx="http://ns.adobe.com/mxml/2009"
    xmlns:s="library://ns.adobe.com/flex/spark" title="HomeView">
    <fx:Declarations>
        <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
</s:View>

Figure 2-3 shows the View-Based Application after adding a Label to ViewBasedHomeView. As you can see, the navigation framework automatically provides a header and places the title of the current view in that header.

Figure 2-3. A View-Based Application

Now let’s explore the navigator a bit. I have created a second view for my application named SecondView. I updated ViewBasedHomeView to have a Button, and also added a Button to the SecondView shown below. As you can see, each view contains a Button with a similar clickHandler. The clickHandler simply calls the pushView function on the navigator and passes in the view that you wish to have the user navigate to. HomeView will navigate to SecondView, and SecondView will navigate back to HomeView.

Between each view, a transition is automatically played and the title of the view is reflected in the navigation bar. This can be seen in Figures 2-4 and 2-5:

<?xml version="1.0" encoding="utf-8"?>
<s:View xmlns:fx="http://ns.adobe.com/mxml/2009"
    xmlns:s="library://ns.adobe.com/flex/spark" title="HomeView">
    <fx:Script>
        <![CDATA[
            protected function button1_clickHandler(event:MouseEvent):void
            {
                navigator.pushView(views.SecondView);
            }
        ]]>
    </fx:Script>
    <fx:Declarations>
        <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <s:Button label="Go To Second View"
        horizontalCenter="0" verticalCenter="0"
        click="button1_clickHandler(event)"/>
</s:View>

<?xml version="1.0" encoding="utf-8"?>
<s:View xmlns:fx="http://ns.adobe.com/mxml/2009"
    xmlns:s="library://ns.adobe.com/flex/spark" title="SecondView">
    <fx:Script>
        <![CDATA[
            protected function button1_clickHandler(event:MouseEvent):void
            {
                navigator.pushView(views.ViewBasedHomeView);
            }
        ]]>
    </fx:Script>
    <fx:Declarations>
        <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <s:Button label="Go To Home View"
        horizontalCenter="0" verticalCenter="0"
        click="button1_clickHandler(event)"/>
</s:View>

Figure 2-4. The HomeView screen

Figure 2-5. The SecondView screen

The navigator has additional methods for moving between views within your application. They are as follows:

navigator.popAll()

Removes all of the views from the navigator stack. This method changes the display to a blank screen.

navigator.popToFirstView()

Removes all views except the bottom view from the navigation stack. The bottom view is the one that was first pushed onto the stack.

navigator.popView()

Pops the current view off the navigation stack. The current view is represented by the top view on the stack. The previous view on the stack becomes the current view.

navigator.pushView()

Pushes a new view onto the top of the navigation stack. The view pushed onto the stack becomes the current view.

Each of the methods described above allows a transition to be passed in. By default, they will use a Wipe transition. All pop actions will wipe from left to right, while a push action will wipe from right to left.
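
For example, a different transition can be supplied as the fourth argument of pushView(viewClass, data, context, transition); this sketch assumes the Spark SlideViewTransition class:

import spark.transitions.SlideViewTransition;
import spark.transitions.ViewTransitionDirection;

var transition:SlideViewTransition = new SlideViewTransition();
transition.direction = ViewTransitionDirection.UP;
navigator.pushView(views.SecondView, null, null, transition);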

Another important item to note on navigator.pushView() is the ability to pass an object into the method call. I have updated the sample below to demonstrate how to use this within your applications.

The ViewBasedHomeView shown below now includes a piece of String data (“Hello from Home View”) within the pushView() method. SecondView has also been updated to include a new Label, which is bound to the data object. This data object is what will hold the value of the object passed in through the pushView() method. Figure 2-6 shows how SecondView is created with the Label showing our new message:

<?xml version="1.0" encoding="utf-8"?>
<s:View xmlns:fx="http://ns.adobe.com/mxml/2009"
    xmlns:s="library://ns.adobe.com/flex/spark" title="HomeView">
    <fx:Script>
        <![CDATA[
            protected function button1_clickHandler(event:MouseEvent):void
            {
                navigator.pushView(views.SecondView, "Hello from Home View");
            }
        ]]>
    </fx:Script>
    <fx:Declarations>
        <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <s:Button label="Go To Second View"
        horizontalCenter="0" verticalCenter="0"
        click="button1_clickHandler(event)"/>
</s:View>

<?xml version="1.0" encoding="utf-8"?>
<s:View xmlns:fx="http://ns.adobe.com/mxml/2009"
    xmlns:s="library://ns.adobe.com/flex/spark" title="SecondView">
    <fx:Script>
        <![CDATA[
            protected function button1_clickHandler(event:MouseEvent):void
            {
                navigator.pushView(views.ViewBasedHomeView);
            }
        ]]>
    </fx:Script>
    <fx:Declarations>
        <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <s:Label text="{data}" horizontalCenter="0" top="30"/>
    <s:Button label="Go To Home View"
        horizontalCenter="0" verticalCenter="0"
        click="button1_clickHandler(event)"/>
</s:View>

Figure 2-6. pushView() with data passed through

The navigation bar at the top of a View-Based Application allows you to set specific elements. These are navigationContent and actionContent. By setting these elements, your application can include a common navigation throughout. Here is an example of the View-Based Application’s main file updated with these new elements. You will notice that navigationContent, actionContent, and the Spark components are defined in MXML. Within each, I have included a Button. Each Button has a clickHandler that includes a call to one of the navigator methods. The Button labeled “Home” has a clickHandler that includes a call to the popToFirstView() method, which will always send the user back to the view defined in the firstView property of the ViewNavigatorApplication. The Button labeled “Back” has a clickHandler that includes a call to the popView() method, which will always send the user to the previous view in the stack.

Figure 2-7 shows the application, which now includes the new navigation elements within the navigation bar:

<?xml version="1.0" encoding="utf-8"?>
<s:ViewNavigatorApplication xmlns:fx="http://ns.adobe.com/mxml/2009"
    xmlns:s="library://ns.adobe.com/flex/spark"
    firstView="views.ViewBasedHomeView">
    <fx:Script>
        <![CDATA[
            protected function homeButton_clickHandler(event:MouseEvent):void
            {
                navigator.popToFirstView();
            }
            protected function backButton_clickHandler(event:MouseEvent):void
            {
                navigator.popView();
            }
        ]]>
    </fx:Script>
    <fx:Declarations>
        <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <s:navigationContent>
        <s:Button id="homeButton" click="homeButton_clickHandler(event)"
            label="Home"/>
    </s:navigationContent>
    <s:actionContent>
        <s:Button id="backButton" click="backButton_clickHandler(event)"
            label="Back"/>
    </s:actionContent>
</s:ViewNavigatorApplication>

 

Figure 2-7. navigationContent and actionContent