Case Study

The Album Application

In the mobile version of the AIR application, the user takes a picture or pulls one from the camera roll. She can save it to a dedicated database, along with an audio caption and geolocation information. The group of saved images is viewable in a scrollable menu. Images can be sent from the device to the desktop via a wireless network.

In the desktop version, the user can see an image at full resolution on a large screen. It can be saved to the desktop, and it can also be edited and uploaded to a photo service.

Please download the two applications from this book’s website at http://oreilly.com/catalog/9781449394820.

Design

The design is simple, using primary colors and crisp type. The project was not developed for flexible layout. It was created at 800×480 resolution with auto-orientation turned off; you can use it as a base from which to experiment developing for other resolutions. The art is provided in a Flash movie to use in Flash Professional or as an .swc file to import into Flash Builder by selecting Properties→ActionScript Build Path→Library Path and clicking on “Add swc.”

Architecture

The source code consists of the Main document class and the model, view, and events packages (see Figure 17-1).

The model package contains the AudioManager, the SQLManager, the GeoService, and the PeerService. The view package contains the NavigationManager and the various views. The events package contains the various custom events.

The SQLManager class is static, so it can be accessed from anywhere in the application without instantiation. The other model classes are passed by reference.

Flow

The flow of the application is straightforward. The user goes through a series of simple tasks, one step at a time. In the opening screen, the OpeningView, the user can select a new picture or go to the group menu of images, as shown in Figure 17-2.

From the AddView page, the user can open the Media Gallery or launch the camera, as shown in Figure 17-3. Both choices go to the same CameraView view. An id parameter is passed to the new view to determine the mode.
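
A minimal sketch of how that mode parameter might drive the choice between the native camera and the Media Gallery follows; the parameter values and function names are assumptions, not the project’s actual code:

[code]

import flash.events.MediaEvent;
import flash.media.CameraRoll;
import flash.media.CameraUI;
import flash.media.MediaType;

// mode is the id parameter passed by AddView: "camera" or "gallery" (assumed values)
function initMedia(mode:String):void {
    if (mode == "camera" && CameraUI.isSupported) {
        var cameraUI:CameraUI = new CameraUI();
        cameraUI.addEventListener(MediaEvent.COMPLETE, onMediaSelected);
        cameraUI.launch(MediaType.IMAGE);
    } else if (CameraRoll.supportsBrowseForImage) {
        var cameraRoll:CameraRoll = new CameraRoll();
        cameraRoll.addEventListener(MediaEvent.SELECT, onMediaSelected);
        cameraRoll.browseForImage();
    }
}

function onMediaSelected(event:MediaEvent):void {
    // event.data is a MediaPromise pointing to the captured or selected image
    trace(event.data.file.url);
}

[/code]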

The image data received from either source is resized to fit the dimensions of the stage. The user can take another picture or accept the current photograph if satisfied. The image URL is sent to the SQLManager to store in its class variable currentPhoto of type Object, and the application goes to the CaptionView.

In the CaptionView, the user can skip the caption-recording step or launch the AudioManager. The recording is limited to four seconds and the caption automatically plays back. The user can record again or choose to keep the audio. The AudioManager encodes the recording as a WAV file and saves it on the SD card. Its URL is saved in the SQLManager’s currentPhoto object. The next step is to add geographic information.
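
Before moving on, here is what the core of the AudioManager recording step might look like; the function and variable names are assumptions, and the four-second limit is enforced with a Timer:

[code]

import flash.events.SampleDataEvent;
import flash.events.TimerEvent;
import flash.media.Microphone;
import flash.utils.ByteArray;
import flash.utils.Timer;

var microphone:Microphone = Microphone.getMicrophone();
var bytes:ByteArray = new ByteArray();
var recordTimer:Timer = new Timer(4000, 1); // stop after four seconds

function startRecording():void {
    bytes.length = 0;
    microphone.rate = 44;
    microphone.setSilenceLevel(0);
    microphone.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
    recordTimer.addEventListener(TimerEvent.TIMER_COMPLETE, stopRecording);
    recordTimer.start();
}

function onSampleData(event:SampleDataEvent):void {
    // store the samples as they come in from the microphone
    while (event.data.bytesAvailable) {
        bytes.writeFloat(event.data.readFloat());
    }
}

function stopRecording(event:TimerEvent):void {
    microphone.removeEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
    // bytes now holds the raw samples, ready for playback or WAV encoding
}

[/code]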

Figure 17-1. Packages and classes for the Album application
Figure 17-2. The OpeningView

In the GeoView, the user can skip or launch the GeoService. This service creates an instance of the Geolocation class, fetches coordinates, and then requests the corresponding city and country from the Yahoo! API. As in the previous steps, the geodata is saved in the SQLManager’s currentPhoto object. These three steps are shown in Figure 17-4.

Figure 17-3. The AddView, native camera application, and Media Gallery
Figure 17-4. The CameraView, CaptionView, and GeoView

In the SavingView, the user can either skip the saving step or save the data. For the latter, the SQLManager opens an SQL connection, saves the data, and then closes the connection. The application then goes back to the OpeningView.

Back at our starting point, another navigation choice is the Group menu. The MenuView page requests the number of images saved from the SQLManager and displays them as a list of items. If the list is taller than the screen, it becomes scrollable. Selecting one of the items takes the user to the PhotoView screen. The SavingView page and MenuView page are shown in Figure 17-5.

The PhotoView displays the image selected in the MenuView. Choosing to connect calls the PeerService to set up a P2P connection using the WiFi network. Once it is established, the data is requested from the SQLManager using the item ID. The data is then sent. It includes the byteArray from the image, a WAV file for the audio, and the city and country as text. These steps are displayed in Figure 17-6.

Figure 17-5. The SavingView and MenuView
Figure 17-6. The PhotoView and the steps to send the picture information using a P2P connection

Permissions

This application needs the following permissions to access the Internet, write to the SD card, and access GPS sensors, the camera, and the microphone:

[code]

<android>
    <manifestAdditions>
        <![CDATA[
        <manifest>
            <uses-permission
                android:name="android.permission.INTERNET"/>
            <uses-permission
                android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
            <uses-permission
                android:name="android.permission.ACCESS_FINE_LOCATION"/>
            <uses-permission
                android:name="android.permission.ACCESS_COARSE_LOCATION"/>
            <uses-permission
                android:name="android.permission.CAMERA"/>
            <uses-permission
                android:name="android.permission.RECORD_AUDIO"/>
        </manifest>
        ]]>
    </manifestAdditions>
</android>

[/code]

Navigation

The ViewManager class is almost identical to the one discussed earlier. The flow is a step-by-step process whereby the user can choose to skip the steps that are optional.

Images

The CameraView is used to get an image, either by using the media library or by taking one using the camera. The choice is based on a parameter passed from the previous screen. The process of receiving the bytes, scaling, and displaying the image is the same regardless of the image source. It is done by a utility class called BitmapDataSizing and is based on the dimensions of the screen.
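
The BitmapDataSizing class is not reproduced here, but the heart of such a utility is a simple ratio calculation. A minimal sketch, assuming the source image is already available as a BitmapData:

[code]

import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.geom.Matrix;

// scale the source so that it fits within the screen dimensions
function sizeToScreen(source:BitmapData,
                      maxWidth:int, maxHeight:int):Bitmap {
    var ratio:Number = Math.min(maxWidth / source.width,
                                maxHeight / source.height);
    var scaled:BitmapData = new BitmapData(Math.round(source.width * ratio),
                                           Math.round(source.height * ratio));
    var matrix:Matrix = new Matrix();
    matrix.scale(ratio, ratio);
    scaled.draw(source, matrix, null, null, null, true); // smoothing on
    return new Bitmap(scaled);
}

[/code]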

To improve this application, check if an image is already saved when the user selects it again to avoid duplicates.

Audio

The audio caption is a novel way to save a comment along with the image. There is no image service that provides the ability to package an audio commentary, but you could build such an application.

The recordings are saved as WAV files using the Adobe WAVWriter class and can be extracted later using a third-party library. Here, we create an Album directory on the SD card and a mySounds directory inside it to store the WAV files.

Reverse Geolocation

Reverse geolocation is the process of using geographic coordinates to obtain an address location, such as a city and a street address.

In this application, we are only interested in the city name and country, so coarse data is sufficient and we do not need to wait for the GPS data to stabilize. As soon as we get a response from the Yahoo! service, we move on to the next step.
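
The GeoService class is not shown here, but the sequence it follows can be sketched as below; YAHOO_GEO_URL and APP_ID are placeholders for the service endpoint and application key, not values taken from the project:

[code]

import flash.events.Event;
import flash.events.GeolocationEvent;
import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.sensors.Geolocation;

// placeholders: supply the Yahoo! geocoding endpoint and your application key
var YAHOO_GEO_URL:String = "http://where.yahooapis.com/geocode"; // assumed endpoint
var APP_ID:String = "YOUR_YAHOO_APP_ID";

var geolocation:Geolocation = new Geolocation();
geolocation.addEventListener(GeolocationEvent.UPDATE, onTravel);

function onTravel(event:GeolocationEvent):void {
    // coarse data is enough, so the first reading will do
    geolocation.removeEventListener(GeolocationEvent.UPDATE, onTravel);
    var request:URLRequest = new URLRequest(YAHOO_GEO_URL +
        "?q=" + event.latitude + ",+" + event.longitude +
        "&gflags=R&appid=" + APP_ID);
    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, onCityFound);
    loader.load(request);
}

function onCityFound(event:Event):void {
    // parse the city and country out of the response,
    // then store them in SQLManager.currentPhoto.geo
    trace(URLLoader(event.currentTarget).data);
}

[/code]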

SQLite

SQLManager is a static class, so it can be accessed from anywhere in the application. The Main class holds an object which stores information related to a photo until it is complete and ready to be saved:

[code]

var currentPhoto:Object = {photo:"", audio:"", geo:""};

[/code]

The photo property stores the path to where the image is saved in the Gallery. The audio property stores the path to where the WAV file is located and the geo property stores a string with city and country information.

From the SavingView view, the object is saved in the myAlbum.db file on the SD card.
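
A condensed sketch of that saving step follows; the table and column names are assumptions, and the CREATE TABLE statement is omitted:

[code]

import flash.data.SQLConnection;
import flash.data.SQLStatement;
import flash.filesystem.File;

function savePhoto(currentPhoto:Object):void {
    var connection:SQLConnection = new SQLConnection();
    connection.open(File.documentsDirectory.resolvePath("myAlbum.db"));

    var statement:SQLStatement = new SQLStatement();
    statement.sqlConnection = connection;
    statement.text =
        "INSERT INTO myAlbum (photo, audio, geo) VALUES (:photo, :audio, :geo)";
    statement.parameters[":photo"] = currentPhoto.photo;
    statement.parameters[":audio"] = currentPhoto.audio;
    statement.parameters[":geo"] = currentPhoto.geo;
    statement.execute();

    connection.close();
}

[/code]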

P2P Connection

The peer-to-peer connection is used to send the image, audio caption, and location over a LAN. This example demonstrates the potential of the technology more than a proper use case, because the transfer is slow unless the information is sent in packets and reassembled. The technology is feasible for fairly small amounts of data and has a lot of potential for gaming and social applications.

Once the user has selected an image, she can transfer it to a companion desktop application from the PhotoView. The PeerService class handles the communication over the LAN and the posting of data.
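
The PeerService class is not listed here, but the essential RTMFP setup over a local network looks roughly like this; the group name and multicast address are assumptions, and photoBytes, audioBytes, and geoText stand in for the data pulled from the database:

[code]

import flash.events.NetStatusEvent;
import flash.net.GroupSpecifier;
import flash.net.NetConnection;
import flash.net.NetGroup;
import flash.utils.ByteArray;

var photoBytes:ByteArray;  // image bytes pulled from the database
var audioBytes:ByteArray;  // WAV bytes
var geoText:String;        // city and country

var connection:NetConnection = new NetConnection();
var group:NetGroup;

connection.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
// serverless RTMFP over the local network
connection.connect("rtmfp:");

function onStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetConnection.Connect.Success") {
        var specifier:GroupSpecifier = new GroupSpecifier("albumGroup");
        specifier.postingEnabled = true;
        specifier.ipMulticastMemberUpdatesEnabled = true;
        specifier.addIPMulticastAddress("225.225.0.1:30303");
        group = new NetGroup(connection,
            specifier.groupspecWithAuthorizations());
        group.addEventListener(NetStatusEvent.NET_STATUS, onGroupStatus);
    }
}

function onGroupStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetGroup.Connect.Success") {
        // post the image bytes, the WAV bytes, and the geodata as one message
        group.post({photo:photoBytes, audio:audioBytes, geo:geoText});
    }
}

[/code]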

Scrolling Navigation

The MenuView that displays the images saved in the database has scrolling capability if the content is larger than the height of the screen.

There are two challenges to address. The first is the performance of a potentially large list to scroll. The second is the overlapping interactive objects. The scrollable list contains elements that also respond to a mouse event. Both functionalities need to work without conflicting.

We only need to scroll the view if the container is larger than the device height, so there is no need to add unnecessary code. Let’s check its dimensions in the onShow method:

[code]

function onShow():void {
    deviceHeight = stage.stageHeight;
    container = new Sprite();
    addChild(container);
    // populate container
    if (container.height > deviceHeight) {
        trace("we need to add scrolling functionality");
    }
}

[/code]

If the container sprite is taller than the screen, let’s add the functionality to scroll. Touch events do not perform as well as mouse events. Because we only need one touch point, we will use a mouse event to detect the user interaction. Note that we set cacheAsBitmap to true on the container to improve rendering:

[code]

function onShow():void {
    if (container.height > deviceHeight) {
        container.cacheAsBitmap = true;
        stage.addEventListener(MouseEvent.MOUSE_DOWN,
            touchBegin, false, 0, true);
        stage.addEventListener(MouseEvent.MOUSE_UP,
            touchEnd, false, 0, true);
    }
}

[/code]

To determine if the mode is to scroll or to click an element that is part of the container, we start a timeout. We will see later why we need this timer in relation to the elements:

[code]

import flash.utils.setTimeout;

var oldY:Number = 0.0;
var newY:Number = 0.0;
var timeout:uint;

function touchBegin(event:MouseEvent):void {
    oldY = event.stageY;
    newY = event.stageY;
    timeout = setTimeout(startMove, 400);
}

[/code]

When the time expires, we set the mode to scrollable by calling the startMove method. We want to capture the position change on MOUSE_MOVE, but we only need to render the change to the screen on ENTER_FRAME. This guarantees a smoother and more consistent motion. The updateAfterEvent() method should never be used in mobile development because it is too demanding for devices:

[code]

// called by setTimeout, so it takes no arguments
function startMove():void {
    stage.addEventListener(MouseEvent.MOUSE_MOVE,
        touchMove, false, 0, true);
    stage.addEventListener(Event.ENTER_FRAME, frameEvent, false, 0, true);
}

[/code]

When the finger moves, we update the value of the newY coordinate:

[code]

function touchMove(event:MouseEvent):void {
    newY = event.stageY;
}

[/code]

On the enterFrame event, we render the screen using the new position. The container is moved according to the new position. To improve performance, we show and hide the elements that are not in view using predefined bounds:

[code]

var totalChildren:int = container.numChildren;
var topBounds:int = -30;

function frameEvent(event:Event):void {
    if (newY != oldY) {
        var newPos:Number = newY - oldY;
        oldY = newY;
        container.y += newPos;
        for (var i:int = 0; i < totalChildren; i++) {
            var mc:MovieClip = container.getChildAt(i) as MovieClip;
            var pos:Number = container.y + mc.y;
            mc.visible = (pos > topBounds && pos < deviceHeight);
        }
    }
}

[/code]

On touchEnd, the listeners are removed:

[code]

function touchEnd(event:MouseEvent):void {
    stage.removeEventListener(MouseEvent.MOUSE_MOVE, touchMove);
    stage.removeEventListener(Event.ENTER_FRAME, frameEvent);
}

[/code]

As mentioned before, elements in the container have their own mouse event listeners:

[code]

element.addEventListener(MouseEvent.MOUSE_DOWN, timeMe, false, 0, true);
element.addEventListener(MouseEvent.MOUSE_UP, clickAway, false, 0, true);

[/code]

On mouse down, the boolean variable isMoving is set to false and the visual cue indicates that the element was selected:

[code]

var isMoving:Boolean = false;
var selected:MovieClip;

function timeMe(event:MouseEvent):void {
    isMoving = false;
    selected = event.currentTarget as MovieClip;
    selected.what.textColor = 0x336699;
}

[/code]

On mouse up and within the time allowed, the stage listeners and the timeout are removed. If the boolean isMoving is still set to false and the target is the selected item, the application navigates to the next view:

[code]

import flash.utils.clearTimeout;

function clickAway(event:MouseEvent):void {
    touchEnd(event);
    clearTimeout(timeout);
    if (selected == event.currentTarget && isMoving == false) {
        dispatchEvent(new ClickEvent(ClickEvent.NAV_EVENT,
            {view:"speaker", id:selected.id}));
    }
}

[/code]

Now let’s add to the frameEvent code to handle deactivating the element when scrolling. Check that an element was pressed (the selected variable holds a value) and that the motion is more than two pixels; this accounts for screens that are very responsive. If both conditions are met, change the boolean value, reset the look of the element, and set the selected variable to null:

[code]

function frameEvent(event:Event):void {
    if (newY != oldY) {
        var newPos:Number = newY - oldY;
        oldY = newY;
        container.y += newPos;
        for (var i:int = 0; i < totalChildren; i++) {
            var mc:MovieClip = container.getChildAt(i) as MovieClip;
            var pos:Number = container.y + mc.y;
            mc.visible = (pos > topBounds && pos < deviceHeight);
        }
        if (selected != null && Math.abs(newPos) > 2) {
            isMoving = true;
            selected.what.textColor = 0x000000;
            selected = null;
        }
    }
}

[/code]

There are various approaches to handle scrolling. For a large number of elements, the optimal way is to only create as many element containers as are visible on the screen and populate their content on the fly. Instead of moving a large list, move the containers as in a carousel animation and update their content by pulling the data from a Vector or other form of data content.
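
A simplified sketch of that recycling idea, scrolling in one direction only to keep it short; the row height, the data Vector, the rows pool (created when the view is shown), and the what text field are assumptions:

[code]

import flash.display.MovieClip;

var data:Vector.<Object>;      // the full list of items
var rows:Vector.<MovieClip>;   // only enough rows to cover the screen
var rowHeight:Number = 80;     // assumed row height
var firstIndex:int = 0;        // data index displayed by the top row

function scrollBy(delta:Number):void {
    for each (var row:MovieClip in rows) {
        row.y += delta;
        if (row.y < -rowHeight && firstIndex + rows.length < data.length) {
            // the row left the top of the screen: move it below the last row
            // and give it the next item's content
            row.y += rows.length * rowHeight;
            populate(row, data[firstIndex + rows.length]);
            firstIndex++;
        }
    }
}

function populate(row:MovieClip, item:Object):void {
    row.what.text = item.geo; // reuse the same text field for the new item
}

[/code]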

If you are using Flash Builder and components, look at the Adobe Lighthouse package (http://www.adobe.com/devnet/devices/fpmobile.html). It contains DraggableVerticalContainer for display objects and DraggableVerticalList for items.

Desktop Functionality

The AIR desktop application, as shown in Figure 17-7, is set up to receive the data and display it. Seeing a high-resolution image on a large screen demonstrates how good the camera quality of some devices can be. The image can be saved on the desktop as a JPEG.
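
Saving the JPEG is only mentioned in passing, but a minimal sketch, assuming the open source as3corelib JPGEncoder class, could look like this:

[code]

import com.adobe.images.JPGEncoder; // from the open source as3corelib library
import flash.display.BitmapData;
import flash.filesystem.File;
import flash.filesystem.FileMode;
import flash.filesystem.FileStream;
import flash.utils.ByteArray;

// encode the received BitmapData as a JPEG and save it on the desktop
function saveAsJPEG(image:BitmapData, fileName:String):void {
    var encoder:JPGEncoder = new JPGEncoder(90); // quality, from 0 to 100
    var jpeg:ByteArray = encoder.encode(image);
    var file:File = File.desktopDirectory.resolvePath(fileName);
    var stream:FileStream = new FileStream();
    stream.open(file, FileMode.WRITE);
    stream.writeBytes(jpeg);
    stream.close();
}

[/code]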

Figure 17-7. AIR desktop companion application to display images received from the device

Another technology, not demonstrated here, is Pixel Bender, which is used for image manipulation. It is not available in AIR for Android, but it is in AIR on the desktop, so this is another good use case in which devices and the desktop can complement one another.

Saving a Recording

Let’s now save your recording on the device. In the following examples, we are saving the audio files on the SD card. Your application needs permission to write to external storage. If you do not have this permission, AIR will throw a runtime error:

<uses-permission android:name=
    "android.permission.WRITE_EXTERNAL_STORAGE"/>

The BLOB type

At the time of this writing, there is no native way to save the recording as an MP3 file that can be played back in the application at runtime. As an alternative, you can save the bytes in an SQLite database as BLOB data. The BLOB type is raw binary data that stores information exactly as it was input.

In this section, we will compress the file to reduce its size. First, let’s create a database and a table to store the audio files:

import flash.data.SQLConnection;
import flash.events.SQLEvent;
import flash.data.SQLStatement;
import flash.errors.SQLError;
import flash.filesystem.File;

var connection:SQLConnection;

// open connection to the database
connection = new SQLConnection();
connection.addEventListener(SQLEvent.OPEN, openDatabase);
var file:File = File.documentsDirectory.resolvePath("Dictaphone.db");
connection.open(file);

function openDatabase(event:SQLEvent):void {
    connection.removeEventListener(SQLEvent.OPEN, openDatabase);
    createTable();
}

// create or open the table
function createTable():void {
    var statement:SQLStatement = new SQLStatement();
    statement.sqlConnection = connection;
    var request:String =
        "CREATE TABLE IF NOT EXISTS mySounds (" +
        "id INTEGER PRIMARY KEY AUTOINCREMENT, " +
        "audio BLOB )";
    statement.text = request;
    try {
        statement.execute();
    } catch(error:SQLError) {
        trace(error.message, error.details);
    }
}

Now we’ll compress the audio and save it in the database. Here we are using ZLIB compression, which provides good results but is somewhat slow to execute:

import flash.utils.CompressionAlgorithm;

var statement:SQLStatement;

function saveItem():void {
    // compress the bytes
    bytes.position = 0;
    bytes.compress(CompressionAlgorithm.ZLIB);
    var command:String =
        "INSERT INTO mySounds(audio) VALUES (?)";
    statement = new SQLStatement();
    statement.sqlConnection = connection;
    statement.text = command;
    statement.parameters[0] = bytes;
    try {
        statement.execute();
    } catch(error:SQLError) {
        trace(error.message, error.details);
    }
}

Retrieve the first audio item from the database, and decompress it to use it:

import flash.data.SQLResult;
import flash.utils.ByteArray;

function getItem(id:Number):ByteArray {
    var command:String = "SELECT * FROM mySounds WHERE id=:id;";
    var statement:SQLStatement = new SQLStatement();
    statement.sqlConnection = connection;
    statement.text = command;
    statement.parameters[":id"] = id;
    statement.execute(1);
    var result:SQLResult = statement.getResult();
    if (result.data != null) {
        // return the BLOB column of the first row
        return result.data[0].audio;
    }
    return new ByteArray();
}

// to read the data back, decompress it
bytes = getItem(1);
bytes.uncompress(CompressionAlgorithm.ZLIB);
bytes.position = 0;
// play audio

Use the bytes to play the audio in a Sound object, as in the previous example.
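
If the earlier example is not at hand, the playback boils down to feeding the stored samples back on the SAMPLE_DATA event; this sketch assumes the recording was made in mono at the full 44.1 kHz rate:

import flash.events.SampleDataEvent;
import flash.media.Sound;

var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onPlayback);
sound.play();

function onPlayback(event:SampleDataEvent):void {
    var count:int = 0;
    while (bytes.bytesAvailable && count < 8192) {
        var sample:Number = bytes.readFloat();
        // the recording is mono: write each sample twice,
        // once for the left channel and once for the right
        event.data.writeFloat(sample);
        event.data.writeFloat(sample);
        count++;
    }
}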

WAV files

You can save your recording as a WAV file on your device. Download the com.adobe.audio.format.WAVWriter class from the audio_sampler.zip file located at http://www.adobe.com/devnet/air/flex/articles/using_mic_api.html, and import it into your project.

In this example, we are encoding our previous recording as a WAV file and saving it on the SD card in a directory called mySounds.

import com.adobe.audio.format.WAVWriter;
import flash.filesystem.File;
import flash.filesystem.FileStream;
import flash.filesystem.FileMode;
import flash.utils.ByteArray;

function saveWav(bytes:ByteArray):void {
    // point to the mySounds directory on the SD card
    var directory:File = File.documentsDirectory.resolvePath("mySounds");
    // if the directory does not exist yet, create it
    if (!directory.exists) {
        directory.createDirectory();
    }
    // create the name of the new wav file
    var file:File = directory.resolvePath("mySound.wav");
    // create an instance of the WAVWriter class and set its properties
    var wav:WAVWriter = new WAVWriter();
    wav.numOfChannels = 1;     // mono
    wav.sampleBitRate = 16;    // or 8
    wav.samplingRate = 44100;  // or 22000
    // rewind to the beginning of the ByteArray
    bytes.position = 0;
    // create a stream as a conduit to copy the data and write the file
    var stream:FileStream = new FileStream();
    stream.open(file, FileMode.WRITE);
    // convert the ByteArray to WAV format and close the stream
    wav.processSamples(stream, bytes, 44100, 1);
    stream.close();
}

Open source libraries

The current native libraries cannot load a WAV file dynamically or encode a ByteArray as an MP3 file. As an alternative, you can try some of the available open source libraries.

For instance, Shine, written by Gabriel Bouvigné, is an Alchemy/Flash MP3 encoder (see https://github.com/kikko/Shine-MP3-Encoder-on-AS3-Alchemy and http://code.google.com/p/flash-kikko/):

import flash.events.ErrorEvent;
import flash.events.Event;
import flash.events.ProgressEvent;
import fr.kikko.lab.ShineMP3Encoder;

var encoder:ShineMP3Encoder = new ShineMP3Encoder(bytes);
encoder.addEventListener(Event.COMPLETE, onEncoding);
encoder.addEventListener(ProgressEvent.PROGRESS, onProgress);
encoder.addEventListener(ErrorEvent.ERROR, onError);
encoder.start();
// once the COMPLETE event fires, save the encoded bytes
file.save(encoder.mp3Data, "recording.mp3");

In addition, the following WAV decoders are also available:

  • AS3WavSound (http://www.ohloh.net/p/as3wavsound)
  • standingwave3 (http://maxl0rd.github.com/standingwave3/)
  • Ogg/Vorbis (http://vorbis.com/software/)
  • Tonfall (http://code.google.com/p/tonfall/; this is also an encoder)

Saving to a remote server

If you have access to a streaming media server such as Flash Media Server, you can save and stream audio to the device. The microphone can be attached to a NetStream for uploading. Audio data can also be streamed from the server and played back using a Video object.
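
A minimal publishing sketch follows; the server address, application name, and stream name are placeholders:

import flash.events.NetStatusEvent;
import flash.media.Microphone;
import flash.net.NetConnection;
import flash.net.NetStream;

var connection:NetConnection = new NetConnection();
connection.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
connection.connect("rtmp://yourserver.com/audioApplication");

function onStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetConnection.Connect.Success") {
        var stream:NetStream = new NetStream(connection);
        var microphone:Microphone = Microphone.getMicrophone();
        stream.attachAudio(microphone);
        // "record" also saves the stream on the server as it is published
        stream.publish("myRecording", "record");
    }
}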

Two compression codecs are available:

import flash.media.SoundCodec;

mic.codec = SoundCodec.NELLYMOSER; // default
mic.codec = SoundCodec.SPEEX;

If you are using this technology, urge your audience to use a WiFi connection over 3G unless they have a flat-fee data plan.

 

Using XNA Interop to Record Audio

Before the Tailspin mobile client application can handle events raised by XNA objects, such as a Microphone object, it must create an XNA asynchronous dispatcher service. The following code example from the App.xaml.cs file shows how this is done.

C#
public App()
{
    this.ApplicationLifetimeObjects.Add(
        new XnaAsyncDispatcher(TimeSpan.FromMilliseconds(50)));
}

The VoiceQuestionView.xaml file defines two buttons: one toggles recording on and off, and the other plays back any saved audio. The recording toggle button is bound to the DefaultActionCommand command in the view model, and the play button is bound to the PlayCommand command in the view model.

The DefaultActionCommand command uses the StartRecording and StopRecording methods in the VoiceQuestionViewModel class to start and stop audio recording. The following code example shows the StartRecording method.

C#
private void StartRecording()
{
    var mic = Microphone.Default;
    if (mic.State == MicrophoneState.Started)
    {
        mic.Stop();
    }

    this.formatter = new WaveFormatter(
        this.wavFileName, (ushort)mic.SampleRate, 16, 1);

    this.observableMic = Observable.FromEvent<EventArgs>(
            h => mic.BufferReady += h, h => mic.BufferReady -= h)
        .Subscribe(p =>
        {
            var content =
                new byte[mic.GetSampleSizeInBytes(mic.BufferDuration)];
            mic.GetData(content);
            if (this.formatter != null)
            {
                this.formatter.WriteDataChunk(content);
            }
        });

    mic.Start();
}

This method gets a reference to the default microphone on the device and creates a WaveFormatter instance to convert the raw audio data to the WAV format.

The method uses the Observable.FromEvent method to subscribe to the microphone’s BufferReady event, and whenever the event is raised, the application uses the WaveFormatter instance to write the audio data to isolated storage. Finally, the method starts the microphone.

The following code example shows the StopRecording method that disposes of the Microphone and WaveFormatter instances and attaches the name of the saved audio file to the question.

C#
private void StopRecording()
{
    Microphone.Default.Stop();
    this.observableMic.Dispose();
    this.formatter.Dispose();
    this.formatter = null;
    this.Answer.Value = this.wavFileName;
}

The play button in the VoiceQuestionView view plays the recorded audio by using the SoundEffect class from the Microsoft.Xna.Framework.Audio namespace. The following code example shows the Play method from the VoiceQuestionViewModel class that loads audio data from isolated storage and plays it back.

C#
private void Play()
{
    this.IsPlaying = true;
    using (var fileSystem =
        IsolatedStorageFile.GetUserStoreForApplication())
    {
        using (var dat = fileSystem.OpenFile(
            this.wavFileName, FileMode.Open, FileAccess.Read))
        {
            try
            {
                using (var effect = SoundEffect.FromStream(dat))
                {
                    var instance = effect.CreateInstance();
                    instance.Play();
                    while (instance.State == SoundState.Playing)
                    {
                        System.Threading.Thread.Sleep(100);
                    }
                }
            }
            catch (ArgumentException)
            {
            }
        }
    }

    this.IsPlaying = false;
}