Playing Video

You can play videos stored on your device or loaded from a remote server.

Embedded Video

You can embed a video in your application using Flash Professional. Embedded video will appear in the library as a symbol. Create a MovieClip and add the video content to it. You can then control its playback by calling the standard MovieClip navigation methods.

Using this approach is simple, but it has disadvantages. The video is compiled into the application and adds to its size. Also, it is always loaded in memory and cannot be removed.

As an alternative, you can embed the video in an external .swf file which you load using the Loader class.
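
As a minimal sketch of that alternative, assuming the video lives in a file called assets.swf sitting next to the application (the file name is a placeholder):

[code]

import flash.display.Loader;
import flash.net.URLRequest;
import flash.events.Event;

var swfLoader:Loader = new Loader();
swfLoader.contentLoaderInfo.addEventListener(Event.COMPLETE, onSwfLoaded);
swfLoader.load(new URLRequest("assets.swf"));

function onSwfLoaded(event:Event):void {
    // add the loaded swf, and the video it contains, to the display list
    addChild(swfLoader);
    // call swfLoader.unloadAndStop() later to release it from memory
}

[/code]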

External Video

You can package the video with your application; it is then placed in the application directory, and the application will not display until all of its assets are loaded. You can also serve the video from a remote web server. The code is identical in both cases.
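
For instance, assuming a NetStream object named stream like the one created in the next section, only the path passed to its play method changes; the folder and URL below are placeholders:

[code]

// video packaged with the application, relative to the application directory
stream.play("assets/myVideo.flv");
// the same video served from a remote web server
stream.play("http://www.example.com/videos/myVideo.flv");

[/code]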

Progressive Video

To load video locally, you need to know the path of the file in your application directory.

NetConnection creates a connection with the local filesystem when you call its connect method. Pass null to connect to indicate that you are not streaming from a media server.

Within the connection, NetStream opens the channel between AIR and the local filesystem. Pass the connection object as a parameter in its construction, and use its play method to receive video data. Note that this object needs its client property set to an object that defines an onMetaData method, or a runtime error is thrown.

The Video object displays the video data.

In this example, the Video object dimensions are hardcoded:

[code]

import flash.net.NetConnection;
import flash.net.NetStream;
import flash.media.Video;
import flash.events.NetStatusEvent;

var connection:NetConnection;
var video:Video;

video = new Video();
video.width = 480;
video.height = 320;

connection = new NetConnection();
connection.addEventListener(NetStatusEvent.NET_STATUS, netConnectionEvent);
connection.connect(null);

function netConnectionEvent(event:NetStatusEvent):void {
    event.target.removeEventListener(NetStatusEvent.NET_STATUS,
        netConnectionEvent);
    if (event.info.code == "NetConnection.Connect.Success") {
        var stream:NetStream = new NetStream(connection);
        stream.addEventListener(NetStatusEvent.NET_STATUS, netStreamEvent);
        // the client object must define onMetaData to prevent a runtime error
        var client:Object = new Object();
        client.onMetaData = onMetaData;
        stream.client = client;
        // attach the stream to the video to display it
        video.attachNetStream(stream);
        stream.play("someVideo.flv");
        addChild(video);
    }
}

function netStreamEvent(event:NetStatusEvent):void {
    // monitor stream status codes such as NetStream.Play.Start here
}

function onMetaData(info:Object):void {}

[/code]

At the time of this writing, video.smoothing is always false, and setting it to true has no effect. This is consistent with the AIR runtime's default settings, but it does not provide the best video experience.

SD card

You can play videos from the SD card. Playback is nearly as fast as playing a file stored on the device itself.

You need to resolve the path to where the video is located before playing it. In this example, there is a directory called myVideos on the SD card and a video called myVideo inside it:

[code]

import flash.filesystem.File;

var videosPath:File = File.documentsDirectory.resolvePath("myVideos");
var videoName:String = "myVideo.mp4";
stream.play(videosPath.resolvePath(videoName).url);

[/code]

Browsing for video

You cannot use CameraRoll to browse for videos, but you can use the filesystem.

You could create a custom video player for the user to play videos installed on the device or on the SD card. The browseForOpen method opens a native file browser, filtered here to common video formats:

[code]

import flash.filesystem.File;
import flash.net.FileFilter;
import flash.media.Video;
import flash.events.Event;

var video:Video;
var filter:FileFilter = new FileFilter("video", "*.mp4;*.flv;*.mov;*.f4v");
var file:File = new File();
file.addEventListener(Event.SELECT, fileSelected);
file.browseForOpen("open", [filter]);

[/code]

At the time of this writing, it seems that only the FLV format is recognized when browsing the filesystem using AIR.

A list of the video files found appears. The following code is executed when the user selects one of the files. The selected file is returned as the target of the Event.SELECT event and is played using its url property. Note how the video is sized and displayed in the onMetaData function. We will cover this technique next:

[code]

import flash.net.NetConnection;
import flash.net.NetStream;

function fileSelected(event:Event):void {
    video = new Video();
    var connection:NetConnection = new NetConnection();
    connection.connect(null);
    var stream:NetStream = new NetStream(connection);
    var client:Object = new Object();
    client.onMetaData = onMetaData;
    stream.client = client;
    video.attachNetStream(stream);
    stream.play(event.target.url);
}

function onMetaData(info:Object):void {
    video.width = info.width;
    video.height = info.height;
    addChild(video);
}

[/code]

Metadata

The client property of NetStream is used to listen for onMetaData. In this example, we use the video stream width and height, received in the metadata, to scale the Video object. Other useful information includes the duration, the frame rate, and the codec:

[code]

// define the stream client to receive callbacks
var client:Object = new Object();
client.onMetaData = onMetaData;
stream.client = client;

// attach the stream to the video
video.attachNetStream(stream);
stream.play("someVideo.flv");

// size the video object based on the metadata information
function onMetaData(info:Object):void {
    video.width = info.width;
    video.height = info.height;
    addChild(video);
    trace(info.duration);
    trace(info.framerate);
    trace(info.codec);
    for (var prop:String in info) {
        trace(prop, info[prop]);
    }
}

[/code]

Cue points

The FLVPlayback component gives us the ability to add cue points to a video. The component listens to the current time code and compares it to a dictionary of cue points. When it finds a match, it dispatches an event with the cue point information.

The cue points come in two forms. Navigation cue points are used as markers for chapters or time-specific commentary. Event cue points are used to trigger events such as calling an ActionScript function. The cue point object looks like this:

[code]

var cuePoint:Object = {time:5, name:"cue1", type:"actionscript",
    parameters:{prop:value}};

[/code]

This component is not available in AIR for Android. If you want something similar, you need to write the functionality yourself, as in the sketch below. It can be a nice addition to bridge your video to your AIR content if you keep your cue points to a minimum. Use them sparingly, as they have an impact on performance.
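
Here is a minimal sketch of such do-it-yourself functionality, assuming a NetStream named stream like the one created earlier; it polls the stream time with a Timer and fires a callback when a cue point is crossed:

[code]

import flash.utils.Timer;
import flash.events.TimerEvent;

var cuePoints:Array = [{time:5, name:"cue1"}, {time:10, name:"cue2"}];
var cueIndex:int = 0;
var cueTimer:Timer = new Timer(250); // check four times per second
cueTimer.addEventListener(TimerEvent.TIMER, checkCuePoints);
cueTimer.start();

function checkCuePoints(event:TimerEvent):void {
    if (cueIndex < cuePoints.length && stream.time >= cuePoints[cueIndex].time) {
        onCuePoint(cuePoints[cueIndex]);
        cueIndex++;
    }
}

function onCuePoint(cuePoint:Object):void {
    trace("cue point reached:", cuePoint.name);
}

[/code]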

Cue points can be embedded dynamically server-side if you are recording the file on Flash Media Server.

Buffering

The moov atom, the video metadata that holds index information, needs to be placed at the beginning of a progressive-download file. Otherwise, the whole file must be loaded into memory before it can play. This is not an issue for streaming. Look at Renaun Erickson's wrapper, which fixes the problem, at http://renaun.com/blog/code/qtindexswapper/.

By default, the application uses an input buffer. To modify the default buffering time, use the following:

[code]

var stream:NetStream = new NetStream(connection);
stream.bufferTime = 5; // value in seconds

[/code]

When using a streaming server, adjusting the buffer to manage bandwidth fluctuation is a good strategy:

[code]

var stream:NetStream = new NetStream(connection);
stream.addEventListener(NetStatusEvent.NET_STATUS, netStreamEvent);

function netStreamEvent(event:NetStatusEvent):void {
    var buffTime:Number;
    switch (event.info.code) {
        case "NetStream.Buffer.Full":
            buffTime = 15.0;
            break;
        case "NetStream.Buffer.Empty":
            buffTime = 2.0;
            break;
        default:
            return;
    }
    stream.bufferTime = buffTime;
}

[/code]

Read Fabio Sonnati’s article on using dual-threshold buffering, at http://www.adobe.com/devnet/flashmediaserver/articles/fms_dual_buffering.html.

EXIF Data and the Map Object

A JPEG image stores location information if the user allows that feature. Let’s look at an example in which the user can choose an image from the camera roll, read its location information, and display the corresponding map:

import com.google.maps.Map;
import com.google.maps.MapEvent;
import com.google.maps.LatLng;
import com.google.maps.MapType;
import com.google.maps.overlays.Marker;
import com.google.maps.overlays.MarkerOptions;
import flash.events.Event;
import flash.events.MediaEvent;
import flash.media.CameraRoll;
import flash.net.URLRequest;
import flash.geom.Point;
import jp.shichiseki.exif.*;

public static const KEY:String = YOUR_API_KEY;
public static const SITE:String = YOUR_SITE;

var cameraRoll:CameraRoll;
var exifLoader:ExifLoader;
var map:Map;

Create your Map object as before:

map = new Map();
map.url = SITE;
map.key = KEY;
map.sensor = "false";
map.setSize(new Point(stage.stageWidth, stage.stageHeight));
map.addEventListener(MapEvent.MAP_READY, onMapReady);
addChild(map);

Get an image from the device Gallery using the CameraRoll API:

function onMapReady(event:MapEvent):void {
    map.setCenter(new LatLng(40.736072, -73.992062), 14, MapType.NORMAL_MAP_TYPE);
    if (CameraRoll.supportsBrowseForImage) {
        var camera:CameraRoll = new CameraRoll();
        camera.addEventListener(MediaEvent.SELECT, onImageSelected);
        camera.browseForImage();
    }
}

After the user selects an image, create an instance of the ExifLoader class and pass it the photo url. It will load the image and read its EXIF data:

function onImageSelected(event:MediaEvent):void {
    exifLoader = new ExifLoader();
    exifLoader.addEventListener(Event.COMPLETE, onImageLoaded);
    exifLoader.load(new URLRequest(event.data.file.url));
}

If the image contains geolocation information, it is used to draw the map and a marker at the exact location:

function onImageLoaded(event:Event):void {
    var exif:ExifInfo = exifLoader.exif;
    if (exif.ifds.gps) {
        var gpsIfd:IFD = exif.ifds.gps;
        var exifLat:Array = gpsIfd["GPSLatitude"] as Array;
        var latitude:Number = shorten(exifLat, gpsIfd["GPSLatitudeRef"]);
        var exifLon:Array = gpsIfd["GPSLongitude"] as Array;
        var longitude:Number = shorten(exifLon, gpsIfd["GPSLongitudeRef"]);
        var marker:Marker = new Marker(new LatLng(latitude, longitude));
        map.addOverlay(marker);
        map.setCenter(new LatLng(latitude, longitude));
    }
}

function shorten(info:Array, reference:String):Number {
    var degree:Number = info[0] + (info[1]/60) + (info[2]/3600);
    // make southern latitudes and western longitudes negative
    if (reference == "S" || reference == "W") {
        degree *= -1;
    }
    return degree;
}

EXIF Data

EXIF stands for Exchangeable Image File Format. EXIF data is low-level information stored in JPEG images. EXIF was created by the Japan Electronic Industries Development Association (JEIDA) and became a convention adopted across camera manufacturers, including on mobile devices. You can read about the EXIF format at http://en.wikipedia.org/wiki/Exchangeable_image_file_format and http://www.exif.org/Exif2-2.PDF.

EXIF data can include the date and time the image was created, the camera manufacturer and camera settings, location information, and even a thumbnail image. Visit Jeffrey Friedl’s website at http://regex.info/exif.cgi and load a JPEG image to see the information it contains.

In AIR for Android, you could use the geolocation API to get location information and associate it with the photo you just shot, but it is more efficient to get this information directly from the image if it is available. To store image location on an Android device when taking a picture, the user must have Location & Security→Use GPS Satellites selected and then turn on the camera’s Store Location option.

Several open source AS3 libraries are available for reading EXIF data. I chose the one by Kenichi Ishibashi. You can download his library using Subversion at http://code.shichiseki.jp/as3/ExifInfo/. Ishibashi’s Loader class uses the loadBytes function and passes its data as a ByteArray to access the raw data information. Import his package to your class.

Our first example loads an image from the Gallery, reads its thumbnail data, and displays it. Note that thumbnail creation varies among devices and is not always available. Check that it exists before trying to display it:

import flash.display.Loader;
import flash.display.MovieClip;
import flash.media.CameraRoll;
import flash.media.MediaPromise;
import flash.events.MediaEvent;
import flash.events.Event;
import flash.net.URLRequest;
import jp.shichiseki.exif.*;

var loader:ExifLoader;
var cameraRoll:CameraRoll;

if (CameraRoll.supportsBrowseForImage) {
    init();
}

function init():void {
    cameraRoll = new CameraRoll();
    cameraRoll.addEventListener(MediaEvent.SELECT, onSelect);
    cameraRoll.browseForImage();
}

function onSelect(event:MediaEvent):void {
    var promise:MediaPromise = event.data as MediaPromise;
    loader = new ExifLoader();
    loader.addEventListener(Event.COMPLETE, imageLoaded);
    loader.load(new URLRequest(promise.file.url));
}

function imageLoaded(event:Event):void {
    var exif:ExifInfo = loader.exif as ExifInfo;
    if (exif.thumbnailData) {
        var thumbLoader:Loader = new Loader();
        thumbLoader.loadBytes(exif.thumbnailData);
        addChild(thumbLoader);
    }
}

The next example also lets you choose an image from the device’s Gallery and display its geographic information. The user must have GPS enabled and must have authorized the camera to save the location when the picture was taken:

import flash.display.Loader;
import flash.display.MovieClip;
import flash.media.CameraRoll;
import flash.media.MediaPromise;
import flash.events.MediaEvent;
import flash.events.Event;
import flash.net.URLRequest;
import flash.text.TextField;
import flash.text.TextFormat;
import flash.text.TextFieldAutoSize;
import jp.shichiseki.exif.*;
var cameraRoll:CameraRoll;
var loader:ExifLoader;
if (CameraRoll.supportsBrowseForImage) {
    cameraRoll = new CameraRoll();
    cameraRoll.addEventListener(MediaEvent.SELECT, onSelect);
    cameraRoll.browseForImage();
}
function onSelect(event:MediaEvent):void {
    var promise:MediaPromise = event.data as MediaPromise;
    loader = new ExifLoader();
    loader.addEventListener(Event.COMPLETE, onImageLoaded);
    loader.load(new URLRequest(promise.file.url));
}

function onImageLoaded(event:Event):void {
    var exif:ExifInfo = loader.exif as ExifInfo;
    var textFormat:TextFormat = new TextFormat();
    textFormat.size = 40;
    textFormat.color = 0x66CC99;
    var where:TextField = new TextField();
    where.x = 50;
    where.y = 200;
    where.defaultTextFormat = textFormat;
    where.autoSize = TextFieldAutoSize.LEFT;
    addChild(where);
    if (exif.ifds.gps) {
        var gpsIfd:IFD = exif.ifds.gps;
        var exifLat:Array = gpsIfd["GPSLatitude"] as Array;
        var latitude:Number = shorten(exifLat, gpsIfd["GPSLatitudeRef"]);
        var exifLon:Array = gpsIfd["GPSLongitude"] as Array;
        var longitude:Number = shorten(exifLon, gpsIfd["GPSLongitudeRef"]);
        where.text = latitude + "\n" + longitude;
    } else {
        where.text = "No geographic information";
    }
}
}
function shorten(info:Array, reference:String):Number {
    var degree:Number = info[0] + (info[1]/60) + (info[2]/3600);
    // make southern latitudes and western longitudes negative
    if (reference == "S" || reference == "W") {
        degree *= -1;
    }
    return degree;
}

Base 60 is commonly used to store geographic coordinates in degrees. Degrees, minutes, and seconds are stored separately. Put them back together, and negate the value if the coordinate is south of the equator or west of the Greenwich meridian.
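
For example, a latitude stored as 40 degrees, 44 minutes, and 9.8 seconds north converts like this with the shorten function defined earlier:

// 40 + 44/60 + 9.8/3600
trace(shorten([40, 44, 9.8], "N")); // approximately 40.7361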

Displaying latitude and longitude is not very helpful, nor is it interesting for most users. But you can render a static map using latitude and longitude or retrieve an address.
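
As a sketch of the static map idea, you could pass the coordinates to the Google Static Maps service and load the returned image; the URL parameters below should be checked against the current Static Maps documentation:

import flash.display.Loader;
import flash.net.URLRequest;

function showStaticMap(latitude:Number, longitude:Number):void {
    var url:String = "http://maps.google.com/maps/api/staticmap?center="
        + latitude + "," + longitude
        + "&zoom=14&size=480x320&sensor=false";
    var mapLoader:Loader = new Loader();
    mapLoader.load(new URLRequest(url));
    addChild(mapLoader);
}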

The Gallery Application and the CameraRoll Class

The Gallery application displays the repository of images located on the SD card, which is accessible to various applications. Launch it, choose an image, and then select Menu→Share. A list of applications (such as Picasa, Messaging, and Email) appears, providing a convenient way to upload media from the device to another destination (see Figure 9-1).

The flash.media.CameraRoll class is a subclass of the EventDispatcher class. It gives you access to the Gallery. It is not supported for AIR desktop applications.

The Gallery application

Selecting an Image

You can test that your device supports browsing the Gallery by checking the supportsBrowseForImage property:

import flash.media.CameraRoll;

if (CameraRoll.supportsBrowseForImage == false) {
    trace("this device does not support access to the Gallery");
    return;
}

If your device does support the Gallery, you can create an instance of the CameraRoll class. Make it a class variable, not a local variable, so that it does not lose scope:

var cameraRoll:CameraRoll = new CameraRoll();

You can add listeners for three events:

  • A MediaEvent.SELECT event when the user selects an image:
    import flash.events.MediaEvent;
    cameraRoll.addEventListener(MediaEvent.SELECT, onSelect);
  • An Event.CANCEL event if the user opts out of the Gallery:
    import flash.events.Event;
    cameraRoll.addEventListener(Event.CANCEL, onCancel);
    function onCancel(event:Event):void {
        trace("user left the Gallery", event.type);
    }
  • An ErrorEvent.ERROR event if there is an issue in the process:
    import flash.events.ErrorEvent;
    cameraRoll.addEventListener(ErrorEvent.ERROR, onError);
    function onError(event:ErrorEvent):void {
        trace("Gallery error", event.type);
    }

Call the browseForImage function to bring the Gallery application to the foreground:

cameraRoll.browseForImage();

Your application moves to the background and the Gallery interface is displayed, as shown in Figure 9-2.

The Gallery interface

When you select an image, a MediaEvent object is returned. Use its data property to reference the image and cast it as MediaPromise. Use a Loader object to load the image:

import flash.display.Loader;
import flash.events.Event;
import flash.events.IOErrorEvent;
import flash.events.MediaEvent;
import flash.media.MediaPromise;

function onSelect(event:MediaEvent):void {
    var promise:MediaPromise = event.data as MediaPromise;
    var loader:Loader = new Loader();
    loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onImageLoaded);
    loader.contentLoaderInfo.addEventListener(IOErrorEvent.IO_ERROR, onError);
    loader.loadFilePromise(promise);
}

The concept of MediaPromise was first introduced on the desktop in a drag-and-drop scenario where an object doesn’t yet exist in AIR but needs to be referenced. Access its file property if you want to retrieve the image name, its nativePath, or its url.

The url property is the address to use to load the image; the nativePath property refers to its location in the device's directory structure:

promise.file.name;
promise.file.url;
promise.file.nativePath;

Let’s now display the image:

function onImageLoaded(event:Event):void {
    addChild(event.currentTarget.content);
}

Only the upper-left portion of the image is visible. This is because the resolution of the camera device is much larger than your AIR application stage.

Let’s modify our code so that we can drag the image around and see all of its content. We will make the image a child of a sprite, which can be dragged around:

import flash.events.MouseEvent;
import flash.display.DisplayObject;
import flash.display.Sprite;
import flash.geom.Rectangle;

var rectangle:Rectangle;

function onImageLoaded(event:Event):void {
    var container:Sprite = new Sprite();
    var image:DisplayObject = event.currentTarget.content as DisplayObject;
    container.addChild(image);
    addChild(container);
    // set a constraint rectangle to define the draggable area
    rectangle = new Rectangle(0, 0,
        -(image.width - stage.stageWidth),
        -(image.height - stage.stageHeight)
    );
    container.addEventListener(MouseEvent.MOUSE_DOWN, onDown);
    container.addEventListener(MouseEvent.MOUSE_UP, onUp);
}

function onDown(event:MouseEvent):void {
    event.currentTarget.startDrag(false, rectangle);
}

function onUp(event:MouseEvent):void {
    event.currentTarget.stopDrag();
}

It may be interesting to see the details of an image at its full resolution, but this might not result in the best user experience. Also, because camera resolution is so high on most devices, there is a risk of exhausting RAM and running out of memory.

Let’s now store the content in a BitmapData, display it in a Bitmap, and scale the bitmap to fit our stage in AIR. We will use the Nexus One as our benchmark first. Its camera has a resolution of 2,592×1,944. The default template size on AIR for Android is 800×480. To complicate things, the aspect ratio is different. In order to preserve the image fidelity and fill the screen, you would need to resize the image to 800×600, but then part of it would be out of bounds.

Instead, let’s resize the image to 640×480. The image will not cover the whole stage, but it will be fully visible. Take this into account when designing your screen.

First, detect the orientation of your image. Resize it accordingly using constant values, and rotate the image if it is in landscape mode:

import flash.display.Bitmap;
import flash.display.BitmapData;

const MAX_HEIGHT:int = 640;
const MAX_WIDTH:int = 480;

function onImageLoaded(event:Event):void {
    var bitmapData:BitmapData = Bitmap(event.target.content).bitmapData;
    var bitmap:Bitmap = new Bitmap(bitmapData);
    // determine the image orientation
    var isPortrait:Boolean = (bitmapData.height/bitmapData.width) > 1.0;
    if (isPortrait) {
        bitmap.width = MAX_WIDTH;
        bitmap.height = MAX_HEIGHT;
    } else {
        bitmap.width = MAX_HEIGHT;
        bitmap.height = MAX_WIDTH;
        // rotate a landscape image
        bitmap.y = MAX_HEIGHT;
        bitmap.rotation = -90;
    }
    addChild(bitmap);
}

The preceding code is customized to the Nexus One, and it will not display well for devices with a different camera resolution or screen size. We need a more universal solution.

The next example shows how to resize the image according to the dynamic dimension of both the image and the stage. This is the preferred approach for developing on multiple screens:

function onImageLoaded(event:Event):void {
    var bitmapData:BitmapData = Bitmap(event.target.content).bitmapData;
    var bitmap:Bitmap = new Bitmap(bitmapData);
    // determine the image orientation
    var isPortrait:Boolean = (bitmapData.height/bitmapData.width) > 1.0;
    // choose the smallest value between stage width and height
    var forRatio:int = Math.min(stage.stageHeight, stage.stageWidth);
    // calculate the scaling ratio to apply to the image
    var ratio:Number;
    if (isPortrait) {
        ratio = forRatio/bitmapData.width;
    } else {
        ratio = forRatio/bitmapData.height;
    }
    bitmap.width = bitmapData.width * ratio;
    bitmap.height = bitmapData.height * ratio;
    // rotate a landscape image and move it down to fit to the top corner
    if (!isPortrait) {
        bitmap.y = bitmap.width;
        bitmap.rotation = -90;
    }
    addChild(bitmap);
}
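
Keep in mind that scaling the Bitmap does not free the full-resolution BitmapData; it stays referenced in memory. If memory is a concern, one option, sketched below, is to draw the pixels into a smaller BitmapData and dispose of the original:

import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.geom.Matrix;

function downsample(source:BitmapData, scale:Number):Bitmap {
    var matrix:Matrix = new Matrix();
    matrix.scale(scale, scale);
    var small:BitmapData =
        new BitmapData(int(source.width * scale), int(source.height * scale));
    small.draw(source, matrix);
    // release the full-resolution pixels
    source.dispose();
    return new Bitmap(small);
}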

Beware that the browseForImage method is only meant to load images from the Gallery. It is not for loading images from the filesystem even if you navigate to the Gallery. Some devices bring up a dialog to choose between Gallery and Files. If you try to load an image via Files, the application throws an error. Until this bug is fixed, set a listener to catch the error and inform the user:

cameraRoll.addEventListener(ErrorEvent.ERROR, onError);
cameraRoll.browseForImage();

function onError(event:ErrorEvent):void {
    if (event.errorID == 2124) {
        trace("you can only load images from the Gallery");
    }
}

If you want to get a list of all the images in your Gallery, you can use the filesystem as follows:

var gallery:File = File.userDirectory.resolvePath("DCIM/Camera");
var myPhotos:Array = gallery.getDirectoryListing();
var bounds:int = myPhotos.length;
for (var i:uint = 0; i < bounds; i++) {
    trace(myPhotos[i].name, myPhotos[i].nativePath);
}

Adding an Image

You can add an image to the Gallery from within AIR. To write data to the SD card, you must set permission for it:

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

Check the supportsAddBitmapData property to verify that your device supports this feature:

import flash.media.CameraRoll;

if (CameraRoll.supportsAddBitmapData == false) {
    trace("You cannot add images to the Gallery.");
    return;
}

If this feature is supported, create an instance of CameraRoll and set an Event.COMPLETE listener. Call the addBitmapData function to save the image to the Gallery. In this example, a grab of the stage is saved.

This feature could be used for a drawing application in which the user can draw over time. The following code allows the user to save his drawing, reload it, and draw over it again:

import flash.display.BitmapData;
import flash.events.ErrorEvent;
import flash.events.Event;
import flash.media.CameraRoll;

var cameraRoll:CameraRoll = new CameraRoll();
cameraRoll.addEventListener(ErrorEvent.ERROR, onError);
cameraRoll.addEventListener(Event.COMPLETE, onComplete);

var bitmapData:BitmapData =
    new BitmapData(stage.stageWidth, stage.stageHeight);
bitmapData.draw(stage);
cameraRoll.addBitmapData(bitmapData);

function onComplete(event:Event):void {
    // image saved in the Gallery
}

function onError(event:ErrorEvent):void {
    trace(event.text);
}

Remember that the image that is saved is the same dimension as the stage, and therefore it has a much smaller resolution than the native camera. At the time of this writing, there is no option to specify a compression, to name the image, or to save it in a custom directory. AIR follows Android naming conventions, using the date and time of capture.

 

File Browse for a Single File

The browse for file functionality of the File class works a bit differently in Android as compared to the desktop. Within Android, the browseForOpen method will open up a specific native file selector that will allow you to open a file of type Audio, Image, or Video.

Let’s review the code below. The Button labeled Browse calls button1_clickHandler when clicked. Within this function, an instance of File is created with the variable name file. An event listener is added for Event.SELECT with the responding method onFileSelect, and the browseForOpen method is called. The application can be seen in Figure 5-5. When browseForOpen is called, the Android file selector is launched, as shown in Figure 5-6. After the user selects a file within the Android file selector, the event is fired and the onFileSelect method is called. The event.currentTarget is cast to a File object, and its nativePath, extension, and url properties are used to display the nativePath and the image in the example (shown in Figure 5-7):

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark">
    <fx:Script>
        <![CDATA[
            import flash.filesystem.File;

            protected function button1_clickHandler(event:MouseEvent):void {
                var file:File = new File();
                file.addEventListener(Event.SELECT, onFileSelect);
                file.browseForOpen("Open");
            }

            private function onFileSelect(event:Event):void {
                var file:File = File(event.currentTarget);
                filepath.text = file.nativePath;
                if (file.extension == "jpg") {
                    image.source = file.url;
                }
            }
        ]]>
    </fx:Script>
    <fx:Declarations>
        <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <s:Button horizontalCenter="0" top="10" label="Browse"
              click="button1_clickHandler(event)"/>
    <s:Label id="filepath" left="10" right="10" top="100"/>
    <s:Image id="image" width="230" height="350" top="150" horizontalCenter="0"/>
</s:Application>

 

The Browse for File application

The file selector

The Browse for File application with an image selected

 

Using the SQLite Database

Using the SQLite database system is another solution for saving local persistent data, and it is the preferred solution if your information is somewhat complex, if you want the option to organize it in different ways, or if you want to keep it private.

The AIR runtime contains an SQL database engine to create, organize, retrieve, and manipulate the data, using the open source Structured Query Language Lite (SQLite) database system. It does not use the Android OS SQLite framework.

The SQL classes compose the bulk of the flash.data package. Once again, you have a choice between synchronous and asynchronous mode. For the sake of simplicity, we will use synchronous mode in our examples.
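
For reference, here is a minimal sketch of the asynchronous alternative; the main difference is that you call openAsync and listen for events instead of wrapping calls in try/catch:

import flash.data.SQLConnection;
import flash.events.SQLEvent;
import flash.events.SQLErrorEvent;
import flash.filesystem.File;

var asyncConnection:SQLConnection = new SQLConnection();
asyncConnection.addEventListener(SQLEvent.OPEN, onOpen);
asyncConnection.addEventListener(SQLErrorEvent.ERROR, onOpenError);
asyncConnection.openAsync(File.applicationStorageDirectory.resolvePath("myData.db"));

function onOpen(event:SQLEvent):void {
    trace("database opened asynchronously");
}

function onOpenError(event:SQLErrorEvent):void {
    trace(event.error.message);
}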

Creating the database file

If the database doesn’t exist yet, create it and save it as a single file in the filesystem:

import flash.filesystem.File;

function createDatabase():void {
    var file:File =
        File.applicationStorageDirectory.resolvePath("myData.db");
    if (file.exists) {
        trace("the database file already exists, ready to be used");
    } else {
        trace("no database file yet; it will be created when the connection opens");
    }
}

It is usually a good idea to keep the database in the application storage directory so that it is not accessible by other applications and is preserved when the application is updated. If you want to save it to the SD card instead, the path should be:

var file:File = File.documentsDirectory.resolvePath("myData.db");

Opening the database file

The SQLConnection class manages the connection to the database file and is used to execute your statements. It is essential that it be a class variable, not a local variable, so that it doesn't go out of scope.

import flash.data.SQLConnection;
var connection:SQLConnection;
connection = new SQLConnection();

To open the connection pointing to your database file, call the open method and pass the File reference:

import flash.events.SQLEvent;
import flash.events.SQLErrorEvent;

try {
    connection.open(file);
    trace("connection opened");
} catch(error:Error) {
    trace(error.message);
}

Creating the table

An SQL database is organized into tables. Each table consists of columns representing individual attributes and their values. Create the table according to your needs by giving each column a name and a data type. The table will have as many rows as items, or records, created.

You communicate with the database by creating an SQLStatement object and setting its sqlConnection property to the connection that is open. Next, write the command to its text attribute as a string, and finally, call its execute method.

In this example, we are creating a new table called geography using the statement CREATE TABLE IF NOT EXISTS to guarantee that it is only created once. It has three columns: an id column which self-increments and functions as the primary key, a country column of type Text, and a city column of type Text. The primary key is a unique identifier that distinguishes each row. Figure 6-1 shows the geography table:

The geography table’s fields

import flash.data.SQLStatement;
import flash.data.SQLMode;

var statement:SQLStatement = new SQLStatement();
statement.sqlConnection = connection;
var request:String =
    "CREATE TABLE IF NOT EXISTS geography (" +
    "id INTEGER PRIMARY KEY AUTOINCREMENT, country TEXT, city TEXT)";
statement.text = request;
try {
    statement.execute();
} catch(error:Error) {
    trace(error.message);
}

Adding data

Once the table is created, data is added using an INSERT INTO statement and some values:

var statement:SQLStatement = new SQLStatement();
statement.sqlConnection = connection;
var insert:String =
    "INSERT INTO geography (country, city) VALUES ('France', 'Paris')";
statement.text = insert;
try {
    statement.execute();
} catch(error:Error) {
    trace(error.message);
}

If the data is dynamic, you can use the following syntax. Note that unnamed parameters are used, therefore relying on the automatically assigned index value. Figure 6-2 shows the result:

import flash.errors.SQLError;

addItem({country:"France", city:"Paris"});

function addItem(object:Object):void {
    var statement:SQLStatement = new SQLStatement();
    statement.sqlConnection = connection;
    var insert:String = "INSERT INTO geography (country, city) VALUES (?, ?)";
    statement.text = insert;
    statement.parameters[0] = object.country;
    statement.parameters[1] = object.city;
    try {
        statement.execute();
        trace("item created");
    } catch(error:SQLError) {
        trace(error.message);
    }
}

The geography table with some dynamic data added

As an alternative, you can use the following syntax. Here we use named parameters, which work much like an associative array. Figure 6-3 shows the result:

addItem({country:"United States", city:"New York"});

function addItem(object:Object):void {
    var statement:SQLStatement = new SQLStatement();
    statement.sqlConnection = connection;
    var insert:String =
        "INSERT INTO geography (country, city) VALUES (:co, :ci)";
    statement.text = insert;
    statement.parameters[":co"] = object.country;
    statement.parameters[":ci"] = object.city;
    try {
        statement.execute();
        trace("item created");
    } catch(error:SQLError) {
        trace(error.message);
    }
}

The geography table with dynamic data and named parameters added

Using either of these two dynamic approaches facilitates reuse of the same SQL statement to add many items. It is also more secure because the parameters are not written directly into the SQL text, which prevents a possible SQL injection attack.

Requesting data

Data is requested by using the SELECT statement. The result is an SQLResult object that you retrieve by calling the statement's getResult method. Each row item is received as an object with property names corresponding to the table column names. Note the use of * in place of the column names to get the entire table:

import flash.data.SQLResult;

var statement:SQLStatement = new SQLStatement();
statement.sqlConnection = connection;
statement.text = "SELECT * FROM geography";
statement.addEventListener(SQLEvent.RESULT, selectionReceived);
statement.execute();

function selectionReceived(event:SQLEvent):void {
    statement.removeEventListener(SQLEvent.RESULT, selectionReceived);
    var result:SQLResult = statement.getResult();
    if (result != null) {
        var rows:int = result.data.length;
        for (var i:int = 0; i < rows; i++) {
            var row:Object = result.data[i];
            trace(row.id + " " + row.country + " " + row.city);
        }
    }
}

Instead of requesting the entire table, you may want to receive only one item. Let's request the country that has New York as its city. Passing 1 to execute limits the result to the first matching row:

var statement:SQLStatement = new SQLStatement();
statement.sqlConnection = connection;
statement.text = "SELECT country FROM geography WHERE city = 'New York'";
try {
    statement.execute(1);
    var result:SQLResult = statement.getResult();
    if (result.data != null) {
        trace(result.data[0].country);
    }
} catch(error:Error) {
    trace("item", error.message);
}

Let’s make the same request again, passing the city as dynamic data:

getCountry("New York");

function getCountry(myCity:String):void {
    var statement:SQLStatement = new SQLStatement();
    statement.sqlConnection = connection;
    statement.text = "SELECT country FROM geography WHERE city = :ci";
    statement.parameters[":ci"] = myCity;
    try {
        statement.execute(1);
        var result:SQLResult = statement.getResult();
        if (result.data != null) {
            trace(result.data[0].country);
        }
    } catch(error:Error) {
        trace("item", error.message);
    }
}

Editing existing data

Existing data can be modified. In this example, we’re searching for the country United States and changing the city to Washington, DC. Figure 6-4 shows the result:

modifyItem("United States", "Washington DC");

function modifyItem(myCountry:String, myCity:String):void {
    var statement:SQLStatement = new SQLStatement();
    statement.sqlConnection = connection;
    var updateMessage:String =
        "UPDATE geography SET city = :ci WHERE country = :co";
    statement.text = updateMessage;
    statement.parameters[":co"] = myCountry;
    statement.parameters[":ci"] = myCity;
    try {
        statement.execute();
        trace("item updated");
    } catch(error:Error) {
        trace("item", error.message);
    }
}

The geography table with existing data modified

Now let’s look for the country France and delete the row that contains it (see Figure 6-5). We are using the DELETE FROM statement. Note that deleting a row does not modify the IDs of the other items. The ID of the deleted row is no longer usable:

deleteItem("France");

function deleteItem(myCountry:String):void {
    var statement:SQLStatement = new SQLStatement();
    statement.sqlConnection = connection;
    var deleteMessage:String = "DELETE FROM geography WHERE country = :co";
    statement.text = deleteMessage;
    statement.parameters[":co"] = myCountry;
    try {
        statement.execute();
        trace("item deleted");
    } catch(error:Error) {
        trace("item", error.message);
    }
}

The geography table with row 0 deleted

As you have seen, you can do a lot while working within a structure you create.

 

Camera Roll

The Camera Roll provides access to the camera’s gallery of images.

If your application requires the use of the device’s camera roll, you will need to select the WRITE_EXTERNAL_STORAGE permission when you are creating your project.

Let’s review the code below. First, you will notice that there is a private variable named cameraRoll declared, of type flash.media.CameraRoll. Within the application's applicationComplete handler, the code first checks whether the device supports access to the image gallery by reading the static supportsBrowseForImage property of the CameraRoll class. If this property returns true, a new instance of CameraRoll is created and event listeners of type MediaEvent.SELECT and ErrorEvent.ERROR are added to handle a successfully selected image (as well as any errors that may occur).

A Button with an event listener on the click event is used to allow the user to browse the image gallery. When the user clicks the BROWSE GALLERY button, the browseGallery method is called, which then opens the device’s image gallery. At this point, the user is redirected from your application to the native gallery application. Once the user selects an image from the gallery, she is directed back to your application, the MediaEvent.SELECT event is triggered, and the mediaSelected method is called. Within the mediaSelected method, the event.data property is cast to a flash.media.MediaPromise object. The mediaPromise.file.url property is then used to populate the Label and Image components that display the path to the image and the actual image to the user. Figure 4-6 shows the application and Figure 4-7 shows the application after a picture was selected from the gallery and the user has returned to the application:

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               applicationComplete="application1_applicationCompleteHandler(event)">
    <fx:Script>
        <![CDATA[
            import flash.events.ErrorEvent;
            import flash.events.MediaEvent;
            import flash.media.CameraRoll;
            import flash.media.MediaPromise;
            import mx.events.FlexEvent;

            private var cameraRoll:CameraRoll;

            protected function application1_applicationCompleteHandler
                    (event:FlexEvent):void {
                if (CameraRoll.supportsBrowseForImage) {
                    cameraRoll = new CameraRoll();
                    cameraRoll.addEventListener(MediaEvent.SELECT, mediaSelected);
                    cameraRoll.addEventListener(ErrorEvent.ERROR, onError);
                } else {
                    status.text = "CameraRoll NOT supported";
                }
            }

            private function browseGallery(event:MouseEvent):void {
                cameraRoll.browseForImage();
            }

            private function onError(event:ErrorEvent):void {
                trace("error has occurred");
            }

            private function mediaSelected(event:MediaEvent):void {
                var mediaPromise:MediaPromise = event.data as MediaPromise;
                status.text = mediaPromise.file.url;
                image.source = mediaPromise.file.url;
            }
        ]]>
    </fx:Script>
    <fx:Declarations>
        <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <s:Label id="status" text="Click Browse Gallery to select image" top="10"
             width="100%" textAlign="center"/>
    <s:Button width="300" height="60" label="BROWSE GALLERY"
              click="browseGallery(event)"
              enabled="{CameraRoll.supportsBrowseForImage}"
              top="80" horizontalCenter="0"/>
    <s:Image id="image" width="230" height="350" top="170" horizontalCenter="0"/>
</s:Application>

The Browse Gallery application