P2P Over a Remote Network

To use networking remotely, you need an RTMFP-capable server, such as Flash Media Server.

If you do not have access to such a server, Adobe provides a beta developer key to use its Cirrus service. Sign up at http://labs.adobe.com/technologies/cirrus/ to instantly receive a developer key and a URL.

The traditional streaming model requires clients to receive all data from a centralized server cluster. Scaling is achieved by adding more servers. Figure 15-2 shows traditional streaming/communication with the Unicast model and RTMFP in Flash Player/Cirrus.

Figure 15-2. Traditional streaming/communication with the Unicast model (left) and RTMFP in Flash Player 10.1/Cirrus 2 (right)

RTMFP, now in its second generation, supports application-level multicast. Multicasting is the process of sending messages as a single transmission from one source to the group where each peer acts as a relay to dispatch the data to the next peer. It reduces the load on the server and there is no need for a streaming server.

The Cirrus service is only for clients communicating directly. It has low latency and good security. It does not support shared objects or custom server-side programming. You could still use shared objects with Flash Media Server, but via the traditional client-server conduit.

NetGroup uses a ring topology. Its neighborCount property stores the number of peers. Each peer is assigned a peerID, which can be mapped to a group address using NetGroup.convertPeerIDToGroupAddress(connection.nearID). An algorithm runs every few seconds to update ring positions for the group.
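As a sketch, assuming a connected NetConnection named connection and a joined NetGroup named group, these values can be read once the group connects:

[code]

// assumes connection (NetConnection) and group (NetGroup) already exist
function onStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetGroup.Connect.Success") {
        trace(group.neighborCount); // number of known neighbors
        // map this client's peerID to its position on the ring
        var address:String =
            group.convertPeerIDToGroupAddress(connection.nearID);
        trace(address);
    }
}

[/code]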

When the group is connected, you can obtain statistics such as Quality of Service in bytes per second from NetGroup’s info property:

[code]

function onStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetGroup.Connect.Success") {
        trace(event.info.group);
        // NetGroupInfo object with Quality of Service statistics
    }
}

[/code]

The NetStream object is now equipped with new multicast properties. For instance, multicastWindowDuration specifies the duration in seconds of the peer-to-peer multicast reassembly window. A short value reduces latency but also quality.
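As an illustrative sketch (the values here are assumptions, not recommendations), tuning the reassembly window on a multicast NetStream might look like this:

[code]

// assumes stream is a NetStream created with a multicast-enabled groupspec
// a longer window favors reliability; a shorter one favors low latency
stream.multicastWindowDuration = 8;
// for a latency-sensitive live feed, you might try a smaller value:
stream.multicastWindowDuration = 2;

[/code]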

NetGroup is best used for applications with many-to-many communication. NetStream is for one-to-many or few-to-many streaming.

Communication can be done in different ways:

  • Posting is for lots of peers sending small messages.
  • Multicasting suits groups of any size where only a small number of peers are senders, and continuous/live data that is large over time.
  • Direct routing is for sending messages to specific peers in the group using methods such as sendToAllNeighbors, sendToNeighbor, and sendToNearest.
  • Object replication is for more reliable data delivery whereby information is sent in packets between clients and reassembled.

Matthew Kaufman explains this technology in depth in his MAX 2009 presentation, at http://tv.adobe.com/watch/max-2009-develop/p2p-on-the-flash-platform-with-rtmfp.

Simple Text Chat

This example is very similar to the one we created for P2P over a local network, except for a few minor, yet important, changes.

The connection is made to a remote server using the NetConnection object and RTMFP. If you have the Adobe URL and developer key, use them as demonstrated in the following code:

[code]

const SERVER:String = "rtmfp://" + YOUR_SERVER_ADDRESS;
const KEY:String = YOUR_DEVELOPER_KEY;

var connection:NetConnection = new NetConnection();
connection.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
connection.connect(SERVER, KEY);

[/code]

Connecting to a traditional streaming server would still use the URI construct as “rtmfp://server/application/instance” and additional optional parameters to connect, such as a login and password.
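As a hedged sketch, the extra arguments to connect are simply appended after the URI; the server name, application, and credentials below are placeholders:

[code]

// hypothetical example: connecting to a Flash Media Server application
// with optional credentials passed as extra arguments to connect()
var connection:NetConnection = new NetConnection();
connection.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
connection.connect("rtmfp://server/application/instance",
    "myLogin", "myPassword");

[/code]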

The GroupSpecifier now needs serverChannelEnabled set to true to use the Cirrus server, which helps in peer discovery. The postingEnabled property is still on to send messages. Specifying an IP multicast address is optional, but can help optimize the group topology if the group is large:

[code]

function onStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetConnection.Connect.Success") {
        var groupSpec:GroupSpecifier = new GroupSpecifier("chatGroup");
        groupSpec.postingEnabled = true;
        groupSpec.serverChannelEnabled = true;
        group = new NetGroup(connection,
            groupSpec.groupspecWithAuthorizations());
        group.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
    }
}

[/code]

The exchange of messages is very similar to the local example. Note that a post method is well suited for many peers sending small messages, as in a chat application that is not time-critical:

[code]

function sendMessage():void {
    var object:Object = new Object();
    object.user = "Véronique";
    object.message = "This is a chat message";
    object.time = new Date().time;
    group.post(object);
}

function onStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetGroup.Posting.Notify") {
        trace(event.info.message);
    }
}

[/code]

Multicast Streaming

This example demonstrates a video chat between one publisher and many receivers who help redistribute the stream to other receivers.

The application connects in the same way as in the previous example, but instead of a NetGroup, we create a NetStream to transfer video, audio, and messages.

Publisher

This is the code for the publisher sending the stream.

To access the camera, add the permission in your descriptor file:

[code]<uses-permission android:name="android.permission.CAMERA"/>[/code]

Set the GroupSpecifier and the NetStream. The GroupSpecifier needs to have multicastEnabled set to true to support streaming:

[code]

import flash.net.NetStream;

var outStream:NetStream;

function onStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetConnection.Connect.Success") {
        var groupSpec:GroupSpecifier = new GroupSpecifier("videoGroup");
        groupSpec.serverChannelEnabled = true;
        groupSpec.multicastEnabled = true;
        outStream = new NetStream(connection,
            groupSpec.groupspecWithAuthorizations());
        outStream.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
    }
}

[/code]

Once the NetStream is connected, add a reference to the camera and the microphone and attach them to the stream. A Video object displays the camera feed. Finally, call the publish method and pass the name of your choice for the video session:

[code]

function onStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetStream.Connect.Success") {
        var camera:Camera = Camera.getCamera();
        var video:Video = new Video();
        video.attachCamera(camera);
        addChild(video);
        outStream.attachAudio(Microphone.getMicrophone());
        outStream.attachCamera(camera);
        outStream.publish("remote video");
    }
}

[/code]

Recipients

The code for the peers receiving the video is similar, except for the few changes described next.

The incoming NetStream, used by the peers receiving the stream, must be created with the same GroupSpecifier as the publisher's stream. The same stream cannot be used for both sending and receiving:

[code]

var inStream:NetStream = new NetStream(connection,
groupSpec.groupspecWithAuthorizations());
inStream.addEventListener(NetStatusEvent.NET_STATUS, onStatus);

[/code]

The recipient needs a Video object but no reference to the microphone and the camera. The play method is used to stream the video in:

[code]

var video:Video = new Video();
addChild(video);
video.attachNetStream(inStream);
inStream.play("remote video");

[/code]

Sending and receiving data

Along with streams, NetStream can be used to send data. It is only an option for the publisher:

[code]

var object:Object = new Object();
object.type = "chat";
object.message = "hello";
outStream.send("onReceiveData", object);

[/code]

To receive data, the incoming stream must assign a NetStream.client for callbacks. Note that the onReceiveData function matches the first parameter passed in the publisher send call:

[code]

inStream.client = this;
function onReceiveData(object:Object):void {
trace(object.type, object.message); // chat, hello
}

[/code]

Closing a stream

Do not forget to remove the stream and its listener after it closes:

[code]

function onStatus(event:NetStatusEvent):void {
    switch (event.info.code) {
        case "NetStream.Connect.Closed" :
        case "NetStream.Connect.Failed" :
            onDisconnect();
            break;
    }
}

function onDisconnect():void {
    stream.removeEventListener(NetStatusEvent.NET_STATUS, onStatus);
    stream = null;
}

[/code]

End-to-End Stream

Another approach is for the publisher to send a separate stream to each receiver. This limits the number of users, but is the most efficient transmission with the lowest latency. No GroupSpecifier is needed for this mode of communication. In fact, this is no longer a group, but a one-to-one transfer or unidirectional NetStream channel.

Sending a peer-assisted stream

Set the connection parameter to NetStream.DIRECT_CONNECTIONS; the stream now has its bufferTime property set to 0 for maximum speed:

[code]

var outStream:NetStream =
    new NetStream(connection, NetStream.DIRECT_CONNECTIONS);
outStream.bufferTime = 0;
outStream.addEventListener(NetStatusEvent.NET_STATUS, onStatus);

var video:Video = new Video();
var camera:Camera = Camera.getCamera();
video.attachCamera(camera);
addChild(video);

outStream.attachAudio(Microphone.getMicrophone());
outStream.attachCamera(camera);
outStream.publish("privateVideo");

[/code]

When first connected, every peer is assigned a unique 256-bit peerID. Cirrus uses it to match it to your IP address and port number when other peers want to communicate with you, as in this example. nearID represents you:

[code]

var myPeerID:String;

function onStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetConnection.Connect.Success") {
        myPeerID = connection.nearID;
        trace(myPeerID);
        // 02024ab55a7284ad9d9d4586dd2dc8d2fa1b207e53118d93a34abc946836fa4
    }
}

[/code]

The receivers need the peerID of the publisher to subscribe, so the publisher needs a way to communicate that ID to others. In a professional application, you would use a web service or a remote SharedObject, but during development, or if you know the people you want to communicate with, you can send your peerID in the body of an email:

[code]

var myPeerID:String;

function onStatus(event:NetStatusEvent):void {
    if (event.info.code == "NetConnection.Connect.Success") {
        myPeerID = connection.nearID;
        navigateToURL(new URLRequest("mailto:FRIEND_EMAIL?subject=id&body=" +
            myPeerID));
    }
}

[/code]

The streams are not sent until another endpoint subscribes to the publisher’s stream.

Receiving a stream

In this example, the subscribers get the ID via email and copy its content into the system clipboard. Then they press the giveMe button:

[code]

var giveMe:Sprite = new Sprite();
giveMe.y = 100;
var g:Graphics = giveMe.graphics;
g.beginFill(0x0000FF);
g.drawRect(20, 20, 100, 75);
g.endFill();
giveMe.addEventListener(MouseEvent.CLICK, startStream);

[/code]

The startStream method gets the content of the clipboard and uses it to create the stream. The ID needs to be passed as the second parameter in the stream constructor:

[code]

function startStream(event:MouseEvent):void {
    var id:String =
        Clipboard.generalClipboard.getData(ClipboardFormats.TEXT_FORMAT) as String;
    var inStream:NetStream = new NetStream(connection, id);
    inStream.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
    var video:Video = new Video();
    addChild(video);
    inStream.play("privateVideo");
    video.attachNetStream(inStream);
}

[/code]

The publisher has control, if needed, over accepting or rejecting subscribers. When a subscriber attempts to receive the stream, the onPeerConnect method is invoked. Create an object to capture the call. The way to monitor whom to accept (or not) is completely a function of your application:

[code]

var farPeerID:String;

var outClient:Object = new Object();
outClient.onPeerConnect = onConnect;
outStream.client = outClient;

function onConnect(stream:NetStream):Boolean {
    farPeerID = stream.farID;
    return true; // return true to accept the subscriber, false to reject
}

[/code]

The publisher stream has a peerStreams property that holds all the subscribers for the publishing stream. Use NetStream.send() to send messages to all the recipients, or NetStream.peerStreams[0].send() for an individual user, here the first one in the list.
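As a sketch, assuming outStream is the publishing stream and each subscriber's stream client defines an onReceiveData callback, the two cases look like this:

[code]

// broadcast to every subscriber
outStream.send("onReceiveData", {message:"hello everyone"});

// message the first subscriber only
if (outStream.peerStreams.length > 0) {
    outStream.peerStreams[0].send("onReceiveData", {message:"hello you"});
}

[/code]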

NetConnection.maxPeerConnections returns the limit of peer streams, typically set to a maximum of eight.

Directed Routing

Directed routing is for sending data to a specific peer in a group. Peers can send each other messages if they know their counterpart's peerID. This feature only works in a group via NetGroup. It is not available via NetStream.

Sending a message

Individual messages can be sent from one neighbor to another using the NetGroup.sendToNeighbor method:

[code]

var groupSpec:GroupSpecifier = new GroupSpecifier("videoGroup");
groupSpec.postingEnabled = true;
groupSpec.serverChannelEnabled = true;
groupSpec.routingEnabled = true;

var netGroup:NetGroup = new NetGroup(connection,
    groupSpec.groupspecWithAuthorizations());
netGroup.addEventListener(NetStatusEvent.NET_STATUS, onStatus);

[/code]

The message is an Object. It needs a destination, which is the peer receiving the message; here, the peerID is converted to a group address. It also needs the message itself. Here, we added the time to make each message unique and a type to filter the conversation:

[code]

var message:Object = new Object();
var now:Date = new Date();
message.time = now.getHours() + "" + now.getMinutes() + "" + now.getSeconds();
message.destination = group.convertPeerIDToGroupAddress(peerID);
message.value = "south";
message.type = "direction";
group.sendToNearest(message, message.destination);

[/code]

Receiving a message

The recipient must be in the same group. The message arrives in an event with an info.code value of NetGroup.SendTo.Notify. The recipient checks whether the message is intended for her by testing event.info.fromLocal; if it is false, the message is passed along to the next neighbor until it reaches its destination:

[code]

function onStatus(event:NetStatusEvent):void {
    switch (event.info.code) {
        case "NetGroup.SendTo.Notify" :
            trace(event.info.fromLocal);
            // if true, this recipient is the intended destination
            var message:Object = event.info.message;
            if (message.type == "direction") {
                trace(message.value); // south
            }
            break;
    }
}

[/code]

Relay

A simple message relay service was introduced in January 2011. It is not intended for ongoing communication, but rather for a few introductory messages, and is a feature of the Cirrus service only. It requires that the sender knows the peerID of the recipient.

The sender requests a relay:

[code]

connection.call("relay", null, "RECIPIENT_ID", "hello");

[/code]

The recipient receives and responds to the relay:

[code]

connection.client = this;

function onRelay(senderID:String, message:String):void {
    trace(senderID); // ID of the sender
    trace(message); // "hello"
}

[/code]

Treasure Hunt

This treasure hunt game illustrates various aspects of this technology.

Referring to Figure 15-3, imagine the first user on the left walking outdoors looking for a treasure without knowing where it is. She streams a live video as she walks to indicate her progress. The second user from the left knows where the treasure is but is off-site. She guides the first user by pressing keys, representing the cardinal points, to send directions. Other peers (in the two screens toward the right) can watch the live stream and chat among themselves.

Figure 15-3. The Treasure Hunt activity; the panels shown here are (left to right) for the hunter walking, for the guide, and for users viewing the video and chatting over text

Review the sample code provided in this chapter to build such an application. We covered a one-to-many streaming example. We discussed chat in an earlier example. And we just went over sending direct messages.

As a final exercise, you can put all the pieces together to build a treasure hunt application. Good luck, and please post your results.

Other Multiuser Services

If you want to expand your application beyond what this service offers, several other options are available to set up communication between parties remotely, such as Adobe Media Server, Electrotank's ElectroServer, and gotoAndPlay()'s SmartFox. All of them require server setup and some financing.

ElectroServer was developed for multiplayer games and tools to build a multiplayer lobby system. One installation scales up to tens of thousands of connected game players with message rates of more than 100,000 messages per second. You can try a free 25-user license (see http://www.electrotank.com/). Server-side code requires Java or ActionScript 1. It supports AIR and Android.

SmartFox is a platform for developing massive multiuser games and was designed with simplicity in mind. It is fast and reliable and can handle tens of thousands of concurrent clients with low CPU and memory usage. It is well documented. You can try a full version with a free 100-user license (see http://www.smartfoxserver.com/). Server-side code requires Java. It supports AIR and Android.

 

 

RTMFP UDP

Peer-to-peer (P2P) communication is the real-time transfer of data and media between clients.

Real Time Media Flow Protocol (RTMFP) is an Adobe proprietary protocol. It enables peer-to-peer communication between applications running in Flash Player or the AIR runtime. It is meant to provide a low-latency, secure, peering network experience.

Real Time Messaging Protocol (RTMP) uses Transmission Control Protocol (TCP). RTMFP uses User Datagram Protocol (UDP), which provides better latency, higher security (128-bit AES encryption), and scalability. RTMFP is faster than RTMP, but does not guarantee perfect message ordering and delivery.

RTMFP was designed to connect via an RTMFP-capable server, such as the Cirrus service (RTMP uses Flash Media Server), but it can also be used over a local network using WiFi without the need for a server. Note that Flash Media Server 4.0 speaks RTMFP as well.

 

RTMP Streaming

Real Time Messaging Protocol (RTMP) is a protocol using a streaming server such as Flash Media Server or a streaming service such as Influxis or Flash Media Server for Amazon Web Services.

Streaming uses a lot of data. Inform your users to use WiFi so that it is not too costly and guarantees the best possible quality experience.

RTMP server

Let’s use our RTMP server to stream an on-demand video. As in progressive downloading, streaming uses a Video, a NetConnection, and a NetStream object.

NetConnection connects to the streaming server. The protocol used is rtmp. Note the streamClient variable. You need it for callbacks; otherwise, you will get a runtime error:

[code]static const SERVER:String = "rtmp://SERVER_URI/vod/";
static const VIDEO_PATH:String = "/myVideo";

var video:Video = new Video();
video.width = 480;
video.height = 320;

var connection:NetConnection = new NetConnection();
connection.addEventListener(NetStatusEvent.NET_STATUS, onNetEvent);
connection.connect(SERVER);

function onNetEvent(event:NetStatusEvent):void {
    if (event.info.code == "NetConnection.Connect.Success") {
        var stream:NetStream = new NetStream(connection);
        stream.addEventListener(NetStatusEvent.NET_STATUS, onStreamEvent);
        var streamClient:Object = new Object();
        streamClient.onMetaData = onMetaData;
        streamClient.onBWDone = onBWDone;
        stream.client = streamClient;
        video.attachNetStream(stream);
        stream.play(VIDEO_PATH);
        addChild(video);
    }
}

function onStreamEvent(event:NetStatusEvent):void {}
function onMetaData(info:Object):void {}
function onBWDone():void {}[/code]

It is a good idea to add listeners for connection errors while debugging and to inform your audience in case of an issue:

[code]connection.addEventListener(IOErrorEvent.IO_ERROR, onIOError);
connection.addEventListener(
    SecurityErrorEvent.SECURITY_ERROR, onSecurityError);
connection.addEventListener(AsyncErrorEvent.ASYNC_ERROR, onAsyncError);

function onIOError(event:IOErrorEvent):void {}
function onSecurityError(event:SecurityErrorEvent):void {}
function onAsyncError(event:AsyncErrorEvent):void {}[/code]

Local Flash Media Server

You can install a local Flash Media Server to run and test your applications. Change the path to your local server. To ensure the video plays, turn off .swf verifications on the server:

[code]static const SERVER:String = "rtmp://localhost/vod/";[/code]

Flash Media Server offers features to examine and monitor your video streams. The Quality of Service API, for instance, returns the user’s current bandwidth. You can extend the functionality of your video management by writing additional server code.

 

Playing Video

You can play videos running from your device or loaded remotely.

Embedded Video

You can embed a video in your application using Flash Professional. Embedded video will appear in the library as a symbol. Create a MovieClip and add the video content to it. You can then control its playback by calling the standard MovieClip navigation methods.

Using this approach is simple, but it has disadvantages. The video is compiled into the application and adds to its size. Also, it is always loaded in memory and cannot be removed.

As an alternative, you can embed the video in an external .swf file which you load using the Loader class.
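A minimal sketch of that alternative, assuming the video has been compiled into a hypothetical videoContent.swf, might look like this:

[code]

import flash.display.Loader;
import flash.net.URLRequest;
import flash.events.Event;

var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onLoaded);
loader.load(new URLRequest("videoContent.swf"));

function onLoaded(event:Event):void {
    addChild(loader);
    // unlike embedded video, the loader content can later be
    // unloaded to free memory
}

[/code]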

External Video

You can package the video with your application. It is placed in the application directory. The application will not display until all of the assets are loaded. You can also serve the video from a remote web server. The code is identical in both cases.

Progressive Video

To load video locally, you need to know the path of the file in your application directory.

NetConnection creates a connection with the local filesystem when calling its connect method. Pass a null parameter in its construction to indicate that it is not streaming.

Within the connection, NetStream opens the channel between AIR and the local filesystem. Pass the connection object as a parameter in its construction, and use its play method to receive video data. Note that this object needs its client property defined as well as the onMetaData method to prevent a runtime error.

The Video object displays the video data.

In this example, the Video object dimensions are hardcoded:

[code]

import flash.net.NetConnection;
import flash.net.NetStream;
import flash.media.Video;
import flash.events.NetStatusEvent;

var connection:NetConnection;
var video:Video;

video = new Video();
video.width = 480;
video.height = 320;

connection = new NetConnection();
connection.addEventListener(NetStatusEvent.NET_STATUS, netConnectionEvent);
connection.connect(null);

function netConnectionEvent(event:NetStatusEvent):void {
    event.target.removeEventListener(NetStatusEvent.NET_STATUS,
        netConnectionEvent);
    if (event.info.code == "NetConnection.Connect.Success") {
        var stream:NetStream = new NetStream(connection);
        stream.addEventListener(NetStatusEvent.NET_STATUS, netStreamEvent);
        var client:Object = new Object();
        client.onMetaData = onMetaData;
        stream.client = client;
        // attach the stream to the video to display
        video.attachNetStream(stream);
        stream.play("someVideo.flv");
        addChild(video);
    }
}

function netStreamEvent(event:NetStatusEvent):void {}
function onMetaData(info:Object):void {}
[/code]

At the time of this writing, video.smoothing is always false. This is consistent with AIR runtime default settings, but does not provide the best video experience. Setting video.smoothing to true does not change it.

SD card

You can play videos from the SD card. Playback is nearly as fast as playing back from the device.

You need to resolve the path to where the video is located before playing it. In this example, there is a directory called myVideos on the SD card and a video called myVideo inside it:

[code]

var videosPath:File = File.documentsDirectory.resolvePath("myVideos");
var videoName:String = "myVideo.mp4";
stream.play(videosPath.resolvePath(videoName).url);

[/code]

Browsing for video

You cannot use CameraRoll to browse for videos, but you can use the filesystem.

You could create a custom video player for the user to play videos installed on the device or on the SD card. The browseForOpen method triggers the filesystem to search for videos:

[code]

import flash.filesystem.File;
import flash.net.FileFilter;
import flash.media.Video;
var video:Video;
var filter:FileFilter = new FileFilter("video", "*.mp4;*.flv;*.mov;*.f4v");
var file:File = new File();
file.addEventListener(Event.SELECT, fileSelected);
file.browseForOpen("open", [filter]);

[/code]

At the time of this writing, it seems that only the FLV format is recognized when browsing the filesystem using AIR.

A list of the video files found appears. The following code is executed when the user selects one of the files. The video file is passed in the Event.SELECT event as file.target and is played using its url property. Note how the video is sized and displayed in the onMetaData function. We will cover this technique next:

[code]

import flash.net.NetConnection;
import flash.net.NetStream;
function fileSelected(event:Event):void {
video = new Video();
var connection:NetConnection = new NetConnection();
connection.connect(null);
var stream:NetStream = new NetStream(connection);
var client:Object = new Object();
client.onMetaData = onMetaData;
stream.client = client;
video.attachNetStream(stream);
stream.play(event.target.url);
}
function onMetaData(info:Object):void {
video.width = info.width;
video.height = info.height;
addChild(video);
}

[/code]

Metadata

The client property of NetStream is used to listen to onMetaData. In this example, we use the video stream width and height, received in the metadata, to scale the Video object. Other useful information is the duration, the frame rate, and the codec:

[code]

// define the stream client to receive callbacks
var client:Object = new Object();
client.onMetaData = onMetaData;
stream.client = client;

// attach the stream to the video
video.attachNetStream(stream);
stream.play("someVideo.flv");

// size the video object based on the metadata information
function onMetaData(info:Object):void {
    video.width = info.width;
    video.height = info.height;
    addChild(video);
    trace(info.duration);
    trace(info.framerate);
    trace(info.codec);
    for (var prop:String in info) {
        trace(prop, info[prop]);
    }
}

[/code]

Cue points

The FLVPlaybackComponent gives us the ability to add cue points to a video. The component listens to the current time code and compares it to a dictionary of cue points. When it finds a match, it dispatches an event with the cue point information.

The cue points come in two forms. Navigation cue points are used as markers for chapters or time-specific commentary. Event cue points are used to trigger events such as calling an ActionScript function. The cue point object looks like this:

[code]

var cuePoint:Object = {time:5, name:"cue1", type:"actionscript",
    parameters:{prop:value}};

[/code]

This component is not available in AIR for Android. If you want to use something similar, you need to write the functionality yourself. It can be a nice addition to bridge your video to your AIR content if you keep your cue points to a minimum. Use them sparsely, as they have an impact on performance.
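A minimal do-it-yourself version might poll the stream's playhead on a timer and compare it against a list of cue point objects shaped like the one shown earlier. This is only a sketch; it assumes an existing NetStream named stream, and the 250 ms polling interval is an arbitrary choice:

[code]

import flash.utils.Timer;
import flash.events.TimerEvent;

var cuePoints:Array = [
    {time:5, name:"cue1", type:"actionscript", parameters:{prop:"value"}}
];

var timer:Timer = new Timer(250); // keep the polling coarse for performance
timer.addEventListener(TimerEvent.TIMER, checkCuePoints);
timer.start();

function checkCuePoints(event:TimerEvent):void {
    for each (var cue:Object in cuePoints) {
        if (!cue.done && stream.time >= cue.time) {
            cue.done = true; // fire each cue point only once
            trace("cue point reached:", cue.name);
        }
    }
}

[/code]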

Cue points can be embedded dynamically server-side if you are recording the file on Flash Media Server.

Buffering

The moov atom, video metadata that holds index information, needs to be placed at the beginning of the file for a progressive file. Otherwise, the whole file needs to be completely loaded in memory before playing. This is not an issue for streaming. Look at Renaun Erickson’s wrapper to fix the problem, at http://renaun.com/blog/code/qtindexswapper/.

By default, the application uses an input buffer. To modify the default buffering time, use the following:

[code]var stream:NetStream = new NetStream(connection);
stream.bufferTime = 5; // value in seconds[/code]

When using a streaming server, managing bandwidth fluctuation is a good strategy:

[code]

var stream:NetStream = new NetStream(connection);
stream.addEventListener(NetStatusEvent.NET_STATUS, netStreamEvent);

function netStreamEvent(event:NetStatusEvent):void {
    var buffTime:Number;
    switch (event.info.code) {
        case "NetStream.Buffer.Full" :
            buffTime = 15.0;
            break;
        case "NetStream.Buffer.Empty" :
            buffTime = 2.0;
            break;
    }
    stream.bufferTime = buffTime;
}

[/code]

Read Fabio Sonnati’s article on using dual-threshold buffering, at http://www.adobe.com/devnet/flashmediaserver/articles/fms_dual_buffering.html.

Saving a Recording

Let’s now save your recording on the device. In the following examples, we are saving the audio files on the SD card. Your application needs permission to write to external storage. If you do not have this permission, AIR will throw a runtime error:

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>

The BLOB type

At the time of this writing, there is no native API for saving an MP3 file that can be played back in the application at runtime. As an alternative, you can save the bytes in an SQLite database as BLOB data. The BLOB type is raw binary data that stores information exactly as it was input.

In this section, we will compress the file to reduce its size. First, let’s create a database and a table to store the audio files:

import flash.data.SQLConnection;
import flash.events.SQLEvent;
import flash.data.SQLStatement;
import flash.errors.SQLError;
import flash.filesystem.File;

var connection:SQLConnection;

// open connection to database
connection = new SQLConnection();
connection.addEventListener(SQLEvent.OPEN, openDatabase);
var file:File = File.documentsDirectory.resolvePath("Dictaphone.db");
connection.open(file);

function openDatabase(event:SQLEvent):void {
    connection.removeEventListener(SQLEvent.OPEN, openDatabase);
    createTable();
}

// create or open table
function createTable():void {
    var statement:SQLStatement = new SQLStatement();
    statement.sqlConnection = connection;
    var request:String =
        "CREATE TABLE IF NOT EXISTS mySounds (" +
        "id INTEGER PRIMARY KEY AUTOINCREMENT, " +
        "audio BLOB )";
    statement.text = request;
    try {
        statement.execute();
    } catch(error:SQLError) {
        trace(error.message, error.details);
    }
}

Now we’ll compress the audio and save it in the database. Here we are using ZLIB compression, which provides good results but is somewhat slow to execute:

import flash.utils.CompressionAlgorithm;

var statement:SQLStatement;

function saveItem():void {
    // compress the bytes
    bytes.position = 0;
    bytes.compress(CompressionAlgorithm.ZLIB);
    var command:String =
        "INSERT INTO mySounds(audio) VALUES (?)";
    statement = new SQLStatement();
    statement.sqlConnection = connection;
    statement.text = command;
    statement.parameters[0] = bytes;
    try {
        statement.execute();
    } catch(error:SQLError) {
        trace(error.message, error.details);
    }
}

Retrieve the first audio item from the database, and decompress it to use it:

import flash.data.SQLResult;

function getItem(id:Number):ByteArray {
    var command:String = "SELECT * FROM mySounds WHERE id=:id;";
    var statement:SQLStatement = new SQLStatement();
    statement.sqlConnection = connection;
    statement.text = command;
    statement.parameters[":id"] = id;
    statement.execute(1);
    var result:SQLResult = statement.getResult();
    if (result.data != null) {
        // each row is an Object; return its audio column
        return result.data[0].audio as ByteArray;
    }
    return new ByteArray();
}

// to read the data back, decompress it
bytes = getItem(1);
bytes.uncompress(CompressionAlgorithm.ZLIB);
bytes.position = 0;
// play audio

Use the bytes to play the audio in a Sound object, as in the previous example.
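As a sketch of that playback step, assuming bytes holds the uncompressed 44.1 kHz mono samples captured from the microphone (one float per sample), a dynamic Sound object can feed them back out:

[code]

import flash.media.Sound;
import flash.events.SampleDataEvent;

var sound:Sound = new Sound();
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
sound.play();

function onSampleData(event:SampleDataEvent):void {
    // write up to 8192 sample pairs per callback
    for (var i:int = 0; i < 8192 && bytes.bytesAvailable > 0; i++) {
        var sample:Number = bytes.readFloat();
        event.data.writeFloat(sample); // left channel
        event.data.writeFloat(sample); // right channel
    }
    // when no samples are written, playback ends
}

[/code]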

WAV files

You can save your recording as a WAV file on your device. Download the Adobe.audio.format.WAVWriter class from the audio_sampler.zip file located at http://www.adobe.com/devnet/air/flex/articles/using_mic_api.html, and import it to your project.

In this example, we are encoding our previous recording as a WAV file and saving it on the SD card in a directory called mySounds.

import com.adobe.audio.format.WAVWriter;
import flash.filesystem.File;
import flash.filesystem.FileStream;
import flash.filesystem.FileMode;

function saveWav(bytes:ByteArray):void {
    // point to the mySounds directory on the SD card
    var directory:File = File.documentsDirectory.resolvePath("mySounds");
    // if the directory does not exist yet, create it
    if (!directory.exists) {
        directory.createDirectory();
    }
    // create the name of the new wav file
    var file:File = directory.resolvePath("mySound.wav");
    // create an instance of the WAVWriter class and set its properties
    var wav:WAVWriter = new WAVWriter();
    wav.numOfChannels = 1; // mono
    wav.sampleBitRate = 16; // or 8
    wav.samplingRate = 44100; // or 22000
    // rewind to the beginning of the ByteArray
    bytes.position = 0;
    // create a stream as a conduit to copy the data and write the file
    var stream:FileStream = new FileStream();
    stream.open(file, FileMode.WRITE);
    // convert the ByteArray to WAV format and close the stream
    wav.processSamples(stream, bytes, 44100, 1);
    stream.close();
}

Open source libraries

The current native libraries cannot load a WAV file dynamically or encode a ByteArray as an MP3 file. As an alternative, you can try some of the available open source libraries.

For instance, Shine, written by Gabriel Bouvigné, is an Alchemy/Flash MP3 encoder (see https://github.com/kikko/Shine-MP3-Encoder-on-AS3-Alchemy and http://code.google.com/p/flash-kikko/):

import fr.kikko.lab.ShineMP3Encoder;

var encoder:ShineMP3Encoder = new ShineMP3Encoder(bytes);
encoder.addEventListener(Event.COMPLETE, onEncoding);
encoder.addEventListener(ProgressEvent.PROGRESS, onProgress);
encoder.addEventListener(ErrorEvent.ERROR, onError);
encoder.start();

file.save(encoder.mp3Data, "recording.mp3");

In addition, the following WAV decoders are also available:

  • AS3WavSound (http://www.ohloh.net/p/as3wavsound)
  • standingwave3 (http://maxl0rd.github.com/standingwave3/)
  • Ogg/Vorbis (http://vorbis.com/software/)
  • Tonfall (http://code.google.com/p/tonfall/; this is also an encoder)

Saving to a remote server

If you have access to a streaming media server such as Flash Media Server, you can save and stream audio to the device. The microphone can be attached to a NetStream for uploading. Audio data can also be streamed from the server and played back using a Video object.
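A sketch of the upload side, assuming a connected NetConnection named connection; the stream name "audio" and the record mode are assumptions:

[code]

var mic:Microphone = Microphone.getMicrophone();
var ns:NetStream = new NetStream(connection);
ns.attachAudio(mic);
ns.publish("audio", "record"); // record saves the stream on the server

[/code]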

Two compression codecs are available:

import flash.media.SoundCodec;

mic.codec = SoundCodec.NELLYMOSER; // default
mic.codec = SoundCodec.SPEEX;

If you are using this technology, urge your audience to use a WiFi connection over 3G unless they have a flat-fee data plan.