Case Study

The Album Application

In the mobile version of the AIR application, the user takes a picture or pulls one from the camera roll. She can save it to a dedicated database, along with an audio caption and geolocation information. The group of saved images is viewable in a scrollable menu. Images can be sent from the device to the desktop via a wireless network.

In the desktop version, the user can see an image at full resolution on a large screen. It can be saved to the desktop, and it can also be edited and uploaded to a photo service.

Please download the two applications from this book’s website at http://oreilly.com/catalog/9781449394820.

Design

The design is simple, using primary colors and crisp type. The project was not developed for flexible layout. It was created at 800×480 resolution with auto-orientation turned off; you can use it as a base from which to experiment developing for other resolutions. The art is provided in a Flash movie to use in Flash Professional or as an .swc file to import into Flash Builder by selecting Properties→ActionScript Build Path→Library Path and clicking on “Add swc.”

Architecture

The source code consists of the Main document class and the model, view, and events packages (see Figure 17-1).

The model package contains the AudioManager, the SQLManager, the GeoService, and the PeerService. The view package contains the NavigationManager and the various views. The events package contains the various custom events.

The SQLManager class is static, so it can be accessed from anywhere in the application without instantiation. The other model classes are passed by reference.

Flow

The flow of the application is straightforward. The user goes through a series of simple tasks, one step at a time. In the opening screen, the OpeningView, the user can select a new picture or go to the group menu of images, as shown in Figure 17-2.

From the AddView page, the user can open the Media Gallery or launch the camera, as shown in Figure 17-3. Both choices go to the same CameraView view. An id parameter is passed to the new view to determine the mode.

The image data received from either of the sources is resized to fit the dimensions of the stage. The user can take another picture or accept the current photograph if satisfied. The image URL is sent to the SQLManager to store in its class variable currentPhoto of type Object, and the application goes to the CaptionView.

In the CaptionView, the user can skip the caption-recording step or launch the AudioManager. The recording is limited to four seconds and automatically plays the caption sound back. The user can record again or choose to keep the audio. The AudioManager compresses the recording as a WAV file and saves it on the SD card. Its URL is saved in the SQLManager’s currentPhoto object. The next step is to add geographic information.

Figure 17-1. Packages and classes for the Album application
Figure 17-2. The OpeningView

In the GeoView, the user can skip or launch the GeoService. This service creates an instance of the GeoLocation, fetches coordinates, and then requests the corresponding city and country from the Yahoo! API. As in the previous steps, the geodata is saved in the SQLManager’s currentPhoto object. These three steps are shown in Figure 17-4.

Figure 17-3. The AddView, native camera application, and Media Gallery
Figure 17-4. The CameraView, CaptionView, and GeoView

In the SavingView mode, data saving can be skipped or the data can be saved. For the latter, the SQLManager opens an SQL connection and saves the data, then closes the connection. The application goes back to the OpeningView.

Back at our starting point, another navigation choice is the Group menu. The MenuView page requests the number of images saved from the SQLManager and displays them as a list of items. If the list height is taller than the screen, it becomes scrollable. Selecting one of the items takes the user to the PhotoView screen. The SavingView page and MenuView page are shown in Figure 17-5.

The PhotoView displays the image selected in the MenuView. Choosing to connect calls the PeerService to set up a P2P connection using the WiFi network. Once it is established, the data is requested from the SQLManager using the item ID. The data is then sent. It includes the byteArray from the image, a WAV file for the audio, and the city and country as text. These steps are displayed in Figure 17-6.

Figure 17-5. The SavingView and MenuView
Figure 17-6. The PhotoView and the steps to send the picture information using a P2P connection

Permissions

This application needs the following permissions to access the Internet, write to the SD card, and access GPS sensors, the camera, and the microphone:

[code]

<android>
<manifestAdditions>
<![CDATA[
<manifest>
<uses-permission
android:name="android.permission.INTERNET"/>
<uses-permission
android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission
android:name="android.permission.ACCESS_FINE_LOCATION"/>
<uses-permission
android:name="android.permission.ACCESS_COARSE_LOCATION"/>
<uses-permission
android:name="android.permission.CAMERA"/>
<uses-permission
android:name="android.permission.RECORD_AUDIO"/>
</manifest>
]]>
</manifestAdditions>
</android>

[/code]

Navigation

The navigation relies on a ViewManager class almost identical to the one discussed earlier in this book. The flow is a step-by-step process whereby the user can choose to skip the steps that are optional.

Images

The CameraView is used to get an image, either by using the media library or by taking one using the camera. The choice is based on a parameter passed from the previous screen. The process of receiving the bytes, scaling, and displaying the image is the same regardless of the image source. It is done by a utility class called BitmapDataSizing and is based on the dimensions of the screen.
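
As an indication of the kind of work BitmapDataSizing does, here is a minimal proportional-scaling sketch; the function name and the usage line are illustrative, not the actual class API:

[code]

import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.geom.Matrix;

// Scale a source BitmapData proportionally so it fits within the
// given dimensions, and return it wrapped in a Bitmap.
function sizeToStage(source:BitmapData, maxWidth:Number, maxHeight:Number):Bitmap {
var ratio:Number = Math.min(maxWidth/source.width, maxHeight/source.height);
var scaled:BitmapData = new BitmapData(
Math.round(source.width*ratio), Math.round(source.height*ratio), false);
var matrix:Matrix = new Matrix();
matrix.scale(ratio, ratio);
scaled.draw(source, matrix, null, null, null, true);
return new Bitmap(scaled);
}

// for example, inside CameraView once the bytes are decoded:
// addChild(sizeToStage(loadedBitmapData, stage.stageWidth, stage.stageHeight));

[/code]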

To improve this application, check if an image is already saved when the user selects it again to avoid duplicates.

Audio

The audio caption is a novel way to save a comment along with the image. There is no image service that provides the ability to package an audio commentary, but you could build such an application.

Recordings are compressed as WAV files using the Adobe class WAVReader and are then extracted using a third-party library. Here, we create an Album directory on the SD card and a mySounds directory inside it to store the WAV files.
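
Below is a sketch of the file handling only, assuming the encoded WAV bytes are already available in a ByteArray; the function and file names are placeholders. On Android, File.documentsDirectory maps to the SD card:

[code]

import flash.filesystem.File;
import flash.filesystem.FileStream;
import flash.filesystem.FileMode;
import flash.utils.ByteArray;

// Create the Album/mySounds folders if needed, write the encoded
// WAV bytes to a file, and return its path for currentPhoto.audio.
function saveWav(wavBytes:ByteArray, fileName:String):String {
var soundsDir:File = File.documentsDirectory.resolvePath("Album/mySounds");
soundsDir.createDirectory(); // does nothing if the folders already exist

var wavFile:File = soundsDir.resolvePath(fileName);
var stream:FileStream = new FileStream();
stream.open(wavFile, FileMode.WRITE);
stream.writeBytes(wavBytes);
stream.close();

return wavFile.nativePath;
}

[/code]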

Reverse Geolocation

Reverse geolocation is the process of using geographical coordinates to get an address location such as a city and a street address.

In this application, we are only interested in the city name and country. Therefore, coarse data is sufficient. We do not need to wait for the GPS data to stabilize. As soon as we get a response from the Yahoo! service, we move on to the next step.
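
The sequence looks roughly like the following sketch. The Geolocation calls are standard AIR APIs; the Yahoo! request URL, its parameters, and YOUR_APP_ID are placeholders you would replace with the actual service endpoint and your own key:

[code]

import flash.sensors.Geolocation;
import flash.events.GeolocationEvent;
import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.events.Event;

var geolocation:Geolocation = new Geolocation();
geolocation.addEventListener(GeolocationEvent.UPDATE, onTravel);

function onTravel(event:GeolocationEvent):void {
// coarse data is enough, so use the first reading and stop listening
geolocation.removeEventListener(GeolocationEvent.UPDATE, onTravel);
reverseGeocode(event.latitude, event.longitude);
}

function reverseGeocode(latitude:Number, longitude:Number):void {
// placeholder URL: substitute the actual Yahoo! reverse-geocoding endpoint
var url:String = "http://where.yahooapis.com/geocode?gflags=R&flags=J" +
"&location=" + latitude + "," + longitude + "&appid=YOUR_APP_ID";
var loader:URLLoader = new URLLoader();
loader.addEventListener(Event.COMPLETE, onCityCountry);
loader.load(new URLRequest(url));
}

function onCityCountry(event:Event):void {
// parse the response, keep only the city and country,
// and store them in SQLManager's currentPhoto.geo
trace(event.target.data);
}

[/code]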

SQLite

SQLManager is a static class, so it can be accessed from anywhere in the application. It holds an object which stores information related to a photo until it is complete and ready to be saved:

[code]

var currentPhoto:Object = {photo:"", audio:"", geo:""};

[/code]

The photo property stores the path to where the image is saved in the Gallery. The audio property stores the path to where the WAV file is located and the geo property stores a string with city and country information.

From the SavingView view, the object is saved in the myAlbum.db file on the SD card.
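
A simplified sketch of that save step is shown below; the table and column names are assumptions, and the actual SQLManager may organize this differently:

[code]

import flash.data.SQLConnection;
import flash.data.SQLStatement;
import flash.filesystem.File;

// Open the database on the SD card, insert the currentPhoto data, and close.
// Assumes the albumTable table was created beforehand (CREATE TABLE IF NOT EXISTS ...).
function addItem(currentPhoto:Object):void {
var file:File = File.documentsDirectory.resolvePath("Album/myAlbum.db");
var connection:SQLConnection = new SQLConnection();
connection.open(file); // creates the file the first time

var statement:SQLStatement = new SQLStatement();
statement.sqlConnection = connection;
statement.text =
"INSERT INTO albumTable (photo, audio, geo) VALUES (:photo, :audio, :geo)";
statement.parameters[":photo"] = currentPhoto.photo;
statement.parameters[":audio"] = currentPhoto.audio;
statement.parameters[":geo"] = currentPhoto.geo;
statement.execute();

connection.close();
}

[/code]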

P2P Connection

The peer-to-peer connection is used to send the image, audio caption, and location over a LAN. This example demonstrates the potential of the technology more than a practical use case, because the transfer is slow unless the information is sent in packets and reassembled. The technology is feasible for fairly small amounts of data and has a lot of potential for gaming and social applications.

Once the user has selected an image, she can transfer it to a companion desktop application from the PhotoView view. The PeerService class handles the communication to the LAN and the posting of data.
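
Packaging the three pieces of data into a single message, as PeerService might do, could look like the following sketch; the group object is the NetGroup set up as shown in the next section, and the property names are illustrative:

[code]

import flash.utils.ByteArray;

// Post the picture, its audio caption, and the location text to the group.
function sendPhotoData(imageBytes:ByteArray, wavBytes:ByteArray, geo:String):void {
var message:Object = new Object();
message.type = "photo";
message.image = imageBytes; // the picture as a ByteArray
message.audio = wavBytes;   // the WAV file as a ByteArray
message.geo = geo;          // city and country as text
message.time = new Date().getTime(); // makes each message unique
group.post(message);
}

[/code]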

Scrolling Navigation

The MenuView that displays the images saved in the database has scrolling capability if the content is larger than the height of the screen.

There are two challenges to address. The first is the performance of a potentially large list to scroll. The second is the overlapping interactive objects. The scrollable list contains elements that also respond to a mouse event. Both functionalities need to work without conflicting.

We only need to scroll the view if the container is larger than the device height, so there is no need to add unnecessary code. Let’s check its dimensions in the onShow method:

[code]

function onShow():void {
deviceHeight = stage.stageHeight;
container = new Sprite();
addChild(container);
// populate container
if (container.height > deviceHeight) {
trace("we need to add scrolling functionality");
}
}

[/code]

If the container sprite is taller than the screen, let’s add the functionality to scroll. Touch events do not perform as well as mouse events. Because we only need one touch point, we will use a mouse event to detect the user interaction. Note that we set cacheAsBitmap to true on the container to improve rendering:

[code]

function onShow():void {
if (container.height > deviceHeight) {
container.cacheAsBitmap = true;
stage.addEventListener(MouseEvent.MOUSE_DOWN,
touchBegin, false, 0, true);
stage.addEventListener(MouseEvent.MOUSE_UP,
touchEnd, false, 0, true);
}
}

[/code]

To determine if the mode is to scroll or to click an element that is part of the container, we start a timeout. We will see later why we need this timer in relation to the elements:

[code]

import flash.utils.setTimeout;
var oldY:Number = 0.0;
var newY:Number = 0.0;
var timeout:uint;
function touchBegin(event:MouseEvent):void {
oldY = event.stageY;
newY = event.stageY;
timeout = setTimeout(startMove, 400);
}
}

[/code]

When the time expires, we set the mode to scrollable by calling the startMove method. We want to capture the position change on MOUSE_MOVE but only need to render the change to the screen on ENTER_FRAME. This guarantees a smoother and more consistent motion. updateAfterEvent should never be used in mobile development because it is too demanding for devices:

[code]

function startMove():void {
stage.addEventListener(MouseEvent.MOUSE_MOVE,
touchMove, false, 0, true);
stage.addEventListener(Event.ENTER_FRAME, frameEvent, false, 0, true);
}

[/code]

When the finger moves, we update the value of the newY coordinate:

[code]

function touchMove(event:MouseEvent):void {
newY = event.stageY;
}

[/code]

On the enterFrame event, we render the screen using the new position. The container is moved according to the new position. To improve performance, we show and hide the elements that are not in view using predefined bounds:

[code]

var totalChildren:int = container.numChildren;
var topBounds:int = -30;
function frameEvent(event:Event):void {
if (newY != oldY) {
var newPos:Number = newY - oldY;
oldY = newY;
container.y += newPos;
for (var i:int = 0; i < totalChildren; i++) {
var mc:MovieClip = container.getChildAt(i) as MovieClip;
var pos:Number = container.y + mc.y;
mc.visible = (pos > topBounds && pos < deviceHeight);
}

}
}

[/code]

On touchEnd, the listeners are removed:

[code]

function touchEnd(event:MouseEvent):void {
stage.removeEventListener(MouseEvent.MOUSE_MOVE, touchMove);
stage.removeEventListener(Event.ENTER_FRAME, frameEvent);
}

[/code]

As mentioned before, elements in the container have their own mouse event listeners:

[code]

element.addEventListener(MouseEvent.MOUSE_DOWN, timeMe, false, 0, true);
element.addEventListener(MouseEvent.MOUSE_UP, clickAway, false, 0, true);

[/code]

On mouse down, the boolean variable isMoving is set to false and the visual cue indicates that the element was selected:

[code]

var isMoving:Boolean = false;
var selected:MovieClip;
function timeMe(event:MouseEvent):void {
isMoving = false;
selected = event.currentTarget as MovieClip;
selected.what.textColor = 0x336699;
}

[/code]

On mouse up and within the time allowed, the stage listeners and the timeout are removed. If the boolean isMoving is still set to false and the target is the selected item, the application navigates to the next view:

[code]

import flash.utils.clearTimeout;
function clickAway(event:MouseEvent):void {
touchEnd(event);
clearTimeout(timeout);
if (selected == event.currentTarget && isMoving == false) {
dispatchEvent(new ClickEvent(ClickEvent.NAV_EVENT,
{view:"speaker", id:selected.id}));
}
}

[/code]

Now let’s add to the frameEvent code to handle deactivating the element when scrolling. Check that an element was pressed (the selected variable holds a value) and that the motion is more than two pixels; this accounts for screens that are very responsive. If both conditions are met, change the boolean value, reset the look of the element, and set the selected variable to null:

[code]

function frameEvent(event:Event):void {
if (newY != oldY) {
var newPos:Number = newY - oldY;
oldY = newY;
container.y += newPos;
for (var i:int = 0; i < totalChildren; i++) {
var mc:MovieClip = container.getChildAt(i) as MovieClip;
var pos:Number = container.y + mc.y;
mc.visible = (pos > topBounds && pos < deviceHeight);
}
if (selected != null && Math.abs(newPos) > 2) {
isMoving = true;
selected.what.textColor = 0x000000;
selected = null;
}
}
}

[/code]

There are various approaches to handle scrolling. For a large number of elements, the optimal way is to only create as many element containers as are visible on the screen and populate their content on the fly. Instead of moving a large list, move the containers as in a carousel animation and update their content by pulling the data from a Vector or other form of data content.
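
The following sketch illustrates the idea for the downward direction only; the row height, the TextField content, and the data Vector are assumptions, and container and deviceHeight are the same variables used in the earlier snippets:

[code]

import flash.display.Sprite;
import flash.text.TextField;

var data:Vector.<String> = new Vector.<String>(); // one entry per saved image
var itemHeight:Number = 80;                        // assumed row height
var rowCount:int = int(Math.ceil(deviceHeight / itemHeight)) + 1;
var rows:Vector.<Sprite> = new Vector.<Sprite>();
var firstIndex:int = 0;                            // data index shown by the top row

// create only enough rows to cover the screen
function buildRows():void {
for (var i:int = 0; i < rowCount && i < data.length; i++) {
var row:Sprite = new Sprite();
var label:TextField = new TextField();
label.text = data[i];
row.addChild(label);
row.y = i * itemHeight;
container.addChild(row);
rows.push(row);
}
}

// called from frameEvent: the container still moves,
// but only a screenful of rows ever exists
function recycleRows():void {
while (rows.length > 0 &&
container.y + rows[0].y + itemHeight < 0 &&
firstIndex + rowCount < data.length) {
var recycled:Sprite = rows.shift();
recycled.y = rows[rows.length - 1].y + itemHeight;
rows.push(recycled);
firstIndex++;
TextField(recycled.getChildAt(0)).text = data[firstIndex + rowCount - 1];
}
}

[/code]

Scrolling back up would recycle rows in the opposite direction, following the same pattern.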

If you are using Flash Builder and components, look at the Adobe lighthouse package (http://www.adobe.com/devnet/devices/fpmobile.html). It contains DraggableVerticalContainer for display objects and DraggableVerticalList for items.

Desktop Functionality

The AIR desktop application, as shown in Figure 17-7, is set to receive the data and display it. Seeing a high resolution on a large screen demonstrates how good the camera quality of some devices can be. The image can be saved on the desktop as a JPEG.
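
Saving could be sketched as follows, assuming a runtime recent enough to provide BitmapData.encode(); older projects would use a third-party JPEG encoder instead, and the file name here is a placeholder:

[code]

import flash.display.BitmapData;
import flash.display.JPEGEncoderOptions;
import flash.filesystem.File;
import flash.filesystem.FileStream;
import flash.filesystem.FileMode;
import flash.geom.Rectangle;
import flash.utils.ByteArray;

// Encode the received BitmapData as JPEG and write it to the desktop.
function saveAsJPEG(image:BitmapData, fileName:String):void {
var jpegBytes:ByteArray = image.encode(
new Rectangle(0, 0, image.width, image.height),
new JPEGEncoderOptions(90));

var file:File = File.desktopDirectory.resolvePath(fileName);
var stream:FileStream = new FileStream();
stream.open(file, FileMode.WRITE);
stream.writeBytes(jpegBytes);
stream.close();
}

[/code]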

Figure 17-7. AIR desktop companion application to display images received from the device

Another technology, not demonstrated here, is Pixel Bender, used for image manipulation. It is not available for AIR for Android but is for AIR on the desktop. So this would be another good use case where devices and the desktop can complement one another.

P2P Over a Local Network

If your local network supports broadcasting, you can create peer-to-peer direct routing. All the clients need to be on the same subnet, but you do not need to manage them. Verify that your devices have WiFi enabled and are using the same network.

The code to create a peer-to-peer application with RTMFP is quite simple but introduces new concepts. Let’s go over all the steps one at a time.

The connection is established using the flash.net.NetConnection class. Set a listener to receive a NetStatusEvent event. Create the connection by calling the connect function and passing rtmfp as an argument:

[code]

import flash.net.NetConnection;
import flash.events.NetStatusEvent;
var connection:NetConnection = new NetConnection();
connection.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
connection.connect("rtmfp:");

[/code]

Wait for the connection to be established. Then several objects need to be created:

[code]

function onStatus(event:NetStatusEvent):void {
switch(event.info.code) {
case "NetConnection.Connect.Success" :
trace("I am connected");
// object creation can now happen
break;
}
}

[/code]

NetGroup is the group of peers. Its capabilities are defined in the GroupSpecifier. The IPMulticastAddress property stores the IPv4 multicast address. It needs to be in the range 224.0.0.0 through 239.255.255.255. The UDP port should be higher than 1024. A group name is passed in its constructor. Try to make it unique. The ipMulticastMemberUpdatesEnabled property must be set to true for clients to receive updates from other clients on a LAN. The postingEnabled property allows clients to send messages to the group:

[code]

import flash.net.GroupSpecifier;
var groupName:String = "com.veronique.simple/";
var IPMulticastAddress:String = "230.0.0.1:3000";
var groupSpec:GroupSpecifier = new GroupSpecifier(groupName);
groupSpec.addIPMulticastAddress(IPMulticastAddress);
groupSpec.ipMulticastMemberUpdatesEnabled = true;
groupSpec.postingEnabled = true;

[/code]

Now create the NetGroup. Pass the connection and the GroupSpecifier in its constructor. The latter is passed with an authorization property to define the communication allowed: groupspecWithAuthorizations to post and multicast, or groupspecWithoutAuthorizations to only receive messages. Note that this setting is only relevant if a posting password is set (as defined by your application):

[code]

import flash.net.NetGroup;
var netGroup:NetGroup = new NetGroup
(connection, groupSpec.groupspecWithAuthorizations());
netGroup.addEventListener(NetStatusEvent.NET_STATUS, onStatus);

[/code]

The group is composed of neighbors: you, as well as others. Using the same NetStatusEvent event, check for its info.code. Wait to receive the NetGroup.Connect.Success event before using the functionality of NetGroup, to avoid getting an error.

When a user joins or leaves the group, the code is as follows:

[code]

function onStatus(event:NetStatusEvent):void {
switch(event.info.code) {
case "NetGroup.Connect.Success" :
trace("I joined the group");
break;
case "NetGroup.Connect.Rejected" :
case "NetGroup.Connect.Failed" :
trace("I am not a member");
break;
}
}

[/code]

Others in the group receive the following events. Note that if the group is large, only a subset of members is informed that a new peer has joined or left the group:

[code]

function onStatus(event:NetStatusEvent):void {
switch(event.info.code) {
case "NetGroup.Neighbor.Connect" :
trace("neighbor has arrived", netGroup.neighborCount);
break;
case "NetGroup.Neighbor.Disconnect" :
trace("neighbor has left");
break;
}
}

[/code]

To send a message, use the NetGroup.post method. It takes an Object as an argument. Messages are serialized in AMF (binary format for serialized ActionScript objects), so a variety of data types can be used, such as Object, Number, Integer, and String types:

[code]

var message:Object = new Object();
message.type = "testing";
message.body = {name:"Véronique", greeting:"Bonjour"};
netGroup.post(message);

[/code]

To receive messages, check for an info.code equal to a NetGroup.Posting.Notify event. The message is received as event.info.message. The message is not distributed to the sender:

[code]

function onStatus(event:NetStatusEvent):void {
switch(event.info.code) {
case "NetGroup.Posting.Notify" :
trace(event.info.message); // [Object]
trace(event.info.message.body.greeting); // Bonjour
break;
}
}

[/code]

Identical messages are not re-sent. To make each message unique, store the current time as a property of the object:

[code]

var now:Date = new Date();
message.time = now.getHours() + "_" + now.getMinutes() +
"_" + now.getSeconds();
netGroup.post(message);

[/code]

If the message only goes in one direction and there will be no overlap between clients, you could use a counter that gets incremented with every new message:

[code]message.count = count++;[/code]

When disconnecting, it is important to remove all objects and their listeners:

[code]

function onStatus(event:NetStatusEvent):void {
switch(event.info.code) {
case "NetConnection.Connect.Rejected" :
case "NetConnection.Connect.AppShutdown" :
trace("I am not connected");
onDisconnect();
break;
}
}
function onDisconnect():void {
netGroup.removeEventListener(NetStatusEvent.NET_STATUS, onStatus);
netGroup = null;
connection.removeEventListener(NetStatusEvent.NET_STATUS, onStatus);
connection = null;
}

[/code]

Color Exchange

Let’s create a simple example. The hueMe application starts with a shape of a random color. Each client can send a color value to the other client’s application. On the receiving end, the shape changes to the new color (see Figure 15-1).

Figure 15-1. The hueMe application

Draw the initial colored sprite:

[code]

var sprite:Sprite = new Sprite();
var g:Graphics = sprite.graphics;
g.beginFill(Math.round(Math.random()*0xFFFFFF));
g.drawRect(20, 20, 200, 150);
g.endFill();

[/code]

Create the connection for the P2P communication:

[code]

var connection:NetConnection = new NetConnection();
connection.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
connection.connect("rtmfp:");

[/code]

Once the connection is established, create the group and check that the user has successfully connected to it:

[code]

var group:NetGroup;
function onStatus(event:NetStatusEvent):void {
if (event.info.code == "NetConnection.Connect.Success") {
var groupSpec:GroupSpecifier = new GroupSpecifier("colorGroup");
groupSpec.addIPMulticastAddress("225.0.0.1:4000");
groupSpec.postingEnabled = true;
groupSpec.ipMulticastMemberUpdatesEnabled = true;
group = new NetGroup(connection,
groupSpec.groupspecWithAuthorizations());
group.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
} else if (event.info.code == "NetGroup.Connect.Success") {
trace("I am part of the group");
}
}

[/code]

Send a random color value to the group when clicking the sprite:

[code]

sprite.addEventListener(MouseEvent.CLICK, hueYou);
function hueYou(event:MouseEvent):void {
var randomHue:int = Math.round(Math.random()*0xFFFFFF);
var object:Object = {type:"color", hue:randomHue};
group.post(object);
}

[/code]

Finally, add the functionality to receive the value from other members of the group and color the sprite:

[code]

import flash.geom.ColorTransform;
function onStatus(event:NetStatusEvent):void {

if (event.info.code == "NetGroup.Posting.Notify") {
if (event.info.message.type == "color") {
applyColor(Number(event.info.message.hue));
}
}
}
function applyColor(hue:int):void {
var colorTransform:ColorTransform = new ColorTransform();
colorTransform.color = hue;
sprite.transform.colorTransform = colorTransform;
}

[/code]

Companion AIR Application

To make your application unidirectional, as in a remote control-style application, have one client sending messages and the other receiving messages. Only the networked clients registered for the NetGroup.Posting.Notify event receive data.

Mihai Corlan developed an Android remote control for a desktop MP3 player; read about it at http://corlan.org/2010/07/02/creating-multi-screen-apps-for-android-and-desktop-using-air/.

Tom Krcha created a remote controller to send accelerometer, speed, and brake information to a car racing game (see http://www.flashrealtime.com/game-remote-device-controller/).

Multicast Operation

As noted above, the backbone may consist of (i) a pure IP network or (ii) a mixed satellite transmission link to a metropolitan headend that, in turn, uses a metropolitan (or regional) telco IP network. Applications such as video are very sensitive to end-to-end delay, jitter, and (uncorrectable) packet loss; QoS considerations are critical. These networks tend to have fewer hops, and pruning may be somewhat trivially implemented by making use of a simplified network topology.

At the logical level, there are three types of communication between systems in a(n IP) network:

  • Unicast: Here, one system communicates directly to another system.
  • Broadcast: Here, one system communicates to all systems.
  • Multicast: Here, one system communicates to a select group of other systems.

In traditional IP networks, a packet is typically sent by a source to a single destination (unicast); alternatively, the packet can be sent to all devices on the network (broadcast). There are business and multimedia (entertainment) applications that require a multicast transmission mechanism to enable bandwidth-efficient communication between groups of devices, where information is transmitted to a single multicast address and received by any device that wishes to obtain such information. In traditional IP networks, it is not possible to generate a single transmission of data when this data is destined for a (large) group of remote devices. There are classes of applications that require distribution of information to a defined (but possibly dynamic) set of users. IP Multicast, an extension to IP, is required to properly address these communications needs. As the term implies, IP Multicast has been developed to support efficient communication between a source and multiple remote destinations.

Multicast applications include, among others, datacasting—for example, for distribution of real-time financial data—entertainment digital television over an IP network (commercial-grade IPTV), Internet radio, multipoint video conferencing, distance-learning, streaming media applications, and corporate communications. Other applications include distributed interactive simulation, cloud/grid computing, and distributed video gaming (where most receivers are also senders). IP Multicast protocols and underlying technologies enable efficient distribution of data, voice, and video streams to a large population of users, ranging from hundreds to thousands to millions of users. IP Multicast technology enjoys intrinsic scalability, which is critical for these types of applications.

As an example in the IPTV arena, with the current trend toward the delivery of HDTV signals, each requiring bandwidth in the 12 Mbps range, and the consumers’ desire for a large number of channels (200–300 being typical), there has to be an efficient mechanism of delivering a signal of 1–2 Gbps aggregate to a large number of remote users. If a source had to deliver 1 Gbps of signal to, say, 1 million receivers by transmitting all of this bandwidth across the core network, it would require a petabit-per-second network fabric; this is currently not possible. On the other hand, if the source could send the 1 Gbps of traffic to (say) 50 remote distribution points (for example, headends), each of which then makes use of a local distribution network to reach 20,000 subscribers, the core network only needs to support 50 Gbps, which is possible with proper design. For such reasons, IP Multicast is seen as a bandwidth-conserving technology that optimizes traffic management by simultaneously delivering a stream of information to a large population of recipients, including corporate enterprise users and residential customers. IPTV uses IP-based basic transport (where IP packets contain MPEG-4 TSs) and IP Multicast for service control and content acquisition (group membership). See Fig. 5.1 for a pictorial example.

One important design principle of IP Multicast is to allow receiver-initiated attachment (joins) to information streams, thus supporting a distributed informatics model. A second important principle is the ability to support optimal pruning such that the distribution of the content is streamlined by pushing replication as close to the receiver as possible. These principles enable bandwidth-efficient use of underlying network infrastructure.

The issue of security in multicast environments is addressed via Conditional Access Systems (CAS) that provide per-program encryption (typically, but not always, symmetric encryption; also known as inner encryption) or aggregate IP-level encryption (again typically, but not always, symmetric encryption; also known as outer encryption).

Figure 5.1. Bandwidth advantage of IP Multicast.

Carriers have been upgrading their network infrastructure in the past few years to enhance their capability to provide QoS-managed services, such as IPTV. Specifically, legacy remote access platforms, implemented largely to support basic DSL service roll-outs (for example, supporting ATM aggregation and DSL termination), are being replaced by new broadband network gateway access technologies optimized around IP, Ethernet, and VDSL2 (Very High Bitrate Digital Subscriber Line 2). These services and capabilities are delivered with multiservice routers on the network edge. Viewer-initiated program selection is achieved using IGMP, specifically with the Join Group Request message. (IGMP v2 messages include Create Group Request, Create Group Reply, Join Group Request, Join Group Reply, Leave Group (LG) Request, LG Reply, Confirm Group Request, and Confirm Group Reply.) Multicast communication is based on the construct of a group of receivers (hosts) that have an interest in receiving a particular stream of information, be it voice, video, or data. There are no physical or geographical constraints or boundaries on belonging to a group, as long as the hosts have (broadband) network connectivity. The connectivity of the receivers can be heterogeneous in nature, in terms of bandwidth and connecting infrastructure (for example, receivers connected over the Internet), or homogeneous (for example, IPTV or DVB-H users). Hosts that are desirous of receiving data intended for a particular group join the group using a group management protocol: hosts/receivers must become explicit members of the group to receive the data stream, but such membership may be ephemeral and/or dynamic. Groups of IP hosts that have joined the group and wish to receive traffic sent to this specific group are identified by multicast addresses.

Multicast routing protocols belong to one of two categories: Dense-Mode (DM) protocols and Sparse-Mode (SM) protocols.

  • DM protocols are designed on the assumption that the majority of routers in the network will need to distribute multicast traffic for each multicast group. DM protocols build distribution trees by initially flooding the entire network and then pruning out the (presumably small number of) paths without active receivers. The DM protocols are used in LAN environments, where bandwidth
    considerations are less important, but can also be used in WANs in special cases (for example, where the backbone is a one-hop broadcast medium such as a satellite beam with wide geographic illumination, such as in some IPTV applications).
  • SM protocols are designed on the assumption that only few routers in the network will need to distribute multicast traffic for each multicast group. SM protocols start out with an empty distribution tree and add drop-off branches only upon explicit requests from receivers to join the distribution. SM protocols are generally used in WAN environments, where bandwidth considerations are important.

For IP Multicast there are several multicast routing protocols that can be employed to acquire real-time topological and membership information for active groups. Routing protocols that may be utilized include Protocol-Independent Multicast (PIM), the Distance Vector Multicast Routing Protocol (DVMRP), Multicast Open Shortest Path First (MOSPF), and Core-Based Trees (CBT). Multicast routing protocols build distribution trees by examining a routing/forwarding table that contains unicast reachability information. PIM and CBT use the unicast forwarding table of the router. Other protocols use their own unicast reachability routing tables; for example, DVMRP uses its distance vector routing protocol to determine how to create source-based distribution trees, while MOSPF utilizes its link state table to create source-based distribution trees. MOSPF, DVMRP, and PIM-DM are dense-mode routing protocols, while CBT and PIM-SM are sparse-mode routing protocols. PIM is currently the most widely used protocol.

As noted, IGMP (versions 1, 2, and 3) is the protocol used by Internet Protocol Version 4 (IPv4) hosts to communicate multicast group membership states to multicast routers. IGMP is used to dynamically register individual hosts/receivers on a particular local subnet (for example, a LAN) to a multicast group. IGMP version 1 defined the basic mechanism. It supports a Membership Query (MQ) message and a Membership Report (MR) message. Most implementations at press time employed IGMP version 2; it adds LG messages. Version 3 adds source awareness, allowing the inclusion or exclusion of sources. IGMP allows group membership lists to be dynamically maintained. The host (user) sends an IGMP “report,” or join, to the router to be included in the group. Periodically, the router sends a “query” to learn which hosts (users) are still part of a group. If a host wishes to continue its group membership, it responds to the query with a “report.” If the host does not send a “report,” the router prunes the group list to delete this host; this eliminates unnecessary network transmissions. With IGMP v2, a host may send an LG message to alert the router that it is no longer participating in a multicast group; this allows the router to prune the group list to delete this host before the next query is scheduled, thereby minimizing the time period during which unneeded transmissions are forwarded to the network.

Figure 5.2. IGMP v2 message format.

The IGMP messages for IGMP version 2 are shown in Fig. 5.2. The message comprises an eight-octet structure. During transmission, IGMP messages are encapsulated in IP datagrams; to indicate that an IGMP packet is being carried, the IP header contains a protocol number of 2. An IP datagram includes a Protocol Type field, which for IGMP is equal to 2 (IGMP is one of many protocols that can be specified in this field). An IGMP v2 PDU consists of a 20-byte IP header and 8 bytes of IGMP.

Some of the areas that require consideration and technical support to develop and deploy IPTV systems include the following, among many others:

  • content aggregation;
  • content encoding (e.g., AVC/H.264/MPEG-4 Part 10, MPEG-2, SD, HD, Serial Digital Interface (SDI), Asynchronous Serial Interface (ASI), Layer 1 switching/routing);
  • audio management;
  • digital rights management/CA: encryption (DVB-CSA, AES or Advanced Encryption Standard); key management schemes (basically, CAS); transport rights;
  • encapsulation (MPEG-2 transport stream distribution);
  • backbone distribution such as satellite or terrestrial (DVB-S2, QPSK, 8-PSK, FEC, turbo coding for satellite—SONET (Synchronous Optical Network)/SDH/OTN (Synchronous Digital Hierarchy/Optical Transport Network) for terrestrial);
  • metro-level distribution;
  • last-mile distribution (LAN/WAN/optics, GbE (Gigabit Ethernet), DSL/FTTH);
  • multicast protocol mechanisms (IP multicast);
  • QoS backbone distribution;
  • QoS, metro-level distribution;
  • QoS, last-mile distribution;
  • QoS, channel surfing;
  • Set-Top Box (STB)/middleware;
  • QoE;
  • Electronic Program Guide (EPG);
  • blackouts;
  • service provisioning/billing, service management;
  • advanced video services (e.g., PDR and VOD);
  • management and confidence monitoring;
  • triple play/quadruple play.