The Display List

The structure of your display list is fundamental in this process for three reasons: memory consumption, tree traversal, and node hierarchy.

Memory Consumption

Memory consumption is the trade-off for better performance in GPU rendering, because every off-screen bitmap uses memory. At the same time, mobile devices have less RAM and less GPU memory available for caching than desktop machines.

To get a sense of the memory allocated, you can use the following formula:

[code]

// 4 bytes are required to store a single 32-bit pixel
// width and height are the dimensions of the tile created
// the anti-alias factor defaults to high, or 4, in Android
4 * width * height * anti-aliasFactor
// example: a 10 × 10 image represents 4 * 10 * 10 * 4 = 1,600 bytes

[/code]

Be vigilant about saving memory in other areas.

Favor the DisplayObject types that need less memory. If the functionality is sufficient for your application, use a Shape or a Sprite instead of a MovieClip. To determine the size of an object, use the following:

[code]

import flash.sampler.*;
var shape:Shape = new Shape();
var sprite:Sprite = new Sprite();
var mc:MovieClip = new MovieClip();
trace(getSize(shape), getSize(sprite), getSize(mc));
// 224, 412 and 448 bytes respectively in the AIR runtime

[/code]

The process of creating and removing objects has an impact on performance. For display objects, use object pooling, a method whereby you create a defined number of objects up front and recycle them as needed. Instead of deleting them, make them invisible or remove them from the display list until you need to use them again.

You should give the same attention to other types of objects. If you need to remove objects, remove listeners and references so that they can be garbage-collected and free up precious memory.
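As a rough illustration, here is a minimal Sprite pool; the pool size, the function names, and the cleanup performed on recycled objects are arbitrary choices for this sketch, not part of any framework:

[code]

import flash.display.Sprite;

// create a defined number of objects up front
var pool:Vector.<Sprite> = new Vector.<Sprite>();
const POOL_SIZE:int = 20;
for (var i:int = 0; i < POOL_SIZE; i++) {
    pool.push(new Sprite());
}

function borrowSprite():Sprite {
    // reuse an idle sprite, or grow the pool if it is empty
    return pool.length > 0 ? pool.pop() : new Sprite();
}

function recycleSprite(sprite:Sprite):void {
    // remove the sprite from the display list and clear its graphics;
    // also remove any listeners you added so it can be reused cleanly
    if (sprite.parent) sprite.parent.removeChild(sprite);
    sprite.graphics.clear();
    pool.push(sprite);
}

[/code]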

Tree Structure

Keep your display list fairly shallow and narrow.

The renderer needs to traverse the display list and compute the rendering output for every vector-based object. Matrices on the same branch get concatenated. This is the expected management of nested objects: if a Sprite contains another Sprite, the child position is set in relation to its parent.
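To see the concatenation at work, you can compare a child's local matrix with its concatenatedMatrix. This is only an illustrative sketch, assuming the code runs on the main timeline or in a document class:

[code]

import flash.display.Sprite;

var parentSprite:Sprite = new Sprite();
parentSprite.x = 100;
parentSprite.y = 100;
addChild(parentSprite);

var childSprite:Sprite = new Sprite();
childSprite.x = 50;
parentSprite.addChild(childSprite);

// local matrix: a translation of (50, 0) relative to the parent
trace(childSprite.transform.matrix);
// concatenated matrix: parent and child translations combined, (150, 100)
trace(childSprite.transform.concatenatedMatrix);

[/code]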

Node Relationship

This is the most important point for successful use of caching. Caching the wrong objects may result in confusingly slow performance.

The cacheAsBitmapMatrix property must be set on the moving object, not on its container. If you set it on the parent, you create an unnecessarily larger bitmap. Most importantly, if the container has other children that change, the bitmap needs to be redrawn and the caching benefit is lost.

Let’s use an example. The parent node, the black box shown in the following figures, has two children, a green circle and a red square. They are all vector graphics as indicated by the points.

In the first scenario (depicted in Figure 14-2), cacheAsBitmapMatrix is set on the parent node. The texture includes its children. A bitmap is created and used for any transformation, like the rotation in the figure, without having to perform expensive vector rasterization. This is a good caching practice:

[code]

var box:Sprite = new Sprite();
var square:Shape = new Shape();
var circle:Shape = new Shape();
// draw all three items using the drawing API
box.cacheAsBitmap = true;
box.cacheAsBitmapMatrix = new Matrix();
box.rotation = 15;

[/code]

Figure 14-2. Caching and transformation on the parent only

In the second scenario (depicted in Figure 14-3), cacheAsBitmapMatrix is still on the parent node. Let’s add some interactivity to make the circle larger when clicked. This is a bad use of caching because the circle needs to be rerasterized along with its parent and sibling because they share the same texture:

[code]

// change datatype so the display object can receive a mouse event
var circle:Sprite = new Sprite();
// draw items using the drawing API
circle.addEventListener(MouseEvent.CLICK, bigMe);
function bigMe(event:MouseEvent):void {
    var leaf:Sprite = event.currentTarget as Sprite;
    leaf.scaleX += .20;
    leaf.scaleY += .20;
}

[/code]

Figure 14-3. Caching on the parent, but transformation on the children

In the third scenario (depicted in Figure 14-4), cacheAsBitmapMatrix is set, not on the parent, but on the children. When the circle is rescaled, its bitmap copy can be used instead of rasterization. In fact, both children can be cached for future animation. This is a good use of caching:

[code]

// change datatype so they can receive mouse events
var square:Sprite = new Sprite();
var circle:Sprite = new Sprite();
// draw items using the drawing API
square.addEventListener(MouseEvent.CLICK, bigMe);
circle.addEventListener(MouseEvent.CLICK, bigMe);
var myMatrix:Matrix = new Matrix();
square.cacheAsBitmap = true;
square.cacheAsBitmapMatrix = myMatrix;
circle.cacheAsBitmap = true;
circle.cacheAsBitmapMatrix = myMatrix;
function bigMe(event:MouseEvent):void {
    var leaf:Sprite = event.currentTarget as Sprite;
    leaf.scaleX += .20;
    leaf.scaleY += .20;
}

[/code]

Figure 14-4. Caching and transformation on each individual child

The limitation with using GPU rendering occurs when a parent and its children need to have independent animations as demonstrated earlier. If you cannot break the parent-child structure, stay with vector rendering.

MovieClip with Multiple Frames

Neither cacheAsBitmap nor cacheAsBitmapMatrix works for a MovieClip with multiple frames. If you cache the art on the first frame, as the play head moves, the old bitmap is discarded and the new frame needs to be rasterized again. This is the case even if the animation is a rotation or a position change.

GPU rendering is not the technique for such situations. Instead, load your MovieClip without adding it to the display list. Traverse through its timeline and copy each frame to a bitmap using the BitmapData.draw method. Then display one frame at a time using the BitmapData.copyPixels method.
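The following is a sketch of that approach. The function names and the 200 × 200 canvas size are arbitrary, and it assumes the clip's frames share the same bounds; in practice you may also need to let each frame finish rendering before drawing it:

[code]

import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.display.MovieClip;
import flash.geom.Point;

// capture every frame of a loaded MovieClip into bitmaps
function captureFrames(clip:MovieClip):Vector.<BitmapData> {
    var frames:Vector.<BitmapData> = new Vector.<BitmapData>();
    for (var i:int = 1; i <= clip.totalFrames; i++) {
        clip.gotoAndStop(i);
        var frame:BitmapData = new BitmapData(clip.width, clip.height, true, 0);
        frame.draw(clip);
        frames.push(frame);
    }
    return frames;
}

// display one captured frame at a time with copyPixels
var canvas:BitmapData = new BitmapData(200, 200, true, 0);
addChild(new Bitmap(canvas));

function showFrame(frames:Vector.<BitmapData>, index:int):void {
    var source:BitmapData = frames[index];
    canvas.copyPixels(source, source.rect, new Point(0, 0));
}

[/code]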

Interactivity

Setting the cacheAsBitmapMatrix property does not affect the object’s interactivity. The object still functions as it would in the traditional rendering model, both for events and for function calls.

Multiple Rendering Techniques

On Android devices, you can use traditional rendering along with cacheAsBitmap and/or cacheAsBitmapMatrix. Another technique is to convert your vector assets to bitmaps, in which case no caching is needed. The technique you use may vary from one application to the next.

Remember that caching is meant to be a solution for demanding rendering. It is helpful for games and certain types of animations (not for traditional timeline animation). If there is no display list conflict, as described earlier, caching all assets makes sense. There is no need to use caching for screen-based applications with fairly static UIs.

At the time of this writing, there seems to be a bug using filters on a noncached object while the GPU mode is set in the application descriptor (as in the example below). It should be fixed in a later release:

[code]

import flash.display.Sprite;
import flash.filters.DropShadowFilter;

var sprite:Sprite = new Sprite();
sprite.graphics.beginFill(0xFF6600, 1);
sprite.graphics.drawRect(0, 0, 200, 100);
sprite.graphics.endFill();
sprite.filters = [new DropShadowFilter(2, 45, 0x000000, 0.5, 6, 6, 1, 3)];
addChild(sprite);

[/code]

Maximum Texture Memory and Texture Size

The maximum texture size supported is 1,024×1,024 (it is 2,048×2,048 for iPhone and iPad). This dimension represents the size after transformation. The texture memory itself is not part of the memory consumed by your application, and therefore it is not accessible to it.

2.5D Objects

A 2.5D object is an object with an additional z property that allows for different types of transformation.

If an object has cacheAsBitmapMatrix on and a z property is added, the caching is lost. A 2.5D shape does not need cacheAsBitmapMatrix because it is always cached for motion, scaling, and rotation without any additional coding. But if its visibility is changed to false, it will no longer be cached.

How to Test the Efficiency of GPU Rendering

There are various ways to test application performance beyond the human eye and perception. Testing your frame rate is your best benchmark.
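For example, a simple frame rate counter can be built with an ENTER_FRAME listener and getTimer(); the one-second sampling window below is an arbitrary choice for this sketch:

[code]

import flash.events.Event;
import flash.utils.getTimer;

// count frames over one-second windows and trace the resulting fps
var frameCount:int = 0;
var lastTime:int = getTimer();

addEventListener(Event.ENTER_FRAME, onFrame);

function onFrame(event:Event):void {
    frameCount++;
    var now:int = getTimer();
    if (now - lastTime >= 1000) {
        trace("fps:", frameCount * 1000 / (now - lastTime));
        frameCount = 0;
        lastTime = now;
    }
}

[/code]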

Other Advocacy Entities

This section provides a short survey of industry advocacy and activities in support of 3DTV.

3D@Home Consortium

Recently (in 2008), the 3D@Home Consortium was formed with the mission to speed the commercialization of 3D into homes worldwide and to provide the best possible viewing experience by facilitating the development of standards, roadmaps, and education for the entire 3D industry—from content, hardware, and software providers to consumers.

3D Consortium (3DC)

The 3D Consortium (3DC) aims at developing 3D stereoscopic display devices and increasing their take-up, promoting expansion of 3D contents, improving distribution, and contributing to the expansion and development of the 3D market. It was established in Japan in 2003 by five founding companies and 65 other companies including hardware manufacturers, software vendors, contents vendors, contents providers, systems integrators, image producers, broadcasting agencies, and academic organizations.

European Information Society Technologies (IST) Project “Advanced Three-Dimensional Television System Technologies” (ATTEST)

This is a project in which industries, research centers, and universities have joined forces to design a novel, backwards-compatible, flexible, and modular broadcast 3DTV system. In contrast to former proposals, which often relied on the basic concept of “stereoscopic” video, that is, the capturing, transmission, and display of two separate video streams (one for the left eye and one for the right eye), this activity focuses on a data-in-conjunction-with-metadata approach. At the very heart of the new concept is the generation and distribution of a novel data representation format that consists of monoscopic color video and associated per-pixel depth information. From these data, one or more “virtual” views of a real-world scene can be synthesized in real time at the receiver side (i.e., a 3DTV set-top box, or STB) by means of depth-image-based rendering (DIBR) techniques. The modular architecture of the proposed system provides important features, such as backwards-compatibility to today’s 2D DTV, scalability in terms of receiver complexity, and adaptability to a wide range of different 2D and 3D displays.

3D Content Creation. For the generation of future 3D content, novel three-dimensional material is created by simultaneously capturing video and associated per-pixel depth information with an active range camera such as the so-called ZCam™ developed by 3DV Systems. Such devices usually integrate a high-speed pulsed infrared light source into a conventional broadcast TV camera, and they relate the time of flight of the emitted and reflected light wall to direct measurements of the depth of the scene. However, it seems clear that the need for sufficient high-quality, three-dimensional content can only partially be satisfied with new recordings. It will therefore be necessary (especially in the introductory phase of the new broadcast technology) to also convert already existing 2D video material into 3D using so-called “structure from motion” algorithms. In principle, such (offline or online) methods process one or more monoscopic color video sequences to (i) establish a dense set of image point correspondences from which information about the recording camera, as well as the 3D structure of the scene, can be derived, or (ii) infer approximate depth information from the relative movements of automatically tracked image segments. Whatever 3D content generation approach is used in the end, the outcome in all cases consists of regular 2D color video in European DTV format (720 × 576 luminance pels, 25 Hz, interlaced) and an accompanying depth-image sequence with the same spatiotemporal resolution. Each of these depth-images stores depth information as 8-bit gray values, with the gray level 0 specifying the furthest value and the gray level 255 defining the closest value. To translate this data representation format to real, metric depth values (which are required for the “virtual” view generation) and to be flexible with respect to 3D scenes with different depth characteristics, the gray values are normalized to two main depth clipping planes.

3DV Coding. To provide the future 3DTV viewers with three-dimensional content, the monoscopic color video and the associated per-pixel depth information have to be compressed and transmitted over the conventional 2D DTV broadcast infrastructure. To ensure the required backwards-compatibility with existing 2D-TV STBs, the basic 2D color video has to be encoded using the standard MPEG-2, MPEG-4 Visual, or AVC tools currently required by the DVB Project in Europe.

Transmission. The DVB Project, a consortium of industries and academia responsible for the definition of today’s 2D DTV broadcast infrastructure in Europe, requires the use of the MPEG-2 systems layer specifications for the distribution of audiovisual data via cable (DVB-C), satellite (DVB-S), or terrestrial (DVB-T) transmitters.

“Virtual” View Generation and 3D Display. At the receiver side of the proposed ATTEST system, the transmitted data is decoded in a 3DTV STB to retrieve the decompressed color video and depth-image sequences (as well as the additional metadata). From this data representation format, a DIBR algorithm generates “virtual” left- and right-eye views for the three-dimensional reproduction of a real-world scene on a stereoscopic or autostereoscopic, single- or multiple-user 3DTV display. The backwards-compatible design of the system ensures that viewers who do not want to invest in a full 3DTV set are still able to watch the two-dimensional color video without any degradations in quality using their existing digital 2DTV STBs and displays.

3D4YOU

3D4YOU is funded under the ICT Work Programme 2007–2008, a thematic priority for research and development under the specific program “Cooperation” of the Seventh Framework Programme (2007–2013). The objectives of the project are

  1. to deliver an end-to-end system for 3D high-quality media;
  2. to develop practical multi-view and depth capture techniques;
  3. to convert captured 3D content into a 3D broadcasting format;
  4. to demonstrate the viability of the format in production and over broadcast chains;
  5. to show reception of 3D content on 3D displays via the delivery chains;
  6. to assess the project results in terms of human factors via perception tests;
  7. to produce guidelines for 3D capturing to aid in the generation of 3D media production rules;
  8. to propose exploitation plans for different 3D applications.

The 3D4YOU project aims at developing the key elements of a practical 3D television system, particularly, the definition of a 3D delivery format and guidelines for a 3D content creation process.

The 3D4YOU project will develop 3D capture techniques, convert captured content for broadcasting, and develop 3D coding for delivery via broadcast that is suitable for transmission to the public. 3D broadcasting is seen as the next major step in home entertainment. The cinema and computer games industries have already shown that there is considerable public demand for 3D content, but the special glasses that are needed limit its appeal. 3D4YOU will address the consumer market that coexists with digital cinema and computer games. The 3D4YOU project aims to pave the way for the introduction of a 3D TV system. The project will build on previous European research on 3D, such as the FP5 project ATTEST, which has enabled European organizations to become leaders in this field.

3D4YOU endeavors to establish practical 3DTV. The key success factor is 3D content. The project seeks to define a 3D delivery format and a content creation process. Establishing practical 3DTV will then be demonstrated by embedding this content creation process into a 3DTV production and delivery chain, including capture, image processing, delivery, and then display in the home. The project will adapt and improve on these elements of the chain so that every part integrates into a coherent, interoperable delivery system. A key objective of the project is to provide a 3D content format that is independent of display technology and backward compatible with 2D broadcasting. 3D images will be commonplace in mass communication in the near future. Also, several major consumer electronics companies have made demonstrations of 3DTV displays that could be in the market within two years. The public’s potential interest in 3DTV can be seen in the success of 3D movies in recent years. 3D imaging is already present in many graphics applications (architecture, mechanical design, games, cartoons, and special effects for TV and movie production).

In recent years, multi-view display technologies have appeared that improve the immersive experience of 3D imaging that leads to the vision that 3DTV or similar services might become a reality in the near future. In the United States, the number of 3D-enabled digital cinemas is rapidly growing. By 2010, about 4300 theaters are expected to be equipped with 3D digital projectors with the number increasing every month. Also in Europe, the number of 3D theaters is growing. Several digital 3D films will surface in the months and years to come and several prominent filmmakers have committed to making their next productions in stereo 3D. The movie industry creates a platform for 3D movies, but there is no established solution to bring these movies to the domestic market. Therefore, the next challenge is to bring these 3D productions to the living room. 2D to 3D conversion and a flexible 3D format are an important strategic area. It has been recognized that multi-view video is a key technology that serves a wide variety of applications, including free viewpoint and 3DV applications for the home entertainment and surveillance business fields. Multi-view video coding and transmission systems are most likely to form the basis for next-generation TV broadcasting applications and facilities. Multi-view video will greatly improve the efficiency of current video coding solutions performing simulcasts of independent views. This project builds on the wealth of experience of the major players in European 3DTV and intends to bring the date of the start of 3D broadcasting a step closer by combining their expertise to define a 3D delivery format and a content creation process.

The key technical problems that currently hamper the introduction of 3DTV to the mass market are as follows:

  1. It is difficult to capture 3DV directly using the current camera technology. At least two cameras need to operate simultaneously with an adjustable but known geometry. The offset of stereo cameras needs to be adjustable to
    capture depth, both close by and far away.
  2. Stereo video (acquired with two cameras) is currently not sufficient input for glasses-free, multi-view autostereoscopic displays. The required processing, such as disparity estimation, is noise-sensitive, resulting in low 3D picture quality.
  3. 3D postproduction methods and 3DV standards are largely absent or immature.

The 3D4YOU project will tackle these three problems. For instance, a creative combination of two or three high-resolution video cameras with one or two low-resolution depth range sensors may make it possible to create 3DV of good quality without the need for an excessive investment in equipment. This is in contrast to installing, say, 100 cameras for acquisition, where the expense may hamper the introduction of such a system.

Developing tools for conversion of 3D formats will stimulate content creation companies to produce 3DV content at acceptable cost. The cost at which 3DV should be produced for commercial operation is not yet known. However, 3DV production currently requires almost per-frame user interaction in the video, which is certainly unacceptable. This immediately indicates the issue that needs to be solved: currently, fully automated generation of high-quality 3DV is difficult; in the future it needs to be fully automatic, or at least semi-automatic with an acceptable minimum of manual supervision during postproduction. 3D4YOU will research how to convert 3D content into a 3D broadcasting format and prove the viability of the format in production and over broadcast chains.

Once 3DV production becomes commercially attractive because acquisition techniques and standards mature, then this will impact the activities of content producers, broadcasters, and telecom companies. As a result, one may see that these companies may adopt new techniques for video production just because the output needs to be in 3D. Also, new companies could be founded that focus on acquiring 3DV and preparing it for postproduction. Here, there is room for differentiation since, for instance, the acquisition of a sport event will require large baselines between cameras and real-time transmission, whereas the shooting of narrative stories will require both small and large baselines and allows some manual postproduction for achieving optimal quality. These activities will require new equipment (or a creative combination of existing equipment) and new expertise.

3D4YOU will develop practical multi-view and depth capture techniques. Currently, the stereo video format is the de facto 3D standard used by the cinemas. Stereo acquisition may, for this reason, become widespread as an acquisition technique. Cinemas operate with glasses-based systems and can therefore use a theater-specific stereo format. This is not the case for the glasses-free autostereoscopic 3DTV that 3D4YOU foresees for the home. To allow glasses-free viewing by multiple people at home, a wide baseline is needed to cover the total range of viewing angles. The current stereo video that is intended for the cinema will need considerable postproduction to be suitable for viewing on a multi-view autostereoscopic display. Producing visual content will therefore become more complex and may provide new opportunities for companies currently active in (3D) movie postproduction. According to the Networked and Electronic Media (NEM) Strategic Research Agenda, multi-view coding will form the basis for next-generation TV broadcast applications. Multi-view video has the advantage that it can serve different purposes. On the one hand, the multi-view input can be used for 3DTV. On the other hand, it can be shown on a normal TV where the viewer can select his or her preferred viewpoint of the action. Of course, a combination is possible where the viewer selects his or her preferred viewpoint on a 3DTV. However, multi-view acquisition with 30 views, for example, will require 30 cameras to operate simultaneously. This initially requires a large investment. 3D4YOU therefore sees a gradual transition from stereo capture to systems with many views. 3D4YOU will investigate a mixture of 3DV acquisition techniques to produce an extended center view plus depth format (possibly with one or two extra views) that is, in principle, easier to produce, edit, and distribute. The success of such a simpler format relies on the ease (read: cost) with which it can be produced. One can conclude that the introduction of 3DTV to the mass market is hampered by (i) the lack of high-quality 3DV content; (ii) the lack of suitable 3D formats; and (iii) the lack of appropriate format conversion techniques. The variety of new distribution media further complicates this.

Hence, one can identify the following major challenges that are expected to be overcome by the project:

  1. Video Acquisition for 3D Content: Here, the practicalities of multi-view and depth capture techniques are of primary importance. The challenge is to find the right trade-offs, such as the number of views to be recorded, and how to optimally integrate depth capture with multi-view. A further challenge is to define which shooting styles are most appropriate.
  2. Conversion of Captured Multi-View Video to a 3D Broadcasting Format: The captured format needs new postproduction tools (like enhancement and regularization of depth maps or editing, mixing, fading, and compositing of V+D representations from different sources) and a conversion step generating a suitable transmission format that is compatible with used postproduction formats before the 3D content can be broadcast and displayed.
  3. Coding Schemes for Compression and Transmission: A last challenge is to provide suitable coding schemes for compression and transmission that are based on the 3D broadcasting format under study and to demonstrate their feasibility in field trials under real distribution conditions.

By addressing these three challenges from an end-to-end systems point of view, the 3D4YOU project aims to pave the way to the definition of a 3D TV system suitable for a series of applications. Different requirements could be set depending on the application, but the basic underlying technologies (capture, format, and encoding) should maintain as much commonality as possible so as to favor the emergence of an industry based on those technologies.

3DPHONE

The 3DPHONE project aims to develop technologies and core applications enabling a new level of user experience by developing an end-to-end, all-3D imaging mobile phone. Its aim is to have all fundamental functions of the phone—media display, user interface (UI), and personal information management (PIM) applications—realized in 3D. The project will develop techniques for an all-3D phone experience: mobile stereoscopic video, 3D UIs, 3D capture/content creation, compression, rendering, and 3D display. It will also research and develop algorithms for 3D audiovisual applications, including personal communication, 3D visualization, and content management.

The 3DPhone Project started on February 11, 2008. The duration of the project is 3 years and there are six participants from Turkey, Germany, Hungary, Spain, and Finland. The partners are Bilkent University, Fraunhofer, Holografika, TAT, Telefonica, and University of Helsinki. 3DPhone is funded by the European Community’s ICT programme in Framework Programme Seven.

The goal is to enable users to

  • capture memories in 3D and communicate with others in 3D virtual spaces;
  • interact with their device and applications in 3D;
  • manage their personal media content in 3D.

The expected outcome will be simpler use and a more personalized look and feel. The project will bring state-of-the-art advances in mobile 3D technologies with the following activities:

  • A mobile hardware and software platform will be implemented with both 3D image capture and 3D display capability, featuring both 3D displays and multiple cameras. The project will evaluate different 3D display
    and capture solutions and will implement the most suitable solution for hardware–software integration.
  • UIs and applications that will capitalize on the 3D autostereoscopic illusion in the mobile handheld environment will be developed. The project will design and implement 3D and zoomable UI metaphors suitable for autostereoscopic displays.
  • End-to-end 3DV algorithms and 3D data representation formats, targeted for 3D recording, 3D playback, and real-time 3DV communication, will be investigated and implemented.
  • Ergonomics and experience testing to measure any possible negative symptoms, such as eye strain created by stereoscopic content, will be performed. The project will research ergonomic conditions specific to the mobile handheld usage: in particular, the small screen, one hand holding the device, absence of complete keyboard, and limited input modalities.

In summary, the general requirements on 3DV algorithms on mobile phones are as follows:

  • low power consumption,
  • low complexity of algorithms,
  • limited memory/storage for both RAM and mass storage,
  • low memory bandwidth,
  • low video resolution,
  • limited data transmission rates and limited bitrates for 3DV signal.

These strong restrictions, derived from terminal capabilities and from transmission bandwidth limitations, usually result in relatively simple video processing algorithms to run on mobile phone devices. Typically, video coding standards take care of this by specific profiles and levels that only use a restricted and simple set of video coding algorithms and low-resolution video. The H.264/AVC Baseline Profile, for instance, uses only a simple subset of the rich video coding algorithms that the standard provides in general. For 3DV, the equivalent of such a low-complexity baseline profile for mobile phone devices still needs to be defined and developed. Obvious requirements of video processing and coding apply for 3DV on mobile phones as well, such as

  • high coding efficiency (taking bitrate and quality into account);
  • requirements specific for 3DV that apply for 3DV algorithms on mobile phones including
    • flexibility with regard to different 3D display types,
    • flexibility for individual adjustment of 3D impression.

 

 

Device Information

Windows Phone 7 includes the DeviceExtendedProperties class that you can use to obtain information about the physical device. The information you can retrieve includes the following:

  • The device manufacturer, the device name, and its unique device ID
  • The version of the phone hardware and the version of the firmware running on the phone
  • The total physical memory (RAM) installed in the device, the amount of memory currently in use, and the peak memory usage of the current application

However, you must be aware that the phone will alert the user when some device information is retrieved and the user can refuse to allow the application to access it. You should access device information only if it is essential to your application. Typically, you will use device information to generate statistics or usage data, and to monitor memory usage of your application. You can use this data to adjust the behavior of your application to minimize impact on the device and other applications.

You retrieve device information using the GetValue or the TryGetValue method, as shown in the following code example.

C#
using Microsoft.Phone.Info;

string manufacturer = DeviceExtendedProperties.GetValue(
    "DeviceManufacturer").ToString();
string deviceName = DeviceExtendedProperties.GetValue(
    "DeviceName").ToString();
string firmwareVersion = DeviceExtendedProperties.GetValue(
    "DeviceFirmwareVersion").ToString();
string hardwareVersion = DeviceExtendedProperties.GetValue(
    "DeviceHardwareVersion").ToString();
long totalMemory = Convert.ToInt64(
    DeviceExtendedProperties.GetValue("DeviceTotalMemory"));
long memoryUsage = Convert.ToInt64(
    DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage"));
object tryValue;
long peakMemoryUsage = -1;
if (DeviceExtendedProperties.TryGetValue(
    "ApplicationPeakMemoryUsage", out tryValue))
{
    peakMemoryUsage = Convert.ToInt64(tryValue);
}
// The following returns a byte array of length 20.
object deviceID = DeviceExtendedProperties.GetValue("DeviceUniqueId");

The device ID is a hash represented as an array of 20 bytes, and is unique to each device. It does not change when applications are installed or the firmware is updated.

When running in the emulator, the manufacturer name returns “Microsoft,” the device name returns “XDeviceEmulator,” and (in the initial release version) the hardware and firmware versions return 0.0.0.0.

For more information, and a list of properties for the DeviceExtendedProperties class, see “Device Information for Windows Phone” on MSDN (http://msdn.microsoft.com/en-us/library/ff941122(VS.92).aspx).

 

The Gallery Application and the CameraRoll Class

The Gallery application is the display for the repository of images located on the SD card and accessible by various applications. Launch it, choose an image, and then select Menu→Share. A list of applications (such as Picasa, Messaging, and Email) appears, a convenient way to upload media from the device to another destination (see Figure 9-1).

The flash.media.CameraRoll class is a subclass of the EventDispatcher class. It gives you access to the Gallery. It is not supported for AIR desktop applications.

Figure 9-1. The Gallery application

Selecting an Image

You can test that your device supports browsing the Gallery by checking the supportsBrowseForImage property:

import flash.media.CameraRoll;
if (CameraRoll.supportsBrowseForImage == false) {
    trace("this device does not support access to the Gallery");
    return;
}

If your device does support the Gallery, you can create an instance of the CameraRoll class. Make it a class variable, not a local variable, so that it does not lose scope:

var cameraRoll:CameraRoll = new CameraRoll();

You can add listeners for three events:

  • A MediaEvent.SELECT when the user selects an image:
    import flash.events.MediaEvent;
    cameraRoll.addEventListener(MediaEvent.SELECT, onSelect);
  • An Event.CANCEL event if the user opts out of the Gallery:
    import flash.events.Event;
    cameraRoll.addEventListener(Event.CANCEL, onCancel);
    function onCancel(event:Event):void {
        trace("user left the Gallery", event.type);
    }
  • An ErrorEvent.ERROR event if there is an issue in the process:
    import flash.events.ErrorEvent;
    cameraRoll.addEventListener(ErrorEvent.ERROR, onError);
    function onError(event:Event):void {
        trace("Gallery error", event.type);
    }

Call the browseForImage function to bring the Gallery application to the foreground:

cameraRoll.browseForImage();

Your application moves to the background and the Gallery interface is displayed, as shown in Figure 9-2.

Figure 9-2. The Gallery interface

When you select an image, a MediaEvent object is returned. Use its data property to reference the image and cast it as MediaPromise. Use a Loader object to load the image:

import flash.display.Loader;
import flash.events.Event;
import flash.events.IOErrorEvent;
import flash.events.MediaEvent;
import flash.media.MediaPromise;
function onSelect(event:MediaEvent):void {
    var promise:MediaPromise = event.data as MediaPromise;
    var loader:Loader = new Loader();
    loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onImageLoaded);
    loader.contentLoaderInfo.addEventListener(IOErrorEvent.IO_ERROR, onError);
    loader.loadFilePromise(promise);
}

The concept of MediaPromise was first introduced on the desktop in a drag-and-drop scenario where an object doesn’t yet exist in AIR but needs to be referenced. Access its file property if you want to retrieve the image name, its nativePath, or its url.

The url is the fully qualified URL to use to load the image. The nativePath refers to the image’s location in the device’s hierarchical directory structure:

promise.file.name;
promise.file.url;
promise.file.nativePath;

Let’s now display the image:

function onImageLoaded(event:Event):void {
addChild(event.currentTarget.content);
}

Only the upper-left portion of the image is visible. This is because the resolution of the camera device is much larger than your AIR application stage.

Let’s modify our code so that we can drag the image around and see all of its content. We will make the image a child of a sprite, which can be dragged around:

import flash.events.MouseEvent;
import flash.display.DisplayObject;
import flash.geom.Rectangle;
var rectangle:Rectangle;
function onImageLoaded(event:Event):void {
    var container:Sprite = new Sprite();
    var image:DisplayObject = event.currentTarget.content as DisplayObject;
    container.addChild(image);
    addChild(container);
    // set a constraint rectangle to define the draggable area
    rectangle = new Rectangle(0, 0,
        -(image.width - stage.stageWidth),
        -(image.height - stage.stageHeight)
    );
    container.addEventListener(MouseEvent.MOUSE_DOWN, onDown);
    container.addEventListener(MouseEvent.MOUSE_UP, onUp);
}
function onDown(event:MouseEvent):void {
    event.currentTarget.startDrag(false, rectangle);
}
function onUp(event:MouseEvent):void {
    event.currentTarget.stopDrag();
}

It may be interesting to see the details of an image at its full resolution, but this might not result in the best user experience. Also, because camera resolution is so high on most devices, there is a risk of exhausting RAM and running out of memory.

Let’s now store the content in a BitmapData, display it in a Bitmap, and scale the bitmap to fit our stage in AIR. We will use the Nexus One as our benchmark first. Its camera has a resolution of 2,592×1,944. The default template size on AIR for Android is 800×480. To complicate things, the aspect ratio is different. In order to preserve the image fidelity and fill up the screen, you need to resize the aspect ratio to 800×600, but some of the image will be out of bounds.

Instead, let’s resize the image to 640×480. The image will not cover the whole stage, but it will be fully visible. Take this into account when designing your screen.

First, detect the orientation of your image. Resize it accordingly using constant values, and rotate the image if it is in landscape mode:

import flash.display.Bitmap;
import flash.display.BitmapData;
const MAX_HEIGHT:int = 640;
const MAX_WIDTH:int = 480;
function onImageLoaded(event:Event):void {
var bitmapData:BitmapData = Bitmap(event.target.content).bitmapData;
var bitmap:Bitmap = new Bitmap(bitmapData);
// determine the image orientation
var isPortrait:Boolean = (bitmapData.height/bitmapData.width) > 1.0;
if (isPortrait) {
bitmap.width = MAX_WIDTH;
bitmap.height = MAX_HEIGHT;
} else {
bitmap.width = MAX_HEIGHT;
bitmap.height = MAX_WIDTH;
// rotate a landscape image
bitmap.y = MAX_HEIGHT;
bitmap.rotation = -90;
}
addChild(bitmap);
}

The preceding code is customized to the Nexus One, and it will not display well for devices with a different camera resolution or screen size. We need a more universal solution.

The next example shows how to resize the image according to the dynamic dimension of both the image and the stage. This is the preferred approach for developing on multiple screens:

function onImageLoaded(event:Event):void {
    var bitmapData:BitmapData = Bitmap(event.target.content).bitmapData;
    var bitmap:Bitmap = new Bitmap(bitmapData);
    // determine the image orientation
    var isPortrait:Boolean = (bitmapData.height/bitmapData.width) > 1.0;
    // choose the smallest value between stage width and height
    var forRatio:int = Math.min(stage.stageHeight, stage.stageWidth);
    // calculate the scaling ratio to apply to the image
    var ratio:Number;
    if (isPortrait) {
        ratio = forRatio/bitmapData.width;
    } else {
        ratio = forRatio/bitmapData.height;
    }
    bitmap.width = bitmapData.width * ratio;
    bitmap.height = bitmapData.height * ratio;
    // rotate a landscape image and move down to fit to the top corner
    if (!isPortrait) {
        bitmap.y = bitmap.width;
        bitmap.rotation = -90;
    }
    addChild(bitmap);
}

Beware that the browseForImage method is only meant to load images from the Gallery. It is not for loading images from the filesystem even if you navigate to the Gallery. Some devices bring up a dialog to choose between Gallery and Files. If you try to load an image via Files, the application throws an error. Until this bug is fixed, set a listener to catch the error and inform the user:

cameraRoll.browseForImage();
cameraRoll.addEventListener(ErrorEvent.ERROR, onError);
function onError(event:ErrorEvent):void {
    if (event.errorID == 2124) {
        trace("you can only load images from the Gallery");
    }
}

If you want to get a list of all the images in your Gallery, you can use the filesystem as follows:

var gallery:File = File.userDirectory.resolvePath("DCIM/Camera");
var myPhotos:Array = gallery.getDirectoryListing();
var bounds:int = myPhotos.length;
for (var i:uint = 0; i < bounds; i++) {
    trace(myPhotos[i].name, myPhotos[i].nativePath);
}

Adding an Image

You can add an image to the Gallery from within AIR. To write data to the SD card, you must set permission for it:

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

Check the supportsAddBitmapData property to verify that your device supports this feature:

import flash.media.CameraRoll;
if (CameraRoll.supportsAddBitmapData == false) {
    trace("You cannot add images to the Gallery.");
    return;
}

If this feature is supported, create an instance of CameraRoll and set an Event.COMPLETE listener. Call the addBitmapData function to save the image to the Gallery. In this example, a stage grab is saved.

This feature could be used for a drawing application in which the user can draw over time. The following code allows the user to save his drawing, reload it, and draw over it again:

var cameraRoll:CameraRoll;
cameraRoll = new CameraRoll();
cameraRoll.addEventListener(ErrorEvent.ERROR, onError);
cameraRoll.addEventListener(Event.COMPLETE, onComplete);
var bitmapData:BitmapData =
new BitmapData(stage.stageWidth, stage.stageHeight);
bitmapData.draw(stage);
cameraRoll.addBitmapData(bitmapData);
function onComplete(event:Event):void {
// image saved in gallery
}

Remember that the image that is saved is the same dimension as the stage, and therefore it has a much smaller resolution than the native camera. At the time of this writing, there is no option to specify a compression, to name the image, or to save it in a custom directory. AIR follows Android naming conventions, using the date and time of capture.

 

Application Certification Requirements

To ensure the quality and usability of applications distributed through Windows Marketplace, every application must pass through a validation process after submission and before it is made available for download. The following are the core tenets of this validation process:

  • Applications must be reliable. This is reflected by the implementation of published best practices for creating reliable
    applications.
  • Applications must use resources efficiently. They must execute efficiently on the phone and not have a negative impact on performance.
  • Applications must not interfere with the functionality of the phone. They must not modify settings, and they must prompt the user before changing any user preferences or the behavior of other applications.
  • Applications must be free from viruses and malicious software. They must be safe for users to install and run.

In addition, applications must meet several other requirements. These include complying with the Microsoft policies for acceptable content and ensuring that localized metadata and content are accurately represented for the application’s supported geographic regions. The full set of requirements falls into four main categories: application code restrictions; run-time behavior, performance and metrics; user opt-in and privacy; and media and visual content. The following sections provide an overview of the requirements within each of these categories.

For full documentation of all the requirements, download Windows Phone 7 Application Certification Requirements from the Microsoft Download Center (http://go.microsoft.com/?linkid=9730556).

Application Code Restrictions

This section describes the requirements for your Windows Phone 7 applications in terms of the code you write, the way the application works, and the way that you compile it.

Your application must run on any Windows Phone 7 device, regardless of model, screen size, keyboard hardware, or manufacturer. The application must use only the Windows Phone Application Platform documented APIs, and it must not use PInvoke or COM interoperability. If you use controls from the System.Windows.Controls namespace, you must not call any APIs in the Microsoft.Xna.Framework.Game or Microsoft.Xna.Framework.Graphics assemblies.

The application runs in a sandbox on the device. Application code must be type-safe when compiled to Microsoft intermediate language (MSIL) code. You cannot use unsafe code or security critical code. You also cannot redistribute the Windows Phone assemblies; however, you can redistribute the Panorama, Pivot, and Map control assemblies if these are used in your application.

The application must also handle all exceptions raised by the .NET Framework, must not terminate unexpectedly or become unresponsive to user input, and it must display user-friendly error messages. A visual progress indicator must be displayed when executing time-consuming activities, such as downloading data over network connections, and you must provide an option to cancel this type of activity.

When you compile your application for distribution, you must do so using the Release or Retail configuration. It must not be compiled using the debug configuration, and it must not contain debugging symbols or output. However, you can submit an application with obfuscated code if you want.

Finally, the application must not use reflection to discover and invoke types that require security permissions. The repackaging process that takes place before distribution does not detect invoking types in this way, so the generated manifest will not request the required permissions. This will cause the application to fail at run time.

Run-time Behavior, Performance, and Metrics

To maintain a consistent user experience, the Back button must only be used for backward navigation in the application. Pressing the Back button must return the application to the previous page, or—if the user is already on the initial page—must close the application. If the current page displays a context menu or a dialog box, pressing the Back button must close the menu or dialog box and cancel the backward navigation to the previous page. The only exceptions to this are in games, where the Back button may display a context menu or dialog box, navigate to the previous menu screen, or close the menu.

The certification requirements for Windows Phone 7 applications specify both the startup performance of applications and some limitations on the use of resources. For example, the application must render the main user interface (UI) startup screen (not the splash screen) within five seconds on a typical physical device, and it must be responsive to user input within twenty seconds. The application must also complete execution in response to both the Activated and Deactivated events within ten seconds or it will be terminated by the operating system.

In addition, the application must not use more than 90 MB of RAM memory on the device unless it first queries the properties of the DeviceExtendedProperties class, and this class reports that more than 256 MB of memory is installed. The application must be able to run whether or not this extended memory is available, and it should vary its behavior only at run time to take advantage of additional memory, if required.

Your application should, as far as is appropriate, minimize its use of hardware resources, services, and device capabilities to preserve battery power. For example, it should use the location service and the push notification service only when required (which also minimizes the load on these Microsoft services).

An application in the foreground can continue to run when the phone screen is locked by setting the PhoneApplicationService.ApplicationIdleDetectionMode property (you must prompt the user to allow this). When an application is running under a locked screen, it must minimize power usage as much as possible.

User Opt-in and Privacy

Your application must maintain user privacy. In particular, it must protect data provided by the user and must not pass any user-identifiable or personal information to another service or application unless you provide full details of how the information will be used and the user agrees to this (the opt-in approach).

If the application includes person-to-person communication such as chat features, and these features can be set up from the device, you must take steps to ascertain that the user is at least 13 years old before enabling this feature.

You must also obtain the user’s consent before your application uses the location service to identify the user’s location or if it uses the Microsoft Push Notification Service (MPNS) to send notifications to the user’s phone. For example, the application must ask the user for explicit permission to receive a toast notification before it calls the BindToShellToast method for the first time.

For information about the different types of notifications, see “Types of Push Notifications for Windows Phone” on MSDN (http://msdn.microsoft.com/en-us/library/ff941124(VS.92).aspx).

If your application relies on the location feature and passes the location information to another application or service (such as a social community site), you must periodically remind users by displaying a message or including a visual indication on screen that location data is being sent to another service.

Users must also be able to turn off the location feature and push notifications within the application, and the application must either continue to work or present the user with a suitable message if it cannot work with the feature turned off. It must not simply fail when the feature is disabled.

An application in the foreground can continue to run when the phone screen is locked by setting the ApplicationIdleDetectionMode property. If you design your application with this capability, you must first obtain consent from the user to run in this mode and provide them with an option to disable it.

Your application must also ask users for permission before performing large downloads. Typically, this means asking for permission for data downloads greater than 50 MB that the application requires or for the use of services or communication protocols that may incur excessive communication costs.

Application Media and Visual Content

The visual appearance and content of an application must conform to a series of requirements. For example, in terms of styling, the UI must be readable and usable if the user changes the color scheme from the default white on black.

Windows Phone 7 applications and Windows Marketplace initially support localization into only English, French, Italian, German, and Spanish. However, it is likely that other languages will be added over time. All Windows Marketplace materials and application content must be correctly localized into the chosen language. For example, an application that is submitted in French must have a product description, UI text, screen shots, and icons in French. You can submit an application for multiple languages as a single package. If you submit separate language versions, each is validated and certified separately.

For information about the localization features in Windows Phone 7, see “Globalization and Localization Overview for Windows Phone” on MSDN (http://msdn.microsoft.com/en-us/library/ff462083(VS.92).aspx).

Your applications must also abide by a series of restrictions on the content and links to content located elsewhere. These restrictions include the appropriate use of licensed content and trademarks; they also include limitations based on preventing illegal content such as libel and slander, threatening behavior, violence, pornography, and discrimination. There are very strict limits that prevent an application from containing content related to hate speech and defamation; the use or promotion of alcohol, tobacco, weapons, and drugs; certain adult-related content; and excessive profanity.

Finally, there are a series of restrictions that apply specifically to music-related or photo-related applications, and to advertising. These specify how users must be able to launch your application, and the types of advertising content that is acceptable. Although your applications can contain most types of advertising, they cannot, for example, promote other phone service plans or phone application marketplaces.

However, your application can play music in the background, even when its primary function is not related to music or video experiences.

Closing the Application

If another application is selected, yours moves to the background, but it continues to run and does not close. Typically, NativeApplication dispatches an exiting event when it starts the closing process. You can register for that event and be prepared to act quickly according to your application’s needs:

import flash.desktop.NativeApplication;
import flash.events.Event;
NativeApplication.nativeApplication.addEventListener(Event.EXITING, onExit);
function onExit(event:Event):void {
// save application’s current state
}

However, the Android OS may choose to terminate background processes immediately if RAM is needed. In some cases, such as receiving a phone call, there is no guarantee that NativeApplication will have a chance to dispatch the exiting event before being shut down. It is therefore a good practice to save critical data often while the application is active.
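One simple precaution, sketched below, is to persist state whenever the application is deactivated; the “appState” SharedObject name and the saved field are illustrative choices, not part of any API:

import flash.desktop.NativeApplication;
import flash.events.Event;
import flash.net.SharedObject;

// persist critical state each time the application loses focus
var state:SharedObject = SharedObject.getLocal("appState");
NativeApplication.nativeApplication.addEventListener(Event.DEACTIVATE, onDeactivate);

function onDeactivate(event:Event):void {
    state.data.lastScore = 100; // save whatever your application needs
    state.flush();
}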

The Android OS closes the applications that have been inactive the longest first. If you want to immediately shut down a particular application, select Settings→Applications→Manage applications→Running, click the desired icon, and press “Force stop.” You can also use third-party task killer applications, which conveniently display all currently running applications in one list.

The Android UX guidelines recommend using the back or home button as an exit button. But you may choose to shut down your AIR application in code when it moves to the background, or to offer shutting down as an option to the user while the application is running, by calling the following method:

NativeApplication.nativeApplication.exit();

 

Evaluating Device Capabilities and Handling Multiple Devices

The premise of mobile AIR is to enable you to create one concept, one code base, and a set of assets to deploy an application on multiple devices effortlessly. AIR for mobile accomplishes this for the most part, but not without challenges. There is a disparity in performance and capabilities among devices, and deploying for a range of screen resolutions requires effort and time even if your art is minimal.

In the near future, mobile devices will approach, and perhaps surpass, the performance of desktop machines. Until then, you must adapt your development style to this limited environment. Every line of code and every asset should be scrutinized for optimization. The AIR runtime has been optimized for devices with limited resources, but this is no guarantee that your code will run smoothly. This part is your responsibility.

Hardware

The multitude of Android devices available today makes it difficult to evaluate them all. The subsections that follow discuss the major factors to examine. If you want to know about a specific phone, or if you would like a complete list of Android devices, visit these websites:

http://pdadb.net/
http://en.wikipedia.org/wiki/Category:Android_devices
http://phandroid.com/phones/

The Processor

The CPU, or central processing unit, executes software program instructions. Its speed on mobile devices varies from 500 MHz to 1 GHz. For comparison, the average speed of a desktop computer is around 2.5 GHz.

The instruction set must be ARMv7 or higher to run AIR for Android.

The GPU, or graphics processing unit, is a high-performance processor dedicated to performing geometric calculations on graphics. Its speed is evaluated in millions of triangles processed per second (mt/s), and it ranges on mobile devices from 7 mt/s to 28 mt/s. AIR uses OpenGL ES 2.0-based processors.

The graphics card, display type, and color depth affect display quality.

Memory and Storage

RAM on mobile devices varies from 128 MB to 768 MB, averaging at 512 MB. ROM is an important complementary memory type. On Android, if memory runs out, applications are terminated.

Different types of memory affect the GPU’s speed and capacity.

Storage includes the internal memory, expanded by the SD card.
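Because memory is limited, it can be useful to keep an eye on the runtime’s own footprint at runtime; here is a minimal sketch using System.totalMemory (the one-second polling interval is an arbitrary choice):

import flash.system.System;
import flash.utils.Timer;
import flash.events.TimerEvent;

// poll the memory used by the runtime once per second
var memoryTimer:Timer = new Timer(1000);
memoryTimer.addEventListener(TimerEvent.TIMER, onTick);
memoryTimer.start();

function onTick(event:TimerEvent):void {
    trace("memory used by the runtime:", System.totalMemory, "bytes");
}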

The Camera

Camera quality is often evaluated by the megapixels it can store. The quality of the LED flash and the auto-focus option on the device is also a factor. Some devices include a front camera of lesser quality to use for video telephony.

Sensors

A built-in accelerometer is becoming standard on Android devices, but it is not yet universal. The GPS antenna and driver are used for satellite navigation. Touch technology is a combination of the screen touch overlay, the controller, and the software driver. The number of simultaneous touch points varies, although Android officially only supports two.
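You can check for these capabilities at runtime; a short sketch (the trace messages are only illustrative):

import flash.sensors.Accelerometer;
import flash.sensors.Geolocation;
import flash.ui.Multitouch;

trace("accelerometer supported:", Accelerometer.isSupported);
trace("geolocation supported:", Geolocation.isSupported);
trace("touch events supported:", Multitouch.supportsTouchEvents);
trace("maximum simultaneous touch points:", Multitouch.maxTouchPoints);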

The Battery

Battery capacity on an Android device is measured in milliamp hours (mAh), and is commercially evaluated by the number of hours of movie playback the device can sustain. Removable batteries can be replaced by stronger ones if needed.

The Display

Screen size on Android devices is measured as a diagonal, usually in inches.

Resolution is the number of pixels on the screen. The resolution varies between 800×480 and 854×480 on phones, and between 1,024×600 and 1,024×800 on tablets.

The PPI (pixels per inch) or DPI (dots per inch), also called pixel density, is the number of pixels on the screen in relation to the screen’s physical size. A device has a defined number of pixels it can display in a limited space. A higher pixel density is preferable on devices viewed at close range.

Table 5-1 lists the screen size, resolution, and PPI on some popular Android and Apple devices.

Table 5-1. Feature comparison of popular Android devices

Software

At the time of this writing, Android’s latest operating system is 3.0 and is called Honeycomb.

The process for upgrading the Android operating system is very different from the Apple model, which requires the device to be synced with iTunes. Android uses an “over the air approach” to push upgrades. Manufacturers slowly push upgrades to phones, sometimes over several months. Not all devices receive upgrades, so developing for established versions instead of recently released versions may guarantee you a broader audience.

If you are not sure what version is installed on your device, select Settings→About phone→Android version (on some devices, this information is available under “Software information”). Your phone system must have at least Android 2.2, called Froyo, to run AIR for Android. The Adobe AIR runtime is not accessible on the Android Market for earlier versions.

Performance

The device’s performance is measured by its speed of execution and how much it can hold in memory. Methods and tools can be used to perform a benchmark. For a quick analysis, try the AIRBench application developed by Christian Cantrell, from Adobe, and read his blog at http://blogs.adobe.com/cantrell/archives/2010/10/using-airbench-to-test-air-across-android-phones.html.

AIRBench tests devices’ capabilities, but also runs performance analyses. I used it on three different devices. Table 5-2 shows the performance results I achieved in milliseconds (lowest numbers show best performance).

Table 5-2. AIRBench performance results for the Samsung Galaxy Tab, Droid 2, and Nexus One devices

Capabilities

You can request information on the device at runtime to automate some of the presentation of your application. The Android Market, unlike the Apple Store, does not separate applications between phones and tablets. Be prepared for all devices.

Rather than trying to make your application look identical on all devices, adapt the layout for the resolution used.

The flash.system.Capabilities class provides information on the environment. To determine that your application is running on a mobile device, use cpuArchitecture:

import flash.system.Capabilities;
if (Capabilities.cpuArchitecture == "ARM") {
    trace("this is probably a mobile phone");
}

Alternatively, you can compare screenDPI and screenResolutionX. Here is the code to calculate the diagonal dimension of the screen:

var dpi:int = Capabilities.screenDPI;
var screenX:int = Capabilities.screenResolutionX;
var screenY:int = Capabilities.screenResolutionY;
var diagonal:Number = Math.sqrt((screenX*screenX)+(screenY*screenY))/dpi;
if (diagonal < 5) {
    trace("this must be a mobile phone");
}

Orientation

To control the look of your application, you can define the application’s scaling and alignment. The scaleMode as set in the following code prevents automatic scaling and the align setting always keeps your application at the upper-left corner. When you choose auto-orient, it is added automatically in Flash Builder Mobile:

import flash.display.StageScaleMode;
import flash.display.StageAlign;
stage.scaleMode = StageScaleMode.NO_SCALE;
stage.align = StageAlign.TOP_LEFT;

Android devices equipped with an accelerometer can determine device orientation. As the user moves the hardware, your content rotates. Define a custom background color to prevent distracting white borders from appearing while in transition:

[SWF(backgroundColor="#999999")]

If you always want your application in its original aspect ratio, in Flash Professional select File→AIR Android settings, and under the General tab, deselect “Auto orientation”. In Flash Builder, under Mobile Settings, deselect “Automatically reorient”.

To get the initial orientation, use the following:

var isPortrait:Boolean = getOrientation();
function getOrientation():Boolean {
return stage.stageHeight > stage.stageWidth;
}

To listen for the device’s orientation changes (and for a stage resize on the desktop), set autoOrients to true in the application descriptor:

<initialWindow>
    <autoOrients>true</autoOrients>
</initialWindow>

and register for the Event.RESIZE event:

import flash.events.Event;
stage.addEventListener(Event.RESIZE, onResize);
stage.dispatchEvent(new Event(Event.RESIZE));
function onResize(event:Event):void {
    trace(stage.stageWidth, stage.stageHeight);
}

The event is fired when your application first initializes, and then again when the device changes orientation. If you create an application to be deployed on the desktop, the event fires when the browser window is resized.

Another API is available for orientation changes. It uses StageOrientationEvent and detects changes in many directions. It is not as universal as RESIZE, but it does provide information on the orientation before and after the event.

The default orientation is up and right:

import flash.events.StageOrientationEvent;
if (stage.supportsOrientationChange) {
    stage.addEventListener(StageOrientationEvent.ORIENTATION_CHANGE, onChange);
}
function onChange(event:StageOrientationEvent):void {
    trace(event.beforeOrientation);
    trace(event.afterOrientation);
    // possible values: default, rotatedLeft, rotatedRight
}

The goal is to use this information to position and perhaps resize your assets as needed. We will discuss this next.