Rendering, or How Things Are Drawn to the Screen


As a developer, you trust that an object is drawn on screen as your code intended. Between your ActionScript commands and the matching pixels appearing on the screen, a lot happens behind the scenes.

To better understand Flash Player and AIR for Android rendering, I recommend watching the MAX 2010 presentations “Deep Dive Into Flash Player Rendering” by Lee Thomason and “Developing Well-Behaved Mobile Applications For Adobe AIR” by David Knight and Renaun Erickson. You can find them at

Flash is a frame-based system. The code execution happens first, followed by the rendering phase. The concept of the elastic racetrack describes this ongoing process whereby one phase must end before the other one begins, and so on.

The current rendering process for Flash Player and the AIR runtime comprises four steps, outlined in the subsections that follow. Annotations regarding the Enhanced Caching/GPU Model indicate the process, where it differs, for AIR for Android using cacheAsBitmapMatrix and GPU rendering.
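Enabling the GPU caching path is a matter of two properties on a display object. The sketch below is illustrative, assuming an AIR for Android project with renderMode set to gpu in the application descriptor; the drawing code is just placeholder content.

```actionscript
// Hedged sketch: opting a display object into bitmap caching for the GPU path.
// Assumes renderMode="gpu" in the AIR application descriptor.
import flash.display.Sprite;
import flash.geom.Matrix;

var sprite:Sprite = new Sprite();
sprite.graphics.beginFill(0xFF0000);
sprite.graphics.drawCircle(0, 0, 40);
sprite.graphics.endFill();

sprite.cacheAsBitmap = true;
// An identity matrix caches the object at its current size;
// the cached bitmap can then be rotated, scaled, and skewed cheaply.
sprite.cacheAsBitmapMatrix = new Matrix();
addChild(sprite);
```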


Computation

Traditional and Enhanced Caching/GPU Model

The renderer, described as a retained renderer, traverses the display list and keeps a state of all objects. It looks at each object's matrix and determines its presentation properties. When a change occurs, it records it and defines the dirty region, the areas that have changed and the objects that need to be redrawn, to narrow changes down to a smaller area. It then concatenates matrices, computes all the transformations, and defines the new state.
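Dirty regions can be made visible at runtime in a debugger build. A minimal sketch, assuming a debug version of the runtime:

```actionscript
// Hedged sketch: outlining dirty regions on screen in a debug build.
// Each redrawn area is drawn with a colored border, which shows how
// the renderer narrows updates down to the smallest changed rectangles.
import flash.profiler.showRedrawRegions;

showRedrawRegions(true, 0xFF0000); // red outlines around each dirty region
```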

Edge and Color Creation

Traditional Model

This step is the most challenging mathematically. It applies to vector objects and bitmaps with blends and filters.

An SObject is the binary-code equivalent of a DisplayObject, and is not accessible via ActionScript. It is made of layers with closed paths, defined by edges and colors. Edges are described as quadratic Bézier curves and are used to prevent two closed paths from intersecting; abutment is used to seam them together. More edges mean more calculation. Color applies to solids and gradients, but also to bitmaps, video, and masks (the color values of all display objects are combined to determine a final color).
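The quadratic curves described above are the same primitive exposed by the drawing API. A small illustrative sketch of a closed path made of one Bézier edge, one straight edge, and a single color fill:

```actionscript
// Hedged sketch: a closed path built from a quadratic Bézier edge,
// the same curve type the renderer uses for SObject edges.
import flash.display.Sprite;

var shape:Sprite = new Sprite();
shape.graphics.beginFill(0x66CC66);       // one color layer
shape.graphics.moveTo(0, 100);
shape.graphics.curveTo(50, 0, 100, 100);  // quadratic curve: control point, then anchor
shape.graphics.lineTo(0, 100);            // straight edge closes the path
shape.graphics.endFill();
addChild(shape);
```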

Enhanced Caching/GPU Model

No calculation is needed for cached items. They may be rotated, scaled, and skewed using their bitmap representation.


Rasterization

Traditional Model

This step is the most demanding in terms of memory. It uses multithreading for better performance. The screen is divided into dynamically allocated horizontal chunks. Taking one dirty rectangle at a time, lines are scanned from left to right to sort the edges and render a span, a horizontal row of pixels. Hidden objects, the alpha channel of a bitmap, and subpixel edges are all part of the calculation.

Enhanced Caching/GPU Model

Transformation cached rendering is used: it converts each vector graphic to a bitmap and keeps it in an off-screen buffer to be reused over time. Once that process is done, scene compositing takes all the bitmaps and rearranges them as needed.
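The same idea can be applied by hand: rasterize vector content once into an off-screen buffer, then reuse the bitmap instead of re-rasterizing the vectors every frame. A minimal sketch; the names are illustrative:

```actionscript
// Hedged sketch: manually caching a vector Sprite into an off-screen bitmap,
// analogous to what the enhanced model does internally.
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.display.Sprite;

var source:Sprite = new Sprite();
source.graphics.beginFill(0x3366FF);
source.graphics.drawRect(0, 0, 100, 100);
source.graphics.endFill();

// Rasterize the vector content once into an off-screen buffer...
var buffer:BitmapData = new BitmapData(100, 100, true, 0x00000000);
buffer.draw(source);

// ...then composite the cached bitmap instead of redrawing the vectors.
var cached:Bitmap = new Bitmap(buffer);
addChild(cached);
```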


Presentation

Traditional Model

Blitting, or pixel blit, is the transfer of the back buffer to the screen. Only the dirty rectangles are transferred.
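The same pixel-transfer idea is available in ActionScript through copyPixels. A minimal sketch, transferring only a dirty rectangle between two buffers (the buffer names and sizes are illustrative):

```actionscript
// Hedged sketch: a manual blit with copyPixels, mirroring the back-buffer
// transfer described above. Only the dirty rectangle is copied.
import flash.display.BitmapData;
import flash.geom.Point;
import flash.geom.Rectangle;

var backBuffer:BitmapData = new BitmapData(320, 480, false, 0x000000);
var screenBuffer:BitmapData = new BitmapData(320, 480, false, 0x000000);

// Transfer only the changed region rather than the whole buffer.
var dirtyRect:Rectangle = new Rectangle(50, 50, 100, 100);
screenBuffer.copyPixels(backBuffer, dirtyRect, new Point(dirtyRect.x, dirtyRect.y));
```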

Enhanced Caching/GPU Model

The process is similar to the traditional model, but the GPU is much faster at moving pixels.