# Raw Data and the Sound Spectrum

With the arrival of digital sound, a new art form quickly followed: the visualization of sound.

A sound waveform is the shape of the graph representing the amplitude of a sound over time. The amplitude is the distance of a point on the waveform from the equilibrium line (the zero axis); because the horizontal axis represents time, this kind of graph is called a time-domain representation. The peak is the highest point in a waveform.

Using these amplitude values, you can read a digital signal and represent the sound in real time.

Making Pictures of Music is a project run by mathematics and music academics that analyzes and visualizes pieces of music. It uses Unsquare Dance, a complex multi-instrumental piece by Dave Brubeck. For more information, go to http://www.uwec.edu/walkerjs/PicturesOfMusic/MultiInstrumental%20Complex%20Rhythm.htm.

In AIR, you can draw a sound waveform using the computeSpectrum method of the SoundMixer class. This method takes a snapshot of the current sound wave and stores the data in a ByteArray:

[code]SoundMixer.computeSpectrum(bytes, false, 0);[/code]

The method takes three parameters. The first is the container ByteArray. The second, optional, parameter is FFTMode (fast Fourier transform); false, the default, returns a waveform, and true returns a frequency spectrum. The third, optional, parameter is the stretch factor; 0 is the default and represents 44.1 kHz. Resampling at a lower rate results in a smoother waveform but less frequency detail. Figure 11-1 shows the drawing generated from this data.
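computeSpectrum performs the Fourier transform for you when FFTMode is true, but the underlying idea is worth a quick sketch. The illustration below is in JavaScript rather than ActionScript (it is only a conceptual aid, not part of the example): a pure tone that looks like a sine wave in the time domain collapses to a single spike in the frequency domain.

```javascript
// Naive DFT magnitudes -- a conceptual stand-in for what FFTMode=true
// returns: energy per frequency bin instead of amplitude per time sample.
function dftMagnitudes(samples) {
  const N = samples.length;
  const mags = [];
  for (let k = 0; k < N / 2; k++) {
    let re = 0, im = 0;
    for (let n = 0; n < N; n++) {
      const phi = (2 * Math.PI * k * n) / N;
      re += samples[n] * Math.cos(phi);
      im -= samples[n] * Math.sin(phi);
    }
    mags.push(Math.sqrt(re * re + im * im) / N);
  }
  return mags;
}

// A pure tone at bin 4 of a 64-sample window...
const N = 64;
const tone = Array.from({ length: N }, (_, n) => Math.sin((2 * Math.PI * 4 * n) / N));
const mags = dftMagnitudes(tone);
// ...shows up as a single spike at index 4 of the magnitude array.
```

A real FFT computes the same result in O(N log N) instead of this sketch's O(N²), which is why the native method is the one to use.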

A waveform spectrum contains 512 floating-point values: 256 for the left channel and 256 for the right channel. Each value lies between -1 and 1 and represents the amplitude of a point in the sound waveform.

If you trace the length of the ByteArray, it returns 2,048. This is because each floating-point value occupies four bytes: 512 * 4 = 2,048.
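The same arithmetic can be checked with a typed array (JavaScript here, purely as an illustration of the byte layout):

```javascript
// The 512 spectrum values are single-precision floats, four bytes each,
// which is why the ByteArray's length reads as 2,048.
const values = new Float32Array(512); // 256 left + 256 right
const totalBytes = values.byteLength; // 512 * 4
console.log(totalBytes); // 2048
```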

Our first approach is to use the drawing API. Drawing a vector is appropriate for a relatively simple sound like a microphone audio recording. For a longer, more complex track, we will look at a different approach after this example.

We use two loops to read the bytes, one value at a time. The loop for the left channel goes from 0 to 255. The loop for the right channel starts at 256 and counts back down to 0. The value of each float, between -1 and 1, is multiplied by a constant to obtain a value large enough to see. Finally, we draw a line, using the loop counter for the x coordinate and subtracting the scaled value from the vertical position of the equilibrium line for the y coordinate.
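The coordinate mapping just described can be sketched in isolation (JavaScript, with a hypothetical `pointFor` helper that is not part of the ActionScript example):

```javascript
// Mirrors the loop arithmetic: each sample index advances x by 2 pixels;
// the amplitude, scaled by PEAK, is subtracted from the equilibrium line.
const PEAK = 100;
function pointFor(i, amplitude) {
  return { x: i * 2, y: PEAK - amplitude * PEAK };
}

pointFor(0, 0);   // silence sits on the equilibrium line: { x: 0, y: 100 }
pointFor(10, 1);  // full positive peak touches the top:   { x: 20, y: 0 }
pointFor(10, -1); // full negative peak touches the bottom: { x: 20, y: 200 }
```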

The same process is repeated on every ENTER_FRAME event until the music stops. Don't forget to remove the listener to stop calling the drawMusic function:

[code]

const CHANNEL_LENGTH:int = 256; // channel division
// equilibrium line y position and byte value multiplier
var PEAK:int = 100;
var bytes:ByteArray;
var sprite:Sprite;
var soundChannel:SoundChannel;

bytes = new ByteArray();
sprite = new Sprite();
addChild(sprite);

var sound:Sound = new Sound();
sound.addEventListener(Event.COMPLETE, onLoaded);
sound.load(new URLRequest("mySound.mp3")); // path to your audio file

function onLoaded(event:Event):void {
	sound.removeEventListener(Event.COMPLETE, onLoaded);
	soundChannel = new SoundChannel();
	soundChannel = event.target.play();
	soundChannel.addEventListener(Event.SOUND_COMPLETE, onPlayComplete);
	sprite.addEventListener(Event.ENTER_FRAME, drawMusic);
}

function drawMusic(event:Event):void {
	var value:Number;
	var i:int;
	SoundMixer.computeSpectrum(bytes, false, 0);
	// erase the previous drawing
	sprite.graphics.clear();
	// move to the far left
	sprite.graphics.moveTo(0, PEAK);
	// left channel in red
	sprite.graphics.lineStyle(0, 0xFF0000);
	for (i = 0; i < CHANNEL_LENGTH; i++) {
		value = bytes.readFloat()*PEAK;
		// increase the x position by 2 pixels
		sprite.graphics.lineTo(i*2, PEAK - value);
	}
	// move to the far right
	sprite.graphics.lineTo(CHANNEL_LENGTH*2, PEAK);
	// right channel in blue
	sprite.graphics.lineStyle(0, 0x0000FF);
	for (i = CHANNEL_LENGTH; i > 0; i--) {
		value = bytes.readFloat()*PEAK;
		sprite.graphics.lineTo(i*2, PEAK - value);
	}
}

function onPlayComplete(event:Event):void {
	soundChannel.removeEventListener(Event.SOUND_COMPLETE, onPlayComplete);
	sprite.removeEventListener(Event.ENTER_FRAME, drawMusic);
}

[/code]

On most Android phones, which are 480 pixels wide, the waveform will draw off-screen on the right, out to pixel 512 (256 * 2). Consider presenting your application in landscape mode and centering the sprite container on the screen.
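The centering arithmetic is simple (sketched in JavaScript; the 800-pixel landscape width is an assumption for illustration, not a measured device size):

```javascript
// The waveform spans 256 samples * 2 px = 512 px.
const waveWidth = 256 * 2;
// Hypothetical landscape screen width.
const screenWidth = 800;
// x position that centers the drawing on the screen.
const spriteX = (screenWidth - waveWidth) / 2;
console.log(spriteX); // 144
```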

For better performance, let's draw the vector into a bitmap. As a general rule, on mobile devices you should avoid the drawing API: its vectors are redrawn every frame, which degrades performance.

The Sprite is not added to the display list, and therefore is not rendered to the screen. Instead, we create a BitmapData and draw the sprite inside its rectangle:

[code]

import flash.display.Bitmap;
import flash.display.BitmapData;
var sprite:Sprite;
var bitmap:Bitmap;
sprite = new Sprite();
// create a BitmapData to draw the waveform into
var bitmapData:BitmapData = new BitmapData(480, PEAK*2, true, 0x00000000);
// store it in a Bitmap
bitmap = new Bitmap(bitmapData);
// position and add Bitmap to displayList
bitmap.y = 200;
addChild(bitmap);
function drawMusic(event:Event):void {
var value:Number;
var i:int;
SoundMixer.computeSpectrum(bytes, false, 0);
// use sprite.graphics as before, but because the sprite is not
// on the display list, it is not rendered to the screen
sprite.graphics.clear();
sprite.graphics.moveTo(0, PEAK);
sprite.graphics.lineStyle(0, 0xFF0000);
for (i = 0; i < CHANNEL_LENGTH; i++) {
	value = bytes.readFloat()*PEAK;
	sprite.graphics.lineTo(i*2, PEAK - value);
}
sprite.graphics.lineTo(CHANNEL_LENGTH*2, PEAK);
sprite.graphics.lineStyle(0, 0x0000FF);
for (i = CHANNEL_LENGTH; i > 0; i--) {