3. Sound

What is sound? In our high school physics classes we learn that, fundamentally, a "sound" is a sine wave that travels through the air at a certain frequency and amplitude, causing our eardrums to vibrate and resulting in the perception of a tone with a certain pitch and loudness. How this theory of sound actually gets translated and represented on the computer by programs like Audition and Audacity is a little more complicated than that, but if we were to play a single, steady note and record it in one of these programs, the "waveform" that appears in the program would indeed look like a single, steady sine wave, like the one pictured here.

The waveform is a simplified visual representation of what's happening in your audio file. Intuitively, the x-axis (left to right) of the waveform is time or duration, and the y-axis (up and down) is amplitude (which roughly equates to the "loudness" of the file, though perceived loudness also depends on other factors, such as frequency).
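If you're comfortable with a little code, here is a minimal sketch (in Python with numpy, which is purely an assumption about your toolchain; Audition and Audacity do this math for you) that generates one second of a steady 440 Hz tone. The array it produces is exactly the data behind the waveform picture: the sample positions are the x-axis (time) and the sample values are the y-axis (amplitude). The particular frequency and amplitude are just illustrative numbers.

```python
import numpy as np

sample_rate = 44100        # samples per second (CD-quality audio)
duration = 1.0             # seconds of audio to generate
frequency = 440.0          # Hz -- the pitch (the A above middle C)
amplitude = 0.5            # between 0.0 and 1.0 -- roughly the loudness

# One time value per sample: these become the waveform's x-axis.
t = np.linspace(0.0, duration, int(sample_rate * duration), endpoint=False)

# The sine wave itself: these sample values are the waveform's y-axis.
tone = amplitude * np.sin(2 * np.pi * frequency * t)

print(tone[:5])  # the first few samples of that "single, steady sine wave"
```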

If we were to load up a typical .mp3 (like one of the examples from the last section on formatting) into Audition or Audacity, we'd see a much more complex waveform, consisting of peaks and valleys of different heights and depths. If we zoomed out so that the entire audio clip were visible in our program, we'd see louder sections and quieter sections, which, in the case of most popular music that you'd find on the radio, match up with the segments of the song that might be described as verse, chorus, bridge, etc.
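To see those louder and quieter sections as numbers instead of a picture, a rough sketch like the one below works. It assumes Python with the librosa library installed and able to decode your .mp3 (the file name song.mp3 is only a placeholder): it loads the file and prints an average level for each second of audio, so verses, choruses, and bridges show up as stretches of lower or higher values.

```python
import numpy as np
import librosa  # assumption: librosa is installed and can decode the mp3

# Load the file as a mono array of samples plus its sample rate.
samples, sample_rate = librosa.load("song.mp3", sr=None, mono=True)

# Print the RMS (average) level of each one-second chunk of the song.
for second in range(len(samples) // sample_rate):
    chunk = samples[second * sample_rate : (second + 1) * sample_rate]
    rms = float(np.sqrt(np.mean(chunk ** 2)))
    print(f"{second:4d}s  level {rms:.3f}")
```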

If you zoom in a little further, you might see peaks that appear rhythmically and match up with the song's drum track. If you zoom way in on these sections, you'll see that our perfect sine wave is gone! Instead, the lines are unevenly wavy and distorted. This is because all the different instruments and tracks interact in ways that cause constructive and destructive interference (two more terms you may remember from physics class!).
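You can recreate that uneven, wavy look by mixing just two tones. The sketch below (again a Python/numpy assumption, with arbitrary example frequencies) adds a 440 Hz wave and a 660 Hz wave: where the two push in the same direction the sum grows (constructive interference), and where they push in opposite directions it shrinks (destructive interference), so the combined waveform is no longer a clean sine.

```python
import numpy as np

sample_rate = 44100
t = np.linspace(0.0, 0.01, int(sample_rate * 0.01), endpoint=False)  # 10 ms

wave_a = 0.5 * np.sin(2 * np.pi * 440.0 * t)   # one "instrument"
wave_b = 0.5 * np.sin(2 * np.pi * 660.0 * t)   # another, a fifth higher

mix = wave_a + wave_b   # mixing tracks is just adding their samples together

# In some places the two waves push the same way and the mix swings higher
# than either wave alone (constructive); in others they push opposite ways
# and nearly cancel (destructive), so the mix is no longer a clean sine.
print("peak of wave_a alone:", round(float(wave_a.max()), 3))
print("peak of the mix:     ", round(float(mix.max()), 3))
```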

How different sounds interact is hard to pin down and has a lot to do with which parts of the audible spectrum they occupy (for example, a bass guitar and a piccolo played together should both be distinctly audible, but two violins playing the same notes would be hard to tell apart). All you really need to know is that two identical sounds played perfectly out of phase (that is, with one sound's peaks lining up with the other sound's valleys) will cancel each other out.
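That cancellation is easy to check numerically. In this sketch (same Python/numpy assumption), the second tone is just the first one flipped upside down, so every peak lines up with a valley and the mix comes out as pure silence. This phase-flip trick is, for example, what noise-cancelling headphones rely on.

```python
import numpy as np

sample_rate = 44100
t = np.linspace(0.0, 1.0, sample_rate, endpoint=False)

tone = 0.5 * np.sin(2 * np.pi * 440.0 * t)
flipped = -tone   # the identical tone, perfectly out of phase

mix = tone + flipped
print("loudest sample in the mix:", np.abs(mix).max())  # 0.0 -- silence
```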

The last important thing to understand about sound is that, though we're used to looking at sound as a two-dimensional phenomenon on paper and in our recording software, the reality is that sound actually moves in four dimensions: through three-dimensional space and through time. It bounces off hard surfaces, gets absorbed by soft surfaces, and takes time to travel from point A to point B. All of these behaviors will have an impact on how sound gets recorded in the studio, in the Arts Cafe, on a film set, and on location.
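The "takes time to travel" part is easy to put numbers on: sound moves through room-temperature air at roughly 343 meters per second, so a microphone a few meters from a source hears it slightly late, and reflections off distant surfaces arrive later still. A quick back-of-the-envelope sketch (plain Python, with distances chosen just for illustration):

```python
SPEED_OF_SOUND = 343.0   # meters per second, in room-temperature air

def delay_ms(distance_m: float) -> float:
    """How long sound takes to cover a distance, in milliseconds."""
    return distance_m / SPEED_OF_SOUND * 1000.0

print(delay_ms(1.0))     # ~2.9 ms -- a mic one meter from the source
print(delay_ms(10.0))    # ~29 ms  -- the far side of a ten-meter room
print(delay_ms(343.0))   # 1000 ms -- why distant thunder lags the lightning
```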

Readings and Resources