IMP Homework Podcast Episode Two Transcript

Introduction

Hello, I'm Wes Perdue from the East Bay area of San Francisco, California.

Welcome to the second episode of my podcast series that fulfills my weekly homework assignment for the online course from Coursera titled Introduction to Music Production.

You may remember that last week we looked at my digital audio workstation's audio signal path. Well, this week we will be looking at one particular component of that signal path: the analog-to-digital converter, or ADC.

From the air to the ADC

You might remember that sound waves traveling through the air hit the microphone, which, as a transducer, converts the sound waves into an analog audio signal. That signal is an alternating current electrical signal, where the voltage represents the pressure waves in the air; the compressions and rarefactions (high and low pressure waves, respectively) are represented as positive and negative voltages.

The electrical signal created by the microphone flows through the XLR cable to the audio interface, where it enters a preamp that brings the signal up to line level.

From the preamp, the signal flows into the ADC, where the analog audio signal is converted into a digital audio signal, represented by a stream of zeroes and ones.

That's the purpose of the ADC: as its name says, it converts an analog signal to digital.

On the surface, the task of the ADC is simple: it measures the incoming voltage, converts that level to binary, and outputs the binary representation.
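To make that three-step task concrete, here's a toy Python model of a single conversion. This is my own simplification, not how any real converter is built (and `adc_sample` is a made-up name): it clamps the incoming voltage to the converter's range, scales it to a signed integer code, and writes that code out as a 16-bit binary word.

```python
def adc_sample(voltage, full_scale=1.0, bits=16):
    """Toy model of one ADC conversion: clamp, scale to an
    integer code, and emit a two's-complement bit string."""
    # Clamp the input to the converter's full-scale range.
    v = max(-full_scale, min(full_scale, voltage))
    # "Measure" the voltage as a signed integer code.
    code = round(v / full_scale * (2 ** (bits - 1) - 1))
    # Output the code as a fixed-width two's-complement word.
    return format(code & (2 ** bits - 1), f"0{bits}b")

print(adc_sample(0.5))    # prints 0100000000000000
print(adc_sample(-1.0))   # prints 1000000000000001
```

A real ADC does this tens of thousands of times per second, and with far more engineering care, but the essence is the same: voltage in, binary word out.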

Digging in

As you dig in, things get deep very quickly.

An analog signal is a continuously variable signal. Say you measure the amplitude of a signal at two closely spaced instants. If you have sufficiently sophisticated gear, you can always measure the amplitude of the signal at a point in between those two measurements. This goes on to infinity: that is, it would take an infinite number of samples to completely describe the analog signal.

This is not feasible in the digital domain, as a digital audio stream is made up of bits - zeroes and ones - that, when grouped into words, describe the amplitude of individual samples. An infinite number of samples is clearly not feasible: it would take an infinite amount of time to process and an infinite amount of storage space to store.

Thankfully, audio in the range of human hearing can be adequately described with an achievable sampling rate. The sampling rate is the number of samples per second taken of the signal coming into the ADC.

According to the Nyquist-Shannon sampling theorem, we need only take twice as many samples per second as the highest frequency in the analog signal to adequately describe the signal. Since the frequency range for most human hearing is 20 Hz to 20,000 Hz, a sampling rate of 40 kHz or higher adequately captures an audio signal. This highest representable frequency, half the sampling rate, is called the Nyquist frequency, after Harry Nyquist, who described it in the 1920s.
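Here's a short Python sketch of why that limit matters. It's my own illustration, not from the course (`sample_tone` is a made-up helper): a tone above the Nyquist frequency produces exactly the same samples as a much lower tone, a problem called aliasing, which is why converters filter out frequencies above Nyquist before sampling.

```python
import math

def sample_tone(freq_hz, rate_hz, count):
    """Return `count` samples of a unit sine wave taken at `rate_hz`."""
    return [math.sin(2 * math.pi * freq_hz * n / rate_hz)
            for n in range(count)]

rate = 40_000                           # twice the 20 kHz limit of hearing
low  = sample_tone(1_000, rate, 8)      # a 1 kHz tone, safely below Nyquist
high = sample_tone(41_000, rate, 8)     # 41 kHz = 1 kHz + the sampling rate

# The two tones yield numerically identical samples: the 41 kHz tone
# "aliases" down to 1 kHz and becomes indistinguishable from it.
assert all(abs(a - b) < 1e-9 for a, b in zip(low, high))
```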

CDs are encoded at a sampling rate of 44.1 kHz, and DATs (digital audio tapes) use a 48 kHz sampling rate. It's not unusual these days for an audio engineer to work on a project with a sampling rate of 96 kHz.

There are two variables we must understand that directly affect the analog to digital conversion: sample rate and bit depth. We've already talked about sample rate. The bit depth is the number of bits used to describe the amplitude of a sample. The bit depth of an audio CD was standardized at 16 bits; that means a sample can have up to 2 to the 16th, or 65,536, values. This seems like a lot, but according to Andy Farnell on page 121 of his book Designing Sound, it only gives you a dynamic range of about 98 dB. This is adequate for a final recording, but we need more headroom - that is, space between the loudest sound in a recording and the loudest representable sound - when we are recording, mixing, or mastering. The industry has settled on 24 bits of amplitude per sample in DAWs, which gives us about 144 dB of dynamic range.
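Those dynamic-range figures come from a simple rule: each extra bit doubles the number of amplitude levels, adding roughly 6.02 dB. Here's my own back-of-the-envelope Python version; note that Farnell's 98 dB figure for 16 bits includes an extra term for a full-scale sine wave that this simplest form of the formula omits.

```python
import math

def dynamic_range_db(bits):
    """Ideal dynamic range of linear PCM: 20 * log10(2**bits),
    which works out to roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))   # 96.3  (CD audio)
print(round(dynamic_range_db(24), 1))   # 144.5 (typical DAW recording depth)
```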

Further exploration

I've only just scratched the surface of this topic. There are two more things I'd like to cover, but don't have time to do so adequately.

First, there are many algorithms and processes for converting an analog signal into a digital signal. Many are briefly treated in the analog-to-digital converter entry on Wikipedia, but the Data Conversion Handbook, published by the company Analog Devices, covers them in much more depth.

The other topic I'd like to research further is error handling. Because we're taking a continuous signal and sampling it, we're inherently throwing away the data between those samples. On top of that, rounding each sample's amplitude to one of a finite number of levels introduces errors called quantization errors. The equipment taking the samples may occasionally not perform perfectly, introducing more errors. There are other errors that can happen as well, and there are ways to deal with all of these. Farnell briefly covers some of these topics in chapter 7 of his book, and Analog Devices goes into great depth on them.
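As a rough illustration of quantization error, here's a minimal Python sketch (my own, assuming samples normalized to the range -1.0 to 1.0): rounding a sample to the nearest 16-bit level leaves a small error, bounded by half a quantization step.

```python
def quantize(sample, bits=16):
    """Round a sample in [-1.0, 1.0] to the nearest of the converter's
    discrete levels; the leftover difference is quantization error."""
    steps = 2 ** (bits - 1) - 1        # 32767 positive steps at 16 bits
    return round(sample * steps) / steps

original = 0.3000001
error = original - quantize(original)
# The error can never exceed half a quantization step.
assert abs(error) <= 0.5 / (2 ** 15 - 1)
```

More bits mean smaller steps, which is another way of seeing why 24-bit recording leaves a quieter quantization-noise floor than 16-bit.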

Summary

I hope you've found this a good introduction, and I hope I've given you a few pointers to resources to help you investigate this topic further if you would like to.

In summary, the work of an analog to digital converter seems simple: it just digitizes an analog audio signal, outputting a binary stream describing that signal. But as you investigate it more closely, you'll find it's a sophisticated piece of equipment in the digital audio signal path with a very interesting history.

If you'd like to submit feedback for this podcast, please use the contact form at wesperdue.net. Thank you for listening.