IMP Homework Podcast Episode Five Transcript

Introduction

Hello, I'm Wes Perdue from the East Bay area of San Francisco, California.

Welcome to the fifth episode of my podcast series that fulfills my weekly homework assignment for the online course from Coursera titled Introduction to Music Production.

This week we will be looking at algorithmic and convolution reverb effects. I've written this little drum riff in Geist that I'll be running through an algorithmic reverb and a convolution reverb to see the basic differences.

dry drum riff

As always, definitions for all key terms can be found in the show notes at wesperdue.net.

A reverberation, or reverb, is the collection of natural echoes we hear in a space. It has two components: early reflections and a reverb tail. The early reflections are the distinct echoes we hear bouncing off the walls, and the reverb tail is the wash of indistinct echoes that follows and dissipates as the sound is absorbed by the room.

Reverbs clearly function in the propagation domain of sound, as we interpret sound reflections as cues to the size and makeup of a space. For instance, a forest sounds open and yet dense, while a bathroom sounds very bright because of its hard surfaces.

There are two primary types of reverb effects, and they use different methods to create the reverberation. The two types are algorithmic reverbs and convolution reverbs.

Algorithmic reverbs

As its name suggests, an algorithmic reverb uses math to model a space and calculate the reverb. There are a wide variety of algorithmic reverbs: Apple's Logic includes four, ranging from simple to rather sophisticated, and many more are made by third parties, with a similar range of sophistication. For this demonstration, I'll be using ValhallaVintageVerb by ValhallaDSP. It is versatile, distinctive, and a great value.

As algorithmic reverbs are based on mathematical models, they can do almost anything their models allow. They can be as simple as specifying the size and reflectivity of a room, as in Logic's AVerb. Or they can be as sophisticated as Valhalla's VVV, with its nine algorithms and three color modes. Or, at an extreme, they can be "a modular dual-engine, non-linear spatial processor," as 2C Audio's B2 is.
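To give a feel for what "using math to model a space" means, here is a minimal sketch of the simplest building block found in classic algorithmic reverb designs: a single feedback comb filter, which simulates one echoing reflection path. This is not any plugin's actual engine, and the function name, delay length, and feedback amount are all invented for the example; real algorithmic reverbs combine many such elements.

```python
import numpy as np

def comb_filter(signal, delay_samples, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay_samples]."""
    out = np.zeros(len(signal))
    buf = np.zeros(delay_samples)  # circular delay line holding past output
    idx = 0
    for n, x in enumerate(signal):
        y = x + feedback * buf[idx]  # add the attenuated echo from the past
        out[n] = y
        buf[idx] = y                 # store output for the next echo
        idx = (idx + 1) % delay_samples
    return out

# A single click becomes a train of echoes, each half as loud as the last.
click = np.zeros(2000)
click[0] = 1.0
echoes = comb_filter(click, delay_samples=441, feedback=0.5)  # ~10 ms at 44.1 kHz
```

Each pass through the delay line multiplies the echo by the feedback amount, so the echoes decay geometrically, which is the essence of a synthetic reverb tail.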

Convolution reverbs

Convolution reverbs are a different beast. They are more rooted in reality, as their engine is driven by a sample of reality.

Convolution reverbs calculate a reverberation via an impulse response. An impulse response is a careful recording of a reverberation in a space. A convolution reverb then takes the impulse response and convolves the incoming signal with it, creating the reverb.

Convolution reverbs are the sample player to the algorithmic reverb's synthesizer. That is, they are a recording of reality, where algorithmic reverbs are purely synthetic.
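To make the idea concrete, here is a minimal sketch of what convolving a signal with an impulse response means, using NumPy. This is not how Space Designer or any other plugin is implemented internally; the dry signal (a single click) and the impulse response (exponentially decaying noise standing in for a room recording) are both made up for illustration.

```python
import numpy as np

sample_rate = 44100

# Dry signal: a single click (a unit impulse).
dry = np.zeros(sample_rate // 10)
dry[0] = 1.0

# Stand-in impulse response: decaying noise, mimicking a room's reverb tail.
rng = np.random.default_rng(0)
t = np.arange(sample_rate // 2)
ir = rng.standard_normal(len(t)) * np.exp(-t / (sample_rate * 0.08))

# Convolution "plays" the entire IR for every sample of the dry signal,
# scaled by that sample's amplitude, and sums the overlapping copies.
wet = np.convolve(dry, ir)  # total length is len(dry) + len(ir) - 1
```

Because the dry signal here is a single click, the wet output simply reproduces the impulse response, which is exactly why recording a room's response to a click (or a sweep) captures everything needed to recreate its reverb.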

Comparison/contrast

Convolution reverbs sound realistic, since they are based on an impulse response. But if you want a completely different type of reverb, you can't just switch the algorithm or model as you can with an algorithmic reverb; you have to load a different impulse response. That flexibility is the algorithmic reverb's advantage.

However, a convolution reverb can recreate any space for which an impulse response is available.

Historically, convolution reverbs have been considered quite CPU-heavy, while algorithmic reverbs have been considered lighter on the CPU. However, in my experience, as algorithmic models get more sophisticated, their CPU use approaches that of convolution reverbs. With a simple algorithm, an algorithmic reverb can still be very light on the CPU.

For the convolution reverb in this example, I'm using Logic's included convolution reverb, Space Designer.

Convolution reverbs have a unique advantage in sound design versus algorithmic reverbs because they can load any impulse response. I read a very interesting article this week about synthesizing impulse responses and then running sounds through the convolution reverb with the synthesized IR to get very interesting sound effects. You'll find the link to the article in the show notes.
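As a hedged sketch of that sound-design idea, here is one way to synthesize an impulse response from scratch: a downward sine sweep with a decaying envelope, convolved with a short noise burst standing in for a drum hit. Every signal here is invented for the example, and the article's actual technique may differ; the point is simply that the "IR" need not be a recording of a real space at all.

```python
import numpy as np

sample_rate = 44100
dur = 1.0
t = np.linspace(0, dur, int(sample_rate * dur), endpoint=False)

# Synthesized IR: frequency glides from 2000 Hz down to 100 Hz
# while the level decays, so the "reverb" is a pitched, falling tail.
freq = np.linspace(2000, 100, len(t))
phase = 2 * np.pi * np.cumsum(freq) / sample_rate
synth_ir = np.sin(phase) * np.exp(-t * 4)

# Convolve a short noise burst (a stand-in for a drum hit) into it.
rng = np.random.default_rng(1)
hit = rng.standard_normal(256)
effect = np.convolve(hit, synth_ir)
```

Swapping in different synthetic IRs (swept sines, granular textures, reversed envelopes) smears the input into entirely different effects, which is something no room recording would give you.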

Examples

Now for a few examples.

Here again is the dry pre-reverb recording. It's some Goldbaby samples played by FXpansion Geist. It has two effects on it: saturation courtesy of FabFilter Saturn and compression by Cytomic's The Glue. dry drum riff

Here is the soloed VVV channel. The factory preset I'm using is called Fat Snare Hall. VVV solo

Here is VVV in the mix. VVV mix

Now, the convolution reverb, Logic's Space Designer. I'm using the preset "Big bathroom". I chose it because it sounds good on the drums. Here it is solo. SD solo

And here is Space Designer in the mix. SD mix

Conclusion

I think whether one type of reverb is preferable to the other is subjective; it depends on both personal preference and the situation. In my research, I found that different reverbs suit different situations, and how the reverb sits in the mix is just as important as the reverb itself.

I personally prefer the variety of algorithmic reverbs I've collected, and the amount of control they allow. My podcasting partner, much more of a musician than I, uses Space Designer almost exclusively.

There are advantages to each, and the disadvantages are diminishing as developers keep putting out better and better plugins, and computers keep getting more powerful.

It's a very good time to be learning and enjoying music production.

Next week we will be looking at synthesis, one of my favorite subjects. Be sure to tune in.

I hope you've enjoyed this adventure into reverb. If you'd like to submit feedback for this podcast, please use the contact form at wesperdue.net. Thank you for listening.