– Or: This has nothing to do with paint. Except when it does.
Audio engineering is an interesting subject with an equally interesting name. It doesn’t seem like the two terms should go together because Audio involves intangible* sound things that go into your ears. Engineering, on the other hand, is generally related to the design of complex products. Structures, buildings, electrical systems, tangible stuffs and things.
According to Merriam-Webster (no relation to the sitcom Webster; that confused me as a kid), there is yet another definition of engineering to take into consideration! It’s quoted as “the application of science and mathematics by which the properties of matter and the sources of energy in nature are made useful to people.”
Huh?
Given that definition, audio engineering probably makes more sense than the notion of constructifying small buildings in our ear canals.
Audio engineers are literally applying science and mathematics (except we call it “art”) to sound waves to make them more useful to people than they are in their native state.
With that introduction, I bring to you: Everything you’ve never ever wanted to know about audio engineering and had zero fears regarding asking because … well … as established, ya didn’t wanna know in the first place.
Later blog posts are going to go more in-depth into some of the tools that we use, but this post is going to be more of a broad overview. And high level. Deep and wide. Mountainous or something.
* intangible in the sense that you can’t see sound waves unless you are gifted with synesthesia. You can -feel- them physically if you’re next to a good subwoofer at a concert. Try it sometime. Your innards may thank you.
Some concepts and terms (maybe not conditions)
For the sake of ease, I’m going to stick mostly to the realm of audio engineering in music. Some of this bleeds over into my obsession with voiceover (we don’t talk about radio imaging, no~o wo~o wo~o), but I’m primarily sticking with music. A bit of simplification is in order, but I think it will help.
When recording a live instrument, there is usually at least one microphone involved at bare minimum. The microphone is going to be the device that records the sound. Microphones come in all shapes and sizes. Sometimes they’re fairly large objects like the Neumann U87 microphone, which is larger than my fist. It records sound better than my fist does too. Sometimes they’re moderately small, like those little bendy stick ones you see glued onto motivational speakers’ faces. Other times they are so small you can’t even see them. If you’re reading this on mobile, there’s one under a tiny slit at the bottom of your phone or tablet. If you’re on a modern laptop, it’s a teensy-weensy circle next to the equally teensy-weensy camera circle.
Some recordings are accomplished without the need of a microphone. Digital pianos, as an example, can have audio cables plugged straight into an interface of some form to record directly to a computer. No need for microphones.
The two types of recording above are where we’re going to focus this entry: Microphonic and direct.
Okee. Now that we have that covered, let’s get into some meat and potatoes*. I’ll explain more terms as we go along.
(*by which I mean more substantial topic coverage. not actual meat and potatoes. i still haven’t figured out how to share those things through a screen. if you have, call me. number’s at the top of the page)
Why does audio need engineering in the first place?
That’s a really good question. Some audio doesn’t need it at all. In some contexts, there is absolutely nothing that needs to be done to an audio signal for it to be usable for its intended purpose.
If you’re a fan of Gary Vaynerchuk and his podcast, you’ve heard audio with no “post processing” involved. What is post processing? That’s when a completed audio recording is taken and modified in various ways by the engineer after the fact for different use cases. Many podcasts are unedited, unfiltered, unmodified. Whole grain, natural, SPF 50 audio signals delivered directly to your ears via your listening medium of choice (ear buds, speakers, car, tin can + string).
They’re like Bruno. We’re not gonna talk about those today.
(can you tell my kids are obsessed with Encanto?)
Today we’re going to discuss the need for actual post production of music for human consumption. Let’s bring up a specific example.
Simple Band Recording
If you blend all your colors (remember the paint metaphor above?) together on a piece of paper, you get something that looks like mud. The colors need to be separated out.
In your average rock band you will have a singer, guitarist(s), bassist, and drummer. Maybe even a keyboardist if you’re super fancy. Each of those instruments represents a color in this example. If all of the colors are blended together unaltered, you also get mud. Sonic mud. No, not the hedgehog drunk on the side of the road (again); audio mud. Details are lost, definition is difficult to make out, and things just don’t sound quite right.
If you’ve ever been in the midst of a “garage band” rehearsing, you might have an idea of what I’m talking about. There’s no audio engineering at all; everyone is playing LOUD and not exactly worrying about separation of colors. Don’t get me wrong, this is fun. It’s an absolute blast. Literally, if it’s too loud and an amp explodes. But for the purposes of human consumption after the fact, it doesn’t fly very easily. At least not in the context of the standards that have been adopted for music listening today.
Separation of church and state
Some separation exists. Bass guitarists tend to sit in the “basement” of the sonic earscape: the low frequencies. The ones that rumble nicely and make your seat vibrate. They also rattle the frame of the car next to you when blasted too loud, but that’s another topic for another time. Rhythm guitarists in this context will generally stick to their lower range of chords and notes and try to leave room for a lead guitarist to show off higher and faster notes. Keyboardists don’t have much room left, but because they cover a wide range, they can still fit in. Drums are just loud, shorter bursts of sound (cymbals last longer but still have a sharp “attack” and a decay), which is the nature of hitting an object instead of strumming a string.
What next?
Audio engineers will take the recordings of each of those instruments and manipulate them (engineer) in ways that will give them better color definition and separation. Things can be nipped and tucked in the available palette that will “leave room” for everyone to coexist. Shoehorn the bass into the lower end of sounds, rhythm guitars into the middle range and modify to make sure there’s room for organs or pianos, remove unnecessary sound range from the lead guitars, and sometimes shorten how long the drums make sound so that everything else has room to breathe.
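If it helps to see the “leave room for everyone” idea as code, here’s a very rough Python sketch (assuming NumPy and SciPy are handy). The filters and corner frequencies are purely illustrative guesses, not how any particular engineer actually carves up a mix:

```python
# A rough sketch of "leaving room" with filters (Python + NumPy/SciPy).
# Corner frequencies are illustrative, not a mixing recipe.
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100  # sample rate in Hz

def high_pass(signal, cutoff_hz):
    """Remove energy below cutoff_hz (e.g. clean low-end mud out of a guitar track)."""
    sos = butter(4, cutoff_hz, btype="highpass", fs=SR, output="sos")
    return sosfilt(sos, signal)

def low_pass(signal, cutoff_hz):
    """Remove energy above cutoff_hz (e.g. keep the bass in its basement)."""
    sos = butter(4, cutoff_hz, btype="lowpass", fs=SR, output="sos")
    return sosfilt(sos, signal)

# Pretend these are one-second mono recordings of each instrument.
bass   = np.random.randn(SR)
rhythm = np.random.randn(SR)
lead   = np.random.randn(SR)

bass   = low_pass(bass, 250)     # bass lives in the low end
rhythm = high_pass(rhythm, 120)  # rhythm guitar stays out of the bass's lane
lead   = high_pass(lead, 200)    # lead guitar sits above the rhythm part

mix = bass + rhythm + lead       # rough sum; a real mix also balances levels
```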
That’s a very specific example.
Difference between a good and bad recording
Raise of hands, how many of you have been on a Zoom call in the last two years?
Higher. Raise them high.
Actually, I still can’t see them because I’m not right there but almost every one of you got a good stretch out of it. You’re welcome.
We won’t do any more hand raising exercises, but plenty of those Zoom calls had voices that sounded kinda wonky, right? Far away from the microphone, sounding like a muted stairwell. Ok. There’s a baseline.
Now, imagine if you’re watching Encanto (three!) and every time Mirabel says something on the screen it sounds like it was recorded in a bathroom. Except she’s not in a bathroom, she’s next to the river. Something doesn’t quite gel there, does it? Stephanie Beatriz was of course recording all of her lines into a high quality microphone in a very well tuned room where sound doesn’t bounce off the walls and back into the microphone like it does in the living-room-turned-office Zoom meeting environment.
Folks in the audio engineering field are responsible for making certain that this all works swimmingly.
Just like when you turn on a light and it fills the room because it reflects off the surfaces of all your walls, sound is also a waveform and it too will bounce off of the walls, creating echoes and reverberation. For a voice recording in a production, you don’t want this. If you listen to podcasts, I’m positive you can tell the difference between someone with a microphone in a living room vs. someone with a microphone in a silent and well insulated room. You may never have thought about it before, but now you’re in the know. I’ll teach you the secret handshake later. Bring hand sanitizer. Maybe baby wipes. You’re not allergic to wool, are you?
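For the code-curious, here’s a toy Python illustration of that bouncing: the direct sound plus one delayed, quieter copy coming off a single imaginary wall. Real rooms pile on thousands of these reflections, and that pile-up is the reverberation we’re trying to avoid for voice work. All of the numbers are made up:

```python
# A toy illustration of a room reflection (Python). One "wall" bounces the
# sound back 30 ms later at reduced strength; real rooms add thousands of
# these, which is what we hear as reverberation.
import numpy as np

SR = 44100                       # sample rate in Hz
dry = np.random.randn(SR)        # stand-in for one second of recorded voice

delay = int(0.030 * SR)          # the reflection arrives 30 ms later
reflection_level = 0.4           # and quieter than the direct sound

wet = dry.copy()
wet[delay:] += reflection_level * dry[:-delay]   # direct sound + one echo
```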
Exceptions
There are times when it’s actually really good to have a room with a “live” sound. Reflections, a sense of sounding alive. The converse is that when recording intimate things like a voice or an acoustic guitar, it helps for the room to be “dead”: few or no reflections.
Drums sound killer when recorded in a live room. Drum recording is a tricky thing to explain in most conventional situations but the short version is: most studios will capture close up recordings of individual drums as well as the overall room and mix them together to get the drum sound you hear on your favorite album.
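If it helps to see that blending step as code, here’s a minimal Python sketch; the 70/30 split is an arbitrary illustration, not a studio standard:

```python
# A minimal sketch of blending a close drum mic with a room mic (Python).
# The 70/30 blend is an arbitrary illustration, not a studio standard.
import numpy as np

def blend(close_mic, room_mic, room_amount=0.3):
    """Mix a close (dry) recording with a room (ambient) recording."""
    return (1.0 - room_amount) * close_mic + room_amount * room_mic

snare_close = np.random.randn(44100)  # stand-in for a close snare recording
room        = np.random.randn(44100)  # stand-in for the room microphones

snare_in_context = blend(snare_close, room, room_amount=0.3)
```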
Brass instruments can go either way. Sometimes you need the intimacy that a dead room affords, sometimes you need the presence of a live room.
I’m rambling a bit but there we are.
What can audio engineering do with a bad recording?
Not a lot, frankly. If the goal is to put a voice recording into a song and the voice recording sounds like it was recorded at the far end of a warehouse, there isn’t much that can be done with it. That’s the tightrope of pleasantry where we ask as gently as possible if a re-recording could be done in a more suitable location. And then brace for the impact of hearing that it was the best possible performance and will never be that good again. Ever.
The space that a recording is done in is one of the most important components of the entire process. A bad space sounds like a bad space no matter which microphone you put in it.
Conversely, the microphone on newer generation Apple iPhones sounds REALLY GOOD when you record in a decent space. I mixed a song using an impromptu recording from an iPhone and you would never guess it was from an iPhone. Good recordings.
If that same recording had been done in a bathroom, there’s virtually nothing I could do. There are some tricks that can reduce the sound of the room reverberation, but the overall result is still “meh.” Best to record in the walk-in closet instead of the garage!
Psychological witchcraftery
In today’s edition of Did You Know, the keto-friendly fatty computer made of meat that we call the HUMAN BRAIN can be easily tricked and manipulated by something as simple as different volume levels. Volume is measured in “decibels”. The “bel”, a logarithmic unit, is named after Alexander Graham Bell, and the “deci” in front means we’re dealing in tenths of a bel. Symbolized as “dB”. Totally not getting into the math of it today – or ever – but that’s just for the sake of using that term going forward.
If I sat you down and played you part of a song over a stereo at a specific volume level, then played that exact same part of the song one decibel louder and asked you which one sounds better, you would instinctively believe the second, louder version to be the better one. Our brains are tuned to “louder is better.” Audio engineering in action.
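I know I said no math, but for the incurably curious, here’s a tiny Python sketch of what a one-decibel bump works out to as a volume multiplier (using the standard 20 × log10 amplitude convention):

```python
# Back-of-the-envelope: what a decibel change means as a gain factor (Python).
import math

def db_to_gain(db):
    """Convert a decibel change to a linear amplitude multiplier."""
    return 10 ** (db / 20)

print(db_to_gain(1.0))   # ~1.122: +1 dB is roughly a 12% bump in amplitude
print(db_to_gain(6.0))   # ~2.0:   +6 dB roughly doubles the amplitude
```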
Ever wondered why, for the longest time, the commercial breaks on television were significantly louder than the program you were watching? Because louder is better. It gets your attention, holds your disinterest at bay, provides contrast, and plays on the psychology of volume to get you to buy things that will help you allegedly smell better. At least in the United States, that practice of deceptive psychological advertising warfare has been by and large rendered illegal. I still feel like they find ways to make that happen though…
And then there was the “Loudness War”
So if louder is better and advertisers use it, you’d better believe that record labels who wanted to sell albums were going to take advantage of that too. Hence, the “loudness wars.”
The short version: audio engineering witchcraftery was done to make Song X sound louder than Song Y in order to get Consumer Alpha to purchase Album Theta with Song X instead of Album Gamma with Song Y.
Over the course of a couple of decades, the witchcraftery continued to escalate because it worked and sold albums. The data showed it was effective.
The sacrifice? That witchcraftery took away some of the feel of the music by removing some of the contrast between louds and softs. Peak loudness war is very often pinned on the Death Magnetic album by Metallica; irreverently described as having been “squashed like a pancake,” it has almost no dynamic range. It’s just loud. Almost overbearingly so.
The de-escalation of the loudness war has been slow but seems to be progressing. Internet music streaming services have been working on that front by establishing loudness targets to help normalize the listening experience. It’s not perfect by any stretch, but it can definitely make for a more pleasant listening experience.
Unless you go from listening to Beethoven’s 5th to something by Slipknot in your playlist randomizer. Then it’s still jarring.
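Back to that normalization idea for a second: here’s a very simplified Python sketch of “nudge every track toward one target level.” Streaming services actually measure loudness in LUFS (a standard called ITU-R BS.1770), which is more involved than the plain RMS average used below, and the -14 dB target here is just an illustrative number:

```python
# A simplified sketch of loudness normalization (Python). Real services use
# LUFS (ITU-R BS.1770); this only shows the "one target level" idea via RMS.
import numpy as np

def rms_db(signal):
    """Average level of a signal, expressed in dB relative to full scale."""
    rms = np.sqrt(np.mean(signal ** 2))
    return 20 * np.log10(rms + 1e-12)

def normalize_to(signal, target_db=-14.0):
    """Apply one overall gain so the track's average level hits target_db."""
    gain_db = target_db - rms_db(signal)
    return signal * (10 ** (gain_db / 20))

quiet_symphony = 0.05 * np.random.randn(44100)  # stand-in for a quiet master
loud_metal     = 0.90 * np.random.randn(44100)  # stand-in for a loud master

# After normalization, both sit around the same average level.
print(rms_db(normalize_to(quiet_symphony)), rms_db(normalize_to(loud_metal)))
```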
Summary and conclusion
I’m uhh gonna stop here for now. That is a lot to take in. In my perfect world there would be a single blog post about each section above, but now that it’s all covered here, we don’t have to! Now that I’ve written it out, I realize audio engineering is a much more complicated topic than I thought.
In subsequent postings, I’ll bring up some of the equipment that is used for making audio recordings play nicely with each other and fit into your ears in a way that is pleasant. I’ll try a bit harder to make it all understandable too; went a bit off the rails on this one. My goal is to help explain some of this voodoo stuff in a way that actually makes a bit of sense if you, as the reader, don’t have much or any background in the field and just have a passing interest.
If you have a specific question that you’d like answered, hit me up! I’ll do my best to not lead you astray.
Thanks for coming along for the ride!
-= george =-