
Four Audio Myths

Jun 6, 2011 2:51 PM
By Bob McCarthy

Misconceptions you need to know.

Figure 1: The phase cycle of acoustic addition/subtraction for two sources of matched level. The level transitions gradually, but asymmetrically, around the 360-degree circle. As the level offset between the sources rises, the addition/subtraction values decrease.

The world of audio has an air of mystery about it. Our auditory sense is a solitary experience. We hear sound waves, but we can’t see, touch, smell, or taste them. Contrast this to our experience of food, which includes all five senses. The solitary nature of our auditory experience leaves it particularly prone to misunderstanding and misconception. This has led to many popular myths regarding the nature of sound and our perception thereof. This article will explore a few of the pervasive myths and unicorn quests that still lurk in our audio world. Most often, the myths spring from the foggy area between the world we can measure and the one we experience in our heads.

1. It’s a Phase Problem

In the final scene of the classic movie Casablanca, the police chief orders his men to “round up the usual suspects.” In our world of speaker systems, the usual suspect is phase. You don’t like the sound? Blame it on phase. Don’t have a clue about why it sounds so strange? Announce that it’s a phase problem.

Why is it so easy to blame phase? Sound is invisible, but at least we can hear it. Phase is double encrypted: We can’t see it and we can’t hear it—directly, at least. What we know about phase is how it modifies our experience of amplitude, and that unfortunately is a complex issue.

Figure 2: The effect of level on acoustic addition and subtraction. The extent of the addition and subtraction decreases as the level difference between two combined sources increases.

You heard it here: There is no such thing as a phase problem, so please stop bullying phase. Sound ridiculous? Well, here is the caveat: There are phase + amplitude problems, and plenty of them. But there are no phase problems when there is no amplitude. A simple example: If a speaker is muted, it does not matter if it is wired with reverse polarity.

Our concern about phase is relative phase, not absolute phase. Relative phase has to relate to something if it is to matter. I just listened to Abbey Road. It is 30 million degrees out of phase with the original performance but still sounds fine, because the amplitude from the 1969 recording sessions has long since faded away.

Relative phase matters whenever two copies of the same original signal come into contact, such as direct sound and a reflection, or two speakers in an array. The relevance of the phase relationship between any two sources is directly proportional to their amplitude relationship. If they are close in level, then phase is the tiebreaker: We can gain a lot or lose a lot. If they are far apart, then the stronger signal becomes increasingly immune to the relative phase of its weaker partner. A reflection is always late, and therefore always out of phase at some frequencies and in phase at others. A reflection of equal strength to the direct sound is a worst-case phase (+ amplitude) problem, but not one we need to throw up our hands and surrender to. Either reduce the level of the reflection or reduce the phase discrepancy.
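
The arithmetic behind Figures 1 and 2 is simple enough to sketch. The short Python snippet below (a minimal sketch of mine, not taken from the article or its figures) models the two copies of the signal as phasors: the relative level in dB sets the length of the second phasor, the relative phase sets its angle, and the combined level falls out of the vector sum.

    import cmath
    import math

    def combined_level_db(level_offset_db, phase_offset_deg):
        """Combined level (in dB, relative to the louder copy alone) of two
        coherent copies of a signal, given the level and phase offset of the
        weaker copy relative to the stronger one."""
        ratio = 10 ** (-abs(level_offset_db) / 20.0)  # linear ratio of the weaker copy
        total = 1 + ratio * cmath.exp(1j * math.radians(phase_offset_deg))
        return 20 * math.log10(abs(total))

    # Matched level: +6dB of addition in phase, total cancellation at 180 degrees.
    print(round(combined_level_db(0, 0), 1))      # ~ +6.0
    print(round(combined_level_db(0, 120), 1))    # ~  0.0
    # A copy 12dB down barely moves the result, whether in or out of phase.
    print(round(combined_level_db(12, 0), 1))     # ~ +1.9
    print(round(combined_level_db(12, 180), 1))   # ~ -2.5

At matched level the result swings from 6dB of gain to a dead null; with the second copy 12dB down, the swing collapses to a couple of dB either way. That collapse is the amplitude control described above.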

So it is with speaker arrays as well. Let’s take one that is wired correctly and aimed to create uniform level all over the arena. Do you think there is any seat in the house where all 16 of your main array boxes arrive at the same time? Not likely. That means the relative phase is not matched at all frequencies at any given seat. Sounds like we have a “phase problem,” eh? And if you did get one seat to have all the path lengths to match, then what about the next seat?

How do we achieve success with a sound system that has inherent “phase problems”? With control of the amplitude. People at the top of the arena don’t mind the lower boxes in the array being late. The boxes at the top of the array are the dominant source at the high end, which is the range that is most out of phase. The high frequencies are way out of phase, but also way out of amplitude and therefore irrelevant. The low frequencies are a shared resource, since the individual cabinets have minimal directional control in the low end. The levels are nearly equal at the low end and yet close enough in phase to add constructively. This would seem to be a recipe for a “phase problem,” but we can win this one because the phase differential shrinks as we go down in frequency and the amplitude differential rises with frequency.
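
To put rough numbers on this, a fixed arrival-time difference produces a phase offset that grows linearly with frequency. Here is a minimal sketch assuming a hypothetical 0.5m path difference and a nominal speed of sound; the specific values are mine, chosen only for illustration.

    # Phase offset between two arrivals of the same signal for a fixed
    # path-length difference (illustrative values, not from the article).
    SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C

    def phase_offset_deg(frequency_hz, path_difference_m):
        """Phase offset in degrees produced by a fixed arrival-time difference."""
        delay_s = path_difference_m / SPEED_OF_SOUND
        return 360.0 * frequency_hz * delay_s

    # A 0.5m difference between two array boxes and a given seat:
    for f in (63, 250, 1000, 4000, 16000):
        print(f, "Hz:", round(phase_offset_deg(f, 0.5)), "degrees")
    # 63Hz stays within about 33 degrees (strong addition), while 16kHz is
    # thousands of degrees out, which only stops mattering if that box is
    # also well down in level at that seat.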

Is this difference between calling something a “phase problem” and a “phase + amplitude” problem just a semantic game? Not really. These issues require both parties to be involved. When we think of amplitude without phase or phase without amplitude, we are destined to make very poor choices in our quest to solve “phase problems.”


Figure: A sound decays from 100dB SPL to the 40dB SPL noise floor in 2 seconds, illustrating an RT60 value of 2.0 seconds.

2. Turn it up so we can excite the room

As a specialist in the measurement of sound systems in rooms, I am often asked how loud I run the speakers in order to do tuning. Folks want to be sure to “excite the room.” So how much sound does it take to excite the room? Does a room of glass and plaster get excited more easily than one full of fiberglass? Imagine this excerpt from the conversation between two bricks on the wall: “It takes about 50dB SPL to wake me up and remind me to reflect the sound. Less than that, I absorb it.” As we know, rooms are inanimate objects. Walls have an absorption coefficient and reflect a fixed proportion of incident energy back into the room. So where does such an idea come from?

We characterize a room’s acoustic excitability with metrics such as reverb time (RT) and others. The standard value for reverb time (RT60) is the amount of time it takes to decay 60dB. For example, if it takes 2 seconds to fall from 100dB SPL after the sound is stopped to 40dB SPL, we have an RT60 value of 2.0 seconds—as illustrated in the opening image. The RT60 of a room is constant over level and requires some change in room volume or amount of absorption to assume a new value.

The root of this myth is that we perceive more of the room's reflections when louder sounds are pumped into it, and conversely less of the room when the input level is decreased. This is because we hear sound until it reaches the noise floor, rather than ceasing to listen after 60dB of decay (as the RT value does). If the noise floor in our example hall happens to be 40dB SPL (60dB below the input level), then we would experience the same 2.0 seconds of decay found in the RT test. If the noise level is higher, the experienced decay time is shortened even though the acoustic properties of the room remain static: The experienced sound is drier. Conversely, a louder sound is perceived as having a longer decay, and we are able to localize more of the room's surfaces because we have extended the time above the noise.
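
A back-of-the-envelope model makes the point. Assuming the idealized straight-line (in dB) decay that the RT60 metric describes, the decay we experience lasts only until the tail drops into the noise floor. The sketch below uses the article's 2.0-second room; the other numbers are illustrative, not measurements.

    def perceived_decay_s(rt60_s, source_level_db, noise_floor_db):
        """Time until a decaying sound drops into the noise floor, assuming
        the idealized straight-line (in dB) decay that RT60 describes."""
        drop_db = max(source_level_db - noise_floor_db, 0.0)
        return rt60_s * drop_db / 60.0

    RT60 = 2.0  # seconds; a fixed property of the room
    print(perceived_decay_s(RT60, 100, 40))   # 2.0s: matches the RT60 test
    print(perceived_decay_s(RT60, 100, 55))   # 1.5s: a noisier room sounds drier
    print(perceived_decay_s(RT60, 112, 40))   # 2.4s: a louder input sounds longer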

How can we disprove this theory? Take a frequency response reading of your sound system with a high-resolution audio analyzer. Turn it up 10dB and compare. Rinse and repeat. You will see the same response as long as you are above the noise and below the limits of the speakers.

It is important to know the difference between fact and fiction here because understanding the actual mechanism at play may lead to better decisions in the field. For example, we can add some electronic reverb to quiet songs and leave it off for loud ones, keeping the same perception of reverb for our listeners.


3. You’re overdriving the room

This myth is a close cousin of animated room acoustics. It goes: "The system was so loud that it overdrove the room." The idea is that the room got so full of sound that we could not fit any more in, or that the room acoustics reached a saturation point like the output tubes on a Marshall amp. While it is theoretically possible to generate enough SPL to tear the air as a medium, you would need to get your sound system past the level of a Saturn V rocket to get there. Even the most macho sound system does not have the power capability to significantly modify the acoustical properties of a room. If this were true, we would have meetings with the structural engineers to discuss how loud the system could go before the roof caved in.

Again, the saturation is in your head and (very probably) in your sound system—not the room. As level rises, distortion and compression increase in every link of the audio chain: amplifiers, speakers, the air, and our ears. The result is a reduction in the dynamic range—both real and perceived. Let's assume we have a sound system of unlimited power. Even so, the air, as a transmission medium, becomes increasingly nonlinear as we reach high SPLs. Extremely high SPLs encounter the elasticity limits of the air medium, and the waveforms become distorted.

Once high-level sound makes it to our ears, it is a matter of time before our internal limiter, the tensor tympani, goes into action. The first peaks will get through, but then the eardrum's tiny muscle tightens and mechanically reduces the dynamic range of our aural system. Plenty of sound gets through to the inner ear, but the basilar membrane has mechanical and electrical (neurological saturation) limits as well. Increased distortion and compression are the products of overloading the receptive transducer system, just as they are with speakers and air on the transmission side.

Another aspect that leads to the perception of saturation is the extension of the perceived reverberation time (as described above). If we combine high level and fast tempo, the music can become a sonic soup in which we lose the individual transient events. There is simply not enough time between the musical transients for the signal to decay enough to make room for the next arrival.
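
Rough numbers help here as well. Assuming the same straight-line (in dB) decay, the sketch below estimates how far the previous transient has decayed by the time the next one lands; the tempo and room values are hypothetical.

    def decay_between_hits_db(rt60_s, tempo_bpm, hits_per_beat=1):
        """Decay (in dB) of the previous hit by the time the next one arrives,
        assuming straight-line (in dB) decay at the RT60 rate."""
        gap_s = 60.0 / (tempo_bpm * hits_per_beat)
        return 60.0 * gap_s / rt60_s

    # In a 2-second room, quarter notes at 140bpm leave only about 13dB of
    # decay between hits, and eighth notes only about 6dB: a sonic soup.
    print(round(decay_between_hits_db(2.0, 140), 1))     # ~12.9
    print(round(decay_between_hits_db(2.0, 140, 2), 1))  # ~6.4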

The importance here is in the assumption of responsibility. If a mix engineer mistakenly believes the room is saturated, the responsibility moves away from them. When we understand that saturation is in the limits of the air and in our heads, then the solution moves back into the mix engineer’s court: Turn it down.


4. “You are too close to measure the low frequencies”

My process of tuning a sound system involves the measurement of every speaker in the system. Oftentimes people are concerned when I move my measurement mic inches away from a giant subwoofer. "Aren't you too close in to see the low frequencies? Doesn't it take a long distance for the low frequencies to 'develop'?" My answer is to invite them to put their ear up against the cabinet while I send in a 30Hz tone. I get few takers. Then I show them the analyzer's frequency response, which inevitably reveals a fully developed 30Hz.

Air is the medium through which sound passes. There is no way for 30Hz to get to the back of the arena without passing through the air in front of the speaker. The misunderstanding seems to stem from a variation of the cosmological construct known as "string theory"—in our case, "guitar string theory." A vibrating string has fixed points (such as the bridge of a guitar) that don't move. If we visualize our subwoofer as the end of a vibrating string, then it would seem that we would have to get away from the box to experience the large-wavelength vibrations. In reality, the subwoofer's cone is not a fixed node but is quite free to move, and putting your face in front of it removes all such doubt.
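
For a sense of scale, the numbers are easy to check. A short sketch, assuming a nominal speed of sound: the wavelength of 30Hz is about 11.4m, but nothing about that figure requires any distance for the pressure at the cone to be fully present.

    # Wavelength and period of a 30Hz tone (illustrative only).
    SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C

    frequency_hz = 30.0
    wavelength_m = SPEED_OF_SOUND / frequency_hz  # ~11.4m
    period_ms = 1000.0 / frequency_hz             # ~33.3ms
    print(round(wavelength_m, 1), "m,", round(period_ms, 1), "ms")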

There are instances where sound waves take some distance to fully develop their character, which helps to keep this myth alive. This happens in speaker arrays, which require an extended transmission distance before the patterns of the multiple elements have overlapped and assumed the fully combined coverage pattern. Low frequencies can be steered in such arrays. We might find ourselves in a quiet zone close to an array, but this is about directional control—not about being too close to hear. The sound is still there; it has simply been steered somewhere else around the array.

There are lots more audio urban legends out there. Personally, I love to hear the theories, but what I love most is putting them to the test under my analyzer.

Bob McCarthy is president of Alignment and Design. McCarthy specializes in the design and tuning of sound reinforcement systems and conducts trainings around the world. His book, Sound Systems: Design and Optimization, was named "2007 Sound Product of the Year" by Live Design.
