Apr 1, 2005 12:00 PM,
By Steve Barbar
Good sound at any seat in the house.
For some time now, the use of acoustic enhancement systems has been on the rise, due in part to the improvement in the quality of components used in modern systems and advancements in digital signal processing. The increase is also due to ongoing research that has provided greater insight into what these systems require to perform successfully. Contractors are implementing acoustic enhancement systems in venues of all types, both indoors and outdoors. And the applications are equally varied — from concert halls, opera houses, and performing arts venues to houses of worship, sports arenas, sound stages, recording facilities, and even private homes. These systems are a new tool that offers architects, acoustic consultants, and sound designers greater latitude in providing a more enjoyable listening experience.
Yamaha AFC installation at the Tokyo International Forum.
One of the greatest challenges for any venue where sound delivery will take place is meeting the expectations of today’s listeners. Modern audio production has forever changed the way we interpret sound. These days it is hard to find even a low-cost television (where audio takes a back seat) that does not have stereo sound. In slightly larger units, surround sound of some kind is the norm. Radio has changed as well — 5.1-capable receivers are commonplace and provide a wider soundscape than two-channel playback. Gaming takes surround even further by adding motion transducers to controls to simulate environmental feedback from the game.
The personal computer has altered the paradigm for acquiring and storing music. With portable digital storage units that play back compressed audio files, we can alter our music library at a whim and take it with us. Most new cars now provide up to seven channels of surround sound, and some even allow the digital storage units to dock and play. Let’s face it — we’re being surrounded.
The advent of multichannel audio raises our expectations for an immersive and enveloping listening experience, and this carries over to live sound of all types. More than ever, performers and audiences alike expect to hear clear and intelligible speech and music with good tonal balance that has impact, liveliness, and a rich enveloping sound field. And they expect this at their seats — no matter where they are sitting. Meeting these expectations has become important to the success of venues of all types.
So why isn’t this the experience in every room? What makes achieving this so difficult? The acoustic requirements for effective speech communication directly contrast with the conditions needed to provide the richness, envelopment, and reverberation essential to the listening experience for acoustic music. The important elements for optimizing an environment for either condition are well known. Moreover, they were successfully used long before the advent of electro-acoustic sound systems. The following guidelines are important in creating an effective space for speech communication:
Yamaha AFC installation at Hamamatsu Arena.
- Reduce the distance between the sound source (talker) and the audience.
- Favor reflections that come from the direction of the sound source.
- Use surface treatments to reduce reflections and reverberation that detract from intelligibility.
- Pay careful attention to minimizing noise intrusion.
If a venue needs to support acoustic music, then consider how to:
- Provide the early reflections that increase impact and blend sound sources;
- Contour the later reflected energy from the sides and rear; and
- Incorporate surface treatments that produce reverberation that envelops the listener.
The ideal space for sound will not necessarily meet all the needs of the users or audience. For example, a space with ideal intimacy for drama may not have enough seating to provide the income necessary to sustain it. A small church may have clarity for the spoken word but lack the reverberation for a successful music program. A cathedral may have ample reverberation for a pipe organ but make speech unintelligible. And what about spaces that need to accommodate different types of programming? For example, programming might include opera one week, symphony the next, and then a week of a Broadway play. What about a large church that needs to successfully support both modern and traditional services? Even in the best venues, the listening experience can vary widely from seat to seat. Those seated closer to the sound source have a different experience than those seated in the middle or rear of a hall. Architectural features, such as balconies, that are used to maintain intimacy create decoupled environments that sound different from the main volume.
Is there a way to have your cake and eat it too (or at least eat more of it)? Recent developments in digital signal processing have led to the evolution of electronic acoustic enhancement. Modern systems are practical, and their numbers are multiplying. These advancements, combined with a better understanding of how we hear and what fuels our expectations for sound delivery, can allow us to provide improved sound quality for a wider range of programming.
Figure 1: Sabine’s equation provides a way to determine reverberation time and level.
A DOSE OF REALITY
In theory, constructing the optimal space for either speech or music should be a matter of simply using architectural elements that provide the most favorable ratios of direct energy, reflected energy, and reverberation. The physics of closed architectural spaces, however, yield several linked relationships. Reverberation time, the energy that persists after a sound event excites the environment, and reverberation level are determined by the cubic volume of the space and the reflectivity or absorption of the surface treatments. This relationship is defined in Sabine’s equation, shown in Figure 1.
Changing the surface treatments, volume, or geometry to alter one acoustic parameter can impact others, sometimes negatively.
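As a concrete illustration, Sabine's relationship can be sketched in a few lines of Python. The 0.161 coefficient assumes metric units; the hall volume and absorption coefficients below are hypothetical, chosen only to show how volume and surface treatment interact:

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine reverberation time: RT60 = 0.161 * V / A (metric units),
    where A is the total absorption (sum of area * absorption coefficient)."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 12,000 m^3 hall: (surface area in m^2, absorption coefficient)
hall = [
    (2000, 0.05),  # plaster walls and ceiling (reflective)
    (800, 0.60),   # upholstered seating (absorptive)
    (400, 0.10),   # wood stage and floor
]
print(round(rt60_sabine(12000, hall), 2))  # about 3.12 s
```

Doubling the total absorption halves the reverberation time, while doubling the volume with the same surfaces doubles it, which is exactly the linkage between parameters described above.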
Another important acoustic parameter is running reverberation (RR). RR is the level of reflected energy and reverberation compared to the direct sound while it is running. It is the acoustic support that is perceived in the gaps between words and phrases in speech and notes in music. RR represents the most delicate balance in music and acoustics, and it influences every other parameter. Recent studies have determined that the optimal value for RR changes with the type of music being performed. Speech and solo performance require less. Ensembles and symphonic music require more. According to the studies, optimal ratios fall between 1:6 and 1:10 for listeners. Higher ratios are required for musician self-support. (See Figure 2.)
Generally speaking, spaces with larger internal cubic volume have longer reverberation times and lower reverberation levels. Smaller spaces have shorter reverberation times, but reflections are louder because the surfaces are closer to the listener. In between are spaces that exhibit characteristics of both.
Geometry and surface treatments also determine the quality of the reflected energy and reverberation. For instance, curved walls can form disturbing focused reflections. Parallel walls promote hard reflections and flutter echo. The two seconds or more of reverberation generated by yelling down a sewer pipe might not be optimal for acoustic music in a hall. (See Figure 3.)
Figure 2: Running reverberation (RR) is a ratio of direct to reflected and reverberant sound.
For many years, electro-acoustic sound-reinforcement systems have been used successfully to alter and improve the delivery of direct sound. The dream of electronically variable architecture is as old as the development of the first sound systems. Over time, there have been numerous attempts to make this a reality — most with questionable results. The greatest difficulty — both then and now — is the physics involved in providing both adequate level as well as stable, color-free operation.
In any system that uses mics and loudspeakers in the same acoustic environment, the mics pick up some of the energy generated by the loudspeakers and recirculate it through the system. The surfaces of the venue also reflect sound generated by both the sound source and the loudspeaker. The mics circulate this sound through the system as well.
When the direct and reflected sound sum in phase, amplitude at that frequency increases. Likewise, when the direct and reflected sound sum out of phase, amplitude at that frequency decreases. Thus, the transfer function between the loudspeaker and the mic has many peaks and valleys as a function of frequency due to interference between the numerous reflections in the sound path. If the electronic gain of the system is continually increased, the system will begin to oscillate at the frequency with the highest statistical gain or the path of least acoustic impedance.
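As a simplified sketch, the peaks and valleys in the transfer function can be modeled by summing a direct path with one delayed copy of itself. This uses a single reflection with a fixed, hypothetical gain of 0.7, rather than the many reflections of a real room:

```python
import math

def comb_magnitude(freq_hz, delay_s, reflection_gain=0.7):
    """Magnitude of direct sound summed with one delayed reflection:
    a single-reflection model of comb-filter interference."""
    phase = 2 * math.pi * freq_hz * delay_s
    real = 1 + reflection_gain * math.cos(phase)
    imag = reflection_gain * math.sin(phase)
    return math.hypot(real, imag)

# A 5 ms reflection path (about 1.7 m of extra travel):
delay = 0.005
print(round(comb_magnitude(200, delay), 2))  # 1.7 (one full cycle: in phase, a peak)
print(round(comb_magnitude(100, delay), 2))  # 0.3 (half a cycle: out of phase, a dip)
```

The frequency with the tallest peak in this response is where a system pushed too hard will first begin to oscillate.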
The probability of oscillation at a given frequency depends on the reverberation time of the space and the gain of the system. The amount of acoustic feedback can be quantified as the total sound energy that the mic picks up from the loudspeaker divided by the total sound energy that it picks up from the sound source. This ratio is defined as the average loop gain:
Avg. loop gain = (mic pickup from loudspeaker) / (mic pickup from source)
Critical distance (Dc) is defined as the distance from a sound source at which the sound-pressure levels of the direct sound and the reverberant field are equal. If a mic and a loudspeaker are separated by at least the critical distance of the room, then one can predict the average loop gain at which the system will begin to oscillate at a given frequency. The maximum feedback loop gain is always less than unity. For a broadband system with a reverberation time of two seconds, the maximum loop gain is about -12dB. In addition, loop gain should be reduced by an additional 8dB (the feedback stability margin) to avoid coloration. Therefore, a stable single-channel system operates with a loop gain of approximately -20dB.
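These figures can be worked numerically. The critical-distance formula below is the standard statistical-acoustics approximation, Dc = 0.057 * sqrt(Q * V / RT60) in metric units, which the article does not spell out; the hall volume is hypothetical:

```python
import math

def critical_distance(volume_m3, rt60_s, directivity_q=1.0):
    """Distance (m) at which direct and reverberant levels are equal,
    using the standard approximation Dc = 0.057 * sqrt(Q * V / RT60)."""
    return 0.057 * math.sqrt(directivity_q * volume_m3 / rt60_s)

def stable_loop_gain_db(max_loop_gain_db=-12.0, stability_margin_db=8.0):
    """Stable operating loop gain after subtracting the feedback stability margin."""
    return max_loop_gain_db - stability_margin_db

# Hypothetical 12,000 m^3 hall with a 2 s reverberation time, omni source (Q = 1):
print(round(critical_distance(12000, 2.0), 1))  # about 4.4 m
print(stable_loop_gain_db())                    # -20.0 dB, matching the text
```

Note how short the critical distance is even in a large hall; this is why keeping every mic beyond Dc from every loudspeaker is such a demanding constraint.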
Coloration and feedback can be minimized by placing mics closer to sound sources, keeping loudspeakers and mics as far apart as possible, using highly directional loudspeakers to help focus sound to the listener, and placing loudspeakers closer to the listener. Although these methods are effective in typical sound-reinforcement applications, using electro-acoustics to generate a sound field that sounds like natural room acoustics requires ignoring all of these guidelines.
Acoustic feedback can also be minimized by increasing the number of independent channels. In essence, this means increasing the number of independent single-mic, single-loudspeaker systems in operation. Each mic must be separated from each loudspeaker by the critical distance of the room. Since each channel must operate with a feedback loop gain below -20dB to maintain stability, each independent channel can increase the natural reverberation time by only about 1 percent. The level of the reverberation in such systems is proportional to the square root of the number of independent channels used in the system.
Thus, most early forms of acoustic enhancement employed a relatively large number of independent channels to reduce acoustic feedback. In the late 1950s, Philips created a system called MCR that in practice used 50 to 100 independent channels. A system called Assisted Resonance took this a step further. In this system, each channel included a resonating cavity containing a mic tuned to a pre-determined frequency. The cavity limited the bandwidth of the channel, thereby reducing the incidence of mutual interference. The increase in reverberation came from the response of the “tuned” channels, which augmented only the dominant modes and prolonged their decay. These systems were costly, and the electronic components available at the time had dubious reliability. As you might suspect, their success was marginal.
Figure 3: Comparison of RT60 and Dc in small and large volumes with surfaces that have the same absorption coefficient.
THE NEXT GENERATION
Early multichannel systems had three significant drawbacks. First, the relationship of mics and loudspeakers was linked. As the volume of the space increased, the number of independent channels needed to increase as well. Second, acoustic feedback limited the sound-pressure level that these systems could produce. They could increase the terminal reverberation time — the reverberation that is exposed by making an impulsive sound, such as hitting a drum when no other sound is present. But the level of reverberation was low enough that it was masked by direct sound. Thus, while music was playing, there was no perceived improvement in the acoustics. Finally, since these systems did not incorporate digital reverberation, the quality of the signal relied entirely on the acoustic signature of the venue.
The Acoustic Control System (ACS) uses a large number of highly directional mics placed near the stage and fed through a matrix of delays. The matrix is calculated for each installation and is based on image sources that would exist in a larger “ideal” hall drawn as an overlay on the existing space. In theory, this replicates the first wavefront of the larger ideal space moving throughout the existing venue. Directional loudspeakers are used for the early energy. Reverberation is generated as a separate process and fed through an independent loudspeaker array with different directional properties. Feedback is minimized by using highly directional mics in close proximity to the sound source and a large number of independent channels fed from the delay matrix.
A practical problem with this system is that wavefront synthesis requires nearly anechoic conditions. Every electro-acoustic system using mics and loudspeakers in the same environment has some amount of acoustic feedback — which the ACS doesn’t account for. One of the biggest drawbacks is system complexity. Even the smallest systems require 12 or more mics, and larger systems require 18 to 48. ACS literature indicates the importance of maintaining accurate mic placement for system stability.
Jaffe Acoustics developed the Electronic Reflected Energy System (ERES) in the 1970s. ERES uses a small number of miniature mics located in the stage area (preferably built into the shell), each connected to a multi-tap digital delay. The first tap feeds full-range signals to loudspeakers in the proscenium, generating early energy. Subsequent taps are connected to low-pass filters that feed loudspeakers in the ceiling. ERES does not generate reverberant energy; in theory, it provides supplemental energy modeled on a reflector of a specific size and mass. No provision is made to minimize feedback except the decoupling between the mics inside the stage shell and the loudspeakers outside of it. The Reverberation on Demand System (RODS), developed by Peter Barnett of Acoustic Management Systems, has been incorporated in several ERES systems. In essence, RODS is a series of gates that connect mics to a delay line when signal levels are rising and connect the output of the delay line to loudspeakers when levels are falling, thereby increasing terminal reflected energy. Unfortunately, this is not audible while music is playing.
The Vernon & District Performing Arts Center in Vernon, British Columbia, has a four-zone VRAS system with 32 mics and 72 loudspeakers. The photo shows the electronic stage shell in its maintenance position.
THE NEXT ADVANCEMENT
The System for Improved Acoustic Performance (SIAP), marketed by RPG Diffusor Systems, also uses a small number of highly directional mics located near the performers. Mic signals are routed to loudspeakers through a large matrix mixing and routing system. Early SIAP systems claimed to use some form of time variance. Current literature indicates that no time variance is used; instead, it positions the system as adding missing reflections to the signature of the room. To do this, the system incorporates digital reverberation and operates at lower output levels.
Dr. Mark Poletti of Industrial Research Ltd. in New Zealand developed the Variable Room Acoustics System (VRAS), which Level Control Systems has licensed. VRAS enhances both reverberation and early reflections. For reverberation enhancement, VRAS uses a system of decorrelated mic and loudspeaker channels distributed throughout the room, each separated by a distance equal to or greater than the critical distance of the room. VRAS incorporates a 16-channel reverberator between the mics and loudspeakers; each reverberator in the system uses between eight and 16 mics and typically 16 or more loudspeakers. The early-reflection algorithm uses eight to 16 mics coupled to a set of delays via a patented matrix. Early reflections are matrixed back to the stage for musician support, as well as to lateral and overhead loudspeakers in the room. The generated delay sequence is time-aligned with the direct sound from the stage. Depending on the application, the room may be enhanced in multiple zones, with each zone supported by early-reflection and/or reverberation enhancement.
Yamaha introduced its Active Field Control (AFC) system in the 1980s. The system uses an array of four or eight omnidirectional mics connected to arrays of dedicated early-reflection and reverberation-field loudspeakers. The system includes equalization and an FIR filter but does not incorporate a reverberator; it relies entirely on recirculation of the natural acoustics of the venue. For reverberation enhancement, the mics are “rotated” (selectively switched on and off), changing the acoustic path between mic and loudspeakers in realtime. This provides a means of decorrelation that improves system stability. Mics are usually located at the ceiling, at or beyond the critical distance from the performers. Graphic equalizers in the system vary the frequency characteristics of the added reverberation to compensate for the way the system excites the acoustic environment. Because it uses time-varying functions, including the rotated mics, the system needs only four independent channels.
Lexicon Acoustic Reinforcement and Enhancement System (LARES) was designed in the early 1990s and uses advancements in digital signal processing to overcome the problem of coloration from feedback. The system generates a time-variant signal that decorrelates the path between the mics and loudspeakers in realtime. In practice, the system requires two to four mics, which can be placed either at or beyond critical distance, as well as in close proximity to the loudspeakers. LARES incorporates digital reverberation and uses independent decorrelation for early and late energy, without requiring dedicated speakers for each. It independently times and equalizes early and late energy, then mixes them as required to each loudspeaker in order to produce the desired results throughout the venue.
LARES system for the trellis at the Jay Pritzker Pavilion at Millennium Park in Chicago.
MAKING IT WORK
One of the most important considerations for any form of acoustic enhancement is that these systems can only add energy to the space. None of them can do anything about existing energy that is already too loud. Nor can they fight noise intrusion of any kind. Architectural or mechanical conditions known to be problems must be independently treated before an enhancement system can be successfully implemented.
The attributes of these systems vary significantly, and not all of them are well suited for all applications. In addition, the architecture, geometry, and natural acoustic conditions required to produce the desired results vary with each system. Since all of these systems add energy, they all produce artifacts. The degree to which these artifacts are masked depends on the suitability of the system for the application, as well as the optimal system design and implementation.
Hardware and software for these systems differ markedly among manufacturers. Several, like ACS and SIAP, manufacture custom-built hardware that uses a fully centralized processing system integrated in a card cage with plug-in modules. ACS also includes signal processing and amplification in the card cage. This is manufactured in small quantities. Others, like VRAS and LARES, use off-the-shelf DSP hardware platforms that are sold in greater quantities for other purposes. These systems run proprietary software that enables the hardware to deliver the desired results.
There are also significant differences in the way these systems are integrated. Some manufacturers produce only the acoustics processing and use off-the-shelf components manufactured by others to complete the system. Most incorporate signal processing that may include EQ, delay, level control, mixing, etc. Not all of these processes are equally transparent or flexible. For instance, some systems have fixed audio routing or rigid DSP device algorithms (certain types of EQ, etc.). Although all systems provide analog I/O, some systems do not accommodate digital audio signals. Some systems, like ACS and LARES, enable insertion of direct signals for sound effects or film sound and provide independent processing for them. LCS VRAS is designed specifically to integrate sound effects and sound design mapping. Systems like ACS, LARES, and Yamaha offer complete turnkey electronics packages, and both LARES and Yamaha offer a full line of loudspeakers as well.
The quality and flexibility of acoustics processing also differ significantly among manufacturers. All of the algorithms (for systems that incorporate reverberation) are proprietary. The amount and type of DSP used and the proficiency of the programmer influence sound quality and functionality. For example, VRAS uses a patented reverb that limits gain to avoid feedback. LARES uses Lexicon’s proprietary IC and is built specifically for acoustic enhancement.
The means of controlling the systems also vary widely. Some systems have only one setting and do not provide any way for users to make adjustments. Others have integral control panels that reside with the equipment (this makes it difficult to change settings during an event). Still others accept contact closures or MIDI or use commercially available control systems such as AMX or Crestron, which enable more comprehensive control. In addition, they can easily be reconfigured to accommodate new system settings or upgrades. They can also provide text in the native language of the location where the system will be installed.
Loudspeakers and mics are common to all of these systems, and these components are critical to the resulting sound quality. For loudspeakers, power uniformity is critical. If the off-axis response of a loudspeaker differs markedly from its on-axis response, the aberration in linearity will be noticeable and the system will become unmasked and sound artificial. Equalization cannot help correct this condition, as changes made to the frequency response are global and affect both on-axis and off-axis response. Mics need to have high sensitivity and extremely low noise. Again, the on-axis and off-axis responses need to be as uniform as possible. Almost all modern systems use directional mics and the higher the quality, the better the system performance. Power amplifiers need to have high signal-to-noise, low self-noise, and low distortion. Acoustic enhancement systems use large loudspeaker counts, and even small amounts of residual noise get summed and recirculated through the system. Depending on the size and location of the system, it may also require a power isolation transformer and dedicated cooling system.
Hallelujah Church in Seoul, South Korea, has a five-zone VRAS system with 40 mics and 112 loudspeakers.
The success of these systems depends on careful consideration of a venue’s real acoustic needs, the acoustic conditions that exist, and the resulting design and installation of the enhancement system. Knowing the limitations of the proposed system is critical, whether those limitations are imposed by the system’s electronics, venue aesthetics, or budget constraints. Each system has its own unique requirements for design, implementation, and tuning. The assistance provided by each manufacturer throughout this process differs, as does ongoing support for the system. Acoustic enhancement systems typically remain in service much longer than sound-reinforcement systems, partly because they are often larger in scope and serve a different purpose. Annual maintenance is an important part of the overall budget.
One of the considerations most often overlooked in system planning is that things can change, and often do. For example, suddenly having acoustic conditions optimal for singing can completely alter the music program at a house of worship. The same space can also take on a new role in the community, supporting organ and piano recitals, chamber music concerts, and more. Exploring future needs prior to and during the design phase helps ensure that the system can be upgraded with little effort.
Acoustic enhancement is a powerful tool that can provide improved sound quality and a more enveloping listening experience. When applied correctly, it can solve problems in ways that traditional architecture alone cannot. In addition, it can provide significantly greater flexibility and control of acoustic delivery. The demand for these systems is growing, and as the cost of DSP becomes more affordable, the use of this technology will become increasingly common.
Steve Barbar is president of LARES Associates.