THE EQUALIZATION TRANSFORMATION

Dec 1, 2000
RAY MILLER & DENNIS BOHN

When Bell Labs began the first use of equalizers in the late 1920s, it did so to undo, or “equalize,” the damage done by the transmission path. Long lines caused serious high-frequency losses, and it was immediately clear that the audio path itself prevented the accurate transmission of the original signal. Nothing has changed since then. The audio path still degrades the production, transmission and reproduction of audio signals – and still requires equalizers to help.

In the ’30s, Hollywood developed the first use of equalizers for sound improvement, both in production and reproduction. Motion pictures with sound brought audio playback systems into theaters for the first time. Soon, some people’s attention focused on just how bad these reproduction systems sounded, and they installed equalizers to reduce the problems. It wasn’t until the late ’50s that real success at room equalization occurred, with the Dallas Love Field Airport. (For a complete equalizer history, see “Operator Adjustable Equalizers: An Overview,” The Proceedings of the AES 6th International Conference: SOUND REINFORCEMENT, Nashville, TN, May 5-8, available from the Audio Engineering Society. Web site www.aes.org/publications/conf.cfm.)

USING EQUALIZERS

Equalizers do wonders for sound systems, yet estimates suggest that only 25% of the equalizers sold find their way into permanent sound systems. Uses for the remaining 75% are split between program enhancement and sound reinforcement. Program enhancement primarily appears in live performance, recording studios, broadcast and post-production markets, where equalizers do everything from simple band limiting to complex sound manipulation. Sound reinforcement uses equalizers everywhere from small lounge acts to large touring companies. Most applications are for compensating ragged loudspeaker power responses rather than attempting any sort of serious room equalization. This is true for monitor loudspeaker systems as well.

An unfortunate truth regarding budget loudspeakers is they don’t sound very good. Usually this is due to an uneven frequency response or, more correctly, a non-flat power response. (An ideal cabinet has a flat power response.) Equalizers can help these frequency deficiencies. Adding a little here and taking away a little there creates an acceptable power response, and a better sounding system. It is surprising how just a little equalization can change a poor-sounding system into something quite decent.

The next big use of equalizers is to improve the way each venue sounds. Every room sounds different; it is a fact of life and a fact of physics. Even while using exactly the same equipment, playing exactly the same music in exactly the same way, different rooms will still sound different, guaranteed. Each enclosed space treats sound differently, and here are some of the things that account for this.

REFLECTED SOUND

Reflected sound is a problem. What the audience hears is made up of the direct sound (what comes straight out of the loudspeaker directly to the listener) and reflected sound (what bounces off everything before getting to the listener). If the room is big, then reverberation results, which is the reflected sound that has traveled for such a (relatively) long time in the space that it arrives and re-arrives at the listener, delayed enough to sound like a second and third source, or even an echo if the room is quite large.

It is basically a geometry problem. Each room differs in its dimensions – not only in its basic length-by-width, but also in its ceiling height, the distance from the performer and the equipment to the audience, what’s hanging (or not) on the walls, the number of windows and doors. Every detail about the space has an effect on the sound, and regrettably, there is very little that can be done about most of it. Many of the factors affecting the sound cannot change, since it is not possible to change the dimensions or alter the window and door locations. There are a few things that can be done, however, and equalization is one of them.

Equalization can help with some of the room’s more troublesome features. If the room is exceptionally bright, an EQ can roll off some of the highs, or if the room tends to be boomy, an EQ can tone down the low end to reduce the resonance.

Another way EQ is quite effective is in controlling troublesome feedback. Feedback occurs when the audio from the loudspeaker gets picked up by one of the stage microphones, re-amplified through the speaker, and then picked up again by the microphone, and re-amplified, and so on. The problem is an out-of-control, closed-loop, positive-feedback system building up until something breaks, or the audience races out of the room in horror. Proper use of an equalizer cuts those frequencies that want to howl; this not only stops the squeal, but it allows the general level of the system to be increased. The technical phrase for this is maximizing system gain before feedback.

It’s important to understand at the beginning that equalizers cannot fix room-related sound problems, but they can move the trouble spots around. EQs can rearrange things sonically, which helps tame the excesses. Equalizers are also useful in augmenting the timbre of an instrument or the human voice. With practice, musicians and recording engineers use equalizers to enhance sound for the best personal expression: They deepen the lows, fill the middle or exaggerate the highs, whatever is desired. Just as an equalizer can improve the sound of a poor loudspeaker, it can improve the sound of a marginal microphone, or enhance a musical instrument. An equalizer can create that “jump” you want from the drums without sacrificing the whole low end. Equalizers provide that something extra, that edge.

THE DIGITAL DIFFERENCE

In 1987, Yamaha introduced the DEQ7 Digital Equalizer, the first stand-alone equalizer based on digital signal processor (DSP) technology. Also in 1987, Roland previewed a digital parametric equalizer. The world of equalization has never been the same.

Today, digital designs dominate all new equalizer products, whether stand-alone or multifunction units. Many differences between analog and digital equalizers are obvious; some are not. The most apparent is the physical package: no sliders or control knobs (a dead front) and usually a smaller size than its analog counterpart (although some digital equalizers offer sliders or knobs, as well as computer control). Some software-driven equalizer screens mimic an analog equalizer’s front panel, with its row of sliders on specified centers with fixed bandwidths. Others allow going anywhere and changing anything, a sort of super-parametric with unlimited bands. This is sometimes called arbitrary magnitude response and is usually associated with a click-and-drag setting method familiar to most of us who use Windows- or Mac-based operating systems. Adjustments are made simply by clicking on the spot where the change is to be made, then moving it up or down and widening or narrowing it until it is just what is desired, all without any limit on the number of bands used.

With all this power comes the ability to do (essentially) real-time analyzing and synthesizing in a very automatic manner. Many units are beginning to include analysis software, either built-in or with provisions to hook it up, and have duplex communication between units. These digital boxes all come with memory, allowing storage of favorite responses for re-use, or allowing a sequence of curves to pop up on demand – maybe by remote switch, time of day, a scene change or any other trigger. A digital equalizer’s ability to accurately repeat its response from unit to unit, without regard to component tolerances, is another big plus over analog units.

DIGITAL BACKGROUNDER

Most people working in the professional audio industry have a pretty good understanding of what an analog filter is. They know it is made up of resistors, capacitors and inductors (real or synthetic), and that they work by either passing AC signals or shunting them to ground. But what exactly is a digital filter? How does it work?

A digital filter is a little bit of software running at high speed that processes samples of a continuous waveform (the digitized input). It generates an output sample every time it gets a new input sample. A finite impulse response (FIR) filter calculates each output sample from a weighted sum of current and past input samples. An infinite impulse response (IIR) filter does the same, but it also uses its own previous output samples. In both cases, the filter coefficients are the weights for the weighted sum of the samples. These coefficients determine the frequency and magnitude response, so that a given filter can be a lowpass, highpass, bandpass or whatever, depending on the coefficient values. A moving average (i.e., one that is being constantly updated based on each new input) is an example of a very simple FIR filter.
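
To make the weighted-sum idea concrete, here is a minimal sketch in Python (NumPy). The moving-average coefficients and the one-pole feedback factor are illustrative values only, not taken from any particular product.

```python
import numpy as np

def fir_filter(x, coeffs):
    """FIR: each output is a weighted sum of the current and past inputs."""
    coeffs = np.asarray(coeffs, dtype=float)
    history = np.zeros(len(coeffs))           # most recent input first
    y = np.empty(len(x))
    for n, sample in enumerate(x):
        history = np.roll(history, 1)
        history[0] = sample
        y[n] = np.dot(coeffs, history)        # the coefficients are the weights
    return y

def one_pole_iir(x, a=0.9):
    """IIR: the output also reuses the previous *output* sample (feedback)."""
    y = np.empty(len(x))
    prev = 0.0
    for n, sample in enumerate(x):
        prev = (1.0 - a) * sample + a * prev  # y[n] = (1-a)*x[n] + a*y[n-1], a simple lowpass
        y[n] = prev
    return y

# A 4-point moving average is a very simple FIR lowpass:
x = np.random.randn(1000)
smoothed = fir_filter(x, [0.25, 0.25, 0.25, 0.25])
lowpassed = one_pole_iir(x, a=0.95)
```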

The design of analog filters begins with the desired transfer function. A transfer function is a mathematical description (an equation) of the behavior of the circuit. It models the output as a function of the input. Typically, for analog filters, the transfer function consists of a polynomial numerator over a polynomial denominator. The roots (solutions) for each of these polynomials are called the poles and zeros of the equation. The roots of the numerator are called “zeros,” since they make the equation equal zero. The roots of the denominator are called “poles” because they make the equation become infinitely large (that is, a root, or solution, is the value which makes the denominator equal zero, and a zero in the denominator causes the equation to become infinite). If this is graphed, it produces a cyclone-like peak reaching to infinity.
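
In equation form, a generic factored transfer function (a sketch, not any specific filter) looks like

$$H(s) = \frac{N(s)}{D(s)} = K\,\frac{(s - z_1)(s - z_2)\cdots(s - z_m)}{(s - p_1)(s - p_2)\cdots(s - p_n)}$$

where the $z_i$ (roots of the numerator) are the zeros and the $p_j$ (roots of the denominator) are the poles; as $s$ approaches a pole, the denominator goes to zero and $|H(s)|$ grows without bound.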

Circuit designers characterize analog filters in terms of these poles and zeros, whose values determine the filter’s magnitude and phase response. A digital filter also has poles and zeros, and they behave in a similar fashion. IIR digital filters are often modeled after analog filters using clever mathematical transformations, with each analog pole and zero corresponding to a digital pole and zero and, hence, to a filter coefficient. This transformation produces a new equation, or transfer function, that can be viewed primarily as time-dependent, where the analog equation can be viewed as primarily frequency-dependent, which works well with capacitors and inductors since they too are frequency-dependent. (This analogy is oversimplified but serves the example; my apologies to mathematicians everywhere.) Because digital signal processors work on sampled (time-derived) data, these equations become quite easy to implement in electronic digital format. The actual structure of the digital filter consists of a series of multiplication and addition circuits.
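
One such transformation is the bilinear transform. The sketch below uses SciPy to map an analog bandpass prototype to digital coefficients; the 1 kHz resonance, Q of 4.31 and 48 kHz sample rate are illustrative assumptions, not values from the text.

```python
import numpy as np
from scipy import signal

fs = 48000.0          # sample rate (illustrative)
f0 = 1000.0           # analog resonance at 1 kHz (illustrative)
Q = 4.31              # roughly 1/3-octave wide
w0 = 2 * np.pi * f0

# Analog bandpass prototype: H(s) = (w0/Q)*s / (s^2 + (w0/Q)*s + w0^2)
b_analog = [w0 / Q, 0.0]
a_analog = [1.0, w0 / Q, w0 ** 2]

# The bilinear transform maps the analog poles/zeros to digital ones,
# yielding the coefficients of a digital IIR (biquad) filter.
b_digital, a_digital = signal.bilinear(b_analog, a_analog, fs)

z, p, k = signal.tf2zpk(b_digital, a_digital)
print("digital zeros:", z)
print("digital poles:", p)
```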

Since the filter coefficients determine the pole and zero locations, as well as filter gain, any response possible with a given number of poles and zeros can be had by changing these coefficients. There are ways to very closely match analog response right up to the sampling frequency (see “Digital Parametric Equalizer Design with Prescribed Nyquist-Frequency Gain,” Sophocles J. Orfanidis, J. Audio Eng. Soc., vol. 45, no. 6, p. 444).
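
As a concrete illustration of a handful of coefficients setting frequency, bandwidth and gain, here is the widely used bilinear-transform “cookbook” peaking biquad. This is a sketch only; it is not the Nyquist-gain-matched design of the Orfanidis paper, and the 1 kHz, +6dB, Q = 4.31 values are illustrative.

```python
import numpy as np
from scipy import signal

def peaking_biquad(f0, gain_db, Q, fs):
    """Cookbook-style peaking EQ: five coefficients set frequency, bandwidth and gain."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * Q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]              # normalize so a0 = 1

# +6 dB at 1 kHz, roughly 1/3-octave wide, 48 kHz sampling (illustrative values)
b, a = peaking_biquad(1000.0, 6.0, 4.31, 48000.0)
boosted = signal.lfilter(b, a, np.random.randn(48000))
```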

How does the frequency response compare to analog? That is, how well can you duplicate an analog waveform response in the digital domain? Due to the need to sample the input, it will always be less (again, an oversimplification). The bandwidth of the input signal must be restricted to the Nyquist frequency, which is one-half the sampling frequency, because any higher frequencies will be aliased down to lower ones. Once the sampling occurs, there is absolutely nothing of higher frequency, yet the analog counterpart may have significant response above the Nyquist frequency. This means that responses comprising lots of high-frequency harmonics must be reproduced with far fewer harmonics. Increasing the sampling rate to 96 kHz or beyond helps this problem, but leaves less time between samples to do the digital processing. As always, it’s a trade-off.

FIR and IIR filters each have their pros and cons. IIR filters naturally match analog filters, and are capable of high-Q, using few coefficients. They are easy to design and usually (not always) require less computation than a FIR for the same response. A FIR is more readily designed for arbitrary responses and adaptive filters. It has an impulse response that is inherently limited in time, but allows designs with perfect phase linearity, which is equivalent to a pure time delay (independent of frequency). This delay increases as the design frequency and bandwidths are reduced. A linear phase filter is useful for high-frequency EQ and in-phase crossovers, and for low-frequency filtering as well, when the time delay is acceptable.
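
For a symmetric (linear-phase) FIR of $N$ taps, that pure time delay is a fixed $(N-1)/2$ samples,

$$\tau = \frac{N - 1}{2 f_s},$$

so a hypothetical 1,023-tap filter at a 48 kHz sample rate delays the signal by about 10.6 ms; resolving lower frequencies or narrower bandwidths pushes $N$, and with it the delay, higher.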

Practically all digital filters generate noise due to quantization of their output. The noise, therefore, depends in part on the number of bits of precision. The poles resulting from the output feedback of an IIR amplify noise just as analog poles do, and they amplify filter output quantization noise as well as any input noise present in the samples. The coefficients must also be quantized to fit in the DSP, which restricts the possible responses. At low frequencies and bandwidths, the quantization of both coefficients and audio samples adversely affects the response and noise. However, the digital filter can be made as quiet as desired by using higher precision or error-feedback techniques.
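
A minimal sketch of the error-feedback idea, assuming a simple first-order loop wrapped around the output quantizer (the 16-bit-style step size is an illustrative choice):

```python
import numpy as np

def quantize_with_error_feedback(x, step=2.0 ** -15):
    """Quantize samples to a fixed step, feeding each rounding error back
    into the next sample so the error energy is pushed toward high frequencies."""
    y = np.empty(len(x))
    err = 0.0
    for n, sample in enumerate(x):
        candidate = sample + err              # add back the previous rounding error
        y[n] = np.round(candidate / step) * step
        err = candidate - y[n]                # remember the new rounding error
    return y

quantized = quantize_with_error_feedback(0.5 * np.sin(2 * np.pi * 100 * np.arange(48000) / 48000))
```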

It is an intriguing and certainly not obvious fact that digital filter design gets harder for low frequencies. Because digital filters work with samples taken at very short intervals, it takes a lot of samples to represent a low-frequency wave. It also takes a lot of samples to distinguish closely spaced low frequencies and thereby to process them precisely. In the case of a FIR, this means a large filter, while with the smaller IIR there is a lot of reliance on the feedback to provide a “history” of the signal. This reliance on feedback makes the low-frequency IIR filter much more sensitive to errors, both in coefficients and quantization of output samples.

One popular method used to compute large FIR filters efficiently is the fast Fourier transform, or FFT; however, this method increases the audio delay. (There is a clever method for eliminating this delay described in “Efficient Convolution Without Input-Output Delay,” William G. Gardner, J. Audio Eng. Soc., vol. 43, no. 3, p. 127.)
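
A sketch of FFT-based convolution using SciPy (the signal and impulse response below are placeholders); in a real-time unit the same math runs block by block, which is where the extra audio delay comes from.

```python
import numpy as np
from scipy import signal

fs = 48000
x = np.random.randn(10 * fs)                  # 10 seconds of audio (placeholder signal)
h = np.random.randn(8192) * np.hanning(8192)  # long FIR impulse response (placeholder)

# Direct convolution costs roughly len(x) * len(h) multiplies;
# FFT convolution does the same job in O(N log N).
y = signal.fftconvolve(x, h, mode="full")
```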

Calculating filter coefficients is time-consuming, and changing frequency, bandwidth or amplitude typically requires a complete filter redesign, so rapid and smooth changes demand extra computing power. One way around this is to use a filter design that has independent coefficients for controlling frequency, bandwidth and amplitude.

Another plus of digital filters is that their response is completely repeatable, since there is no dependence on component tolerances. Thus, unit-to-unit variations are eliminated, making preloading programs possible, saving valuable site setup time.

WHAT TO LISTEN FOR

Analog and digital equalizers sound very much the same, but there are some important differences. The first one most people notice is residual noise behavior. When you do a lot of boosting with a digital equalizer, it doesn’t increase the noise in the same manner as analog designs (it does increase the noise but not nearly as much as analog). With analog designs, boosting a signal also boosts the noise by the same amount. In the digital domain this doesn’t happen, but it can be disconcerting at first because users are so accustomed to the noise going up that they anticipate it, and when it doesn’t happen, it throws them. But they get used to the new result very quickly.

Another caution area is input overload. Most analog equalizers easily accommodate very large input signals, with maximums above +20 dBu being common. Digital equalizers that use input A/D converters are not as accommodating, so some caution is necessary in setting gain structure.

Overall frequency response is another area. The frequency response of analog equalizers may easily extend to 50 kHz, even 100 kHz, but not so with digital. The need to sample the input at not less than twice the upper frequency limit, coupled with the simultaneous need to have time to do the digital filter DSP calculations, requires the bandwidth to be strictly limited to not much beyond 20 kHz. This means absolutely nothing is passed above this frequency, so whatever psychoacoustic benefits may result from ultrasonic signals are not present with digital equalizers (even with 96 kHz sampling, the highest frequency possible is 48 kHz).

1/3-Octave. Term used to describe graphic equalizers with the bands located on standard ISO-recommended 1/3-octave center spacing. (Note that the term is “one-third” not “third,” as is so often mistakenly used. A “third” octave means every three octaves; thus, 8 kHz is a third octave above 1 kHz.) Generally, for boost/cut equalizers, not only are the filters located on 1/3-octave spacing but they are also 1/3-octave wide, measured at the -3dB points referenced from the maximum boost or cut point (symmetrical boost/cut responses assumed). Cut-only (notch or band-reject) equalizers unfortunately offer no such standardization on bandwidth measurement points. If referenced as being 1/3-octave wide, there are two design schools: One uses the same definition as for boost/cut designs, while the other uses a new definition that measures the -3dB points from the 0dB reference line.

Active Equalizer. A variable equalizer requiring power to operate. Pluses include low cost, small size, light weight, loading indifference, good isolation (high input and low output impedance), gain availability (signal boosting possible), and line-driving ability. Minuses include higher noise, limited dynamic range, reduced reliability and RFI susceptibility.

Asymmetrical (Non-reciprocal) Response. Term used to describe the comparative shapes of the boost/cut curves for variable equalizers. A peaking boost response and a notch cutting response is an example.

Band-limiting (or Cut) Filters. A lowpass (or high-cut) filter and a highpass (or low-cut) filter in series, usually adjustable, acting together to restrict the overall bandwidth of the signal.

Bandpass Filter. A filter that has a finite passband, neither of the cutoff (corner) frequencies being zero or infinite. The bandpass corner frequencies are normally associated with frequencies that define the half power points; that is, the -3dB points from maximum.

Boost/Cut Equalizer. The most common graphic equalizer. Available with 10 to 31 bands on octave to 1/3-octave spacing. The flat (0dB) position locates all sliders at the center of the front panel. Made up of bandpass filters, these equalizers boost signals by raising sliders and cut signals by lowering them on a band-by-band basis, with all controls starting at their center 0dB position. Commonly provide a center-detent feature identifying the 0dB position.

Constant-Q Equalizer (or Constant-Bandwidth). Term applied to graphic and rotary equalizers describing bandwidth behavior as a function of boost/cut levels. Since Q and bandwidth are inverse sides of the same coin, the terms are fully interchangeable. Fig. 1 shows constant-Q response. The bandwidth remains constant for all boost/cut levels. For constant-Q designs, the skirts vary directly proportional to boost/cut amounts. Small boost/cut levels produce narrow skirts, and large boost/cut levels produce wide skirts.

Cut-Only Equalizer. Term used to describe graphic equalizers designed only for attenuation. (Also referred to as notch equalizers or band-reject equalizers). The flat (0dB) position locates all sliders at the top of the front panel. Made up only of notch filters (normally spaced at 1/3-octave intervals), all controls start at 0dB and reduce the signal on a band-by-band basis.

FIR (finite impulse-response) Filter. A commonly used type of digital filter. Digitized samples of the audio signal serve as inputs, and each filtered output is computed from a weighted sum of a finite number of previous inputs. A FIR filter can be designed to have completely linear phase (i.e., constant time delay, regardless of frequency). FIR filters designed for frequencies much lower than the sample rate and/or with sharp transitions are computationally intensive, with large time delays.

Graphic Equalizer. A multiband variable equalizer using slide controls as the amplitude adjustable elements. Named for the positions of the sliders “graphing” the resulting frequency response of the equalizer. Only found on active designs. Center frequency and bandwidth are fixed for each band.

Highpass (or Low-cut) Filter. A filter that passes high frequencies and attenuates low frequencies. Specifically, it has a passband extending from some finite cutoff (corner) frequency (not zero) up to infinite frequency. The corner is usually defined as the -3dB point referenced to the passband.

IIR (infinite impulse-response) Filter. A commonly used type of digital filter. This recursive structure accepts as inputs digitized samples of the audio signal, and then each output point is computed on the basis of a weighted sum of past output (feedback) terms, as well as past input values. An IIR filter is more efficient than its FIR counterpart, but poses more challenging design issues. Its strength is not requiring as much DSP power as FIR, while its weaknesses are not having linear group delay and possible instabilities if not carefully designed.

Interpolating or Combining Response. Describes the summing response of adjacent bands of graphic equalizers. Interpolating means inserting between two points. If two adjacent bands used together produce a smooth response without a dip in the center, they interpolate or combine well. In the case of 1/3-octave equalizers, good combining or interpolating characteristics result from designs that buffer adjacent bands before summing, i.e., they use multiple summing circuits. If only one summing circuit exists for all bands, then the combined output ripples between center frequencies.

Lowpass (or High-cut) Filter. A filter that passes low frequencies and attenuates high frequencies. Specifically, it has a passband extending from DC (0 Hz) to some finite cutoff (corner) frequency (not infinite). The corner is usually defined as the -3dB point referenced to the passband.

Parametric Equalizer. A multiband variable equalizer offering control of all the parameters of the internal bandpass filter sections, i.e., amplitude, center frequency and bandwidth. This allows the user not only to control the amplitude of each band, but also to shift the center frequency and widen or narrow the affected area. Available with rotary and slide controls. Sub-categories of parametric equalizers exist for units allowing control of center frequency but not bandwidth. For rotary control units, the most used term is quasi-parametric. For units with slide controls the popular term is para-graphic. Cut-only parametric equalizers (with adjustable bandwidth or not) are called notch equalizers or band-reject equalizers.

Passive Equalizer. A variable equalizer requiring no power to operate. Consisting only of passive components (inductors, capacitors and resistors), passive equalizers have no AC line cord. Pluses include low-noise performance (no active components to generate noise), high dynamic range (no active power supplies to limit voltage swing), extreme reliability (passive components rarely break), and lack of RFI interference (no semiconductors to detect radio frequencies). Minuses are cost, size, weight and hum (inductors are expensive, bulky, heavy and need careful shielding). Also, signal loss (passive equalizers can only attenuate signals), and the fact that inductors saturate easily with large low-frequency signals, causing distortion, are drawbacks.

Peaking Response. A bandpass shape as opposed to a shelving response.

Proportional-Q Equalizer (or Proportional-Bandwidth). Term applied to graphic and rotary equalizers describing bandwidth behavior as a function of boost/cut levels. Fig. 2 shows proportional-Q response. The bandwidth varies inversely proportional to boost (or cut) amounts, being very wide for small boost/cut levels and becoming very narrow for large boost/cut levels. The skirts, however, remain constant for all boost/cut levels.

Q (Bandwidth). The quality factor, or Q, of a filter is an inverse measure of the bandwidth. To calculate Q, divide the center frequency by the bandwidth measured at the -3dB (half-power) points. For example, a filter centered at 1 kHz that is 1/3-octave wide has -3dB frequencies located at 891 Hz and 1,123 Hz, respectively, yielding a bandwidth of 232 Hz (1,123-891). The quality factor, Q, is therefore 1 kHz divided by 232 Hz, or 4.31. Going the other way is a bit sticky. If Q is known and the bandwidth (expressed in octaves) is desired, direct calculation is neither obvious nor easy. (See “Bandwidth vs. Q” in S&VC August 1999 for a downloadable Excel calculator.)
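
For reference, if the -3dB frequencies sit geometrically about the center frequency and the bandwidth is N octaves, a closed-form pair of relations (consistent with the 1/3-octave example above, Q ≈ 4.3) is

$$Q = \frac{2^{N/2}}{2^{N} - 1}, \qquad N = \frac{2}{\ln 2}\,\sinh^{-1}\!\left(\frac{1}{2Q}\right).$$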

Shelving Response. A flat or shelf-like end-band shape, as opposed to a bandpass response, i.e., bass and treble tone control responses.

Symmetrical (Reciprocal) Response. Mirror image responses for all boost/cut curves on a variable equalizer.

John Volkman, while working for RCA, is credited with being the first person to use a variable equalizer to improve reproduced sound. He applied this new tool to equalize a motion picture theater playback system in the 1930s.

While Bell Labs used fixed equalizers earlier than this for correcting audio transmission losses, Volkman represents one of the first uses of an external variable equalizer as an added component to an installed system. Telephone applications involved integrating equalization as part of the receiving electronics, as opposed to thinking of the equalizer as a separate entity.

Also during the ’30s, while Volkman experimented with equalizers for reproduced sound, Hollywood found uses for them in producing sound. Langevin, Cinema Engineering and others [1] created outboard adjustable equalizers for post-production sound effects and speech enhancement. The Langevin Model EQ-251A represents a very early use of slide controls. While not a graphic equalizer in today’s sense, it was the forerunner. The EQ-251A featured two slide controls, each with switched frequency points. One slider controlled a bass-shelving network with two corner frequency choices, while the other provided peaking boost/cut with four switchable center frequencies. This passive unit looked and performed as well as anything manufactured today.

Art Davis’ company, Cinema Engineering, developed the first recognizable graphic equalizer [1]. Known as the type 7080 Graphic Equalizer, it featured six bands with a boost/cut range of 8 dB, adjustable in 1dB steps. (After Davis moved to Altec, he designed a 7-band successor to the 7080 known as the Model 9062A, a hugely successful graphic equalizer selling into the ’70s.) Being an active design, the 7080 allowed signal boosting without loss – a nice feature. (With passive units, boosting a signal requires an initial broadband attenuation and then reducing the loss on a band-by-band basis. For example, flat might represent 16dB of loss, while a 6dB boost represented only 10dB of loss. It was all a matter of reference point.)

Another innovative feature of the 7080 was the first use of staggered mixing amps to aid in smooth combining of the equalized audio signal. Cinema Engineering designed three mixing amplifiers for six bands. Using this approach, no amplifier mixed adjacent bands. The center frequencies were 80 Hz, 200 Hz, 500 Hz, 1.25 kHz (labeled 1.3 kHz), 3.2 kHz (labeled 3 kHz), and 8 kHz. The amplifiers mixed 80 Hz + 1,250 Hz, 200 Hz + 3,200 Hz, and 500 Hz + 8 kHz, respectively. Using separate amplifiers to mix signals spaced four octaves apart resulted in seamless recombination at the output. (Later, Davis would use a similar technique in the design of the first Altec-Lansing active graphic equalizers.)

Not much happened during the ’40s and early ’50s due to World War II and its aftermath. Most applications of variable equalizers involved post-production work. No serious success at room equalization is known. Then in 1958, Wayne Rudmose (a professor at Southern Methodist University, Dallas) successfully applied new theories about acoustic equalization to the Dallas Love Field Airport. Dr. Rudmose published his monumental work [2], and sound system equalization was born.

In 1962, Texas was the locale for another major contribution to variable equalizer history. This time it was the University of Texas (Austin) and a physics professor named C.P. Boner. Boner, acknowledged by many as the father of acoustical equalization, built organs as a hobby. From his organ/room tuning experiences and acoustical physics knowledge grew a profoundly simple theory. Boner reasoned that when feedback occurs, it did so at one precise frequency, and to stop it all you had to do was install a very narrow notch filter at that frequency. He went to one of his former students whose company made precision filters for instrumentation and asked him to design a narrow-band audio filter. Gifford White agreed and launched White Instruments into the new field of acoustic equalization.

Armed with White equalizers, Boner established the foundation theory for acoustic feedback, room-ring modes, and room-sound system equalizing techniques [3-6]. Expanding on Boner’s work was a student of Rudmose named William Conner. In 1967, Conner published a concise paper [7] still considered among the best to describe the theory and methodology of sound system equalization.

Also in 1967, Davis, along with Jim Noble and Don Davis (not related), developed the industry’s first 1/3-octave variable notch filter set (passive) for Altec-Lansing. Don Davis presented the paper to the Audio Engineering Society in October 1967 [8]. Dubbed the “Acousta-Voice” system, it ushered in the modern age of sound system equalization and represented the ultimate in speed and convenience. The Acousta-Voice system proved that another path existed for the control of room-ring modes. As an alternative to Boner’s narrow-band notching technique, 1/3-octave “broadband” filters produced the same results.

The rest, as they say, is history, a 30-year history that witnessed an explosion of variable equalizer developments. Among the most noteworthy were the 1/3-octave graphic equalizer, the parametric equalizer, use of integrated circuits, development of the gyrator (synthetic inductor), active LC and RC designs, development of constant-Q (bandwidth) graphic equalizers, the application of microprocessors for control and memory, and, finally, digital equalizers using DSP designs.

1. H. Tremaine, Audio Cyclopedia, 2nd ed. (H.W. Sams, Indianapolis, 1973).

2. W. Rudmose, “Equalization of Sound Systems,” Noise Control, vol. 24 (Jul. 1958).

3. C.P. Boner, “Sound Reinforcement Systems in Reverberant Churches,” presented at the 67th Meeting of the Acoustical Society of America, New York, May 8, 1964.

4. C.P. Boner and C.R. Boner, “Minimizing Feedback in Sound Systems and Room-Ring Modes With Passive Networks,” J. Acoust. Soc. Am., vol. 37, p. 131 (Jan. 1965).

5. C.P. Boner and C.R. Boner, “A Procedure for Controlling Room-Ring Modes and Feedback Modes in Sound Systems with Narrow-Band Filters,” J. Audio Eng. Soc., vol. 13, pp. 297-299 (Oct. 1965).

6. C.P. Boner and C.R. Boner, “Behavior of Sound System Response Immediately Below Feedback,” J. Audio Eng. Soc., vol. 14, pp. 200-203 (Jul. 1966).

7. W. Conner, “Theoretical and Practical Considerations in the Equalization of Sound Systems,” J. Audio Eng. Soc., vol. 15, pp. 194-198 (Apr. 1967).

8. D. Davis, “A 1/3-Octave Band Variable Notch Filter Set for Providing Broadband Equalization of Sound Systems,” presented at the 33rd Convention of the Audio Engineering Society, J. Audio Eng. Soc. (Abstracts), vol. 16, p. 84 (Jan. 1968).
