Advanced Instrumentation and Digital Multi-Processing: A learning experience

Dec 1, 1997 12:00 PM, John Murray

At the fall 1992 AES convention in San Francisco, TOA demonstrated the remote control of the first DSP-based real-time signal processor in our industry, the SAORI, via a fiber-optic link. Having presented a paper on driver alignment using the SAORI and Tecron TEF-20 at the '93 AES convention in NY (Murray, 1993), I had dreamed of controlling both signal processing and acoustical analysis simultaneously via one computer. However, the control software for both devices was DOS-based, and MS-DOS couldn't run two simultaneously active programs that each required a COM port. What I really needed for site tunings was a mobile laptop PC, but only desk-top PCs could accommodate more than one COM port.

In the fall of 1993, I received my first copy of the beta software from engineering in Japan for the then-titled Integrated DSP and Control System. It was still MS-DOS-based, but the signal processing control capabilities were impressive. Later that winter, before the hardware was finished, I received the first Windows-based beta software for the system. Using Windows 3.1, I could run the signal processing software and then start up the latest version of TEF-20 Sound Lab software in a DOS window. Using "alt-tab" to toggle back and forth between the applications, I salivated about the future where I could make a settings change, hear it in real-time and measure the change graphically to see the acoustic results as well.

Spring of '94 brought the NSCA in Las Vegas and the unveiling of TOA's DACsysII series, as it is called here in the 'States. That summer, I received my first working prototype of the DP-0204 2-in by 4-out DSP unit. I had also recently begun using a new laptop PC that had one COM port and two PCMCIA slots. I needed just one more item to realize my dream, and it came in the form of a serial I/O adapter in a PCMCIA card. This adapter finally gave me the two serial COM ports I needed to have both the TEF's acoustical analysis and the DACsysII's signal processing working simultaneously on one laptop PC. The addition of an RS-232C-to-RS-485 converter provided a balanced control line between the PC and up to 30 DSP units. This was another great leap ahead, because now I could tune the system in real-time from the middle of the listening environment, not from in front of the signal processing racks. No more endless stair climbs to and from the equipment room!

I had been tuning sound systems using TEF analysis since 1981. The original process then required an analog FFT analyzer, a spectrum analyzer, a high-quality signal generator with frequency counter, a black box interface from Richard Heyser and a scientific calculator to determine proper adjustments of the $40,000 test gear.

During my time at Electro-Voice, I had also spent much time in the anechoic chamber with the engineers on various loudspeaker development projects. It had taken two weeks to develop a passive crossover network used in a special DeltaMax loudspeaker produced for Mark IV's involvement at Euro-Disney in 1991. What I had now in '94 with DSP-based signal processing and analysis was light-years ahead. I could create an active crossover that worked much better in just 20 minutes while in my living room, and I could listen to the changes in real-time as I made them.

The total cost for this test set-up and all the needed signal processing, including a Toshiba laptop PC, Tecron TEF-20, TOA DP-0204, B&K 4007 calibrated test mic and all required software and cabling, was roughly $11,500. This total is less than 30% of the 1981 cost of the test equipment alone, and the new set-up performed much better and faster and was far easier to use.

The first problem I encountered using this set-up was how to use all the signal processing functions and how to properly interpret the resulting acoustical measurements. To my knowledge, no earlier system of acoustical analysis or signal processing had such power or ease of use. I had more tools at my fingertips than had ever been available before. Now that I had all these tools, what was the proper way to implement them all?

Avoiding acoustical contamination

When developing the methodology described in the AES paper mentioned earlier, I had begun to realize that we as professionals in the sound business have no standard method to equalize sound systems. Regenerative (feedback) tuning and tuning to a particular RTA curve are two of the more common methods. And no matter which method is employed, much trial and error, adjusting and listening is always needed. Furthermore, because we generally use RTAs to equalize, we must interpret badly contaminated measurements during the equalization process. There must be a better way!

When the real-time analyzer was the most advanced measuring tool we had, Dr. Boner's "house curve" was the rather broad brush that accounted for the device's inability to distinguish between the anechoic, or what I will call direct response of a loudspeaker system, and what is commonly called the room response. What the RTA measures is a mix of the direct and power response, as affected by the reflective and absorptive nature of the environment. If the measuring mic is beyond critical distance, where the reverberant level exceeds the direct level, the power response dominates. With equal level in the direct response, the low-frequency driver, with less directional control, will have a greater power response than that of a better-controlled, beaming high-frequency horn. This is why the house curve viewed on an RTA has more level in the lower frequencies.

Because the RTA sees all the reflected energy as well as direct energy, a flat direct response will approximate the house curve with the test mic in the reverberant field. Of course, with varying pattern-controlled devices, the house curve was not very exact. One had to listen and adjust to get the sound "right." With the advent of non-beaming, pattern-controlled horns, their power response was more akin to the low-frequency components, therefore the high-frequency roll-off had to be less severe than the original house curve in order to get the sound "right." This method has served well for many years, but it is not exact and is time-consuming. And it is still very subjective.

The method I employ uses a combination of proper microphone placement and time windowing to totally isolate the direct sound from contamination by any reflections or multiple sources. Placing the loudspeaker system outside will work, as will putting the test mic on a very tall stand if the loudspeaker system is flown. In other cases, as described in the AES paper, a combination of near-field mid and high measurements well within each transducer's coverage pattern, with a ground plane measurement for the lows, will work as well. The basic idea is to reduce the level of any reflections at the mic position to the point that they will not affect the direct sound at all. One should only equalize one transducer per passband of the loudspeaker system. This is especially true for higher frequencies (shorter wavelengths).

This technique enables the system tuner to quickly equalize for a flat direct response that produces, as near as possible at the point of the microphone, the same sound quality that exits the system mixing console. Whether one is sending the mix to a recorder or a loudspeaker system, it should make no difference in the mix if the loudspeakers are properly tuned. In recording studios, this is exactly the case when the manufacturer goes to great lengths to get a flat direct response from the main monitors.

In a sound-reinforcement system, the same can be true. If you initially equalize for a flat direct response, you will have the system 90% tuned right away. This method is hardly new, and if that were all there was to a perfect system tuning, the industry would have standardized on it long ago. However, for a sound-reinforcement loudspeaker system, other issues need compensation beyond merely tuning for a flat direct response. These issues account for the last 10% of the tuning, and their omission is what I think has led people to think that this method is flawed.

The issues are as follows:

Equalization to attenuate mutual low-frequency driver coupling for arrayed systems (one loudspeaker sounds great, an array sounds "tubby").

Knowing that room modes render meaningless the use of a test mic in a given space for the response below a critical frequency (fc), regardless of the measurement system. This problem is caused by the position-dependent and greatly fluctuating levels encountered in the modal range of frequencies for a particular room (fc in Hz = [3 x 1,128 feet/s] / room's smallest dimension in feet).
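That rule of thumb is easy to put into a few lines of code. The sketch below is only an illustration of the formula above; the function name and the 12-foot example room are my own.

```python
def critical_frequency_hz(smallest_dimension_ft, speed_of_sound_fps=1128.0):
    """Rule-of-thumb critical frequency below which room modes dominate:
    f_c = (3 x speed of sound) / room's smallest dimension."""
    return 3.0 * speed_of_sound_fps / smallest_dimension_ft

# Example: a room whose smallest dimension (the ceiling height) is 12 feet.
# Below roughly this frequency, a single mic position tells you little.
print(round(critical_frequency_hz(12.0)))  # 282
```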

Accounting for loudspeaker-to-measurement mic distance and the amount of high-frequency attenuation caused by air absorption of the direct sound level. (This issue deals with the fact that the human ear-brain combination may find a flat direct response to very high frequencies intimate or "in your face" but unnatural to some listeners if the source's distance would normally produce an acoustic character with less high-frequency content.)

Equalization to reduce high-Q mic-and-loudspeaker-supported resonances that are often too narrow to observe on some analyzers (those little colorations that hang on in time when you bark "check-one-two" into a system vocal mic).

Equalization in high-level systems to attenuate frequency ranges where a transducer's distortion components occur most strongly (can be synonymous with the last point).

Equalization incorporating some loudness contouring or "artistic EQ" (e.g. a boosted rock'n'roll low-frequency "haystack" or a boosted 10 kHz range for "airy" vocals).

Often I have seen people try to equalize a system using a test mic on a standard floor stand on a hard floor in the reverberant field while playing pink noise over stereo stacks of multiple loudspeaker systems. They equalize for a flat response on the RTA's display. They are disgusted with what they hear when music is played over the system and assume RTAs are useless for equalizing. Except for approximating Boner's house curve and adjusting it until it sounds right, one cannot equalize a sound system that way.

Others use averaging, either of multiple response curves or of multiplexed measurement microphones. If your microphone positions are at null points at some frequencies, and in front of a system array they usually are, then you will be averaging good response with bad. Suppose all the mics happen to be at a null point for one frequency? If you could boost an EQ filter to infinity, would you?

In my opinion, to equalize a sound system quickly and accurately, one must isolate the direct sound. This means turning off all but one driver per passband and positioning the mic so that strong reflections either aren't there for an RTA to see or can be windowed out by the measurement system. With anything else, you cannot tell whether you are tuning the response of the loudspeaker or the effect of a delayed interference.

There are those who are of the opinion that one can perform room equalization. Don Davis, the founder of Syn-Aud-Con, originally coined this term. He has said, "If there's anything I'd like to take back, it's the term room equalization, because you can't equalize a room."

Let's look at the effect of a room reflection on a loudspeaker's frequency response. Because it is a time-domain effect, it is non-minimum phase and linear in nature. (See Figures 1 and 2.) This is (sin x)/x interference notching, popularly called a comb filter (even though it is not a filter and has nothing to do with a comb). Each notch in the comb-filtering effect that a reflection causes can be extremely deep, approaching infinite attenuation at the center, and is non-symmetrical on a log-scale frequency response graph. (See Figure 3.)
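A sketch of the arithmetic behind that notching: the code below (my own illustration, not from the article) sums a direct arrival with a single equal-level reflection and shows how the combined level swings between a 6 dB in-phase buildup and a nearly bottomless notch.

```python
import math

def comb_response_db(freq_hz, delay_ms, reflection_level=1.0):
    """Level (dB) of direct sound plus one delayed reflection:
    |1 + a * e^(-j * 2*pi*f*tau)|. Notches occur wherever the two
    arrivals land 180 degrees out of phase."""
    tau = delay_ms / 1000.0
    phase = 2.0 * math.pi * freq_hz * tau
    re = 1.0 + reflection_level * math.cos(phase)
    im = -reflection_level * math.sin(phase)
    return 20.0 * math.log10(math.hypot(re, im) + 1e-12)

# A 1 ms reflection notches at 500 Hz, 1.5 kHz, 2.5 kHz, ...
print(round(comb_response_db(1000.0, 1.0), 1))  # 6.0 (in-phase doubling)
print(comb_response_db(500.0, 1.0) < -60)       # True (near-infinite notch)
```

No equalizer filter has the depth or asymmetry needed to undo those notches, which is the point of the paragraphs that follow.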

Conversely, equalizer filters are minimum-phase and symmetrical on a log scale, and I've yet to encounter any that have infinite boost at the center frequency. (See Figure 4.) Clearly, an equalizer's filters are not designed to correct notches caused by delayed reflections, let alone the multitude that exist at any test mic position relative to each loudspeaker in a sound system. Equalizers can only correct minimum-phase problems in the direct sound of a loudspeaker. (See Figure 5.)

Acoustical problems must be corrected acoustically. If a reflection off a wall is a problem, either re-aim the loudspeaker or add absorption to the surface. If a null at a particular frequency exists between two loudspeakers, re-aim, add a loudspeaker, try frequency shading, or live with it. You cannot fix lobing by averaging between good on-axis response and the nulls between devices. Any problem that is time-oriented cannot be fixed with equalization except at one single position in the room. This list includes every reflection in a room and any lobing caused by multiple sources. Techniques have been developed to address this, but they are very much a compromise. One must have a thorough understanding of both the measurement system and the resulting compromise to attempt this type of tuning.

Alignment and crossover networks

Years ago I encountered UREI 813 Time-Aligned recording studio monitors. They had an Eminence 15-inch (381 mm) square-magnet subwoofer in an enclosure with an Altec 604 Duplex mid-high assembly consisting of a 1-inch (25.4 mm) compression driver on a very small 60 degrees by 40 degrees horn coaxially mounted through a 15-inch (381 mm) woofer. The entire system was passively crossed over via Ed Long's patented Time-Alignment technique. The system sounded good for those days (late '70s and early '80s) as long as your listening position was slightly off-axis vertically. On-axis, the horn was a bit overpowering. I also remember Don and Carolyn Davis demonstrating "signal alignment" by sliding a horn/driver assembly back and forth on top of a low-frequency assembly while pulsing the system. Ever since then, driver alignment has been a buzzword in the industry.

When I first began experimenting with the SAORI, I had my first chance to really dig into driver alignment. I began asking people in the industry just how one went about aligning drivers. Delaying one driver's signal so that it arrived simultaneously with the other in the crossover was the first answer. However, this voice-coil alignment did not account for the phase shift introduced by the combination of the crossover filter in series with the loudspeaker as an acoustical/mechanical filter. The combination, more often than not, produced a dip in response at the crossover frequency that might be audible. (See Figure 6.)

That method conflicted with the phase alignment shown to me by Jim Long using the XEQ-3 during my time at Electro-Voice. This technique adjusted the phase relationship between drivers at the crossover frequency to avoid the aforementioned response dip but did not account for different time arrivals. It involved reversing the polarity of one driver and tuning for the greatest null at the crossover frequency using the delay all-pass filter control. Once this 180 degrees point was found (indicated by the deepest notch at crossover), the polarity would be un-reversed to be in-phase and flat through the crossover region using a 24 dB/octave Linkwitz-Riley network. (See Figure 7.) The technique employed in the '93 AES paper referenced earlier used the SAORI's digital delay to accomplish this phase alignment.

On Dec. 3, 1994, I presented another paper to the AES (Murray, 1994). This presentation was a live demonstration showing the creation of a crossover network and equalization for a small loudspeaker system using the DACsysII/TEF/laptop PC combination. The entire presentation, including explanations, took only about 35 minutes. To my knowledge, this was the first time real-time, simultaneous, remotely computer-controlled signal processing and acoustical analysis was demonstrated in public.

The DP-0204 has the digital delay capability of the SAORI plus all-pass filtering like that offered by the phase-alignment-capable analog crossovers. I used the digital delay feature to align the woofer to the horn-driver by synchronizing the front edges of their respective broadband, full-range, unfiltered energy-time curves. (See Figure 8.) Because their responses are unfiltered, the short wavelengths/high frequencies arrive first and are essentially voice-coil locators. This front-edge alignment synchronizes the voice coils of the woofer and horn-driver so that an unfiltered high-frequency impulse from either transducer reaches the measurement mic simultaneously. (See Figure 9.) This assures, even with additional crossover and EQ filtering, that the drivers' acoustic origins will be within a wavelength of each other at the crossover frequency.
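As a rough sketch of what that alignment delay amounts to (the function and the 9-inch example are hypothetical, not from the article), the required delay is just the physical depth offset between the voice coils converted into travel time:

```python
def alignment_delay_ms(offset_inches, speed_of_sound_fps=1130.0):
    """Delay (ms) to apply to the forward driver so that impulses from
    both voice coils reach the measurement mic simultaneously, given a
    physical depth offset between the coils."""
    offset_ft = offset_inches / 12.0
    return 1000.0 * offset_ft / speed_of_sound_fps

# A horn driver whose voice coil sits 9 inches behind the woofer's:
print(round(alignment_delay_ms(9.0), 2))  # 0.66 -- delay the woofer by 0.66 ms
```

In practice the energy-time curve gives this number directly, but the arithmetic shows why fractions of a millisecond matter at high frequencies.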

Experience tuning systems with Craig Janssen has taught me that the next step in the process should be to equalize each driver separately before applying the crossover filters. If possible, the drivers should be equalized flat as much as an octave past the crossover frequency. Doing so makes combining the drivers via the crossover network easier because they act much more like the line-level signal from which crossover filter topologies are modeled. For example, if the loudspeakers are flat through crossover, after applying 24 dB/octave Linkwitz-Riley filtering, they will also be 6 dB down, just as a line-level signal would be. (See Figure 10.)
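That 6 dB figure falls straight out of the filter math. A minimal check, assuming an ideal textbook response (my own sketch, not the article's measurement): a 24 dB/octave Linkwitz-Riley low-pass is a squared 2nd-order Butterworth, so its magnitude at the crossover frequency is (1/sqrt(2))^2 = 1/2, or 6 dB down.

```python
import math

def lr4_lowpass_db(freq_hz, fc_hz):
    """Magnitude (dB) of an ideal 4th-order (24 dB/oct) Linkwitz-Riley
    low-pass: the square of a 2nd-order Butterworth magnitude response."""
    w = freq_hz / fc_hz
    butterworth2 = 1.0 / math.sqrt(1.0 + w ** 4)  # 2nd-order Butterworth |H|
    return 20.0 * math.log10(butterworth2 ** 2)

print(round(lr4_lowpass_db(1000.0, 1000.0), 1))  # -6.0 at the crossover frequency
```

This is exactly the behavior a flat driver will mimic acoustically, which is why the pre-crossover equalization pass pays off.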

To do this equalization process, filters after the crossover on each output leg of the signal chain are essential. Any filtering that will affect the amplitude or phase of the signal past the crossover frequency into another driver's passband must use the filters located after the crossover. Systems with all the filters pre-crossover are useless for this, and those with only one or two filters after the crossover do not provide enough capability to properly tune most drivers.

Today's DSP-based units can provide a virtually unlimited number of possible crossover combinations. For choosing crossover slopes, the flexibility that DSP provides allows tricks that were not possible when only relatively rigid, symmetrical, analog crossovers were available. One must guard against "rapture of the deep" in searching for the perfect combination. You can rapidly use up all the time that computer-controlled tuning is supposed to save you!

This article is not the proper venue for an in-depth discussion on crossover filtering, but I can offer the following food for thought. If the chosen crossover slopes provided only a 3 dB down-point at crossover, such as an 18 dB/octave Butterworth function, a 3 dB hump at crossover would occur if the drivers were in phase at that frequency before the crossover filtering was applied. You could choose to spread the frequency hinge-points, moving the low-pass hinge down and the high-pass hinge up, so that the hump disappears. Or these hinge-points could be chosen so that each driver's level is 6 dB down at the chosen crossover frequency. Then an APF or delay could be used to provide an in-phase summation, and a flat response will result regardless of the filter type employed.
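The down-point arithmetic behind that hump is short enough to show directly. This is my own illustration of the in-phase summation the paragraph assumes, using ideal textbook magnitudes:

```python
import math

# In-phase (coherent) summation at the crossover frequency.
# Each 18 dB/oct Butterworth section is only 3 dB down (amplitude 1/sqrt(2)),
# so two in-phase sections sum to 2/sqrt(2) = sqrt(2): a 3 dB hump.
butterworth_sum_db = 20.0 * math.log10(1.0 / math.sqrt(2) + 1.0 / math.sqrt(2))

# Sections with a 6 dB down-point (amplitude 1/2) sum to exactly 1: flat.
six_db_sum_db = 20.0 * math.log10(0.5 + 0.5)

print(round(butterworth_sum_db, 1), round(six_db_sum_db, 1))  # 3.0 0.0
```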

Although most people I've worked with choose 24 dB/octave Linkwitz-Riley slopes, other filter types offer useful attributes as well. For example, Bessel filters exhibit minimal group delay compared to the more commonly used Butterworth and Linkwitz-Riley topologies.

One should never lose sight of the fact that the acoustical crossover slopes and frequencies measured are rarely those chosen electrically in the crossover network. The mechanical filters (loudspeakers) after the amplifiers change the signal characteristics. It is the acoustical result that is important and to which we listen.

Because of the mechanical filtering, the acoustical product usually does not closely mimic the amplitude and phase characteristics of a line-level crossover filter. As a result, proper summation won't exist acoustically at crossover. One can either apply an all-pass filter (APF) or readjust the digital delay to one of the drivers to phase-align them.

When using the APF, set it to the lowest possible Q so that its effect is smooth and gradual, to avoid near-crossover phase cancellations with the other driver's idiosyncrasies. When using delay for this, be sure the summation point has the drivers within a wavelength at the crossover frequency. In-phase summation at crossover avoids a deep notch in the frequency response and provides a smooth phase-response transition through the crossover region. The most desirable result exhibits the least phase shift from the low frequencies to the high frequencies.

Keep in mind that alignment between drastically different wavelengths, such as 100 Hz at 11.3 feet (3.4 m) and 10 kHz at 0.113 feet (0.03 m), is fairly academic. If perfectly aligned, the 10 kHz wave will go through a full cycle long before the 100 Hz wave even begins to rise in level.
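The wavelengths quoted can be checked in a line each, assuming roughly 1,130 feet/s for the speed of sound (the helper function is mine, not the article's):

```python
def wavelength_ft(freq_hz, speed_of_sound_fps=1130.0):
    """Wavelength in feet: speed of sound divided by frequency."""
    return speed_of_sound_fps / freq_hz

print(round(wavelength_ft(100.0), 1))     # 11.3 feet at 100 Hz
print(round(wavelength_ft(10_000.0), 3))  # 0.113 feet at 10 kHz
```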

Using this method of voice-coil/impulse alignment, equalization and crossover phase alignment, I've easily tuned many combinations of drivers to within a +/-2 dB window, with a smooth phase response, throughout the entire device's passband on axis. (See Figure 11.) Given reasonable choices, this holds virtually regardless of the transducers employed.

For single-device applications, this on-axis frequency response curve may be averaged with off-axis curves within a device's coverage pattern. However, if drastic response-curve differences are encountered within the published coverage angles of the device, go with a flat on-axis direct response and change your product selection next time. In applications employing multiple transducers per passband, I would recommend tuning only the on-axis response. Any off-axis averaging would be compromised by other adjacent sources in the system.

The future

First, one must keep in mind that the function of all these DSP-powered parametric filters, high-pass and low-pass crossovers, digital delays, all-pass filters and compressor-limiters is solely to serve a loudspeaker system. If the DSP function occurs post-mixer, its purpose is to properly route to, protect, or correct for a loudspeaker. We must always think in terms of the effect on the loudspeakers when we employ all these tools.

When the DACsysII series was first introduced, the ability to mimic a 1/3-octave equalizer was a high priority. Most people at that time were using 1/3-octave EQs in most systems, and FFT-based measurement systems, such as Tecron's TEF or Meyer's SIM system, were not in the majority. RTAs were, by far, the industry's standard method of acoustical analysis. As a result, analog parametric equalizers were not widely used because of the difficulty in documenting their settings and the RTA's inability to resolve their adjustment parameters.

RTAs are still the most numerous, but Sam Berkow's economical SMAART system, offered by JBL, is quickly gaining popularity, and soon FFT measurement systems like this will constitute the industry's more cost-effective standard. Because of the ability of these types of measurement systems to resolve their adjustments, and because of the ability of the DSP-based filters to have their settings documented, the current trend is a much wider industry acceptance of parametric equalization.

I believe multiple parametric filter sets located after the crossover network on each output leg in the system signal chain will be the trend for future DSP products. Each of these filter sets will feed an individual driver type within a system. Once you have tuned a system using filters after the crossover rather than before it, you realize that after the crossover is the proper place for equalization. Historically, equalization has always been in front of the crossover simply because it was more economical to assemble all the filters in one box, and this dictated a position in front of the crossover. The flexibility of DSP has freed us of that convention, and the industry will be drawn in the opposite direction from now on.

Companies offering DSP products in our industry are coming to realize that they are software companies and that the control software for their products is the product from the purchaser's point of view. As a result, much better graphical user interfaces (GUIs) will be available for PC-controlled products in the near future. "Better" means designed from the user's rather than the programmer's point of view. Unfortunately, the fixed-installation industry is moving away from the touring industry's Mac use, but for PC users, things are only going to get better. Additionally, with the availability of spread-spectrum wireless RS-232C links, wireless control will soon be the norm. For sound system tuning, ease of product use will soon advance phenomenally.

Since the first time I experienced simultaneous signal analysis and processing, I have been dreaming of the day when acoustical analysis would be incorporated into signal processing hardware and control software. Today we have both analysis and processing software operating under Windows 3.1/3.11 and Windows 95. Perhaps by the time you read this, a combined product will already be on the market. Its time is here.

The crossover networks I've constructed above are, to a great extent, synonymous with their analog counterparts. Their propagation delay is relatively short because of the infinite impulse response (IIR)-based digital filtering. IIR and analog crossover filtering have a predictable phase shift from the low-frequency section to the high-frequency section. For example, a 24 dB/octave Linkwitz-Riley crossover with some equalization filtering will have more than 360 degrees of phase lag from the lows to the highs above the crossover frequency. Other filter functions can provide equally flat frequency response while having less phase shift. Bessel filters, for example, can provide a flat response with only 100 degrees of phase shift from the lows to the highs.

Finite impulse response (FIR)-based DSP or the use of multiple APFs can provide an essentially flat phase response from the first break-up mode of the high-frequency drivers in the 10 kHz to 20 kHz range down to subwoofer frequencies. Although flat phase response does sound subtly better, the price to be paid is a relatively long propagation delay. Depending on how low in frequency the flat phase response extends, the delay from the signal processor's input to its output can be 30 ms or more. Also, the present techniques to apply the tools for flat phase, FIR filtering and multiple APFs, are sufficiently difficult and time-consuming to keep these tools from being an option in the field at this time.
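The source of that delay is easy to estimate: a linear-phase FIR filter delays the signal by half its own length. The sketch below is my own illustration; the tap counts and sample rate are assumed values, not figures from the article.

```python
def fir_latency_ms(num_taps, sample_rate_hz=48_000):
    """Propagation (group) delay of a linear-phase FIR filter:
    (N - 1) / 2 samples, converted to milliseconds."""
    return 1000.0 * (num_taps - 1) / (2.0 * sample_rate_hz)

# Resolving subwoofer frequencies needs a long filter, hence a long delay:
print(round(fir_latency_ms(4001), 1))  # 41.7 -- tens of milliseconds
print(round(fir_latency_ms(257), 1))   # 2.7  -- a short, high-frequency-only filter
```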

Some products in the marketplace now attempt automatic equalization. Though these products are not nearly sophisticated enough to be of use professionally, we will soon see products that are. Once we see acoustic analysis modules incorporated into the more powerful DSP products we currently have, the market will be ready to take the next step: adaptive filtering. This is a more intelligent version of automatic equalization that will use phase-sensitive, FFT-based measurements to align and equalize sound systems, providing a preset compromise between flat phase and maximum allowable propagation delay.

As outlined by Rob Reams, the creator of Audio Control's Iasys, these systems will look at parameters such as maximum level before non-linearity, ambient noise level and the highest and lowest frequencies of useful reproduction. These types of systems will then isolate areas of non-minimum phase response. Once these types of information are incorporated into the intelligent control software, the adaptive filtering can begin. This complex process involves a microprocessor that controls DSP to adjust parameters until a target response at a test mic is realized. We have all the technology to do this now, and some sophisticated adaptive filtering systems do exist, but they are cost-prohibitive and do not have a GUI suitable for the general market yet. This will soon change.

Imagine setting up a sound system, placing test mics for each particular portion of the system's response curve, speaking "equalize" into your laptop's built-in mic, and getting a cup of coffee. When you return, your system is perfectly tuned. Now imagine your customers doing this, too. Imagine what job opportunities you'll have once this happens. Hmmmm ... ever thought about a career in lighting?

Author's note: I would like to thank all those who have taught me everything I know about audio, including the individuals mentioned in the body of this text and those who were not. Most everything in this article has been taught to me by, or stolen from, others in this industry whom I admire.
