Acoustical Measurement and Predictive Modeling

Assessing the State of the Art


Sep 1, 2005 12:00 PM
By Bruce Borgerson


EASE (Enhanced Acoustic Simulator for Engineers) sound system modeling software screen captures.

The snowballing advances in computer hardware power and the accelerating rate of software development have altered the landscape of sound system design. New generations of software-based measurement and prediction tools are more flexible and more accurate, while at the same time aggregate system costs have tumbled in the past few years. But as we solve old problems, are we creating new ones? Are we using the tools properly? Are we forgetting about our ears?

For answers, Sound and Video Contractor assembled a panel of authorities in the field, drawing representatives from a cross-section of users and manufacturer/developers. Those experts include Wolfgang Ahnert (principal, ADA Acoustical Design), Jamie Anderson (product manager, SIA Software, a division of Loud Technologies), Pat Brown (president, Synergetic Audio Concepts), Bengt-Inge Dalenbäck (owner and software developer, CATT-Acoustic), Kevin Day (senior consultant, Wrightson, Johnson, Haddon & Williams), David Kahn (principal consultant, Acoustic Dimensions), Ted Leamy (director, engineered sound, JBL Professional), Bob McCarthy (president, Alignment and Design), Perrin Meyer (software R&D manager, Meyer Sound), Roger Schwenke (staff scientist, Meyer Sound), and Robert Scovill (concert sound mixer/producer, Eldon’s Boy Productions).

(Note: This web version contains the complete responses of all panel members, edited only for clarity and continuity. Some respondents chose to be brief; others opted to elaborate in detail. Also, Meyer Sound chose a “tag team” response with the two representatives answering different questions.)

“By far the biggest advancement is the efficient use of convolution in the measurement process and auralization in the prediction process.”
Pat Brown, Synergetic Audio Concepts

What have been the most significant advancements in acoustical measurement and predictive modeling over the past decade?

Anderson: For measurement, the new computers have changed a huge chunk of the equation, since now we can do in software what we formerly had to do in DSP. Consequently, as we come up with new ways to use a program, we don't have to re-code everything in firmware. Furthermore, as users come up with new techniques, we can implement them quickly, trying them out in software, which allows us to push out new generations of programs quickly. This is in sharp contrast to the old days of working with, for example, B&K analyzers, where it was clunky to get new code implemented. Now, we can run it in software, making the whole process much more nimble.

For modeling, the use of increased-resolution (angular, axial, and frequency) loudspeaker response data is the biggest advancement. Getting good, useful predictions relies on starting with high-resolution, high-quality data. If you start with low-quality speaker response data, when you multiply it out, inaccuracies are the inevitable result. Most things in the prediction world are vector algebra, and you need to start with high-resolution data in order to predict what the loudspeaker systems are going to do.
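As an illustration of the "vector algebra" Anderson refers to, here is a minimal Python sketch (not taken from any of the tools discussed; the distances, levels, and flat free-field responses are assumed purely for illustration) showing how two loudspeaker arrivals combine as complex phasors at a single listening point, and how a magnitude-only summation hides the interference:

```python
# Minimal sketch of complex (phasor) summation of two loudspeaker arrivals
# at one listening point. Not from any tool discussed in the article;
# distances, levels, and the flat free-field response are assumptions.
import numpy as np

c = 343.0                                  # speed of sound, m/s
f = np.linspace(100, 10_000, 1000)         # analysis frequencies, Hz

def arrival(distance_m, level_db=0.0):
    """Complex pressure of one source at the mic: flat magnitude,
    phase set by the propagation delay (free field, no directivity)."""
    mag = 10 ** (level_db / 20.0)
    delay = distance_m / c
    return mag * np.exp(-1j * 2 * np.pi * f * delay)

p1 = arrival(10.0)                         # loudspeaker 10.0 m from the mic
p2 = arrival(10.5)                         # second box 0.5 m farther away

combined_db = 20 * np.log10(np.abs(p1 + p2) + 1e-12)   # complex (vector) sum
# Magnitude-only ("RTA-style") power addition hides the comb filtering:
power_db = 10 * np.log10(np.abs(p1) ** 2 + np.abs(p2) ** 2)
print(combined_db.min(), combined_db.max(), power_db.mean())
```

The phase term changes quickly with frequency and angle, which is why coarse loudspeaker data averages away exactly the interference this kind of sum is meant to reveal.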

Ahnert: In measurements, advances have come with the change from hardware-based measurement devices to sophisticated software-based ones, as well as the change from frequency-domain measurements with RTAs to time-response measurements using FFT implementations.

In simulation, or prediction, the advancement has come with the transfer from estimation tools to calculation routines allowing a prediction rate of 70 to 80 percent. Older versions should be used with caution, or not at all. More and more, we see a separation between marketing-based programs — primarily developed by speaker manufacturers — and science-based programs developed by acousticians at universities or institutes.

Brown: By far the biggest advancement is the efficient use of convolution in the measurement process and auralization in the prediction process. These allow the investigator to listen to the data, which is the best way to analyze it. Next would be the better algorithms for predicting the decaying tail of the impulse response. This potentially requires a long calculation time, so we still require an algorithm that produces a realistic result and yet is fast enough to be useful.

Also, frequency-dependent diffuse reflection is a step forward. This allows the scattering effect of a surface to be considered. While the coefficients themselves are only estimates, their values are somewhat intuitive and produce much more realistic impulse responses (IRs).

“In prediction, a key advance has been the availability of more software tools that include frequency-dependent diffuse reflection and modeling of the nearfield behavior of arrays.”
Bengt-Inge Dalenbäck, CATT-Acoustic

Dalenbäck: In measurement, most of the current methods are older than the last decade, so the advancements are rather in the wider availability and the lower cost that now ought to make measurements a standard procedure. Previously, it required specialized and dedicated hardware or the buildup of various custom-built in-house tools.

In prediction, a key advance has been the availability of more software tools that include frequency-dependent diffuse reflection and modeling of the near-field behavior of arrays. The importance of frequency-dependent diffuse reflection in prediction has been known for at least 25 years, but it has taken a long time for the knowledge to affect all software. Similarly, it is only recently that the near field of an array has been handled even by specialized sound system software.

Day: The most significant advancements in acoustical measurement and prediction over the past 10 years have followed the advancements in personal computer technology. As CPUs got faster and more efficient, the acoustics measurement and prediction programs were able to do more calculations in a fixed amount of time, making it practical to measure and model at greater resolution than before.

A decade ago, the better measurement platforms were all hardware-based systems that performed the A/D conversion and did most of the processing outboard of the computer, which was basically just an I/O device used to enter parameters and display graphs. Now we have software applications that do all the measurement processing in the CPU and allow the user to choose the A/D interface.

A decade ago, the modeling and prediction programs were not able to provide high-resolution data without burdening the user with extremely long processing times, measured in days, not hours. The user had to decide what elements to simplify or leave out of the room models in order to get an output within a reasonable time. Now we have modeling programs that take advantage of modern computer processors and utilize all the RAM available to allow the user to create more detailed models for room acoustics prediction. We can now look at ways to make the predictions more accurate by adding to the calculations coefficients for the scattering and diffusion of sound as it reflects off the virtual surfaces, where 10 years ago we had to settle for calculating with absorption coefficients only.
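As a rough illustration of the scattering coefficients Day mentions, the sketch below shows one common way the non-absorbed energy of a reflection is split into specular and diffuse portions per octave band. The band values are invented for the example, not taken from any material database.

```python
# Minimal sketch: per octave band, absorb a fraction alpha of the incident
# energy, then split what remains into diffuse (s) and specular (1 - s)
# parts. All coefficient values below are illustrative assumptions.
bands_hz = [125, 250, 500, 1000, 2000, 4000]
alpha    = [0.10, 0.15, 0.20, 0.25, 0.30, 0.35]   # absorption coefficient
scatter  = [0.05, 0.10, 0.20, 0.40, 0.60, 0.70]   # scattering coefficient

incident_energy = 1.0
for band, a, s in zip(bands_hz, alpha, scatter):
    reflected = incident_energy * (1.0 - a)
    specular  = reflected * (1.0 - s)
    diffuse   = reflected * s
    print(f"{band:>5} Hz  specular={specular:.2f}  diffuse={diffuse:.2f}")
```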

Kahn: The most significant advancements relate primarily to the advancements in the speed and power of portable/desktop computers. This has allowed both our in-house programs and commercially available programs to run faster, and allows the user to check, for example, more orders of reflections, or to include more planes in the model. For acoustical measurements, it has allowed higher resolution (in time or frequency) in data acquisition, and has allowed faster and more in-depth postprocessing of the data.

Leamy: Certainly the most significant advancement has to be tied to the rapid development of the personal computer. There are a number of effective platforms available for measuring and optimizing sound systems today that rely on simple off-the-shelf laptop computers and basic I/O devices. This has put more effective tools in the hands of just about anyone who cares to invest a few thousand dollars.

McCarthy: In both cases, it is the ability for large numbers of people to see high-resolution, complex data: the good, the bad, and the ugly. In acoustical measurement it is the proliferation of Smaart. Previously, acoustical measurement was dominated by the placebo measurement tool: the Real Time Analyzer. Superior complex-response tools such as MLSSA, SIM, and TEF were the domain of a small minority of dedicated professionals. We had to fight an uphill battle for engineers to comprehend the potential benefits of seeing the system response in precise detail. Now complex data, phase, and the transfer function are available to everyone, and this has sent one-dimensional analysis, and one-dimensional solutions, packing.

For acoustic prediction, we now have high-resolution data (1/12 octave or more) that includes the phase response. This allows us to comprehend the give-and-take of every design decision. In the past, access to prediction software, as with measurement above, was limited to a small group, in this case large companies and private consultants. Those systems were expensive and intimidating. MAPP Online opened things up as Smaart did above, allowing widespread access to prediction data.
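For readers unfamiliar with the transfer-function view McCarthy describes, here is a minimal sketch of a generic dual-channel FFT measurement (an assumed setup, not the internals of Smaart, SIM, or MAPP): compare the reference signal with the measurement-mic signal and estimate a complex response, magnitude and phase, between them.

```python
# Minimal dual-channel transfer-function sketch (assumed setup, not any
# specific analyzer): x is the reference/console signal, y is the
# measurement-mic signal, and H1(f) = Pxy / Pxx gives magnitude and phase.
import numpy as np
from scipy import signal

fs = 48_000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 5)            # 5 s of broadband excitation

# Stand-in "system": 2 ms of propagation delay plus a -6 dB reflection 4 ms later
h = np.zeros(fs // 100)
h[int(0.002 * fs)] = 1.0
h[int(0.006 * fs)] = 0.5
y = signal.fftconvolve(x, h)[: len(x)]

f, Pxx = signal.welch(x, fs, nperseg=4096)
_, Pxy = signal.csd(x, y, fs, nperseg=4096)
H = Pxy / Pxx                              # H1 estimator

magnitude_db = 20 * np.log10(np.abs(H))
phase_deg = np.degrees(np.angle(H))        # phase reveals delay and comb filtering
print(magnitude_db[:3], phase_deg[:3])
```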

Schwenke: I think that measuring loudspeakers in one-degree increments has been an important advance. We have been measuring loudspeakers using that resolution for quite some time, but it was not until line arrays became popular that the benefits of such fine measurement became obvious. Also, self-powered loudspeakers have helped to bring prediction in line with measurement. With separate components, there are so many variables to be set that it is nearly impossible to achieve a high degree of certainty. If the system in use is not configured in exactly the same way as the system measured for the predictive model, the prediction will not prove accurate.

Scovill: From my perspective as a mixer, the main advance has been in the ability to do high-level transforms and phase analysis on smaller and smaller platforms. I can take a very high-powered analysis system in a very small package to concert or recording events and have a very powerful, compact tool at my disposal. In the past decade, computers have taken considerable leaps in processing power, which enables much more complex measurements in a shorter and shorter amount of time.

What are the relative advantages and disadvantages of manufacturer-specific tools (e.g. Meyer Sound’s MAPP Online) versus open participation tools (e.g. EASE), particularly in regard to integration of predictions with measured results?

Ahnert: Science-based programs have an open database and try to implement tools to support the design work of the user. A manufacturer-specific tool has implemented a specific algorithm to support the application of the product, and often a modern viewer function is implemented as well. We, as the developers of EASE, have understood that this idea should be supported because a lot of potential users don't have EASE or other CAD programs. Therefore, two years ago, we added to our website two new tools: EASE Online and EASE Speaker Reader, allowing a user free access to do calculations similar to MAPP. In our case the resolution is only 5 degrees, but rather than being limited to one manufacturer, the feature uses the open database in EASE.

Brown: The obvious advantage is that the manufacturer can streamline the software for their products, making it easier to use and producing the results with a shorter wait time. The disadvantage is that it may not be possible to compare the results with another predictive method, so there is uncertainty regarding the accuracy.

Regardless of the origin of the software, an array design program should always be validated with measured data. A logical course to follow is: 1. Design the array with a general-purpose modeling tool (e.g., CATT-A, EASE). 2. Measure the array to verify the prediction accuracy. 3. Build a specialized tool that speeds the design process.

Dalenbäck: I see no contradiction or problem with having both. In fact, I see it as an advantage: the more possibilities for comparison, the better. This was also a major reason for the development of the Common Loudspeaker Format (www.clfgroup.org). The problem is, rather, if there is no possibility to independently compare between and check data, software, or prediction methods. However, a problem with manufacturer-specific tools is that they typically do not include the reflected sound, or only use a classical Sabine approach, so they will often tell only a part of the story, at least for indoor cases.

Day: Some of the common manufacturer-specific tools for performance prediction, such as Meyer's MAPP or the various manufacturers' line array calculator applications, work quite well to calculate the performance of loudspeakers when arrayed. In MAPP, the complex summation of high-resolution Meyer loudspeaker data is calculated and displayed. This information allows the designer to determine optimal configurations for Meyer loudspeaker arrays. However, the user is not able to add data for custom loudspeakers or those from other manufacturers. Another disadvantage of MAPP is that it currently does not calculate the energy dispersion across a sloped audience plane or in a three-dimensional room model.

The EASE database platform allows manufacturers and users to put in the data for any loudspeaker. Whether that data is an accurate representation of the actual loudspeaker or not is another question. EASE is able to calculate the complex summation of loudspeaker arrays and display the coverage predictions on three-dimensional areas. This provides more usable data for a systems designer, in my opinion. The disadvantages of EASE for me include the fact that the data format is not exactly high resolution. The frequency response data does not go below 100Hz, and the directivity data is in 10-degree increments. All manufacturers that wish to provide data for use with EASE must work with that format. Low-frequency "woofer" arrays must be calculated by other means outside of EASE. Highly directive loudspeakers cannot be well represented with 10-degree directivity data in EASE.

McCarthy: I do not have experience operating EASE and therefore make no pretense as to a full understanding of its capabilities. In the case of EASE, I have personally found poor correlation between system performance predictions and the measured response. The EASE predictions that clients have given me have been, for lack of a better expression, "extremely optimistic." The measured system response never approached the smooth predictions. One factor in this is that I never measure systems at low resolutions, such as 1/3 or 1 octave, and therefore the worlds fail to intersect. MAPP Online, when used at maximum resolution, gives a very accurate rendering of the speaker/speaker interaction, but it is of minimal utility for room reflections. Therefore, none of the systems give the response as I find it in the room. MAPP, at least, gives an accurate rendering of part of the equation.

Meyer: As a manufacturer, we have an interest in ensuring that our loudspeakers sound as good as they can. Accurate predictions of performance are key to attaining that goal. As a result, we are highly motivated to provide accurate data, which is why Meyer Sound bases our loudspeaker models on measurements at one-degree resolution. We have advocated this resolution to the industry in general for quite a while.

In contrast, when a program is being sold for profit (as opposed to Meyer Sound MAPP Online, which is free), there is more motivation to be inclusive and have lots of models than to make sure that all of the data supplied from the variety of sources is accurate. Without knowing all of the details of how the data were obtained, how could anyone be certain that the models and, consequently, the predictions, are meaningful?

Schwenke: This is even truer of programs employing DLLs, which contain not only data but the modeling algorithm itself. Since, in the programs that use them, DLLs are the only way to show results with resolution finer than five degrees, any such program using high-resolution data achieves it with a proprietary solution, that is, the DLL. So not only is the testing data unverifiable, but the predictive method is unknown.
(Note: DLL stands for Dynamic Link Library, a feature of the Microsoft Windows family of operating systems that allows executable routines—usually serving a specific function or set of functions—to be stored separately as files with the extension .dll and loaded only when called by the program that needs them. This saves memory during program execution and enables code reusability.)

Scovill: I think one of the things I most admire about Meyer and their approach and strategy is that they like to challenge the norm and they never seem to take anything at face value. I remember meeting with John Meyer quite a few years back now, and we were discussing the coming prediction software programs and even things like manufacturer specs when using these styles of software packages. We both felt that the Achilles heel of any of these programs is that they make predictions based on the entry of data by the user. So, if it is not obvious, it should be: the result will only be as good as the data entered. Enter suspect or inaccurate data and you are going to get an iffy result. Add to that, if you consider the amount of data that needs to be entered to do a truly accurate acoustical prediction, you are talking about some serious detail and some serious processing power needs, more than you are going to find in your laptop or office PC. So the MAPP Online model is a very good one and one that will be with us for some time, I predict. I predict? I guess you would call that an acoustic modeling software prediction!

“With technology changing so quickly, software systems demand less capital investment in something that is likely to be replaced in a few years.”
David Kahn, Acoustic Dimensions

What are the tradeoffs between integrated software-hardware measurement systems (dedicated DSP, preamps, switching, etc.) as opposed to software systems using third-party hardware (sound cards, switchers)?

Anderson: I would break this into software versus DSP and dedicated input hardware versus do-it-yourself. DSP certainly has the edge over software in number-crunching speed and in its freedom from the operating systems of Microsoft and Macintosh. However, software has the incredibly important benefit of flexibility, adaptability, and quick upgradeability. It also has the distinction of being the lowest cost — and high costs have been a factor in slowing much of this technology’s widespread use in this industry. As for dedicated versus do-it-yourself input hardware, dedicated is by far preferable. It allows for measurements that rely on input sensitivity calibration to be done more easily and reliably. The advantage of do-it-yourself input hardware is in the low initial system cost and, for some users, the ability to integrate the measurement system into their existing sound systems/mix positions.

Ahnert: The second, software-based solution contains algorithms [that] can be updated or expanded in a simple manner, allowing you to use the latest algorithms. All you need is a notebook and a small A/D-D/A unit you can carry with you all the time.

Brown: The main advantages of dedicated hardware are calibration and ease of use. If absolute levels are needed, then dedicated hardware is the best way to go. The trade-off is that there is no flexibility in getting a second opinion. In other words, how do I know whether to believe my magic box? Software-only tools with generic hardware are often less expensive. They also have the advantage of being able to select from multiple hardware platforms. I use both types to take advantage of the strengths of each.

Day: The trade-offs can be significant when comparing a well-engineered hardware system like SIM, with its multiple processors and excellent front end, to a software-based system like Smaart using a typical sound card or a laptop's integrated sound processing chipset.

The hardware-based systems are generally a lot more expensive than software solutions. Software system results are highly dependent on the audio interface used. Typical sound cards are not intended for balanced microphone connections and often have terrible performance in regard to noise. However, there are a lot of add-on audio interfaces available for use with audio recording, editing, and measurement software that have good to excellent performance in regard to noise and fidelity. The software-based system user can select the audio interface that is appropriate for their uses. A sound system operator who uses a software-based measurement system to tune the loudspeaker rig each night (and monitor the system during performances) can select a low-cost USB interface that fits in the laptop case. More sensitive work such as room acoustics and noise measurement would use audio interfaces intended for audio testing, or a selection from the high-performance recording interfaces that have excellent noise performance.

I use both types. The software-based Smaart Live is my choice for sound system optimization work. The TEF20 is what I'll use for most room acoustics analysis and to evaluate the acoustic properties of various materials. I would not say that it is my preferred platform, however, due to software issues and an outdated processor set. The TEF20 will suit my needs as long as I care to keep a legacy Wintel box running, or until the TSA [Transportation Security Administration] beats it to death.

Kahn: The integrated software-hardware systems tend to be more reliable; however, the software systems have the advantage of being able to take measurements without lugging around additional equipment. Also, with technology changing so quickly, software systems demand less capital investment in something that is likely to be replaced in a few years.

Leamy: Results of sound system optimization will depend directly on the accuracy of the components of the test system and how they are used. Buy all the accuracy you can afford, and learn how to use them well.

McCarthy: “Some assembly required” are three words that strike fear into the hearts of those of us who feel uncomfortable with devices that are not “plug-and-play.” The turnkey systems have the advantage of standardized hardware and, to some degree, methodology. Such systems are more purpose-specific and have to be adapted when applications fall outside of their band of focus. Turnkey systems have a higher cost, as does anything that does not come in kit form. Software systems have a huge price advantage. Adapting them to the specifics of your jobs is not an option: It is a requirement.

For small- to medium-complexity jobs, the systems can be weighed against each other in terms of cost, size, and setup time. The smaller the job, the more it favors the software based system. For big jobs of high complexity, the turnkey system provides the complete integration and system management required to get the job done in the short time allowed. I own a SIM 3 rig with eight mics and 32 channels of processing, and with that I am ready to interface with complex systems, as well as manage the huge library of data without repatching in the middle of the night.

Meyer: Meyer Sound supports hardware/software systems because with a dedicated system where we are in control of the components, we can guarantee parameters like latency and noise performance.

Schwenke: Assembling a system from off-the-shelf parts, such as a sound card, can undermine the system's credibility and lead to finger-pointing. If a problem or issue arises with the system, each component supplier can simply point disparagingly at the other components' performance. When the whole system comes from a single source, the situation is simpler for users.

Meyer: Sound cards are not typically designed to instrumentation standards, which is what is needed for accurate measurement. This means that results may not be repeatable, which is a fundamental scientific requirement for showing the validity of any data, especially measurements. The mixing console someone is trying to use as part of their measurement system was built to satisfy a different set of demands, and measurement is not a fair task to ask of it.

Scovill: For my money, it is purely a matter of ease of integration and assurance of quality. Certainly one of the appetizing things about SIM, and TEF for that matter, is the ease and competence of the hardware integration. But that comes at a cost. The lower-cost, primarily software systems can be tough to set up sometimes and suffer all the challenges of getting audio working within the Windows platform on the given computer you are using. I've seen the challenge of getting audio working on a PC platform take down some of the most competent sound engineers in the field! So that is certainly part of the tradeoff.

“The availability of prediction and measurement devices will always be ahead of education. The key is knowledge of the fundamentals.”
Ted Leamy, JBL Professional

Is low-cost measurement and prediction technology getting ahead of training? Are some audio practitioners buying tools that they don’t understand how to use, and getting into trouble? Can you cite examples from your experience?

Anderson: This phenomenon is not restricted to measurement and prediction; it is the general operating paradigm for our industry. People don't read manuals any further than they have to, which is generally just to the point where they've got the software booted, or a device turned on and passing signal. People learn best by doing and experimenting — by self-directed learning, by jumping into the deep end and seeing if they can swim. For most people, training courses are an essential part of the learning process. Often, however, the best and most effective time for attending training is after one has spent an initial period of time thrashing around in the deep water. The impact of "low cost" here is that it allows more people to have access to quality measurement and prediction technologies. The great benefit is that people are coming up with some new and pretty damn creative techniques for applying technologies to get their job done. There will always be posers out there, but usually they drown after a while. The exciting thing is all the new swim strokes that are being created. The real key here is: is your measurement/prediction platform learning and growing with you? For example, we've built features into our Smaart analyzer based on listening techniques, on how people are using the spectrograph to spot feedback and resonances, and other things like that.
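As a small illustration of the spectrograph technique Anderson mentions, the sketch below synthesizes noise-like program material with a slowly growing 1.2 kHz tone (both invented for the example) and shows how feedback tends to appear as a persistent narrowband ridge:

```python
# Minimal sketch: feedback shows up in a spectrogram as a narrow ridge that
# persists and grows over time. The noise level, tone frequency, and growth
# rate below are illustrative assumptions, not real program material.
import numpy as np
from scipy import signal

fs = 48_000
t = np.arange(fs * 4) / fs                               # 4 seconds
program = 0.1 * np.random.default_rng(1).standard_normal(t.size)
feedback = np.linspace(0.0, 0.5, t.size) * np.sin(2 * np.pi * 1200 * t)
x = program + feedback

f, times, Sxx = signal.spectrogram(x, fs, nperseg=2048, noverlap=1024)
ridge_bin = int(np.argmax(Sxx.mean(axis=1)))             # strongest persistent bin
print(f"suspected feedback near {f[ridge_bin]:.0f} Hz")
```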

Ahnert: A swallow doesn't make a summer, and by purchasing a prediction program you don't become an instant acoustician! We, as EASE developers, get a lot of inquiries that demonstrate the user doesn't have an acoustics background or sophisticated technical knowledge.

Brown: Most definitely. Many people buy measurement or prediction software, and think that possessing it now qualifies them to do something that they couldn’t do before. In reality, any measurement or prediction tool is completely limited by the operator. The effective measurement or prediction of a system/room response requires a great deal of understanding on the part of the investigator. We have all been spoiled by the cheap, easy-to-use software that is used around the office. Measurement and prediction programs require a very large time commitment, not just for the software itself, but for learning the theory behind what is being measured or predicted.

Dalenbäck: Yes, it can be a problem. I have experienced several cases where measurement data sent to me for a predicted case was bad, either due to soundcard noise, room noise, or plain misuse—such as overloading the mic preamps. A very good basic check of a measurement is to listen to the impulse response (IR) itself, since many common problems will be detected, and of course also to look at a graph of the IR or the ETC. To just use, for example, a measured STI value without looking at or listening to the measured IR is not recommended. Many soundcards are also not useful for measurements; developers of PC measurement hardware typically have to check many cards and can then recommend only the few that work well for measurements.

Day: Yes, technology is getting ahead of training, in my opinion. Everyone who already owns a PC or even a PDA can find inexpensive measurement software, while not everyone who owns a PC or PDA with such software is able to interpret the data they get. This is not necessarily a bad thing. The software costs are lower due to the larger market — a good thing for both the educated and the ignorant user. The eager novice can explore and learn from measurements that they would not have access to if the costs were too high. But you won't turn a novice into an acoustician by giving them a decent measurement system, though it will provide a useful resource that will certainly be an important part of their education.

Many contractors are equipping their techs with measurement systems but not providing the education or training to make good use of them. We often see audio technicians and operators who have measurement systems such as Smaart or an Audio Tool Box that do not know what to measure or why.

One example: The installing contractor on a theater project I worked on had the speaker system powered up and was adjusting the house EQ to get as "flat" a response as possible from the main loudspeakers. He was using Smaart as an RTA. The resulting sound was awful, and the contractor had some extreme EQ settings. We found that the crossover had not been set prior to the system being turned up, and the lows and highs in this two-way system were going to the wrong components, damaging the HF drivers.

Kahn: Yes. It is very easy to take measurements that are misleading—not because of the measurement system, but because of the lack of understanding of what the measurement system is actually measuring and how the data is being processed. The same holds true for prediction programs. The results are a strong function of how the room is represented in the model. The results are also dependent on the many ways in which the prediction programs allow the room to be analyzed and modeled. I have seen many examples, but do not feel it is appropriate to share these in such a public forum.

Leamy: The availability of prediction and measurement devices will always be ahead of education. The key is knowledge of the fundamentals. The good news is that the manufacturers of the various platforms offer some great education, not only on the nuances of their platforms, but also on the basics of good sound application engineering in the field. It is easy to point to a laptop computer screen and try to illustrate a single simple explanation for a phenomenon. However, in reality, the interaction of loudspeakers with each other and the acoustic environment is very complex and cannot be described by a single squiggly line.

From my personal experience, I often see people making decisions when optimizing a sound system after making one or two measurements in a single location. This is a common error that often results in people making judgments about a sound system and, worse, acting on those judgments to radically change how the system is configured or optimized.

My advice: If the measurement does not match your common sense experience, examine how you are testing and processing data carefully. I’ll go one step further and say even if you like the measurement and it matches your common sense experience, always carefully review test procedures and how data is being collected.

McCarthy: Yes, measurement and prediction technology are far ahead of training and hopefully will remain so. These are scientific tools, and our process of discovery with them is continuous. I have used FFT analyzers for system tuning for over 20 years and am still behind the analyzer in my training. How else can I explain the fact that each new project furthers my understanding as new levels of complexity are revealed? As an educator in the field, I strive to hasten the process of discovery for new users by teaching them the language of the analyzers and prediction programs, and the benefits of a scientific methodology. But it is only through personal trial and error that the enlightenment really takes hold. Arrogance and foolishness will always take their toll as users with a little knowledge hold forth as instant experts. This paradigm long precedes the advent of our latest technology and promises to continue.

Meyer: The biggest problem we see is using one microphone in a single location to tune a large, multi-zone system. That reinforces the habit of tweaking everything to sound good at the FOH position, where the mic usually is, but that doesn't mean it will sound good at any other seat in the venue.

One of the major points of training is to show that it is not viable to tune a system to a single point. Ideally, you want good sound to be everywhere.

Scovill: Well, let's face it, you could say the same thing about the pipe wrench. I'm sure there are a lot of Saturday afternoon plumbers with water on the floor, and my bet is a lot of experienced plumbers who show up on Monday with a mop are shaking their heads. To the point, I would have to answer: if development is getting ahead of training, so be it! Let development drive additional training, rather than allowing the lack of competent users to drag down development. I think the measurement tools can be great ways to bring clarification to what is actually happening in the mysterious world of audio. I have seen this first-hand with the students at the Conservatory of Recording Arts and Sciences, where Smaart Training is a part of the curriculum. I experienced that very thing by attending my first TEF and SIM classes quite some time ago now. They changed my whole outlook on what was possible with large-scale audio. I think the key to any successful use of these types of software is not necessarily taking them at their word. You have to use them as a verification tool first and have them confirm something you are or are not hearing. By this, I mean you have to walk into a measurement with an element of expectation about the measured result.

“You should always trust your ears, but then verify with a recommended measurement tool.”
Wolfgang Ahnert, ADA Acoustical Design

When are measurement systems more reliable than experienced human evaluation? On the other hand, when is it better to trust your ears?

Anderson: Your ears are the only thing that can tell you how a system sounds. However, analyzers are important in that they tell you what your signals are and what your system is doing to them.

Ahnert: You should always trust your ears, but then afterwards verify with a recommended measurement tool.

Brown: A distinction must be made here. An acoustic analyzer is far more accurate than the human hearing system for collecting data. It is stable, time-invariant, and mostly noise-immune. But the human hearing system is more accurate than the analyzer for evaluating the data. This is why the use of convolution is so important. It bridges the gap between measuring and listening.

Dalenbäck: I would say that it really should be both, if time permits. The ears will help interpret/check the measurements, and the measurements may point out potential problems that may not be heard immediately. For example, late echoes or flutters may not be heard with some program material due to masking, but if the IR [impulse response] visually or aurally indicates clear echoes, they are bound to come up sooner or later if more transient program material is used. Listening to a bare IR acts like a magnifying glass on what can later become a problem with, for example, speech. Also, convolving measured IRs with different types of program material can be revealing. There is a free convolution program, GratisVolver, that will play measured IRs, convolve them with dry program material, and play the result.
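For those who have not tried the check Dalenbäck describes, here is a minimal sketch of the idea (not GratisVolver itself): convolve a measured impulse response with dry program material and listen to the result. The file names are placeholders, and mono WAV files at matching sample rates are assumed.

```python
# Minimal auralization sketch: convolve a measured IR with dry program
# material. File names are placeholders; mono files at the same sample
# rate are assumed.
import numpy as np
from scipy import signal
from scipy.io import wavfile

fs_ir, ir = wavfile.read("measured_ir.wav")       # measured room/system IR
fs_dry, dry = wavfile.read("dry_speech.wav")      # dry (anechoic) program material
assert fs_ir == fs_dry, "resample one file so the sample rates match"

ir = ir.astype(np.float64) / np.max(np.abs(ir))
dry = dry.astype(np.float64) / np.max(np.abs(dry))

wet = signal.fftconvolve(dry, ir)                 # the auralized result
wet /= np.max(np.abs(wet))                        # normalize to avoid clipping
wavfile.write("auralized.wav", fs_ir, (wet * 32767).astype(np.int16))
```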

“Of course, the end result is what we hear. Test and measurement is used to better understand what the ear/brain system is given to process.”
Kevin Day, Wrightson, Johnson, Haddon & Williams

Day: Of course, the end result is what we hear. Test and measurement is used to better understand what the ear/brain system is given to process. Proper use of the test gear allows us to get the desired results much faster by giving us verification of what we hear. The ability to graphically represent sound amplitude in the time domain is quite valuable. The equipment can look at high resolution and quickly identify the cause of anomalies in the frequency response that the human ear might perceive as purely tonal, but which are actually multiple arrivals from misalignment or reflected energy. So, I trust the experienced ear and back it up with measurement.

Measurement systems have the advantage of being able to accurately determine the arrival time of a discrete reflection, and the level and frequency response of that reflection; however, these valuable results must always be balanced with human evaluation by a trained listener. Both analysis systems are important, powerful, and necessary for a thorough and accurate acoustical evaluation.
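As a simple illustration of the multiple-arrival anomalies Day describes, the sketch below (with an assumed 1 ms offset) shows why a second, delayed arrival combs the frequency response, with notches at odd multiples of 1/(2·dt):

```python
# Minimal comb-filter sketch: a direct arrival plus an equal-level arrival
# delayed by dt seconds produces notches at odd multiples of 1/(2*dt).
# The 1 ms delay is an illustrative assumption.
import numpy as np

dt = 0.001                                     # 1 ms misalignment or reflection
f = np.linspace(20, 20_000, 2000)

H = 1 + np.exp(-1j * 2 * np.pi * f * dt)       # direct + delayed arrival
level_db = 20 * np.log10(np.abs(H) + 1e-12)

first_notches = (2 * np.arange(5) + 1) / (2 * dt)   # 500, 1500, 2500 ... Hz
print(first_notches)
```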

Leamy: Properly calibrated and deployed measurement systems are very reliable and produce exact results. This must be the starting point of any properly optimized sound system. Music and mixing sound is, however, a visceral experience requiring creative human input that should take over once an accurate starting point for a correctly aligned system is determined. In my many years as a system engineer in the world of tour sound, I often had to reach over a sound mixer, turn off "the analyzer," and quietly advise him that it was time to mix the show.

McCarthy: What do you use to pound in a nail? A hammer or your hand? I use both. When it comes to this work, I always use both as well. Ears are sensory organs and are subject to the subjectivity that comes with my personal biases. These include the ear's physiology, but also personal prejudices, such as how badly I want to be right. Analyzers provide objectivity and are not prone to fatigue. They are likely to give the same answer at 4:00 a.m. as at 4:00 p.m. The eyes are also subject to personal agenda bias, so it is possible to find a way to interpret the data toward the desired outcome. It is critical to maintain maximum objectivity at all times, even when it hurts the ego. All this said, I consider these tools to serve the ultimate purpose of ear-to-eye training. Anytime I hear a response that is interesting — good or bad — I want to see what it looks like on my tools. This connection serves to improve my ability to predict the sound quality by viewing the data, thereby improving the probability of good decisions. It's a loop. If there is a sound I hear that I cannot explain with the analyzer, then it means I need to look harder. Move the mic. Do something. The answer is there. There are no "sometimes" in physics.

Schwenke: If by “reliable” you mean “repeatable,” then measurements systems are clearly more reliable.

Meyer: The purpose of measuring sound systems is not to tell the FOH engineer how the system should sound, but rather to provide information as to whether it sounds the same everywhere. The FOH engineer should be making aesthetic decisions about how it should sound, and has to trust the system tech to tune the system so that all the loudspeakers work together to provide identical sound at all seats in the house.

Scovill: I'll revert to my last answer. The human ear always has to be the final judge. But I will say this: I can probably do everything I do with an FFT analyzer, etc., during the day by ear. I just could not do it nearly as fast or nearly as accurately. I simply use these tools as a way of getting what I want done very quickly and very accurately, and there are times when I flat out argue with the result. As hard as it is to admit … I think the analyzer is right more often than I am … but I make it work hard to convince me.

When measured results do not agree with predicted results, which do you trust? Does it really matter? Do predicted results always support sound system optimization, or do they sometimes generate confusion?

Anderson: On the surface, this seems like a silly question. In the end, what the system-room combination is actually doing is what matters, not what you thought or hoped it was going to do. Prediction is a useful tool for helping you get it right from the start, to work out your problems ahead of time. However, prediction, whether it be via modeling software or educated guessing, is one of our most important tools for checking our measurements. If I have a mic set up two meters from a speaker, I would be expecting (predicting) a measured delay time of around 6ms for the propagation from the speaker to the mic. If my measurement system shows me a delay time of 400ms, I’d better be scratching my head and checking to see if I’m doing my measurement properly.
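Anderson's sanity check is simply distance divided by the speed of sound; here is a short sketch using the values from his example:

```python
# Expected propagation delay for a mic 2 m from a loudspeaker.
speed_of_sound = 343.0                      # m/s at roughly 20 degrees C
distance_m = 2.0
print(f"expected delay ~{1000 * distance_m / speed_of_sound:.1f} ms")   # ~5.8 ms
```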

Ahnert: Measurements, if done professionally, are always to be preferred. If predicted results are difficult to believe, check again with other tools or correspond with the software designer for assistance.

Brown: First, measured and predicted results will never completely agree, and they don't have to. Measured data includes an infinite amount of information that cannot be accounted for in a room model. The reason for a measurement is to quantify what is actually happening. The reason for a prediction is to evaluate major system/room parameters before the system is installed, heading off potential problems. Prediction in its current state is more than capable of doing this if the user has a strong understanding of loudspeakers and room acoustics. Prediction platforms are just calculators. You still need a designer to get a good sound system.

Dalenbäck: If the predictions do not match the measurements fairly well, it is a reason to start thinking—especially when it comes to modeling the room itself and the input data. Prediction software programs are tools to be used with knowledge. Many times the input data can be nearly impossible to estimate, and the only way may be to place the results within limits. A good thing with modeling is that it forces a designer to think about the case in ways otherwise impossible. It may raise as many questions as it answers, but the questions can be equally important. Also, let us not forget that the prediction methods are all based on a simplification of reality: geometrical acoustics.

On the other hand, wave-based methods like FEM need so much more complex input data, and more exact geometrical models, that even if the methods themselves have no limitations—except calculation time—the results, in all but very "clean" cases, may be less reliable. Current prediction software, which uses proven geometrical acoustics methods including frequency-dependent diffuse reflection, works at a good engineering level where the input data is for the most part readily available.

Day: When measurements don't agree with predictions, we typically learn something about our prediction process. After eliminating any problems with the system installation, we look at our design calculations and other data used for the predictions. There may be too many compromises in the predictive model, or significant elements like reflective surfaces may not have been correctly accounted for. The manufacturer-supplied data is often suspect.

I tend to trust my measurements, and will work with the model after the fact to better emulate the measured results and learn what needs to be done to get closer predictions in future models.

Kahn: Whether or not to trust measured results or predicted results, when they do not agree, can almost always be determined from human evaluation by a trained listener.

McCarthy: When I feel raindrops on my head, there is no amount of persuasion that will get me to believe the predictions of sunny weather. The prediction is wrong. Period. The reason might be my fault. Perhaps I read the weather for a different day or a different city. But it is certain that the course of action will be dictated by the physical presence of rain rather than the prediction. Weather prediction is a science, and it will be done accurately. Someday. It currently lacks accuracy because there is insufficient analysis power to see all of the factors that determine the outcome.

The same situation occurs in our field. There is every reason to believe that the two worlds of prediction and measurement will continue to approach agreement. The fact that we have not yet arrived is testimony to the complexity of the calculations. Measurement, provided it is high-resolution, complex analysis, has the final say. If the measurement or the prediction does not contain complex data, it should have no say at all. When prediction does not agree with the measured response, it is a case of "back to the drawing board."

Meyer: When you bring up variances between predicted and measured system performance, there is an underlying issue, which is the need for manufacturers to supply accurate data about their loudspeakers. It’s to the advantage of the pro audio industry to produce accurate data, not just optimistic data. We have seen some tendency towards smoothing data, but the reality is that sound is complicated: Loudspeakers have lobes; walls reflect. There’s no point in ignoring the tough facts. It may make a system look better in a presentation, but it won’t sound better if the system design is based on bad data.

Scovill: Again, I will revert to my answer above. It may generate confusion at times, but way more often than not, it will be the tool for eliminating that confusion. I love it when I hear someone say something like, "I don't like the sound of the PA when they use one of those things to tune the PA." I just chuckle when I hear it. This is the equivalent of blaming Mr. Goodwrench for your car running badly because he actually used tools to tune it up instead of just his hands. If you are not happy with the results, my bet is it's not the tool that is the problem.

What would you like to see next? What new development or major improvement in existing technology would be most useful to you, or your customers?

Anderson: We need to pursue continued improvements in user interfaces. Greater measurement and modeling power can be negated quickly by a poorly designed UI.

Ahnert: We’re now working on EASE 5.0, but we welcome the wishes of users at any time.

Brown: I would like to see the current tools used more effectively. This means that all of us need to be involved in learning the basics of all aspects of measurement and prediction tools, as well as the theory behind what we are measuring and predicting. Everyone wants the tool that allows them to push a button and get the answer. It will never happen. Let's use the tools we have to their fullest potential. The software tools aren't the problem; the users are.

Dalenbäck: Most useful would be the development of an indicator of how well a particular prediction result can be trusted, or for which frequency bands. Eventually it should be possible to estimate how well a prediction method will work based on room shape and size, surface sizes, absorption and scattering coefficient distributions, and uncertainty ranges. However, so far, every room and case tends to be quite unique, and its properties are hard to pin down in advance. In the long run, it may be better for each user to gradually build a growing knowledge base of modeled and measured cases.

Day: Advancements for acoustic prediction that I would like to see include working databases of coefficients for scattering and diffusion, along with an accurate way to determine these values. In measurement, I would like to see a system that simplifies gathering directional information for acoustic arrivals.

Kahn: I would love to see prediction software that can properly address diffraction. Also, I would like to see prediction software that uses measurement data for specular reflection performance of surfaces, rather than using their calculated random incidence absorption coefficients, which are based on measurements of reverberation time in a chamber. Another much-needed improvement is a means to adjust results for all sound reflections that travel at grazing incidence over an audience seating plane.

Leamy: What is coming next? An increased convergence of prediction, measurement, and optimization tools with the sound system hardware they serve. With the power of DSP and new communication protocols, sound system design will involve total system integration from a designer's first planning stages through final system optimization at the jobsite.

This level of technology will allow more in-depth electro-acoustic optimization to occur, often behind the scenes, out of view of the typical user or designer. For example, with DSP placed onboard loudspeakers, we will have the ability to optimize a particular loudspeaker for its physical location in an array. A loudspeaker at the end of an array will have different DSP applied than one in the center; this will be done through behind-the-scenes DSP settings based on the known fundamental physics governing loudspeaker arrays.

This advancement will further the refinement of sound system optimization, making for a better listening experience, yet keeping system design and installation straightforward and uncomplicated.

McCarthy: One thing that is coming in right now is the wireless measurement mic. This promises to be a great help in improving the pace of system alignment, especially in large venues. I would like to see standardization of the high-resolution complex input data for prediction programs so that a wide array of products from different manufacturers could be available in standard form. For free, of course.

Meyer: A major feature would be some ability for predictive programs to show distortion on a per-frequency basis. Levels are not linear across frequency; loudspeakers cannot reproduce all frequencies at the same maximum level. That fact is often lost when a loudspeaker is characterized with unrealistic numbers.

Over-optimistic specifications lead to people specifying fewer loudspeakers for a job than what might really be needed. Are we, as an industry, fomenting bad designs? There have to be realistic expectations of what loudspeakers can do; we should get beyond just making loud noises. There should be a way to balance reality with using the least number of loudspeakers necessary.

Schwenke: I don’t want to listen to a voice coil one degree away from melting. That’s when it’s putting out its maximum SPL, but it’s not what you want to design a system for.

Scovill: Again, from the perspective of a mixer, I would love to see FFT and spectral analysis built into the onslaught of coming digital console platforms. It would be an invaluable asset to have latency measurements within the desk, and input- and output-stage analysis, completely integrated into the package. I'll take all of that you can give me.
