By Any Measure


Apr 1, 2004 12:00 PM,
By Bob McCarthy

High-resolution measurement systems such as SIM or SMAART display frequency and phase response.

Everyone has pondered that inimitable question: which came first, the chicken or the egg? The same question could be posed about sound-system design evolution: which came first, prediction or measurement? How would you be able to accurately predict a sound system’s response if one had never been measured? How could you measure a sound system’s response if it did not already exist? If it already exists, then its response must have been predicted, or why else would it be there? Someone must have thought it was going to work!

The only thing that really matters in the chicken and egg dilemma is that the eggs and the chickens keep on coming, because that scenario sure beats extinction. With prediction and measurement, the hope is that both carry onward with their respective evolutions. Better measurement leads to more accurate characterization of a system’s response — the good, the bad, and the ugly. Better characterization then leads to more accurate prediction, which leads to better designs. Inferior design practices are rooted out as their flaws are objectively exposed, while superior ones are further refined as their strengths are revealed. This upward evolution leads to a natural selection of sound-system design practices — in essence, a survival of the fittest.

As the industry has evolved, so have the sciences of acoustic measurement and prediction. A step forward in one field pushes the other field onward toward the desired goal of predictable sound-system performance. If you draw the lines of evolution into the future, they would appear to come together, finally reaching a point where the predictions are so accurate that they are indistinguishable from the measured response. When the crude methods of prediction and measurement employed in the past are viewed through the eyes of the present, one can see a huge gap between the predicted response and that which would be measured. But viewed through the perspective of that time period, a relative parity existed — that is, the state of acoustic measurement was limited in its ability to detect the variance from predicted to actual response. This state of parity has always existed and is destined to continue into the future. In many ways, the industry is no closer to perfection now than in the past because sound systems are judged by a progressively higher standard of performance. Before you laugh too hard at the practices of Aristotle’s time, bear in mind that the technology of today will seem quaint to the generations to follow, and many practices you hold dear to your design philosophy will be extinct.

SO MANY QUESTIONS

So where did this all come from, where is it now, and where can it go from here?

The current age holds that sound is transmitted as a wave of pressure variations through the medium of air at a speed of approximately 1,130 fps (time). Larger variations in pressure correspond to louder sound (energy), and middle C is 256 cycles per second (frequency). At a given moment in time, the pressure may be somewhere between the high-pressure and low-pressure parts of the cycle (phase). Multiple frequencies will combine in the air, creating a complex mix of sound, the nature of which depends upon the ratios of energy and phase of the combined sounds. Sound will reflect off of and diffract around surfaces and recombine with the direct radiating signal, creating a new complex sound.
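To make that combination concrete, here is a minimal sketch (in Python with NumPy, not part of the original article) of two equal tones summing or canceling depending entirely on their relative phase:

import numpy as np

fs = 48000                     # sample rate, samples per second
t = np.arange(fs) / fs         # one second of time
f = 256                        # middle C as cited above, cycles per second

a = np.sin(2 * np.pi * f * t)
b_inphase = np.sin(2 * np.pi * f * t)          # 0 degrees: pressures add
b_opposed = np.sin(2 * np.pi * f * t + np.pi)  # 180 degrees: pressures cancel

print(np.max(np.abs(a + b_inphase)))   # ~2.0, a 6 dB rise
print(np.max(np.abs(a + b_opposed)))   # ~0.0, complete cancellation

The same arithmetic governs a reflection recombining with the direct signal; only the relative energy and phase matter.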

Finally, the common belief is that modeling programs and acoustical analyzers will sort this all out so that systems integrators can get down to the business of designing and tuning the sound system. However, it was not always so easy — in fact, early acousticians walked six miles to school, uphill both ways.

SOUND TRANSMISSION

The principal focus of investigation in classical Greece was to attempt to understand what types of musical sounds were pleasing to the human ear and to create instruments that could offer repeatable results. The principle of the octave was explored by Pythagoras in 550 BC. Two hundred years later, Aristotle theorized that air was pressurized and that sound was carried with the air.

The ancient Greeks believed that sound, light, and the stars were all linked. The seven tones of the musical octave, the seven colors of the visible spectrum, the seven heavenly bodies, your astrological sign, and your body organs were all part of a grand plan that created the “harmony of the spheres.” It was believed that the planets all hummed a song, albeit too large to fit into our small ears. If the Pythagorean acoustic modeler were still in use, you would be able to do sound predictions by inputting the position of the planets and your date of birth, and the color of the sound would pop out. These ideas held great sway, and few were strong enough to challenge them. In 1660 Boyle used measurement to prove that sound does not transmit in a vacuum, thus proving the predictions wrong. This took the air out of the planetary choir, and the emerging scientists were then able to measure sound without deference to the predictions of Aristotle.

That said, the current knowledge is that we ended up with approximately 1,130 fps for the speed of sound in air. But how did we get there?

PETER PIPER …

Seven syllables take one second. Stand 519 feet away from a wall and say, “Peter Piper picked a peck” loudly, and the echo returns just in time to follow your words. That is the way it was done in 1600, and the result was 90 percent accurate. Before you laugh too hard, think about this: 400 years later, people would attempt to set speaker delay times by saying, “Check one, two,” with less accuracy than that.
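The arithmetic behind that echo experiment is worth a moment. Here is a back-of-the-envelope version, using only the figures in the text:

distance_ft = 519                    # talker to wall
round_trip_ft = 2 * distance_ft      # 1,038 ft out and back
elapsed_s = 7 / 7                    # seven syllables at seven per second

estimated_fps = round_trip_ft / elapsed_s   # 1,038 fps
actual_fps = 1130                           # the modern figure
print(estimated_fps / actual_fps)           # ~0.92, i.e., roughly 90 percent

Not bad for a wall and a tongue twister.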

The next level was long-distance cannon fire, in which the difference between the light flash and the sound arrival gave a measurement of near-perfect accuracy. The two discoveries, however, do not apply to filmmakers, who are consistently able to produce sound at the speed of light in a vacuum (see Star Wars).

The measurement of the speed of sound took a step back in 1709, when Newton developed the first predictive formula for sound transmission in error (I mean, in air). He calculated the theoretical speed of sound to be 979 fps. Troubled by the measured data, Newton fudged the numbers by inserting the thickness of “solid particles of air” and of seasonal “vapours floating in the air” into his equation until it matched the measured speed. The Newtonian Acoustic Prediction Program would have had input parameters such as season and the local air pollution index. That explains why Los Angeles has the lowest speed of sound in the United States.

Measurement and prediction came together in 1816 when Laplace corrected Newton’s theory. The theoretical value of sound speed matched the measured one once temperature and the transmission medium were properly accounted for. It was Laplace who proved that sound is transmitted by compression and rarefaction through the adiabatic rather than isothermal elasticity of the medium.
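Laplace’s fix can be reproduced in a few lines. The sketch below assumes standard atmospheric conditions (Newton’s own published 979 fps figure rested on his era’s estimates of air pressure and density) and shows how the adiabatic factor closes most of the gap:

from math import sqrt

M_TO_FT = 3.28084
P = 101_325      # atmospheric pressure, Pa (assumed standard conditions)
rho = 1.293      # air density at 0 degrees C, kg/m^3
gamma = 1.4      # ratio of specific heats for air: Laplace's missing piece

c_isothermal = sqrt(P / rho)          # Newton's approach: ~918 fps
c_adiabatic = sqrt(gamma * P / rho)   # Laplace's correction: ~1,087 fps at 0 degrees C

print(c_isothermal * M_TO_FT, c_adiabatic * M_TO_FT)
# Warmer air is faster; near room temperature the adiabatic value
# reaches approximately the 1,130 fps quoted earlier.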

FINDING FREQUENCY

Pitch is the most prominent characteristic of music and therefore provoked the most interest from the start. The relationship of one pitch to another and the resulting sonic experience of harmony and melody has remained of primary interest to music listeners from ancient times until the current age.

In the 16th century, Galileo categorized the principles of harmony and dissonance by using multiple pendulums and observing the ratio of the lengths of the strings that would create consonant motions. He established the math behind the octave, the fifth, the third, and so on — the foundation of harmonic analysis. He showed that the pitch of a given string was the product of its tension, diameter, and length. The first major step toward defining pitch as an exact number of vibrations per second — its frequency — was taken by Mersenne in the 1600s, who stretched a brass wire 138 feet and counted its vibrations by eye. He then stretched smaller wires until they matched the tuning of an organ pipe, scaled up the numbers from the long wire, and correctly calculated the pipe’s frequency.
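Mersenne’s scaling trick maps directly onto what is now called Mersenne’s law. A small sketch (the wire values here are hypothetical, chosen only to show the scaling):

from math import sqrt

def string_frequency(length_m, tension_n, linear_density_kg_m):
    # Mersenne's law: fundamental frequency of an ideal stretched string
    return sqrt(tension_n / linear_density_kg_m) / (2 * length_m)

print(string_frequency(1.0, 100.0, 0.01))   # 50 Hz
print(string_frequency(0.5, 100.0, 0.01))   # half the length: double the pitch
print(string_frequency(1.0, 400.0, 0.01))   # four times the tension: double the pitch

Count the slow vibrations of a long, heavy wire by eye, scale the length, and the frequency of the short wire follows.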

In 1822 Jean-Baptiste Fourier published the proof of the mathematical principle now known as the Fourier theorem. This key piece of mathematics proved that any sound was a combination of various singular frequency components in a mix of relative levels and phases. The Fourier theorem placed prediction far ahead of measurement. The technology for capturing a waveform and distilling it into sine waves of particular frequencies and phases would become the quest of the acoustic measurement pioneers.
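In modern terms, the pioneers were chasing what one NumPy call now does. A minimal sketch, assuming a one-second capture so that FFT bin k lands at k Hz:

import numpy as np

fs = 1024
t = np.arange(fs) / fs
# A complex waveform: 60 Hz at full level plus 180 Hz at half level, shifted 90 degrees
x = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 180 * t + np.pi / 2)

X = np.fft.rfft(x) / (fs / 2)   # scale so a unit sine reads as magnitude 1.0
for k in (60, 180):
    # recovers each component's level and (cosine-referenced) phase
    print(k, round(abs(X[k]), 3), round(np.angle(X[k]), 3))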

In the early 19th century, Sir Charles Wheatstone invented the first device to carry the name “microphone” — an early version of the stethoscope built around a diaphragm receiver. Diaphragmatic action is the key to both waveform capture and eventual playback, as the diaphragm is the mechanism that converts sound into a viewable waveform. While Wheatstone’s many inventions helped advance the future of sound, he is also responsible for a major setback: he invented the concertina, which eventually led to the accordion.

By 1830 there were several ways to verify pitch precisely, including sirens and wheels with evenly spaced teeth that strike a flat plate. (Remember baseball cards in your bicycle spokes?)

This led to an extensive dispute over standardizing orchestral pitch, which until then varied all over the map. In 1834 the tone of A=440 Hz was established as the standard, known as the Stuttgart pitch, and was used in most of Europe. The French disagreed and established by law in 1859 that A=435 Hz. The United States went along with the French in theory, but the Boston Symphony drifted up to A=442 Hz by 1916. When that was discovered, the United States moved its standard to A=440 Hz. No doubt the spin doctors of the day explained that it was all as planned.

The task of providing the means of maintaining these standards fell on Rudolf Koenig of Paris, who was the finest craftsman in acoustical history. He created a precise tuning fork for the French government. He also invented a series of devices to accurately calibrate his forks, including the clock-fork, which used microscope optics mounted on a tuning fork that drove a precision clock. The device was accurate to .0001 Hz, which for those days was not too shabby. Koenig was key in the invention and improvement of the original spectrum analysis tools: the wave siren, the phonautograph, and the manometric flame device. All of those were capable of determining frequency and phase, which opened the door for complex analysis.

An alternate path was that of harmonic analysis, in which pulleys of various sizes represented individual sine waves and the resulting combined waveform was traced on paper. Whereas the phonautograph and manometric flame captured the time record waveform, which could then be distilled into frequencies, the harmonic analyzer mixed the sine waves together and generated the time record. These two paths were like two trains running in opposite directions along the track of the Fourier theorem.
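The harmonic analyzer’s direction of travel is easy to mimic: specify the partials, sum them, and the time record appears. A brief sketch (the partial levels and phases are arbitrary):

import numpy as np

t = np.linspace(0, 1, 1000)
partials = [(1, 1.0, 0.0), (2, 0.5, np.pi / 4), (3, 0.33, np.pi / 2)]  # (harmonic, level, phase)
trace = sum(level * np.sin(2 * np.pi * h * t + ph) for h, level, ph in partials)
# 'trace' is the combined waveform the pulley machine would have drawn on paper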

The electromagnetic microphone (Hughes 1878) allowed for the conversion of acoustic energy into electrical energy. An acoustic waveform could now be captured and projected onto paper by the oscillograph, which, with the advent of the cathode-ray tube, would evolve into the oscilloscope.

The oscilloscope’s electronic waveform capture marked the dawn of the modern era. With the advent of electronic signals came the frequency analyzer, which was composed of filters capable of splitting the waveform’s frequency content into individual bands for analysis. Through the ’50s and ’60s, the realm of acoustic measurement was mostly restricted to manufacturers’ research laboratories, universities, and the military. In the 1970s, the ⅓-octave resolution real-time analyzer (RTA) became affordable to sound engineers in the fledgling professional audio industry as their first piece of roadworthy laboratory equipment.

The RTA was (and is) limited in that it computes only the sound level in its frequency bands, with no regard to phase and no regard to the source of the sound. Because sound does not behave independently of phase, RTA measurements will not correspond to an accurate prediction of the response. An RTA gives almost no hint as to whether a problem would be best solved by acoustical modification, speaker delay, speaker repositioning, level setting, or equalization. Scientifically, the RTA was a setback to a more primitive form of analysis, because it neglected the phase aspect of sound.

The most enduring aspect of this is that the RTA’s inaccuracies gave rise to a skepticism about acoustic measurement in the professional audio community, which manifested itself in the “Analyzers? We don’t need no stinking analyzers!” attitude that pervaded the industry until recent times. Logistically, however, the RTA was the best available tool at the time — far more practical than taking a manometric flame device on the road.
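What the RTA computes, and what it discards, can be shown in a few lines. This is a simplified sketch, not any particular product’s algorithm; the single np.abs() call is where the phase information vanishes:

import numpy as np

def rta_third_octave(x, fs):
    # Magnitude-only 1/3-octave band levels, as an RTA reports them
    X = np.abs(np.fft.rfft(x)) ** 2                    # power spectrum; phase is gone
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    centers = 1000 * 2.0 ** (np.arange(-17, 14) / 3)   # 31 bands, ~20 Hz to ~20 kHz
    levels = []
    for fc in centers:
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)  # band edges, 1/3 octave wide
        band = X[(freqs >= lo) & (freqs < hi)]
        levels.append(10 * np.log10(band.sum() + 1e-12))
    return centers, levels

Two signals with identical band levels but wildly different phase relationships produce the same display, which is exactly why an RTA cannot distinguish an equalization problem from a delay problem.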

The 1970s saw the first analyzers capable of putting the Fourier theorem to practical use: the fast Fourier transform (FFT) analyzer and the time-delay-spectrometry (TDS) system. These systems could analyze both the amplitude and phase components as well as see the time arrival of direct and reflected sound. They could do virtually every operation the ancients had sought: defining frequency, capturing a complex waveform and distilling it into amplitude and phase components, and precisely measuring the speed of sound.

In the 1980s, the use of complex audio analyzers moved out of the laboratory and into the field. Various systems such as MLSSA, SIM, TEF, and others have different ways of gathering the data, but all share the aspect of obtaining both the frequency- and time-domain signatures. System alignment was refined into a specialty of its own, as the complexity and scientific basis of the task became clear to mix artisans not equipped with such backgrounds. In spite of these advances, the most common tool was still the RTA, which kept its hold because of size, price, and simplicity. The ’90s brought SMAART, which put the FFT analyzer within the reach of every audio engineer. The new standard was a scientifically defensible, 24th-octave analyzer capable of viewing amplitude, phase, time delay, the acoustical properties of the hall, and more. The skill of the practitioners has improved dramatically, and the industry is clearly headed in the right direction. With complex analyzers, any user has the ability to learn from the analysis, even from his or her mistakes, in a way that RTAs could never provide.
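The essence of the dual-channel FFT measurement is a comparison between what went into the system and what came out, with phase preserved. Below is a simplified sketch in that spirit, not the actual SIM or SMAART algorithm; real analyzers add windowing, frame averaging, and coherence weighting:

import numpy as np

def transfer_function(reference, measured, fs):
    # Complex transfer function between the two channels
    R = np.fft.rfft(reference)
    M = np.fft.rfft(measured)
    H = M / (R + 1e-20)
    freqs = np.fft.rfftfreq(len(reference), 1 / fs)
    magnitude_db = 20 * np.log10(np.abs(H) + 1e-20)
    phase_deg = np.degrees(np.angle(H))
    return freqs, magnitude_db, phase_deg

# A pure delay shows up as flat magnitude and a steadily falling phase trace:
fs = 48000
x = np.random.randn(fs)
y = np.roll(x, 48)            # a 1 ms "propagation" delay (circular, for simplicity)
f, mag, ph = transfer_function(x, y, fs)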

ACOUSTIC PREDICTION

There was awareness in Roman times that echoes decreased intelligibility, and that was documented as early as 50 BC. There is also evidence that such acoustical considerations were completely ignored by the architects of medieval churches. Compare that to the modern era, in which the acoustical considerations are completely ignored by the architects of modern churches.

In 1673 Athanasius Kircher advanced architectural acoustics by defining the directionality of echoes and showing how the geometric shape of buildings would affect sound transmission. The ray-tracing method of predicting echo paths originated at this time and persists to this day as the bedrock of acoustic prediction.

Hermann Helmholtz advanced the understanding of resonance. The Helmholtz resonator is a tuned cavity that can be used to absorb sound in a room. This resonant theory gave rise to some fantastic predictions for room acoustic design, most notably the notion that wires strung all over the ceiling of a room and tuned to various frequencies would absorb the sound. When measured, the Helmholtz resonator theory holds air, but the wire absorbers became unstrung.
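The resonator itself is well behaved and easy to predict. A sketch of the textbook formula (the bottle dimensions below are hypothetical, and neck-end corrections are ignored):

from math import pi, sqrt

def helmholtz_hz(neck_area_m2, neck_length_m, cavity_vol_m3, c=343.0):
    # Resonant frequency of a Helmholtz resonator, textbook form
    return (c / (2 * pi)) * sqrt(neck_area_m2 / (cavity_vol_m3 * neck_length_m))

# A one-liter cavity with a 2 cm diameter, 5 cm long neck:
neck_area = pi * 0.01 ** 2
print(helmholtz_hz(neck_area, 0.05, 0.001))   # ~137 Hz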

Professor Wallace Sabine stands out at the forefront of architectural acoustics. His groundbreaking work in the early 1900s brought the science into the modern era. He provided formulas for the calculation of reverberation time that are still in use. Sabine proved that absorption of sound could be accomplished with various soft goods and that this would decrease the loss of intelligibility due to echoes. Sabine created scale models of concert halls and photographed the sound waves being propagated from a spark source. These were the first pictures of compression waves emanating from a point source (see Fig. 1).
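Sabine’s reverberation formula is simple enough to state here. In the imperial units used elsewhere in this article, RT60 = 0.049 x V / A, with V in cubic feet and A in sabins of absorption; the hall figures below are hypothetical:

def sabine_rt60(volume_ft3, absorption_sabins):
    # Sabine's formula, imperial form: seconds for a 60 dB decay
    return 0.049 * volume_ft3 / absorption_sabins

print(sabine_rt60(500_000, 10_000))   # 2.45 seconds

Add soft goods, raise A, and the reverberation time falls, just as Sabine measured.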

The primary tools for prediction of speaker performance were speaker polar plots, protractors, and ray tracing. In the 1980s computer prediction programs led by Bose and Renkus-Heinz came to the market and automated the calculations. At first these programs showed slices of the horizontal and vertical planes, but they evolved into 3-D models of the speaker response. The programs represented a huge step forward in terms of bringing visual representation of the sound field into the hands of all users and consumers. However, the resolution of the polar and frequency data, combined with a lack of phase response data, made for designs whose predicted response would often not accurately correlate when measured in high resolution. The key to successful design with these early prediction programs was to configure the system first using experience, common sense, and techniques known to work in the field.

In recent times, high-resolution acoustic prediction in the 1-degree, 24th-octave range with phase data has become available on a limited basis. This has proven to be a level of prediction capability that matches the complex measured response, including both positive and negative interaction of speakers. Although this level of resolution may seem excessive at first glance, it is absolutely required to keep up with the current technology in speaker design and with the ear’s perception. Note that the dominant speaker product on the market today is the line array, a system whose elements typically cover 5 to 10 degrees of vertical coverage. These systems have extremely delicate interactions in which a change of a single degree between elements has considerable impact on performance. The steering of such systems is, as always, phase dependent. For these systems, 10-degree angular resolution and an absence of phase data are not sufficient.
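A quick calculation shows why single-degree resolution matters. Using the article’s 1,130 fps figure (the element spacing below is hypothetical), a one-degree change in geometry between two elements produces a path-length difference that amounts to a large phase shift at high frequencies:

from math import sin, radians

C_FPS = 1130.0   # speed of sound in air, ft/s, as above

def relative_phase_deg(spacing_ft, angle_deg, freq_hz):
    # Path difference between two sources seen off axis, expressed as phase
    path_diff_ft = spacing_ft * sin(radians(angle_deg))
    return 360.0 * freq_hz * path_diff_ft / C_FPS

print(relative_phase_deg(1.0, 1.0, 8000))   # ~44 degrees at 8 kHz for a 1 ft spacing

A prediction program that rounds to the nearest 10 degrees of angle, or ignores phase altogether, cannot see interactions of this scale.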

THE FUTURE

So where does the industry go from here? The frequency and angular resolution described previously should be sufficient to move forward. More speed is always welcome. The next logical step would be to incorporate the high-resolution predictions into 3-D mappings instead of the current 2-D horizontal and vertical slices. An accurate prediction of acoustic properties of the hall would be the next major advance to the high-resolution effort.

On the alignment side, there is a need for speed and more ease of operation. The user needs to sample as much of the hall as possible in order to find solutions that benefit the most audience members. Measurement-quality wireless mics would expedite movement through the hall and save precious time. An improved interface between the analyzer and multichannel signal processing would help greatly in the process of alignment, which at present can be a nightmare of spaghetti wiring and terminal blocks. Signal processing has long been manufactured without the slightest recognition that there are people out there using analyzers to set parameters precisely rather than simply believing the user-interface readouts. Only within the last year has there been any evidence that this trend may reverse.

Where it all comes together would be the future mode in which systems integrators could view the measured response for a given point in the hall and, at the same time, view the predicted response for that same spot on the same screen. As these traces move closer and closer together, integrators will know that the art of acoustic measurement is headed in the right direction.

Bob McCarthy is an independent consultant specializing in sound-system design and alignment. He has spent more than 20 years aligning sound systems using high-resolution acoustic measurement tools. He lives in St. Louis and can be reached at [email protected].
