Virtual Soundscapes

Jonathan Deans is sitting in the audience with his eyes closed, listening to the actors onstage. As they move around, he can tell exactly where each one is. “I was able to pinpoint the sound to a performer within a few inches of where they were standing,” says Deans.

That ability doesn’t come just from spending the past 30 years as a sound designer for shows such as Siegfried and Roy and Cirque du Soleil. Instead, it’s an ability that any audience member can acquire, thanks to a combination of technologies from pro AV and . . . warehouses.

The pro AV tools include LCS, a suite of products from Berkeley, Calif.-based Meyer Sound Laboratories, which recently bought the LCS line that it had been reselling. LCS’ Matrix3, which includes features such as SpaceMap, creates the perception of a sound source in places where there might not be an actual loudspeaker. For the audience, that means sounds don’t appear to come from, say, loudspeakers that are visible in a nearby wall.

“The effect is most pronounced when you hear a sound position move over time,” says John McMahon, executive director of LCS Series at Meyer Sound. “I think your ears are more sensitive to changes than to steady-state level.”

Mapping Sound

SpaceMap uses patented algorithms to perform multi-channel panning between loudspeakers in two and three dimensions. The user interface divides the panning space into a series of triangles, which the user then can manipulate on-screen to create a specific acoustic environment.

RFID tracking and Meyer Sound’s LCS system create a realistic soundscape that moves with onstage performers in the Broadway production of Boublil and Schönberg’s musical, “The Pirate Queen.”

“Each triangle vertex (node) is mapped to one or more or no loudspeakers,” says McMahon. “The level of the signal for a given node is determined by its barycentric weight.” Barycentric coordinates are a geometric concept in which a shape is divided into triangles, and a point inside a triangle is expressed as a weighted blend of its three vertices. When a single node is mapped to more than one loudspeaker, the result is what Meyer calls a virtual node. There’s also a derived node, whose output is derived from the signals feeding a designated set of other loudspeakers.

“The original intent of this was to be able to set up regional subwoofers,” says McMahon. “For instance, a front subwoofer might derive from a set of front loudspeakers, and a rear subwoofer from a set of rear loudspeakers. Derived nodes are sometimes used in theaters, so that balcony fills or surrounds are derived from outputs on the floor level.”
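To make the node-weighting idea concrete, here is a minimal sketch of triangle-based panning using barycentric weights. It is illustrative only, not Meyer Sound’s SpaceMap implementation; the three-loudspeaker layout, the function names, and the pan position are assumptions.

```python
# Illustrative sketch of triangle-based panning with barycentric weights.
# Not Meyer Sound's SpaceMap code; the layout and names are assumptions.

def barycentric_weights(p, a, b, c):
    """Return the barycentric weights of point p in triangle (a, b, c).

    Each point is an (x, y) tuple. The three weights sum to 1.0; each one
    describes how strongly the pan position pulls on that vertex (node).
    """
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w_a = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    w_b = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    return w_a, w_b, 1.0 - w_a - w_b

# Hypothetical mapping: each vertex of one triangle feeds one loudspeaker.
speakers = ["left", "right", "rear"]
triangle = [(-1.0, 0.0), (1.0, 0.0), (0.0, 2.0)]

pan_position = (0.2, 0.5)  # where the sound should appear to come from
gains = dict(zip(speakers, barycentric_weights(pan_position, *triangle)))
print(gains)  # the three gains sum to 1.0
```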

SpaceMap also supports divergence, or bleeding, which sends a signal to every loudspeaker in the map. That ensures that when the sound is panned to one side of the room, there’s enough going to the other side so that audience members there don’t have to strain to hear.
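In gain terms, divergence can be pictured as blending the panned levels with an even spread across every loudspeaker in the map. The sketch below assumes a single 0-to-1 divergence parameter, which is an illustrative simplification rather than the actual control.

```python
# Sketch of a divergence (bleed) control: 0.0 keeps the pan as-is, 1.0 sends
# the same level everywhere. The single-parameter model is an assumption.

def apply_divergence(gains, divergence):
    """Blend panned gains with an even distribution across all loudspeakers."""
    even = 1.0 / len(gains)
    return {spk: (1.0 - divergence) * g + divergence * even
            for spk, g in gains.items()}

panned = {"left": 0.9, "right": 0.1, "rear": 0.0}
print(apply_divergence(panned, 0.3))
# Some signal now reaches "rear" even though the pan sits hard left.
```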

SpaceMap was born out of an attempt to create a panning system for a 16-channel sound system installed in a geodesic dome. As a result, it supports three dimensions, including Z, which is roughly equivalent to height.

“You can pan between levels of loudspeakers for a Z axis by panning/interpolating between two SpaceMaps,” says McMahon. “Both divergence and SpaceMap pan can be used to achieve 3D panning movements.”
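One way to picture that Z-axis move is as a crossfade between two gain layers, one for a floor-level map and one for an overhead map. The layer names and the linear crossfade law below are assumptions, not the Matrix3 algorithm.

```python
# Sketch of a Z-axis pan as a crossfade between two loudspeaker layers
# (a floor-level map and an overhead map). Names and the linear law are
# illustrative assumptions.

def pan_z(floor_gains, upper_gains, z):
    """Crossfade between two layers; z runs from 0.0 (floor) to 1.0 (top)."""
    speakers = set(floor_gains) | set(upper_gains)
    return {spk: (1.0 - z) * floor_gains.get(spk, 0.0)
                 + z * upper_gains.get(spk, 0.0)
            for spk in speakers}

floor = {"floor_left": 0.6, "floor_right": 0.4}
upper = {"upper_left": 0.6, "upper_right": 0.4}
print(pan_z(floor, upper, 0.25))  # mostly the floor layer, a little from above
```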

AV pros who have used Matrix3 say it’s a powerful tool that gives them new flexibility. “It’s an effort to trick the mind and the ear into different types of imaging,” says Geoff Shearing, co-principal of Masque Sound, an East Rutherford, N.J., company that specializes in events. “We’re trying to amplify the human voice but do it in as natural a way as possible, so the voice appears to come from the actor onstage. That’s been the Holy Grail for us from the beginning.”

“We’ve created such sophisticated, virtual, digital audio tools that we can now create environments electronically that we couldn’t before,” says Chris Conte, business development manager at Electrosonic, a Minneapolis integrator. His company is installing Meyer Sound’s Constellation in a large Florida church. Constellation is an electroacoustic architecture system that gives venues the flexibility to alter their acoustics instantly and accommodate a variety of events and source material while remaining virtually invisible to the eye. “With Matrix3, you can create larger sweet spots. You can pan sounds left to right, virtually,” says Conte.

Besides LCS, Deans and his colleague, Brian Hsieh, also are using radio frequency identification (RFID) technology, where integrated circuits the size of luggage tags communicate wirelessly with antennas and receivers. Retailers such as Wal-Mart use RFID to track products from the warehouse to store shelves in order to update their inventory and thwart theft.

Deans discovered RFID in 2006 and has been using it in Chicago and New York productions of the musical, “The Pirate Queen.” It tracks the location of one or more performers who wear an RFID tag onstage. That information is fed into the Matrix3, which then pans that performer’s audio accordingly.

Meyer Sound’s SpaceMap software provides AV pros with a graphical interface for manipulating sound so that, for example, a soloist’s voice pans across the theater as she walks across the stage. SpaceMap controls multichannel panning by defining each sound’s trajectory independently from the physical loudspeaker itself.

“It’s intentionally a very subtle type of movement,” says Hsieh, assistant sound designer for The Pirate Queen. “It’s designed so that audience members can watch the show and subconsciously look on stage where a voice is coming from. It’s believable and realistic, as opposed to the person being on stage and (the audience) hearing his voice from loudspeakers on the sides of the stage or above it.”

Location, Location, Location

Deans, Hsieh, and their colleagues assembled the RFID portion of their system using about $30,000 in off-the-shelf components. The RFID hardware comes from Multispectral Solutions, a Germantown, Md., company that sells primarily into industrial markets such as warehouses.

Multispectral also adds in ultrawideband (UWB), which uses brief pulses of energy to transmit data at speeds of 100 Mb/s or more. UWB also has inherent radarlike capabilities that let it be used for locating people and objects. That’s why researchers at Graz University of Technology in Austria and elsewhere are developing pro AV applications for UWB, such as virtual reality and videoconferencing systems that use UWB to track participants’ locations.

Combining UWB and RFID helps improve accuracy. Multispectral says that its combination RFID-UWB Sapphire DART product, which The Pirate Queen uses, can identify a tag’s location down to about four inches.

DART supports multiple receiving antennas. Hsieh and Deans’ setup uses six: two attached to the balcony rail midhouse, two in the first portal, and two on pipes, each mounted 16 feet upstage and 10 feet off to either side, pointing back downstage.

Each antenna is paired with a receiver that feeds the signal via Cat-5 cable to the DART hub, which turns it into location information that’s sent in generic ASCII format over Ethernet to the LCS platform. Any computer connected to the LCS network then can access that information via an application that runs in a Web browser interface.

“It allows you to set up the basic configuration and tell the system where the reference tags are, how many tags I have, or what the data rate will be,” says Hsieh.
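As a rough picture of what a listener for that ASCII location stream might look like, here is a short sketch. The port number and the comma-separated "tag_id,x,y,z" record layout are hypothetical stand-ins, since the DART hub’s actual message format isn’t documented here.

```python
# Sketch of a listener for an ASCII location stream arriving over Ethernet.
# The port and the "tag_id,x,y,z" record layout are hypothetical stand-ins.
import socket

HOST, PORT = "0.0.0.0", 5117  # assumed values

def parse_record(line):
    """Parse one ASCII record into (tag_id, x, y, z)."""
    tag_id, x, y, z = line.strip().split(",")
    return tag_id, float(x), float(y), float(z)

def listen():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, conn.makefile("r") as stream:
            for line in stream:
                tag_id, x, y, z = parse_record(line)
                print(f"tag {tag_id}: x={x:.2f} y={y:.2f} z={z:.2f}")

if __name__ == "__main__":
    listen()
```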

Once configured, the DART system sends each tag’s coordinates to the Open Sound Control engine in LCS. The sound-mapping application looks at the stream of coordinates coming in and, based on the configuration, makes changes to the panner on a particular audio input channel.
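A hedged sketch of that forwarding step, using the third-party python-osc package: the host address and the "/pan/<channel>/xyz" address pattern are illustrative assumptions, not the real Matrix3 OSC scheme.

```python
# Sketch of forwarding tag coordinates as Open Sound Control (OSC) messages.
# Requires the python-osc package; the host, port, and address pattern are
# assumptions, not the actual LCS/Matrix3 OSC scheme.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.50", 9000)  # hypothetical LCS host and port

def send_position(channel, x, y, z):
    """Send one performer's coordinates to the panner on a given input channel."""
    client.send_message(f"/pan/{channel}/xyz", [x, y, z])

# Example: the tag feeding input channel 12 is center stage, slightly elevated.
send_position(12, 0.0, 2.5, 1.2)
```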

“I can assign it to delay and pan,” says Deans. “As you see the performer walk around the stage, delay, EQ, and pan all move relative to where they are. We can track up to 60 performers. It’s quite amazing.”
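The delay portion of that is basically distance-over-speed-of-sound arithmetic. The sketch below illustrates it with made-up positions; it is not the LCS processing chain.

```python
# Sketch of deriving a per-loudspeaker delay from a tracked position so the
# reinforced sound stays roughly time-aligned with the performer. Positions
# are made up; this is not the LCS processing chain.
import math

SPEED_OF_SOUND_FT_PER_S = 1125.0  # approximate, at room temperature

def delay_ms(source_xyz, speaker_xyz):
    """Delay in milliseconds for the path from source position to loudspeaker."""
    distance_ft = math.dist(source_xyz, speaker_xyz)
    return 1000.0 * distance_ft / SPEED_OF_SOUND_FT_PER_S

performer = (0.0, 5.0, 5.5)               # feet: downstage center, head height
house_left_speaker = (-20.0, -10.0, 12.0)  # feet: a hypothetical house-left box
print(f"{delay_ms(performer, house_left_speaker):.1f} ms")
```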

Besides the RFID tags affixed to performers, the DART system also uses reference RFID tags, which are positioned around the stage, such as those embedded in scenery. The reference tags serve as landmarks that help the system pinpoint the location of the tags worn by performers. Multispectral says that DART can read a tag more than 650 feet away when there are no physical obstructions between tag and antenna, and more than 160 feet when there are multiple obstructions.
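How a location engine can turn known reference points and measured distances into a position is easiest to see in two dimensions. The sketch below is generic trilateration, not Multispectral’s solver, and the stage coordinates are invented.

```python
# Generic 2D trilateration sketch: locate a point from measured distances to
# three known reference positions. Not Multispectral's solver; values invented.

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Return (x, y) given three reference points and a distance to each."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise gives a linear 2x2 system.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# References at three stage positions (feet) and distances to an unknown tag.
print(trilaterate((0, 0), 5.0, (10, 0), 45 ** 0.5, (5, 10), 50 ** 0.5))
# -> approximately (4.0, 3.0)
```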

The DART system provides what’s known as X, Y, and Z coordinates, where X and Y, essentially the equivalent of latitude and longitude, are the performer’s position on stage. The Z coordinate is her height, such as Juliet up in a window, being serenaded by Romeo on the ground. That information enables three-dimensional acoustical modeling, making it possible to pan the sound not just from side to side, but up and down, too. For example, if the set includes a staircase, the singer’s voice could be panned down as she descends.

That’s a lot of information, and preparation is the key to making sure it’s consistent and accurate. For example, Multispectral’s system needs to know various distances around the stage to help calculate the location of each tag. The catch is that the system is designed for warehouses and other industrial environments. Those tend to be square and rectangular, making it relatively straightforward to map the interior in order to identify tag locations.

“In theaters, however, there is no square space at all,” says Hsieh. “Working in a three-dimensional environment, where every measurement has to be on a 90-degree angle to the next plane, is quite a challenge. We spent hours racking our brains, trying to come up with the best methodology to measure the distances.”

Hsieh and his colleagues wound up having to use a variety of off-the-shelf tools from the construction world, including distance-finding lasers, plumb bobs, and old-fashioned measuring tape. “We used pretty much every type of measuring device available for consumers,” says Hsieh. “There were a couple of trips to Home Depot.”

Beyond Sound

Another challenge is that in many performances, scenery moves, so RFID signals ping-pong around differently than they did just an act or scene earlier. That creates another set of variables that affects the system’s ability to track performers’ tags.

“A tag might be tracking very well on an open stage, but the moment a flat comes in, or a piece of scenery moves on or off, or a trap opens, all of the variables change,” says Hsieh. “Fortunately, with six antennas and careful setup of the [wireless] data stream in terms of what we want and don’t want to see, we’re able to get it fairly reliable.”

The Pirate Queen installations use active RFID technology, in which the tag has a battery-powered transmitter that communicates with the receiver connected to the antenna. The other main type of RFID technology is passive, in which the tag doesn’t have a transmitter. Instead, the reader’s radio field powers the tag, which reflects its stored information back. Passive tags generally are used for tracking individual items in stores.

The Pirate Queen uses Multispectral’s RFID technology in ways for which it wasn’t designed, and in the process, some shortcomings have appeared. For example, the tags are designed to be small and unobtrusive. One way to achieve that is by using low-power transmitters in the tags and compensating for the weak signals by using highly sensitive receivers.

“The [transmit] power of the tag is minuscule compared to, for instance, a cell phone or a wireless mic,” says Hsieh. “Things like that are a limitation.”

However, Multispectral says that DART is immune to interference from wireless technologies such as Wi-Fi. That’s a plus, because it gives AV pros one less thing to worry about.

Nevertheless, a more powerful signal would be helpful in order to punch through scenery or multiple costume layers, or reach an antenna that’s just on the fringe of coverage. Those issues are some of the ways that a vendor could tweak the system to make it a better fit for pro AV applications, including ones outside of audio.

“If we were to get it working with a certain degree of reliability, I could see lighting guys and automation guys wanting to use it,” says Hsieh. “It’s always been every sound designer’s dream to have a system like this. It’s just a question of cost and reliability.”

Tim Kridel is a freelance writer and analyst who covers telecom and technology. He’s based in Columbia, Mo., and can be reached at [email protected].
