Computer Automation in sound-reinforcement consoles: Computer automation and live sound shouldn’t be mutually exclusive. New technology can minimize uncertainty and save time and costs.
Aug 1, 1997 12:00 PM
Nick Franks, Geoff Muizr and Dave Lewty
It is often said that in live sound, there is no rehearsal and no second chance; each performance is considered to be a unique and unpredictable event. As a consequence, the myth of live sound mixing as potentially mortal combat with chaotic forces has been propagated for many years. It has been used by engineers to create their rock’n’roll, can-do image.
This article explains how computer automation of sound-reinforcement consoles can minimize the uncertainty and save time and costs by introducing repeatability and programmability into the equation, all without ruining any carefully nurtured reputations.
To begin with, let’s take a look at where we are now. Here is the typical scenario for the vast majority of live sound engineers in the computer age: As you stand behind the console, hands and ears poised, you cast a glance at the automated lighting console. The pre-programmed lights dim, the video screens burst into life, the intro tape (doubtless mixed on a studio console with comprehensive computer automation) thunders through the speakers and then, in true 1960s style, you proceed to mix the whole show by hand with only experience, memory and split-second reaction to guide you.
But does it really need to be that way?
For years, lighting engineers have been able to program the cues for a complete show into the board, secure in the knowledge that even if everything goes very, very wrong – for example, the automation dies – they can at least get various washes up and limp through the show quite successfully.
Studio engineers have had some level of computer assistance since the mid-1970s; this technology has now become very sophisticated. The mix can be rehearsed until perfect and reproduced when required. If the artist returns after three months demanding a remix, everything can be recreated more or less exactly as it was.
Why, therefore, should the live sound engineer not share in the benefits of these technological developments as a matter of course?
The revolution starts in the theater

During the 1980s, the development of theater productions using extensive amounts of technology began to change the traditional situation with regard to computer automation of consoles. The necessity was for an even reproduction of audio night after night, often following complex scene changes. Thus the emergence of sound design as a new and separate discipline; the show’s audio would be programmed during production rehearsals, but the equipment would be operated by an engineer required to follow accurately the cues created by the designer. Without a computer assistant linked to a suitable console, these cues would become increasingly difficult to handle.
The real possibilities for sound-reinforcement console automation opened up with the simultaneous emergence of powerful and rugged portable computers and the availability of flexible and friendly software for studio consoles. Both of these key components were affordable. The question was no longer, “Is it possible?” but rather, “What are we waiting for?”
Repeatability

The essence of the matter is repeatability and resettability. Such facilities are currently available in studio consoles at many different levels of inclusiveness. Simple consoles provide storage of fader and mute information; complex consoles allow virtually every control to be reset on demand. The distinguishing factor in most cases is currently cost – audio hardware cost. Software development costs are high initially, but when amortized over a sufficient number of sales can be reduced to manageable proportions.
Various methods of repeatability have been introduced in the studio console. These should be examined in outline before we consider how these technologies can be applied in the real-time world of sound reinforcement.
The basic studio automation system stores fader movements and mute switch presses made in real time. These are typically synchronized to the source material – for example, the tape being mixed – via time code, the console running as a slave. Level settings are normally generated from VCAs or servo-assisted (moving) faders. When the tape is replayed, the automated controls will be dynamically controlled by the computer, recreating the mix exactly.
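In outline, such a system amounts to a time-stamped event log replayed against incoming time code. The following sketch illustrates the idea only; all class and field names are invented for this article, and a real console does this in dedicated hardware and firmware.

```python
# Illustrative sketch of dynamic fader automation: each fader move is
# captured with a time stamp, then replayed in order against time code.
from dataclasses import dataclass


@dataclass
class FaderEvent:
    timecode: float   # seconds from the start of the source material
    channel: int      # console input channel
    level: float      # fader level, 0.0 (off) to 1.0 (unity gain)


class DynamicMix:
    def __init__(self):
        self.events = []

    def record(self, timecode, channel, level):
        """Capture a fader move made in real time."""
        self.events.append(FaderEvent(timecode, channel, level))

    def replay(self, timecode):
        """Return the level each automated channel should sit at now."""
        levels = {}
        for e in sorted(self.events, key=lambda e: e.timecode):
            if e.timecode <= timecode:
                levels[e.channel] = e.level   # the latest event wins
        return levels


mix = DynamicMix()
mix.record(0.0, 1, 0.8)    # vocal up at the top
mix.record(12.5, 2, 0.6)   # piano enters
mix.record(30.0, 1, 0.5)   # vocal pulled back
print(mix.replay(15.0))    # {1: 0.8, 2: 0.6}
```

When the “tape” runs again, polling replay() with the current time code reproduces every move exactly as it was made.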
In addition to dynamic automation, some systems provide snapshots of automated functions. Snapshots are freeze-frame images of fader and mute settings, which can be loaded either statically, at the operator’s command, or dynamically, against time code.
At a more advanced level, the ability to store data generated from additional module switches, such as EQ in/out or aux on/off, can be incorporated in the system. This can be extended to include every console control, and settings can be stored and reloaded either dynamically or statically. Thus, for example, a fully dynamically automated console would replay adjustments to auxiliary or equalizer controls as they were made originally. As an auxiliary send was adjusted during the mix, so it would be adjusted by the computer.
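A snapshot, in data terms, is nothing more than a deep copy of the console’s automated state. This sketch (with invented names and a deliberately tiny channel count) shows the capture-and-reset cycle:

```python
# Illustrative sketch of snapshot automation: freeze the settings of every
# automated control, then reset them all on demand.
import copy


class Console:
    def __init__(self, channels):
        # each channel holds its automated settings (names are made up)
        self.state = {ch: {"level": 0.0, "mute": True, "eq_in": False}
                      for ch in range(1, channels + 1)}

    def snapshot(self):
        """Capture a freeze-frame image of the current settings."""
        return copy.deepcopy(self.state)

    def load(self, snap):
        """Reset every automated control to the stored positions."""
        self.state = copy.deepcopy(snap)


desk = Console(4)
desk.state[1].update(level=0.75, mute=False)   # voice
desk.state[2].update(level=0.6, mute=False)    # piano
opening = desk.snapshot()

desk.state[3].update(level=0.8, mute=False)    # drums join later
desk.load(opening)                             # back to voice and piano
print(desk.state[3]["mute"])                   # True
```

Loading the stored dictionary statically corresponds to the operator firing the snapshot by hand; loading it when a stored time code value arrives gives the dynamic case.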
Finally, there is a half-way system generally called recall. In recall systems, the positions of all console controls are stored in the computer but can only be reloaded manually, typically using graphics displayed on the computer monitor. Although entirely static, recall is nevertheless inexpensive and guarantees a high degree of accuracy when resetting the console, and it may be sufficient in many applications.
Some manufacturers have extended the software beyond console automation operations to include dynamics and outboard effects control via MIDI. Thus the automation system can provide a range of dynamics controllers, which can be assigned to the channels; settings can be stored with the mix data, and dynamics parameters can be adjusted during a mix. Such virtual systems save huge amounts of rack space. Effects control software allows storage, editing and loading of effects devices from the console via MIDI, thus freeing the engineer from the need to turn away from the console and attempt to manipulate complex hierarchical menus.
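At the wire level, recalling a stored patch on an outboard effects device is typically a two-byte MIDI Program Change message; the byte layout below follows the MIDI 1.0 specification, while the device and patch numbers are purely illustrative.

```python
# Illustrative sketch of effects recall over MIDI: a Program Change is a
# status byte (0xC0 plus the channel number) followed by the patch number.
def program_change(midi_channel, patch):
    """Build the two-byte MIDI Program Change message.

    midi_channel is 0-15 (shown to the user as channels 1-16);
    patch is 0-127, per the MIDI 1.0 specification.
    """
    return bytes([0xC0 | (midi_channel & 0x0F), patch & 0x7F])


# Recall patch 23 on a reverb unit listening on MIDI channel 1:
msg = program_change(midi_channel=0, patch=23)
print(msg.hex())   # c017
```

The automation computer stores one such message per cue and sends it down the MIDI cable at the right moment, so the engineer never has to reach the effects rack.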
Various combinations of these automation methods are obviously possible, and the best interrelation of function and cost must be taken into consideration when designing a console automation system. A proper assessment of what is appropriate for the intended customer’s application is inevitably necessary. Moreover, a thorough knowledge of sound-reinforcement mixing techniques is absolutely essential if a translation of existing studio automation technology into a form usable by live sound engineers is to take place.
On-line and off-line

The capture and replay of automation information may be considered the on-line aspect of console automation. Mixes are generated as required, then stored by the computer for retrieval later. However, it is quite common for a mix to be less than perfect or to require adjustments. Thus appears the need to edit the mix data off-line.
From the computing point of view, a mix is no more or less than a data file and, as with any other data, can be manipulated. A parallel example is word-processing, where the wording of a basic text can be worked on until it reads to the writer’s total satisfaction. Other text files can be incorporated, different versions of the text can be merged together, sections can be deleted or extracted for use elsewhere and so on. All of these possibilities and more are inherent in console automation systems. As soon as mix files have been created, editing can begin.
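The word-processing parallel can be made concrete. Treating two hypothetical mix files as plain data (the cue names and parameters below are invented), merging and extracting are ordinary data operations:

```python
# Illustrative sketch of off-line mix editing: merge two mix files, with
# the later file taking precedence where cues overlap, then extract a
# section for use elsewhere.
verse_mix = {
    "cue_1": {"vocal": 0.8},
    "cue_2": {"vocal": 0.7, "piano": 0.6},
}
chorus_fix = {
    "cue_2": {"vocal": 0.75, "piano": 0.6, "drums": 0.9},
}

# Merge: later entries win on clashing cue names.
merged = {**verse_mix, **chorus_fix}

# Extract: pull out only the cues that touch the drums.
extract = {name: cue for name, cue in merged.items() if "drums" in cue}

print(merged["cue_2"]["vocal"])   # 0.75 - the corrected setting
```

Because the edit produces a new file rather than overwriting the original, the engineer can always fall back to the earlier version, which is the non-destructive behavior argued for later in this article.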
Furthermore, it becomes equally possible to create the mix, or cues in the mix, before the system has been set up. The sound designer or engineer can take the portable computer with him while travelling; instead of idling his time away in the hotel room spending money on pay TV movies or the minibar, he can more profitably occupy himself with programming cues off-line. Different versions can then be tried out in the concert hall or arena.
Failsafe, technofear and pilot error

One of the main concerns of the diligent engineer who is considering surrendering some of his power to a computer must be: what happens if the computer crashes? It is at this point that the immediate, real-time nature of live work casts its shadow over all aspects of automation. The answer naturally must be that it should be possible to run the console in a fully manual mode. Thus, the engineer will still need all his skills, just in case. Automation is not going to make the engineer redundant. It will make his life easier, but it is not going to replace him.
On the other hand, technofear must not be underestimated. Computers in sound reinforcement are a new technology. They are notorious for not obeying instructions. In other words, they do as they are programmed and, unlike humans, do not adapt to circumstances. At the present time, unlike humans (who prefer to think of themselves as intelligent), computers are not intelligent. The result is that the computer can only work in a certain way, and any unwillingness to respond is most often the result of human error, i.e., pilot error. This dictates that the successful live sound automation program must be easy to learn and use. Furthermore, it is essential that any editing is, so far as is possible, non-destructive, making it possible to return to the original data.
Given these considerations, the console itself must provide all the facilities that would normally be required of a high-quality sound-reinforcement board. Comprehensive equalization, audio and VCA subgrouping (or servo faders with similar facilities), multiple auxiliary sends and output matrices must all be standard and operable without the computer. So what can we do with the computer, given the function-cost equation mentioned above?
Snapshot automation and the cue list

The snapshot is a static picture of the settings of the console’s automated controls. It can include levels, mutes, dynamics, outboard effects and any other automated functions. It can be loaded manually or to incoming time code. It should also, preferably, provide some means of triggering multiple external events; MIDI is the preferred method at the present time.
Snapshots may be created by capturing, on line, a particular console set-up or by programming the required configuration off-line. Either way, the data can be edited as required. Snapshots, which may also be called scenes, can be combined into a list of cues that apply to a particular piece of music, and the cue list can be saved as a performance. Because the scenes are independent of the cue list, different cue lists can be created out of the available scenes, allowing experimentation with different approaches to automation of the mix.
For example, a song can be broken up into sections. It may begin with voice and piano, with all other inputs muted, soft compression on the voice and an intimate reverb. As the song progresses through the verse and the stage illumination increases, channels can be unmuted, bringing rhythmic instruments into the mix and, at the same time, changing the vocal compression and reverb settings. By the time the chorus is reached, the whole console can be opened up to bring in full drum kit, backing vocals, different effects and dynamics settings, and changes in level. If at some point the song reverts to the simple voice and piano combination, all the engineer needs to have done is insert the appropriate scene into the cue sequence, and he will then revert to his opening position.
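The scene/cue-list split behind that example can be sketched as follows. The scene names and contents here are invented for illustration; the point is that scenes live in a pool and a cue list is just an ordered selection from it, so the same scenes can be re-sequenced freely.

```python
# Illustrative sketch of scenes and cue lists: the pool of scenes is
# independent of any one running order.
scenes = {
    "voice_piano": {"unmuted": {"vox", "piano"},
                    "vox_comp": "soft", "reverb": "intimate"},
    "verse":       {"unmuted": {"vox", "piano", "perc"},
                    "vox_comp": "medium", "reverb": "plate"},
    "chorus":      {"unmuted": {"vox", "piano", "perc", "kit", "bvox"},
                    "vox_comp": "firm", "reverb": "hall"},
}

# A performance is a saved cue list; cue 4 simply reuses the opening
# scene, exactly as in the voice-and-piano example above.
performance = ["voice_piano", "verse", "chorus", "voice_piano", "chorus"]


def fire(cue_number):
    """Load the scene for a given cue (1-based, as an operator counts)."""
    return scenes[performance[cue_number - 1]]


print(fire(4)["reverb"])   # intimate - back to the opening position
```

Building a second performance is just writing a different list over the same scene pool, which is what makes experimenting with running orders cheap.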
Following this approach, an entire concert or club set, theatrical show, foldback system or audio-visual presentation can be analyzed into sections and pre-programmed in minute detail, giving the engineer much greater freedom to concentrate on artistic matters. Furthermore, if the running order of a show is suddenly changed during rehearsal or a song is repeated as an encore, the engineer only has to load the respective performance and he will be ready to provide an accurate re-run of his mix. Time-code-synchronized loading of the cue list is also possible, which will certainly be of value in shows based on MIDI or other forms of sequencers.
Recall

Recall is a means of storing the positions of all non-automated controls on a console. Because recall dictates that settings must be reloaded manually, it is quite slow, typically 15 to 20 minutes to reconfigure a 56-channel console. Nevertheless, recall has certain powerful advantages.
The first of these is found where a number of acts are rotating through a stage on a tour. During the changeover, there will be time to make a full recall of console settings so that the basic mix parameters for each act will be in place by the time the artist takes the stage. In clubs or broadcast sound stages, where a number of artists or shows use the venue on a regular basis, settings can be stored away on hard or floppy disk and used when required. Data may even be transported from one console to another. Not insignificant is the fact that recall allows one console to be used where previously two or more might have been required, one for the support act and one for the main act. Even more important, in many stage monitor applications, there is insufficient space for more than one console anyway, so recall facilities can provide a much higher standard of foldback for all artists.
Finally, occasionally members of an audience or congregation are so impressed by the audio console that they try their own hand at tweaking the controls. As this doubtless innocent interference may take place when the engineer is away or, worse still, after the show has finished and the engineer has gone home, he or she may be unaware of any changes until the curtain goes up the next day on an audio horror show. A quick recall scan of the console every night before the show will do much to cure this problem because the recall system will quickly point to any fascinating but most likely unwanted settings.
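In data terms, that nightly recall scan is a simple comparison between the stored control positions and what the board actually shows. The control names and flat layout below are invented; a real system presents the differences graphically on the computer monitor.

```python
# Illustrative sketch of a recall scan: report every control whose
# current position has wandered from the stored one.
def recall_scan(stored, current):
    """Return {control: (stored, found)} for every setting that differs."""
    return {name: (stored[name], current[name])
            for name in stored if current.get(name) != stored[name]}


saved = {"ch1_gain": 0.5, "ch1_eq_hi": 0.0, "master": 0.8}
tonight = {"ch1_gain": 0.5, "ch1_eq_hi": 0.9, "master": 0.8}  # tweaked!

print(recall_scan(saved, tonight))   # {'ch1_eq_hi': (0.0, 0.9)}
```

An empty result means nobody has been “helping” overnight; anything else points the engineer straight at the fascinating but unwanted settings.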
This brief overview of the origins and possibilities of automation in sound reinforcement gives an idea of the present position. This area is ripe for rapid development, and we are at the beginning of an era of far-reaching changes in live performance consoles. The speed of this change will mainly result from the translation of technology that has been laboriously developed in related areas of audio; technology that took 20 or more years to develop in the studio is ready to be applied to sound reinforcement almost immediately.
If this surmise is correct, then digital consoles are not far away in sound reinforcement, because they have already arrived in recording and broadcast. The digital console can be much smaller than its analog counterpart, with any input accessible immediately by selecting it to controls located in front of the engineer instead of seven feet away across the console. Furthermore, software routines can be developed that will constantly monitor and analyze the acoustic conditions of the hall and the signals from the stage. Automatic adjustment of room equalization, removal of feedback, optimization of microphone signals and so on could all be done within the console processing engine, giving the engineer the best possible mixing environment in which to work without him even having to think about the parameters. In fact, only the control surface will be located front-of-house, because the console itself can be located stageside. Cable runs will be greatly reduced because signals will not have to flow down hundreds of feet of multicore.
Those who ignore the first steps in sound-reinforcement console automation do so at their peril. The future is arriving quickly, and failure to adapt will lead to redundancy. Those who embrace the new possibilities, however, will unleash higher levels of creativity for themselves and will enjoy their art even more than they do now. And the only way to tell what the computer did and what the engineer did will be by listening to the mixes of those who do not use automation.