Dec 1, 1999 12:00 PM
Every A-V designer who has worked on more than a few projects knows how many things can go wrong as the design for a high-tech presentation space – or a concert hall, auditorium or control room – makes the transition to reality. Absolute disasters in the industry are, mercifully, few and far between, but it has been equally rare that a finished installation is completely free of the little problems that impair an A-V space’s functionality. Human factors, especially ergonomics, are often inadequately addressed. Sightline problems and problems with the legibility of projected images or the intelligibility of sound are rarely ruinous, and they generally only affect a limited portion of a finished space. They are, however, annoying, not only to clients who are paying a lot of money for design services and sophisticated equipment and who therefore have every right to expect perfection, but also to designers who take pride in their work and who truly hate to see anything go wrong.
Until the recent past, however, such problems have been notoriously difficult to anticipate with a high degree of precision and almost impossible to fix prior to construction and installation. The success of a particular presentation space is always the product of such an enormous number of (sometimes quite subtle) electronic, acoustical and architectural variables that designers have had to content themselves with merely adequate design and, of course, to commit themselves to ironing out the bugs once a space has been built. After all, you could not be there – inside the space – before the space was actually built. Those limitations, however, are quickly dissolving, as A-V design engineers and consultants begin to follow the lead set by their colleagues in the architectural profession and adopt new 3-D imaging design technologies.
By designing from within 3-D virtual space, A-V design engineers already have an enormously enhanced, if largely untapped, capability to predict how an installation will look, sound and even feel. Further, an engineer can alleviate design conflicts long before construction begins. Autodesk 3D Studio has quickly become a standard tool by which architects and engineers now collaborate – creating, visiting and working within the same virtual environment and perfecting it before construction. Continuing advances in data storage technologies and plummeting prices are simply wiping away difficulties that even in the recent past would have been insurmountable, as it becomes ever easier and cheaper to store and trade large files. Also, a whole range of software applications lets us know ahead of time, and with a high degree of accuracy, how the potentially troublesome variables will interact once a space has been built.
Among the software tools already available is ray-tracing software that permits the designer to foresee how the light from all sources, including reflected light, will affect a user’s view from any point within the finished space. An example is Autodesk’s Radiosity application, which renders in fine detail the multiplicity of light rays bouncing around a typical space, bleeding onto projection screens, causing glare and hotspots on displays and inadequately illuminating audience members during videoconferences. Using it, we can see precisely how light will react within a space and can therefore suggest alternative fixture locations, directionality and intensities to optimize the impact of lighting on the A-V system. Although this task is traditionally left to the lighting designer (with sometimes unfortunate results), the future integration of A-V and lighting design will benefit from a fully rendered, fully integrated 3-D view.
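To illustrate the underlying geometry, a first-order specular-glare check needs nothing more than vector math: mirror the ray from a fixture off the screen surface and measure how close the bounced ray comes to a viewer’s eye. The Python sketch below is a minimal, hypothetical example (the fixture and viewer positions are made up, and it is not how the Radiosity application itself works — radiosity methods also model diffuse inter-reflection):

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def reflect(d, n):
    """Mirror-reflect direction d about unit surface normal n."""
    k = 2 * dot(d, n)
    return tuple(di - k * ni for di, ni in zip(d, n))

def glare_angle_deg(fixture, screen_pt, viewer, screen_normal):
    """Angle between the specular bounce of a fixture off a screen point
    and the sight line from that point to the viewer; a small angle means
    the viewer sits near the mirror path and will likely see a hotspot."""
    incident = norm(sub(screen_pt, fixture))
    bounced = reflect(incident, norm(screen_normal))
    to_viewer = norm(sub(viewer, screen_pt))
    c = max(-1.0, min(1.0, dot(bounced, to_viewer)))
    return math.degrees(math.acos(c))

# Hypothetical layout, in metres: a downlight above and in front of a
# vertical screen, with a viewer seated on axis 4 m back.
fixture = (0.0, 2.5, 2.0)      # (x, height, distance out from screen)
screen_pt = (0.0, 1.5, 0.0)    # centre of the screen
viewer = (0.0, 1.2, 4.0)
print(round(glare_angle_deg(fixture, screen_pt, viewer, (0.0, 0.0, 1.0)), 1))
```

A script along these lines could sweep candidate fixture positions and keep only those whose glare angle to every seat stays above some comfort threshold.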
Texture mapping reproduces the visual impact that materials and finishes will have on the space. An example is Autodesk’s 3D Studio Viz rendering and animation program, which offers a vast array of materials – almost anything that might be encountered. In addition, custom surfaces and materials can be scanned into the system and mapped onto architectural elements to see exactly what they will look like.
Acoustical simulation technologies allow designers to predict and improve the intelligibility of sound from any point within an interior space. One of many emerging technologies in the acoustical domain is Renkus-Heinz’s Ease/Ears software, which provides auralization, or the experience of the acoustic field in virtual space. Ease/Ears allows you to listen to the performance of your system design from any audience location you might choose. In other words, auralization gives you the chance to make absolutely required changes to a room’s acoustics before installation.
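Auralization tools model a room’s acoustics in far more detail, but the classic first-order hand check underlying reverberation prediction is Sabine’s formula, RT60 = 0.161 · V / A in metric units, where V is the room volume and A is the total absorption. A small illustrative calculation (the room dimensions and absorption coefficients below are assumptions for the sake of the example, not figures from this article):

```python
# Sabine's reverberation-time estimate: RT60 = 0.161 * V / A (metric),
# where V is room volume in m^3 and A is total absorption in metric sabins.
def rt60_sabine(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A 10 m x 8 m x 3 m conference room, illustrative coefficients near 1 kHz.
room = [
    (10 * 8, 0.40),      # acoustic-tile ceiling
    (10 * 8, 0.30),      # carpeted floor
    (2 * 10 * 3, 0.05),  # painted walls (long pair)
    (2 * 8 * 3, 0.05),   # painted walls (short pair)
]
print(round(rt60_sabine(10 * 8 * 3, room), 2))  # RT60 in seconds
```

Swapping in a harder ceiling or bare floor immediately lengthens the predicted reverberation — exactly the kind of finish-driven trade-off the article says should be settled before installation.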
Ergonomic software places photo-realistic, virtual human beings within the 3-D image, helping designers to make sure that sightlines are as good as they can reasonably be from every occupied position within the space. We at Cosentini Associates are using ArchSoft’s RealPeople, which offers accurate, anthropometric men and women, although many other such applications, equally powerful, are becoming available. The placement of accurately scaled people in a drawing offers meaningful visual cues regarding conflicts, obstructions and comfort issues.
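At its core, any sightline test such tools perform is simple 2-D geometry in a vertical section: does the line from a viewer’s eye to the focal point (say, the bottom of a screen) pass above the head of the person one row ahead? A minimal sketch, with made-up seating numbers:

```python
def sightline_clear(focal, eye, head_in_front):
    """All points are (distance_m, height_m) in a vertical section, with
    distance measured from the focal point. Returns True when the
    eye-to-focal-point line clears the head one row ahead."""
    fx, fz = focal
    ex, ez = eye
    hx, hz = head_in_front
    # Height of the sight line where it crosses the front person's row.
    line_height = fz + (ez - fz) * (hx - fx) / (ex - fx)
    return line_height > hz

# Illustrative numbers: bottom of screen 1.0 m up; seated eye 8 m back;
# the head one row ahead (7 m back) tops out at 1.3 m.
print(sightline_clear((0.0, 1.0), (8.0, 1.2), (7.0, 1.3)))  # eye at 1.2 m: blocked
print(sightline_clear((0.0, 1.0), (8.0, 1.6), (7.0, 1.3)))  # eye at 1.6 m: clear
```

Run for every seat against anthropometric eye and head heights, this is the calculation that tells a designer whether a screen must be raised or a floor raked.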
Moreover, in high-tech presentation spaces, the ability of cameras to see what is going on and transmit intelligible, pleasing images has become every bit as important as human occupants’ ability to see clearly. To enhance this critical aspect of presentation-space design, software has been developed that permits the designer to situate a virtual camera – one whose lens fields of view precisely match those of the actual camera – at any location within the virtual space to gauge how well that camera will see at a variety of lens settings and adjust the placement and lens specification accordingly.
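Matching a virtual camera’s field of view to a real lens comes down to one formula: for a rectilinear lens, the angle of view is 2 · atan(sensor dimension / 2f). A quick sketch (the 8.8 mm sensor width, typical of a 2/3-inch format, is an assumption for illustration):

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm):
    """Horizontal angle of view of a rectilinear lens:
    FOV = 2 * atan(sensor_width / (2 * focal_length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Sweep a 9-45 mm zoom on an 8.8 mm-wide (2/3-inch format) sensor.
for f_mm in (9, 18, 45):
    print(f"{f_mm} mm -> {horizontal_fov_deg(f_mm, 8.8):.1f} deg horizontal")
```

Evaluating this across the zoom range from a candidate mounting point tells the designer whether the camera covers the whole table at the wide end and still frames a single speaker tightly at the long end.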
Beyond these applications, there is a newly emerging technique that will play a key role in the future of A-V design – online collaboration. By using a project site, the various designers, engineers and other team members can meet within the virtual space on the Web or on a private intranet or extranet to contribute their disciplines’ drawings and design elements in a shared environment. Online collaboration promises to lead to great reductions in design conflicts and great improvements in efficiency and precision. Some early examples of collaborative-environment software include Bentley Systems’ ProjectWise, Blueline Online’s ProjectNet, Kamel Software’s FastLook Plus and Framework Technologies’ ActiveProject.
From 2½-D to 3-D
In actuality, 3-D imaging is already enabling A-V designers to leapfrog over difficulties that, even recently, have been the bane of sound and video consultants. For example, before the advent of tools like those listed above, it would have been extremely difficult (if not impossible) to predict that a particular finish, if used on a conference room table top, would cause glare that would impair occupants’ visual comfort. Now, that kind of problem can be foreseen and corrected relatively easily.
Despite such advances, however, the term “3-D” is still something of a misnomer. It is more accurate to say that we have moved beyond two dimensions but not quite into the third. As beneficial as the currently available tools are, today’s technologies are still 2½-D – restricted by the mouse, keyboard and 2-D monitor environment, which prevents designers from experiencing spaces while they are creating them. We are like Alice in the first chapter of Lewis Carroll’s Through the Looking-Glass, poised at the mirror, longing to climb into the 3-D space on the other side.
Nevertheless, the moment when we will be able to jump through into virtual space is fast approaching. It is highly probable that by the middle of the next decade, architects and engineers will be able to abandon the 2-D representations that appear on flat computer screens and virtually enter the 3-D space under design, manipulating it from within. A whole new class of human-machine interface devices is now under development that will allow us to see, hear and feel a virtual space almost as if we were inside an actual, built room.
Today, 3-D display systems, such as those from StereoGraphics, use polarizing glasses or glasses with liquid-crystal shutters to add the Z-axis, where depth cues reside, to 2-D images. Within just a few years, such glasses will be used by designers to climb inside the rooms they create, but techniques for putting us inside the virtual space will not end there. Autostereoscopic systems will enable users to see in true 3-D without glasses. Special earpieces will enable the designer to hear the acoustic attributes of a virtual space, including subtle variances in reverberation and intelligibility produced by different furniture and finishes (and their interaction), and make acoustically appropriate choices. Also, a variety of hand-held wands, joysticks, gloves and other interactive devices will provide accurate navigational and even tactile feedback from within the 3-D environment.
Advances in sensory or perceptual tools like those just described will combine with software advances to enable designers to alter the virtual environment in real time. Of course, we already have the ability to create video fly-throughs of virtual spaces by animating sequences of images shot from shifting locations within a virtual space. Such fly-throughs, as valuable as they are as presentation tools, enabling the client to get a very accurate picture of how an installation will look, still require a great deal of time-consuming number-crunching. It may take a few days of intensive labor to produce even a 30-second sequence. Our sophisticated fly-throughs will look absolutely primitive in the near future, when a designer will be able to enter a virtual space, turn around and navigate within it, open a virtual cabinet of components, select from a range of products, place them at various locations within the 3-D environment and immediately discern how well each will work.
The refinement of current 3-D technologies and the emergence of increasingly sophisticated imaging tools are exciting enough, but they are only part of the story. It will not be long until computer-assisted design is further enhanced by the development of sophisticated cyberagents, intelligent computer surrogates/assistants that will possess a detailed understanding of design principles. Among other talents, they will serve as repositories of regulatory codes and of all the mathematical and scientific information that determines the success of a high-tech interior space. Through continuous, high-speed Internet connections, they will communicate with product manufacturers’ own cyberagents to identify equipment that meets a project’s technical, budgetary and availability/schedule requirements instantaneously.
These cyberagents will be capable of responding to the designer’s verbal commands. At the moment, speech-recognition technologies still suffer from error rates of roughly 3% to 5%, rendering them inadequate for many mission-critical uses. As speech recognition is perfected, however, an A-V design engineer, working within the 3-D environment, will be able to communicate directly with the cyberagent, which will respond to instructions instantly. A verbal command string might sound something like: “Place a Sony DXC 151 camera with a Fuji model XYZ lens with 9 mm to 45 mm focal length on the rear wall on a Vicon 123 mount, at 7 feet 6 inches above the floor, 18 inches from the left wall.” The cyberagent will pull up the optical and mechanical attributes of the devices, assemble them, place them on the wall in the designated location and then produce an animated sequence showing the view through that camera’s lens as it pans, tilts and zooms. Another simple command sequence might be: “Cyberagent, go on the Web and find three choices for program loudspeakers with no dimension greater than 18 inches and with dual drivers. Make sure they are self-powered and priced at less than $500 each. Find their wall mounts, and show me how they look on the front wall, in scale.”
Already, equipment manufacturers regularly provide CAD-compatible representations of their products that an A-V designer can simply insert into drawings. This practice, too, will undergo adaptation and expansion for the 3-D design environment, with manufacturers delivering product information files that include 3-D representations of the virtual object for easy insertion into the virtual space. Beyond showing the object’s physical appearance, these files will also incorporate the product’s mechanical, optical or acoustical characteristics so that the designer can, for example, experience the panning speed of a given pan-tilt head.
Cyberagents will also serve as information mediators between engineers, product manufacturers and clients by, for example, generating continuously updated lists of specified products, asking the selected A-V vendors (via instant Internet connectivity) to insert prices on a pre-structured electronic form, compiling up-to-the-minute cost estimates for the client’s review and, from these, developing accurate budgets well before a job goes out for contractors’ bids. Any designer who has had any experience whatsoever with today’s paper-based budgeting methods and all the inconsistencies of pricing and availability with which we must now contend will recognize the advantages.
The possibilities are nearly endless. Back in the 1960s, the futurologist Marshall McLuhan used to speak of tools as extensions of human beings. The cyberagent makes that concept crystal clear – as the designer and the intelligent agent work together, the agent will gradually become part of the designer, understanding the designer’s predilections and design intents, emulating them and bringing the same set of principles to each successive job, allowing, of course, for evolution and customization. The cyberagent will be, in effect, an ideal executive assistant, one that never needs a coffee break, never gets sick and never gets bored or frustrated, whether laying out the most mundane or the most complex and challenging system.
HAL vs. the Holodeck
We human beings seem to have a hard time getting over our dread of new technologies, especially artificial intelligence (AI) technologies like those I have just covered. For many of us who came of age in the 1960s and 1970s, the image of AI gone drastically wrong is epitomized by the HAL 9000 computer of the late filmmaker Stanley Kubrick’s masterpiece, 2001: A Space Odyssey. Well, it is now almost 2001 in real time, so to speak, and I think it is about time we put HAL to rest.
Granted, there are aspects of the coming AI revolution that should give us some pause: for instance, we can only speculate about the kinds of social changes that the ever more widespread use of AI devices will inspire – including changes in the ways that human beings work with one another. In the A-V engineering profession, we may see fundamental changes in apprenticeship – in the ways, for example, that knowledge is passed on from older, more experienced designers to their younger colleagues. I’m sure that in the near future a lot more of the basics will be taught through computer tutorials. It is my hope that the advent of cyberagents will mean that younger designers will be able to engage in truly rewarding, creative work at an earlier stage in their careers than is now typically the case.
Especially regarding A-V design, I think the future is looking bright, not bleak. Now, a great deal – much too much – of a designer’s time is taken up in tedious research tasks that would be much better left to a cyberagent. Moreover, there are whole areas of design, centrally important areas, where the decisions will continue to be made by human designers. I am referring to the most intellectually satisfying aspect of design work, the reason that designers become designers in the first place – the creation of beautiful, functional spaces that meet a client’s needs. Questions of taste, beauty, setting a project’s parameters, understanding clients’ needs and helping clients realize their goals will for a long time remain the province of human decision-makers. Let the computers do the grunt work.
In other words, forget HAL. The A-V design future is more accurately portrayed by a much more optimistic and benign image from science fiction – the Holodeck from TV’s Star Trek: The Next Generation. On the Holodeck, imagination is instantly translated into a real-seeming virtual reality. It gives absolutely free rein to intellectual creativity. Granted, it is not likely that 3-D imaging technologies will achieve that degree of perfection soon. For example, I doubt we will soon be able, in virtual reality, to reproduce with absolute precision the differing visual characteristics of the images thrown by two different video projectors. Limitations in stereoscopic display technologies, and the limited resolution with which we will have to live for a long time to come, prevent us from seeing or judging the subtle differences in pixel characteristics, jitter, flicker, optical aberrations, lens vignetting and all the infinitesimal yet important variances between one real-world projector and another.
These limitations, however, are actually quite minor when you compare them with the ones we must work around today, and imaging technologies will come a lot closer to the Holodeck ideal than we can now imagine. I look forward to being there.