The National University of Mexico Enhances Learning with Sophisticated Visualization Technology

May 16, 2007 12:00 PM

The National University of Mexico (UNAM) has established the Observatory of Visualization, referred to as IXTLI, an Aztec word that means face and eye. The innovative IXTLI facility allows professors and researchers to study real or abstract objects, scientific phenomena, theoretical concepts, and complex models in a three-dimensional immersive virtual reality environment. IXTLI is used to conduct scientific research and instruction in multiple disciplines, including archeology, medicine, molecular chemistry, geography, biochemistry, architecture, topology, psychology, and microbiology.

“The goal is to maximize the educational experience for students and provide a powerful investigative tool to researchers,” says Dr. Genevieve Lucet, director of computing facilities for research at UNAM. “The IXTLI’s use of state-of-the-art immersive visualization technology is unique in Latin America. This room’s advanced display technology allows participants to visualize and simulate complex objects and images in 3D with real time image control and manipulation.”

UNAM contracted Fakespace Systems, a pioneer in the development of immersive visualization and virtual reality, to design and install this novel display system. “The centerpiece is a massive 30-foot-long by 8.3-foot-high, 140-degree curved, contiguous screen designed to provide a stimulating, engaging educational experience,” says Malcolm Green, senior account executive for Fakespace Systems. “The large screen is an incredible ‘canvas’ on which multiple monoscopic image windows can be presented simultaneously.

“RGB Spectrum’s SuperView multi-image display processors were selected to provide high-performance image integration, displaying multiple visuals as they are generated from different sources. The SuperView allows manipulation of all onscreen windows, creating dynamic user interaction. The display is also designed for immersive virtual reality; the system’s projectors are capable of presenting stereoscopic, computer-generated images with three-dimensional depth perception.”

“The visuals are dynamic, allowing instructors and researchers to naturally interact with and manipulate images in real time,” Lucet says. “The individual is outfitted with a movement-tracking system composed of a wireless glove with finger sensors, head sensors, and a sensor that analyzes motion of a wand or three-dimensional mouse device. As the individual moves their body and head, the image generation system regenerates the visuals to match their position and viewing perspective, as if they were moving in the real world. Stereoscopic, three-dimensional depth sensation is generated by projecting distinct images for each eye and alternating these at high speed. The audience views the imagery through electronic glasses that shutter open and closed in step with the alternating images presented by the projector. The shuttering is imperceptible to the user and produces the two-perspective view necessary for the three-dimensional experience. The center’s sophisticated multi-channel audio system complements the visuals and stimulates 3D auditory sensations in the audience.”
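
Conceptually, this is head-tracked, frame-sequential stereo: the tracked head position is offset to a left- and right-eye viewpoint, a view is rendered for each eye, and the two views are alternated in sync with the shutter glasses. The article does not describe IXTLI’s rendering software, so the Python sketch below is only an illustration of that idea; the interpupillary distance, head pose, and look_at helper are assumptions chosen for the example.

    import numpy as np

    # Typical interpupillary distance in meters -- an assumption for the
    # example, not a figure from the IXTLI specification.
    IPD = 0.065

    def eye_positions(head_pos, head_right, ipd=IPD):
        """Offset the tracked head position to left- and right-eye positions."""
        right = np.asarray(head_right, float)
        right /= np.linalg.norm(right)
        half = 0.5 * ipd * right
        head = np.asarray(head_pos, float)
        return head - half, head + half

    def look_at(eye, target, up=(0.0, 1.0, 0.0)):
        """Build a 4x4 view matrix looking from 'eye' toward 'target'."""
        eye, target, up = (np.asarray(v, float) for v in (eye, target, up))
        fwd = target - eye
        fwd /= np.linalg.norm(fwd)
        side = np.cross(fwd, up)
        side /= np.linalg.norm(side)
        up = np.cross(side, fwd)
        view = np.eye(4)
        view[0, :3], view[1, :3], view[2, :3] = side, up, -fwd
        view[:3, 3] = -view[:3, :3] @ eye
        return view

    # A head pose as the tracking system might report it (illustrative values).
    left_eye, right_eye = eye_positions((0.0, 1.7, 2.0), (1.0, 0.0, 0.0))
    screen_center = (0.0, 1.5, 0.0)

    # Frame-sequential stereo: even frames render the left eye, odd frames the
    # right eye, in sync with the shutter glasses.
    for frame in range(4):
        eye = left_eye if frame % 2 == 0 else right_eye
        view = look_at(eye, screen_center)
        print(f"frame {frame}:", "left eye" if frame % 2 == 0 else "right eye")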

The system requires intensive, real-time image processing to handle the extremely complex computer-generated imagery. Three SuperView processors receive both computer and video inputs. Computer sources include imagery generated by an SGI Onyx 350 supercomputer and by PC and Macintosh computers. The content consists mainly of three-dimensional models and can also include animation, graphs, internet pages, PowerPoint presentations, and spreadsheets. Video sources include five robotic cameras, VCR and DVD players, and videoconferencing.

The SuperView processors integrate these computer and video signals and output the consolidated multi-window images to three ceiling-mounted 3-chip DLP projectors at their native 1280×1024 SXGA resolution. Each projector produces one-third of the overall screen image, and electronic blending technology overlaps and balances the three sections into one contiguous image. The combined screen image has a viewable resolution of 3840×1024 pixels.
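
The seamless result comes from overlapping the adjacent projector edges and ramping each projector’s output down across the overlap so that the summed light stays uniform. The sketch below illustrates a gamma-corrected blend ramp of that kind; the overlap width and gamma value are assumptions for the example, since the article gives neither.

    import numpy as np

    # Illustrative numbers: the overlap width and display gamma are assumptions,
    # not published specifications of the IXTLI blend electronics.
    OVERLAP = 128
    GAMMA = 2.2

    def blend_ramp(width=OVERLAP, gamma=GAMMA):
        """Attenuation ramp across an overlap zone, gamma-encoded so that two
        overlapping projectors sum to uniform brightness in linear light."""
        t = np.linspace(0.0, 1.0, width)
        return t ** (1.0 / gamma)

    ramp = blend_ramp()
    # The fading-out projector applies ramp[::-1], the fading-in projector
    # applies ramp; in linear light every overlap pixel sums to 1.0.
    linear_sum = ramp ** GAMMA + ramp[::-1] ** GAMMA
    print("max deviation from uniform brightness:", np.abs(linear_sum - 1.0).max())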

Instructors and researchers use an AMX touchscreen control panel to operate the SuperView processors and select the visual sources to be displayed. Display windows can be resized and positioned anywhere on screen to allow viewers to compare and correlate visuals. Preset display configurations can be selected at the push of a button.
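
A preset, in essence, is a stored layout of source windows on the 3840×1024 canvas. The sketch below shows one hypothetical way such layouts might be represented; the preset names, sources, and geometry are invented for illustration and do not come from the IXTLI installation or the SuperView control protocol.

    from dataclasses import dataclass

    CANVAS_W, CANVAS_H = 3840, 1024   # combined blended canvas, per the article

    @dataclass
    class Window:
        source: str   # e.g. "SGI Onyx", "camera 1", "DVD"
        x: int        # top-left corner on the canvas, in pixels
        y: int
        width: int
        height: int

    # Hypothetical presets -- the article does not list the actual layouts.
    PRESETS = {
        "lecture": [
            Window("SGI Onyx 3D model", 0, 0, 2560, 1024),
            Window("PowerPoint", 2560, 0, 1280, 512),
            Window("camera 1", 2560, 512, 1280, 512),
        ],
        "videoconference": [
            Window("videoconferencing codec", 0, 0, 1920, 1024),
            Window("SGI Onyx 3D model", 1920, 0, 1920, 1024),
        ],
    }

    def apply_preset(name):
        """Print the window layout a preset would send to the processors."""
        for w in PRESETS[name]:
            assert 0 <= w.x and w.x + w.width <= CANVAS_W
            assert 0 <= w.y and w.y + w.height <= CANVAS_H
            print(f"place '{w.source}' at ({w.x}, {w.y}) size {w.width}x{w.height}")

    apply_preset("lecture")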

“The vanguard facility is a showcase for exciting, dynamic use of visual technology,” Lucet says. “The SuperView processors deliver outstanding image quality and performance. The response from instructors, researchers, colleagues, and students has been excellent.”
