WATCH: Meta demos bleeding-edge VR tech prototypes

Prototypes attempt to tackle some of VR's most difficult problems

During last week’s SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques) convention, Meta showcased some of the bleeding-edge VR technologies it is working on. These are not technologies implemented in any current (or even soon-to-be-released) VR device, but rather conceptual attempts to remedy some of VR’s hurdles. They include varifocal displays, which let the wearer shift visual focus between objects at different depths, and perspective-correct passthrough, which lets the user view the real world around them while still immersed in virtual reality.

See also: Apple’s security requirements for Vision Pro developers are stringent, to say the least

Reality Labs, a research unit of Meta, demonstrated some of these prototype technologies at SIGGRAPH, and for those of us who couldn’t be there in person, Meta has released a series of video demonstrations in a recent blog post:

“To create a great varifocal experience, the hardware and software need to work together seamlessly,” says Research Scientist Olivier Mercier. “Our understanding of distortion correction, eye tracking, rendering, and latency have all been refined to create a high-quality experience that can use our best varifocal hardware to its maximum capabilities. Over the years, varifocal has also moved from a niche research topic to something that a lot more people have interest in. Our varifocal software has evolved from an unstable, research-y, one-off branch of the main code into something that’s now much better integrated with the rest of our VR platform. This makes collaboration with other teams much easier and makes for a much more polished experience where varifocal is an integral part of the rendering pipeline. This tighter integration also means that we support many more games and applications today, compared to our early varifocal headsets that would often use more simplistic or custom demo content.”
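For a concrete feel of what “hardware and software working together” means here, below is a minimal, hypothetical sketch of a varifocal control loop in Python: eye tracking estimates the fixation depth, the focal actuator is driven toward the matching optical power, and the lens state is handed back so distortion correction can be recomputed for it. Every class, function, and parameter name is our own illustration, not Meta’s code.

```python
# Hypothetical varifocal loop: gaze depth -> target optical power -> smoothed
# actuator motion -> lens state fed to distortion correction. Illustrative only.

def depth_to_diopters(depth_m: float) -> float:
    """Optical power needed to focus at a given fixation distance."""
    return 1.0 / max(depth_m, 0.1)   # clamp so very near fixations stay bounded

class FocalActuator:
    """Stand-in for the hardware that moves the display's focal plane."""
    def __init__(self):
        self.power = 0.0  # diopters

    def drive(self, target: float, gain: float = 0.5) -> None:
        # Smooth toward the target to model actuator latency rather than snapping.
        self.power += gain * (target - self.power)

def varifocal_step(actuator: FocalActuator, vergence_depth_m: float) -> float:
    """One frame of the loop: track gaze, move focus, and return the lens
    state so the renderer can recompute distortion correction for it."""
    actuator.drive(depth_to_diopters(vergence_depth_m))
    return actuator.power  # feeds the per-focal-state distortion pre-warp

# Usage: the wearer looks from a far object (2 m) to a near one (0.3 m).
actuator = FocalActuator()
for depth in [2.0, 2.0, 0.3, 0.3, 0.3]:
    lens_power = varifocal_step(actuator, depth)
    print(f"fixation {depth} m -> lens power {lens_power:.2f} D")
```

The loop structure is the point Mercier is making: eye tracking, lens actuation, and rendering all sit on the same frame-by-frame path, so any one of them lagging degrades the whole experience.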

“Unlike a traditional light field camera featuring a lens array, Flamera (think “flat camera”) strategically places an aperture behind each lens in the array. Those apertures physically block unwanted rays of light so only the desired rays reach the eyes (whereas a traditional light field camera would capture more than just those light rays, resulting in unacceptably low image resolution). The architecture used also concentrates the finite sensor pixels on the relevant parts of the light field, resulting in much higher resolution images…All of this results in a view of the physical world seen through the lens of the headset that more closely approximates what the eye would see naturally and with fewer artifacts compared to commercial headsets on the market today and at a higher resolution than traditional light field cameras.”
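The aperture trick is easier to picture geometrically. Here is a toy Python sketch of the selection principle: each lenslet only passes rays headed toward the (virtual) eye position, as if an aperture behind the lens blocked everything else. The geometry, positions, and acceptance angle are illustrative assumptions, not Flamera’s actual design.

```python
# Toy ray-selection model: a conventional light field camera records all
# incoming angles, spreading sensor pixels thin; here only rays near the
# eye-bound direction survive, concentrating resolution where it matters.
import math

EYE = (0.0, -0.02)          # assumed virtual eye position behind the lens plane (m)
LENS_PLANE_Y = 0.0
APERTURE_HALF_ANGLE = 0.05  # assumed acceptance (radians) around the eye-bound ray

def eye_bound_direction(lens_x: float) -> float:
    """Angle of the ray through this lenslet that would reach the eye."""
    return math.atan2(LENS_PLANE_Y - EYE[1], lens_x - EYE[0])

def passes_aperture(lens_x: float, ray_angle: float) -> bool:
    """Keep only rays close to the eye-bound direction; block the rest."""
    return abs(ray_angle - eye_bound_direction(lens_x)) < APERTURE_HALF_ANGLE

# Usage: sample incoming rays at one lenslet and see which survive.
lens_x = 0.01
for ray_angle in [1.2, 1.12, eye_bound_direction(lens_x), 1.8]:
    kept = "kept" if passes_aperture(lens_x, ray_angle) else "blocked"
    print(f"ray at {ray_angle:.3f} rad -> {kept}")
```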

“With our work on light field passthrough, we’re focused on previewing the experience of perspective-correct MR passthrough,” says Display Systems Research Director Douglas Lanman. “We previously shared our work on MR passthrough with the SIGGRAPH community through the neural passthrough project, which was aimed at using machine learning methods to synthesize passthrough imagery with fewer artifacts than existing commercial systems. However, neural passthrough still produces reprojection artifacts and requires workstation-class GPUs to operate in real time. Light field passthrough takes a computational imaging approach, where the camera hardware and reprojection algorithms have been designed to work in concert—greatly simplifying the computational challenge of synthesizing a viewpoint with the correct perspective.”
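Lanman’s point about designing the cameras and the reprojection algorithms together is clearer against the baseline problem they share: passthrough cameras sit a few centimeters from the eyes, so every pixel must be re-rendered from the eye’s viewpoint. Below is a standard depth-based unproject/reproject sketch in Python that shows that baseline problem; the intrinsics, the 3 cm baseline, and all names are illustrative assumptions, not Meta’s pipeline.

```python
# Toy perspective reprojection: unproject camera pixels to 3D using depth,
# shift them into the eye's frame, and project back to eye-view pixels.
import numpy as np

def reproject(depth: np.ndarray, K: np.ndarray, cam_to_eye: np.ndarray):
    """Map camera pixel coordinates to eye-view pixel coordinates.

    depth      -- HxW array of metric depths seen by the camera
    K          -- 3x3 pinhole intrinsics shared by camera and eye view
    cam_to_eye -- 4x4 rigid transform from camera frame to eye frame
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3xN

    # Unproject: pixel -> 3D point in the camera frame.
    pts_cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    pts_cam_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])

    # Move the points into the eye's frame and project them back to pixels.
    pts_eye = (cam_to_eye @ pts_cam_h)[:3]
    proj = K @ pts_eye
    return (proj[:2] / proj[2]).T.reshape(h, w, 2)  # eye-view pixel coords

# Usage: a flat wall 1 m away, with the eye 3 cm to one side of the camera.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
T = np.eye(4)
T[0, 3] = -0.03  # assumed 3 cm baseline between camera and eye
coords = reproject(np.full((480, 640), 1.0), K, T)
print(coords[240, 320])  # the center pixel shifts by ~15 px of parallax
```

Wherever the depth estimate is wrong, this reprojection smears or doubles the image, which is exactly the class of artifact the quote describes; designing the camera optics so that the needed rays are captured directly is what lets light field passthrough sidestep much of that computation.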
