Barmak Heshmat, MIT Media Lab, USA
Reza Khorasaninejad, Harvard Univ., USA
Federico Capasso, Harvard Univ., USA
Creating immersive 3D stereoscopic, autostereoscopic, and light-field experiences is becoming the centerpiece of optical design for future near-eye displays, augmented and virtual reality headsets, monitors, and desktop screens. In recent years, major industry investment (tens of billions of dollars), along with many breakthroughs from the academic optics, graphics, and display communities, has been directed toward realizing immersive light-field experiences. So far, geometrical, computational, multi-focal, and holographic approaches have been the four leading methods paving the way to demonstrating such experiences at the system level, in either stereoscopic or monocular settings. While research continues toward better light-field displays, major challenges and sometimes unnoticed fundamental barriers remain. In particular, breakthroughs at the system level are hindered by device performance, while breakthroughs at the device level are not scalable to system implementation.
The symposium on “Augmented and Virtual Reality: Systems Meet Devices” will review recent progress and emerging directions for future displays not only at the system level but also at the device level, to fill the gap between them. Novel directional light sources, metasurfaces, advanced image guides, and emerging methods for monocular depth representation are among the topics that will be covered in this symposium.
Mark Brongersma, Stanford University, USA
Zhaoyi Li, Harvard University, USA
Jonghyun Kim, NVIDIA Corporation, USA
Ajit Ninan, Dolby Laboratories, USA
Edward Tang, Avegant Corporation, USA