Perspectives on More than 25 Years of VR/AR
I initially entered this field in 1990 during the early days of the Human Interface Technology Lab at the University of Washington. My boss was Dr. Thomas Furness, a man of extraordinary knowledge, talent and vision. At that time, the state of the art for commercially available stereoscopic head-mounted displays was the VPL EyePhone. The device was based on two monochromatic LCDs, with RGB color filters serving as the image sources. Each LCD consisted of an array of 86,400 (360×240) individually controllable cells. When combined with the color filters to produce RGB triads, this resulted in a resolution of 28,800 color pixels per eye. The display incorporated a wide-angle, stereoscopic lens set known as LEEP (Large Expanse Extra Perspective) optics. Each optics module (one for each eye) consisted of three lenses aligned on a common axis, and together the two modules provided a field-of-view of ~90 degrees. The display required a high-performance graphics workstation to drive a sufficient stereo frame rate, as well as a magnetic tracking system from Polhemus to monitor the position and orientation of the user’s head. A binaural audio solution known as the Convolvotron, developed under a contract from NASA, was available from Crystal River Engineering. Navigation through, and interactions with, the virtual world were accomplished using a fiber optic-based gesture-recognition device known as the DataGlove.
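A quick back-of-the-envelope check of those resolution figures, assuming each color pixel is formed from one RGB triad of three monochrome cells:

```python
# Verify the VPL EyePhone resolution figures quoted above.
h_cells, v_cells = 360, 240

total_cells = h_cells * v_cells     # individually controllable LCD cells per eye
color_pixels = total_cells // 3     # three cells (R, G, B) per color triad

print(total_cells)    # 86400
print(color_pixels)   # 28800
```

For comparison, a single modern 1080×1200-per-eye panel offers roughly 45 times as many color pixels.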
In total, the basic cost of entry was well into the many tens of thousands of dollars.
Fast forward to the present. Resolutions for flat-panel arrays have increased dramatically, optics are better, and the cost of computer hardware necessary to drive simulations at real-time frame rates is a fraction of what it was 25+ years earlier. Sensor technologies for tracking the position and orientation of displays and controllers are numerous. Systems such as the HTC Vive, Oculus CV1, PSVR and OSVR are well within consumer price points. Similarly, extremely high-quality stabilized spatial audio solutions are numerous, as are the variety of input controllers.
Few of us involved in the field over two decades ago would have dreamed that mobile devices would one day be used to drive such simulations… but here we are. In mid-2014, Google introduced a smartphone-based virtual reality headset made of cardboard, cheap biconvex plastic lenses, a magnet, a washer, and a few tabs of Velcro. Now, anyone with a smartphone and as little as $15 can immediately begin running fully immersive virtual reality simulations (albeit in relatively crude form) and even get started developing their own applications.
In terms of augmented reality, advancements in the enabling technologies leading to the production of commercially available optical see-through displays have taken place at a much slower pace, but such displays are here now and, implemented correctly, can produce impressive results. Further, it is widely expected that within just a few short years, the current generation of flat panel-based head-mounted displays for virtual reality will begin to be displaced by dual-purpose augmenting displays that use the retina of the eye as the final display surface.
This is an interesting time to be alive. For the first time since the virtual reality craze of the 1990s, the enabling technologies underpinning this field are providing sound platforms for developers to begin showing off their best ideas, which hopefully will extend far beyond first-person games, apps that overlay the location of coffee shops in your neighborhood, or apps that let you tweet your heart rate and check email while out for a jog. These technologies hold potential for so much more.