**Brooke Krajancich, Electrical Engineering PhD Candidate, Stanford**
Joint event hosted by Silicon Valley ACM SIGGRAPH
This is the second of two joint events on VR with SV SIGGRAPH
7:00 Talk introduction
7:10 Speaker starts
8:20 – 8:45 Speaker finishes
Virtual and augmented reality (VR/AR) wearable displays strive to provide perceptually realistic user experiences while constrained by the limited compute budgets, hardware, and transmission bandwidths of wearable computing systems. This presentation describes two ways in which a greater understanding of the human visual system may help achieve this goal. The first shows how studying the anatomy of the eye reveals inaccuracies in how we currently render disparity depth cues, causing objects to appear closer than intended or, in the case of AR, poorly aligned with target objects in the physical world. This can be corrected with gaze-contingent stereo rendering, enabled by eye tracking. The second derives a spatio-temporal model of the visual system, describing the gamut of visible signals for a given eccentricity and display luminance. This model could enable future foveated graphics techniques with over 7x the bandwidth savings of those available today.
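The gaze-contingent correction mentioned above can be illustrated with a minimal sketch. This is not the speaker's implementation; it only shows the underlying idea, assuming that standard stereo rendering fixes each virtual camera at half the interpupillary distance, while the eye's optical center actually sits slightly in front of its center of rotation and therefore shifts as the eye rotates. All parameter values and names below are illustrative.

```python
import numpy as np

# Hypothetical anatomical parameters (illustrative, not from the talk).
IPD = 0.063    # interpupillary distance in meters
R_EYE = 0.0055 # offset from the eye's center of rotation to its
               # optical center, in meters

def stereo_camera_positions(gaze_yaw_rad):
    """Return left/right virtual camera positions for a horizontal gaze angle.

    Standard stereo rendering would fix the cameras at x = -/+ IPD/2.
    A gaze-contingent correction instead places each camera where the
    eye's optical center sits after the eyeball rotates by gaze_yaw_rad,
    using the gaze angle reported by an eye tracker.
    """
    # Centers of rotation of the two eyes, fixed in the head frame
    # (x: rightward, z: forward).
    left_rot = np.array([-IPD / 2, 0.0, 0.0])
    right_rot = np.array([IPD / 2, 0.0, 0.0])
    # The optical center lies R_EYE in front of the rotation center,
    # along the current gaze direction.
    gaze_dir = np.array([np.sin(gaze_yaw_rad), 0.0, np.cos(gaze_yaw_rad)])
    return left_rot + R_EYE * gaze_dir, right_rot + R_EYE * gaze_dir
```

Even for a forward gaze the corrected cameras sit slightly in front of the rotation centers, and as gaze angle changes they translate laterally, which is what alters the rendered disparity relative to the fixed-camera assumption.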
Brooke Krajancich is a final year PhD candidate in the Electrical Engineering Department at Stanford University, advised by Prof. Gordon Wetzstein. Her research focuses on developing computational techniques that leverage the co-design of optical elements, image processing algorithms and intimate knowledge of the human visual system for improving current-generation virtual and augmented reality displays. She is actively looking for full-time positions starting this June.
Another joint VR event with SV SIGGRAPH will be scheduled:
Human-Centered Design for VR Training by Jason Jerald, CEO, NextGen Interactions and Lead VR Advisor at XMod