Egocentric Distance Judgments in Full-Cue Video-See-Through VR Conditions are No Better than Distance Judgments to Targets in a Void [conference paper]

Conference

IEEE Virtual Reality and 3D User Interfaces (VR) Conference - March 27, 2021

Authors

Koorosh Vaziri (Ph.D. student), Maria Bondy (summer REU researcher), Amanda Bui (undergraduate research assistant), Victoria Interrante (professor)

Abstract

Understanding the extent to which, and the conditions under which, scene detail affects spatial perception accuracy can inform the responsible use of sketch-like rendering styles in applications such as immersive architectural design walkthroughs using 3D concept drawings. This paper reports the results of an experiment that provides important new insight into this question, using a custom-built, portable video-see-through (VST) conversion of an optical-see-through head-mounted display (HMD). Participants made egocentric distance judgments by blind walking to the perceived location of a real physical target in a real-world outdoor environment under three different conditions of HMD-mediated scene detail reduction: full detail (raw camera view), partial detail (Sobel-filtered camera view), and no detail (complete background subtraction); they also completed a control condition of unmediated real-world viewing through the same HMD. Despite significant differences in participants' ratings of visual and experiential realism across the three video-see-through rendering conditions, we found no significant difference in the distances walked among these conditions. Consistent with prior findings, participants underestimated distances to a significantly greater extent in each of the three VST conditions than in the real-world condition. The lack of any clear penalty to task performance accuracy, not only from the removal of scene detail but also from the removal of all contextual cues to the target location, suggests that participants may be relying nearly exclusively on context-independent information such as angular declination when performing the blind-walking task. This observation highlights the limitations of using blind walking to the perceived location of a target on the ground to make inferences about people's understanding of the 3D space of the virtual environment surrounding the target. For applications like immersive architectural design, where we seek to verify the equivalence of the 3D spatial understanding derived from virtual immersion and real-world experience, additional measures of spatial understanding should be considered.
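
As background for the angular-declination interpretation (a standard geometric relation from the perception literature, not a derivation given in the paper's abstract): for a target resting on flat ground, an observer with eye height h who sees the target at a declination angle α below the eye-level horizon can recover its egocentric distance d as

    d = h / tan(α)

Because this relation depends only on the observer's own eye height and the target's angular position, it remains available even when all other contextual cues to the surrounding space have been removed, which is why near-exclusive reliance on it would be consistent with the pattern of results summarized above.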

Link to full paper

Egocentric Distance Judgments in Full-Cue Video-See-Through VR Conditions are No Better than Distance Judgments to Targets in a Void

Keywords

graphics, virtual reality, virtual environments