Capture to Rendering Pipeline for Generating Dynamically Relightable Virtual Objects with Handheld RGB-D Cameras [conference paper]

Conference

26th ACM Symposium on Virtual Reality Software and Technology - November 1, 2020

Authors

Chih-Fan Chen, Evan Suma Rosenberg (assistant professor)

Abstract

We present a complete end-to-end pipeline for generating dynamically relightable virtual objects captured using a single handheld consumer-grade RGB-D camera. The proposed system plausibly replicates the geometry, texture, illumination, and surface reflectance properties of non-Lambertian objects, making them suitable for integration within virtual reality scenes that contain arbitrary illumination. First, the geometry of the target object is reconstructed from depth images captured with the handheld camera. To obtain nearly drift-free texture maps of the virtual object, a set of selected images from the original color stream is used for camera pose optimization. Our approach further separates these images into diffuse (view-independent) and specular (view-dependent) components using low-rank decomposition. The lighting conditions during capture and the reflectance properties of the virtual object are subsequently estimated from the computed specular maps. By combining these parameters with the diffuse texture, the reconstructed model can then be rendered in real time within virtual reality scenes that plausibly replicate the real-world illumination at the point of capture. Furthermore, these objects can interact with arbitrary virtual lights that vary in direction, intensity, and color.
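To illustrate the low-rank decomposition step described above, the sketch below shows one common formulation of this idea: stack the observed intensities of each texel across the selected color frames into a matrix M, then split M into a low-rank term L (the view-independent diffuse component, nearly constant across views) and a sparse term S (the view-dependent specular component) by solving min ||L||_* + λ||S||_1 subject to M = L + S. This is a minimal robust-PCA sketch under that assumption, not the paper's actual implementation; the function names and parameter defaults (robust_pca, lam, mu) are illustrative.

```python
import numpy as np

def shrink(X, tau):
    # Soft thresholding: proximal operator of the L1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def robust_pca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Split M into a low-rank part L (diffuse) and a sparse part S
    (specular) by solving min ||L||_* + lam*||S||_1 s.t. M = L + S
    with the inexact augmented Lagrange multiplier method."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))   # standard choice from Candes et al.
    if mu is None:
        mu = 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    Y = np.zeros_like(M)                 # Lagrange multipliers
    S = np.zeros_like(M)
    for _ in range(max_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)   # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)       # sparse update
        residual = M - L - S
        Y += mu * residual
        if np.linalg.norm(residual) <= tol * np.linalg.norm(M):
            break
    return L, S

# Hypothetical usage: rows of M are texels, columns are selected views.
# Rows of L give a nearly view-independent diffuse texture; rows of S
# give per-view specular residuals used for lighting/reflectance estimation.
M = np.random.rand(1024, 12)
L, S = robust_pca(M)
```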

Link to full paper

Capture to Rendering Pipeline for Generating Dynamically Relightable Virtual Objects with Handheld RGB-D Cameras

Keywords

virtual reality