HUMBI: A Large Multiview Dataset of Human Body Expressions and Benchmark Challenge [preprint]

Preprint date

September 30, 2021

Authors

Jae Shin Yoon (Ph.D. student), Zhixuan Yu (Ph.D. student), Jaesik Park, Hyun Soo Park (assistant professor)

Abstract

This paper presents HUMBI, a new large-scale multiview dataset of human body expressions with natural clothing. The goal of HUMBI is to facilitate modeling the view-specific appearance and geometry of five primary body signals, namely gaze, face, hand, body, and garment, from a diverse set of people. A total of 107 synchronized HD cameras are used to capture 772 distinctive subjects across gender, ethnicity, age, and style. From the multiview image streams, we reconstruct high-fidelity body expressions using 3D mesh models, which allows representing view-specific appearance. We demonstrate that HUMBI is highly effective in learning and reconstructing a complete human model, and that it is complementary to existing datasets of human body expressions with limited views and subjects, such as MPII-Gaze, Multi-PIE, Human3.6M, and the Panoptic Studio dataset. Based on HUMBI, we formulate a new benchmark challenge: a pose-guided appearance rendering task that aims to substantially extend photorealism in modeling diverse human expressions in 3D, a key enabling factor of authentic social telepresence. HUMBI is publicly available at http://humbi-data.net.
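To make the scale of the capture concrete, the sketch below enumerates a multiview recording by subject, body signal, and camera. It is only an illustration under assumed conventions: the directory layout, file naming pattern, and the MultiviewFrame/enumerate_frames helpers are hypothetical and do not describe HUMBI's actual on-disk structure or API (consult http://humbi-data.net for that).

    from dataclasses import dataclass
    from pathlib import Path
    from typing import Dict, Iterator

    # Assumed layout, for illustration only:
    #   <root>/subject_<id>/<signal>/image/cam_<k>/frame_<t>.jpg

    SIGNALS = ("gaze", "face", "hand", "body", "garment")  # five primary body signals
    NUM_CAMERAS = 107                                      # synchronized HD cameras

    @dataclass
    class MultiviewFrame:
        """One time instant of one body signal, seen from every available camera."""
        subject: int
        signal: str
        frame: int
        views: Dict[int, Path]  # camera index -> image path

    def enumerate_frames(root: Path, subject: int, signal: str) -> Iterator[MultiviewFrame]:
        """Group the per-camera image files of one subject/signal by frame index."""
        assert signal in SIGNALS, f"unknown signal: {signal}"
        by_frame: Dict[int, Dict[int, Path]] = {}
        for cam in range(NUM_CAMERAS):
            cam_dir = root / f"subject_{subject}" / signal / "image" / f"cam_{cam:03d}"
            if not cam_dir.is_dir():
                continue  # this camera has no recording of this signal
            for img in sorted(cam_dir.glob("frame_*.jpg")):
                t = int(img.stem.split("_")[1])
                by_frame.setdefault(t, {})[cam] = img
        for t in sorted(by_frame):
            yield MultiviewFrame(subject, signal, t, by_frame[t])

With an iterator like this, a view-specific appearance model could, for example, sample two camera views of the same frame and use one as input and the other as supervision.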

Link to full paper

HUMBI: A Large Multiview Dataset of Human Body Expressions and Benchmark Challenge

Keywords

computer vision
