Learning Rolling Shutter Correction from Real Data without Camera Motion Assumption [preprint]

Preprint date

November 5, 2020

Authors

Jiawei Mo (Ph.D. student), Md Jahidul Islam (Ph.D. student), Junaed Sattar (assistant professor)

Abstract

The rolling shutter mechanism in modern cameras generates distortions as the image is formed on the sensor through a row-by-row readout process; this is highly undesirable for photography and vision-based algorithms (e.g., structure-from-motion and visual SLAM). In this paper, we propose a deep neural network to predict depth and camera poses for single-frame rolling shutter correction. Unlike the state-of-the-art, the proposed method makes no assumptions about camera motion. This is enabled by training on real images captured by rolling shutter cameras rather than on synthetic images generated under specific motion assumptions. As a result, the proposed method performs better on real rolling shutter images, making it possible for numerous vision-based algorithms to use imagery captured by rolling shutter cameras and still produce highly accurate results. Our evaluations on the TUM rolling shutter dataset using DSO and COLMAP validate the accuracy and robustness of the proposed method.
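To illustrate the geometry behind single-frame rolling shutter correction as described in the abstract, here is a minimal sketch, not the authors' implementation: assuming a pinhole camera model, a predicted per-pixel depth map, and a predicted pose for each image row (relative to the first row's capture time), each row is back-projected to 3D, moved into the reference frame, and re-projected. All function and variable names below are illustrative assumptions, not taken from the paper or its code.

```python
import numpy as np

def correct_rolling_shutter(image, depth, row_poses, K):
    """Warp a rolling-shutter image toward a global-shutter view (illustrative sketch).

    image:     (H, W) or (H, W, 3) rolling-shutter image
    depth:     (H, W) predicted depth per pixel
    row_poses: (H, 3, 4) predicted [R | t] for each row's capture time,
               mapping that row's camera frame to the reference (first-row) frame
    K:         (3, 3) camera intrinsics
    """
    H, W = depth.shape
    K_inv = np.linalg.inv(K)
    corrected = np.zeros_like(image)

    # Homogeneous pixel coordinates for every pixel.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)  # (H, W, 3)

    for row in range(H):
        R, t = row_poses[row, :, :3], row_poses[row, :, 3]
        # Back-project this row's pixels to 3D using the predicted depth ...
        rays = pix[row] @ K_inv.T                     # (W, 3)
        pts = rays * depth[row, :, None]              # 3D points in the row's frame
        # ... move them into the reference (global-shutter) frame ...
        pts_ref = pts @ R.T + t                       # (W, 3)
        # ... and project back with the camera intrinsics.
        proj = pts_ref @ K.T
        uv = proj[:, :2] / np.clip(proj[:, 2:3], 1e-6, None)
        ui = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
        vi = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
        # Forward-splat the row's pixels (nearest neighbor; no hole filling).
        corrected[vi, ui] = image[row, np.arange(W)]

    return corrected
```

In this sketch the depth map and per-row poses stand in for the network's predictions; how the paper parameterizes and supervises those predictions is described in the full text linked below.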

Link to full paper

Learning Rolling Shutter Correction from Real Data without Camera Motion Assumption

Keywords

computer vision
