Early Stopping for Deep Image Prior [preprint]

Preprint date

December 11, 2021

Authors

Hengkang Wang (Ph.D. student), Taihui Li (Ph.D. student), Zhong Zhuang (Ph.D. student), Tiancong Chen (Ph.D. student), Hengyue Liang, Ju Sun (assistant professor)

Abstract

Deep image prior (DIP) and its variants have shown remarkable potential for solving inverse problems in computer vision, without any extra training data. Practical DIP models are often substantially overparameterized. During the fitting process, these models first learn mostly the desired visual content, and then pick up the potential modeling and observational noise, i.e., they overfit. Thus, the practicality of DIP often depends critically on good early stopping (ES) that captures the transition period. In this regard, the majority of DIP works for vision tasks only demonstrate the potential of the models -- reporting peak performance against the ground truth -- but provide no clue about how to operationally obtain near-peak performance without access to the ground truth. In this paper, we set out to break this practicality barrier of DIP and propose an efficient ES strategy that consistently detects near-peak performance across several vision tasks and DIP variants. Based on a simple measure of dispersion of consecutive DIP reconstructions, our ES method not only outperforms the existing ones -- which only work in very narrow domains -- but also remains effective when combined with a number of methods that try to mitigate overfitting.
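The dispersion-based criterion in the abstract can be sketched as follows. This is a minimal illustration, assuming the dispersion measure is the mean per-pixel variance over a sliding window of recent reconstructions, and that training stops once the running minimum of that variance has not improved for a fixed number of checks; the window size, patience value, and function names are illustrative, not taken from the paper.

```python
import numpy as np

def windowed_variance(recons):
    """Dispersion of consecutive reconstructions: mean per-pixel
    variance across a window of W reconstructions (each an array)."""
    stack = np.stack(recons)          # shape (W, ...)
    return float(stack.var(axis=0).mean())

def early_stop_index(all_recons, window=10, patience=20):
    """Return the iteration index at which to stop: the end of the
    window with the lowest dispersion, once that running minimum has
    not improved for `patience` consecutive checks.

    Intuition: DIP outputs change rapidly early on (high dispersion),
    stabilize near the content-recovery phase (low dispersion), and
    grow noisy again as the model overfits (dispersion rises)."""
    best, best_idx, wait = np.inf, 0, 0
    for t in range(window, len(all_recons) + 1):
        v = windowed_variance(all_recons[t - window:t])
        if v < best:
            best, best_idx, wait = v, t - 1, 0
        else:
            wait += 1
            if wait >= patience:
                break                 # dispersion stopped improving
    return best_idx
```

In practice one would call `windowed_variance` on the fly during DIP fitting rather than storing every reconstruction; the list-based version above just keeps the sketch self-contained.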

Link to full paper

Early Stopping for Deep Image Prior

Keywords

computer vision
