Self-Validation: Early Stopping for Single-Instance Deep Generative Priors [preprint]

Preprint date

October 23, 2021


Taihui Li (Ph.D. student), Zhong Zhuang, Hengyue Liang, Le Peng (Ph.D. student), Hengkang Wang (Ph.D. student), Ju Sun (assistant professor)


Recent works have shown the surprising effectiveness of deep generative models in solving numerous image reconstruction (IR) tasks, even without training data. We refer to these models, such as the deep image prior and the deep decoder, collectively as single-instance deep generative priors (SIDGPs). Their successes, however, often hinge on appropriate early stopping (ES), which so far has largely been handled in an ad-hoc manner. In this paper, we propose the first principled method for ES when applying SIDGPs to IR, taking advantage of the typical bell-shaped trend of the reconstruction quality. In particular, our method is based on collaborative training and self-validation: the primal reconstruction process is monitored by a deep autoencoder, which is trained online on the historic reconstructed images and used to constantly validate the reconstruction quality. Experimentally, on several IR problems and different SIDGPs, our self-validation method reliably detects near-peak performance and signals good ES points. Our code is available at
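The core idea in the abstract is that the autoencoder's validation loss tracks the (inverted) bell-shaped reconstruction-quality curve, so stopping when that loss bottoms out lands near peak performance. As an illustration only, here is a minimal, hypothetical sketch of such a patience-based stopping rule applied to a simulated validation-loss curve; the `early_stop_point` helper and the toy loss series are assumptions for exposition, not the paper's actual implementation:

```python
def early_stop_point(val_losses, patience=5):
    """Return the index of the best (lowest) validation loss once no
    improvement has been seen for `patience` consecutive steps, or
    None if the curve never stops improving within the series."""
    best, best_i = float("inf"), 0
    for i, v in enumerate(val_losses):
        if v < best:
            best, best_i = v, i          # new best: reset the patience counter
        elif i - best_i >= patience:
            return best_i                # stalled for `patience` steps: stop here
    return None

# Simulated autoencoder validation loss: it falls as reconstruction quality
# rises, bottoms out, then climbs as the model starts fitting noise
# (the inverted bell-shaped trend described in the abstract).
losses = [1.0 / (t + 1) + 0.002 * max(0, t - 40) for t in range(100)]
stop = early_stop_point(losses, patience=5)
```

In this toy curve the loss is minimized at step 40, so the rule signals step 40 as the ES point; in the actual method the loss series would come from the online-trained autoencoder's reconstruction error rather than a closed-form curve.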

Link to full paper

Self-Validation: Early Stopping for Single-Instance Deep Generative Priors


machine learning