Rethink Autoencoders: Robust Manifold Learning [conference paper]

Conference

ICML Workshop on Uncertainty and Robustness in Deep Learning - July 17, 2020

Authors

Taihui Li (Ph.D. student), Rishabh Mehta (M.S. student), Zecheng Qian (undergraduate research assistant), Ju Sun (assistant professor)

Abstract

PCA can be made robust to data corruption, i.e., robust PCA. What about the deep autoencoder, as a nonlinear generalization of PCA? This question further motivates us to “reinvent” a factorization-based PCA as well as its nonlinear generalization. Focusing on sparse corruption, we model the sparsity structure explicitly using the ℓ1 norm to obtain various robust formulations. For linear data, robust factorization performs comparably to the seminal convex formulation of robust PCA, whereas robust autoencoders provably fail. For nonlinear data, we carefully evaluate robust deep autoencoders and robust nonlinear factorization on corruption removal from natural images. Both schemes can remove a considerable level of sparse corruption and effectively reconstruct the clean images.
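
The abstract's central modeling move, penalizing reconstruction error with the ℓ1 norm so that sparse corruption is rejected rather than fit, can be illustrated in a few lines. The PyTorch sketch below is not the authors' code: the architecture, synthetic data, corruption level, and hyperparameters are all illustrative assumptions, and it shows only the robust-autoencoder variant, not the factorization formulations studied in the paper.

import torch
import torch.nn as nn

# Synthetic data: points near a low-dimensional linear subspace,
# with sparse gross corruption on roughly 5% of the entries.
torch.manual_seed(0)
n, d, r = 500, 50, 5
clean = torch.randn(n, r) @ torch.randn(r, d)
mask = torch.rand(n, d) < 0.05
corruption = torch.zeros(n, d)
corruption[mask] = 10.0 * torch.randn(int(mask.sum()))
X = clean + corruption

# A small autoencoder with a bottleneck of width r. Swapping the usual
# nn.MSELoss for nn.L1Loss is the ℓ1-based robustification in question.
model = nn.Sequential(
    nn.Linear(d, 32), nn.ReLU(),
    nn.Linear(32, r),            # bottleneck
    nn.Linear(r, 32), nn.ReLU(),
    nn.Linear(32, d),
)
loss_fn = nn.L1Loss()            # ℓ1 reconstruction loss
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    loss_fn(model(X), X).backward()
    opt.step()

# Error against the *clean* data indicates how much sparse corruption
# was rejected by the ℓ1 loss rather than memorized by the network.
with torch.no_grad():
    rmse = (model(X) - clean).pow(2).mean().sqrt()
print(f"RMSE to clean data: {rmse:.3f}")

Intuitively, the ℓ1 loss tolerates a few large residuals at the corrupted entries, whereas a squared loss would force the network to chase them; this is the same sparsity rationale that underlies the factorization formulations in the paper.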

Link to full paper

Rethink Autoencoders: Robust Manifold Learning

Keywords

machine learning, deep learning
