Simplicity Bias in Deep Learning
Prateek Jain (Google Inc.)
While deep neural networks have achieved large gains in performance on benchmark datasets, their performance often degrades drastically under the distribution shifts encountered during real-world deployment. In this work, through systematic experiments and theoretical analysis, we attempt to understand the key reasons behind such brittleness of neural networks in real-world settings.
More concretely, we demonstrate through empirical and theoretical studies that (i) neural network training exhibits "simplicity bias" (SB), where the models learn only the simplest discriminative features, and (ii) SB is one of the key reasons behind the non-robustness of neural networks. We will then briefly outline some of our (unsuccessful) attempts so far at fixing SB in neural networks, illustrating why this is an exciting but challenging problem.
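The SB phenomenon described above can be sketched on synthetic data. The following toy construction is an illustrative assumption of ours, not the authors' exact experimental setup: each example carries two fully predictive features, one linearly separable ("simple") and one requiring a nonlinear slab rule ("complex"). A small MLP trained on both typically leans on the simple feature alone, so randomizing it at test time hurts far more than randomizing the complex one.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy dataset (our assumption, for illustration only): both coordinates fully
# determine the label, but coordinate 0 does so via a simple linear threshold
# while coordinate 1 requires a nonlinear |x|-style "slab" rule.
rng = np.random.default_rng(0)
n = 4000
y = rng.integers(0, 2, n)

x_simple = (2 * y - 1) + 0.1 * rng.standard_normal(n)       # linearly separable
slab = np.where(y == 1, 0.0, rng.choice([-1.0, 1.0], n))    # |x| separates classes
x_complex = slab + 0.1 * rng.standard_normal(n)
X = np.column_stack([x_simple, x_complex])

X_tr, y_tr, X_te, y_te = X[:3000], y[:3000], X[3000:], y[3000:]
# Small MLP; architecture and hyperparameters are arbitrary choices here.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)

def acc_with_column_shuffled(col):
    """Test accuracy after destroying one feature's label correlation."""
    Xs = X_te.copy()
    if col is not None:
        Xs[:, col] = rng.permutation(Xs[:, col])
    return clf.score(Xs, y_te)

acc_clean = acc_with_column_shuffled(None)
acc_no_simple = acc_with_column_shuffled(0)    # randomize the simple feature
acc_no_complex = acc_with_column_shuffled(1)   # randomize the complex feature
print(acc_clean, acc_no_simple, acc_no_complex)
```

If SB holds, accuracy collapses when the simple feature is destroyed but is largely unaffected when the complex one is, even though either feature alone suffices to classify perfectly.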