How Well Can We Generalize Nonlinear Learning Models in High Dimensions?

Inbar Seroussi (Weizmann Institute of Science)

Modern learning algorithms such as deep neural networks operate in regimes that defy traditional statistical learning theory. Neural network architectures often contain more parameters than training samples, yet despite this enormous complexity they achieve small generalization error on real data. In this talk, we study the generalization properties of learning algorithms in high dimensions. We first show that algorithms in high dimensions require a small bias for good generalization, and that this is indeed the case for deep neural networks in the over-parametrized regime. We then provide lower bounds on the generalization error, in various settings, that hold for any algorithm, and compute these bounds using random matrix theory (RMT). We will review the connection between deep neural networks and RMT, together with existing results. These bounds are particularly useful when an analytic evaluation of standard performance bounds is not possible due to the complexity and nonlinearity of the model, and they can serve as a benchmark for testing performance and optimizing the design of actual learning algorithms. Joint work with Ofer Zeitouni; more information at arxiv.org/abs/2103.14723.
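For context (not part of the abstract itself): the role that bias plays above can be read through the standard bias-variance decomposition of the expected squared error of an estimator; the notation below is generic and is not taken from the talk or the paper.

\[
\mathbb{E}\big[(\hat{f}(x) - f(x))^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[\big(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\big)^2\big]}_{\text{variance}}
\]

Here \(f\) is the target function, \(\hat{f}\) the learned estimator, and the expectation is over the random training sample; a small generalization error at a point \(x\) requires both terms, and in particular the bias, to be small.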

Inbar Seroussi is a postdoctoral fellow in the mathematics department at the Weizmann Institute of Science, hosted by Prof. Ofer Zeitouni. Previously, she completed her Ph.D. in the applied mathematics department at Tel-Aviv University under the supervision of Prof. Nir Sochen. Her research interests include the modeling of complex and random systems in high dimensions, with applications to modern machine learning, physics, and medical imaging. She develops and uses advanced tools drawn from statistical physics, stochastic calculus, and random matrix theory.

Start date
Tuesday, April 12, 2022, 1:25 p.m.
End date
Tuesday, April 12, 2022, 2:25 p.m.
Location
Zoom
