ISyE Seminar Series: Boris Hanin

"Ridgeless Interpolation in 1D with One Layer ReLU Networks and Tight Generalization Bounds for Learning Lipschitz Functions"

Presentation by Boris Hanin
Assistant Professor
Princeton ORFE

Wednesday, November 3, 2021
3:30-5:00 PM CST — Graduate Seminar and Reception (Zoom)

*Required attendance for students in IE 8773 and 8774

About the seminar:

In this talk, I will give a complete answer to the question of how neural networks use training data to make predictions on unseen inputs in a very simple setting. Namely, for a fixed dataset D = {(x_i, y_i), i = 1, ..., N} with x_i and y_i scalars, I will consider the space of all one-layer ReLU networks of arbitrary width that exactly fit this data and, among all such interpolants, achieve the minimal possible L_2-norm on the neuron weights. Intuitively, this is the space of “ridgeless ReLU interpolants,” in the sense that it consists of ReLU networks that minimize the mean squared error over D plus an infinitesimal L_2-regularization on the neuron weights. I will give a complete characterization of how such ridgeless ReLU interpolants can make predictions on the intervals (x_i, x_{i+1}) between consecutive data points. I will then explain how to use this characterization to obtain tight generalization bounds, uniformly over the infinite collection of ridgeless ReLU interpolants of a given dataset D, under the assumption that y_i = f(x_i) for a Lipschitz function f.
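As a rough numerical illustration of the setup described above (this is a sketch, not code from the talk), one can train a wide one-layer ReLU network on a toy 1D dataset to minimize the mean squared error plus a small L_2 penalty on the neuron weights, approximating the infinitesimal-regularization limit as the penalty tends to zero. The dataset, network width, penalty size, and all training hyperparameters below are illustrative assumptions.

```python
# Minimal sketch of the "ridgeless" objective: MSE over D plus a tiny L2
# penalty on the neuron weights (not the biases). All choices here
# (width, lam, learning rate, number of steps) are illustrative.
import torch

torch.manual_seed(0)

# Toy dataset D = {(x_i, y_i)}: y_i = f(x_i) for a Lipschitz f (here f = |x|).
x = torch.linspace(-1.0, 1.0, 8).unsqueeze(1)
y = x.abs()

width = 512  # a large finite width standing in for "arbitrary width"
w = torch.randn(width, 1).requires_grad_()                  # input weights
b = torch.randn(width).requires_grad_()                     # biases
a = (torch.randn(1, width) / width ** 0.5).requires_grad_() # output weights
c = torch.zeros(1, requires_grad=True)                      # output bias

def net(x):
    # One-layer (one hidden layer) ReLU network.
    return torch.relu(x @ w.T + b) @ a.T + c

lam = 1e-6  # small ridge penalty approximating the infinitesimal limit
opt = torch.optim.Adam([w, b, a, c], lr=1e-2)
for step in range(5000):
    opt.zero_grad()
    mse = ((net(x) - y) ** 2).mean()
    ridge = lam * (w.pow(2).sum() + a.pow(2).sum())
    (mse + ridge).backward()
    opt.step()

# As lam -> 0, minimizers approach the minimal-norm interpolants whose
# behavior on the intervals (x_i, x_{i+1}) the talk characterizes.
print(f"final MSE: {((net(x) - y) ** 2).mean().item():.2e}")
```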

Bio:

Boris Hanin works on theoretical machine learning, probability, and mathematical physics. He is currently an Assistant Professor in Operations Research and Financial Engineering (ORFE) at Princeton. Prior to Princeton, he was an Assistant Professor of Mathematics at Texas A&M; a Visiting Scholar at Facebook, Google, and the Simons Institute; and an NSF Postdoctoral Fellow in Mathematics at MIT.
