The effect of Leaky ReLUs on the training and generalization of overparameterized networks

Data Science Seminar

Yinglong Guo (University of Minnesota)


We investigate the training and generalization errors of overparameterized neural networks (NNs) with a wide class of leaky rectified linear unit (ReLU) activation functions. More specifically, we carefully upper bound both the convergence rate of the training error and the generalization error of such NNs, and we investigate how these bounds depend on the leaky ReLU parameter $\alpha$. We show that $\alpha = -1$, which corresponds to the absolute value activation function, is optimal for the training error bound. Furthermore, in special settings, it is also optimal for the generalization error bound. Numerical experiments support the practical choices suggested by the theory.
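
To see concretely why $\alpha = -1$ corresponds to the absolute value activation, recall the standard leaky ReLU, $\sigma_\alpha(x) = x$ for $x \ge 0$ and $\sigma_\alpha(x) = \alpha x$ for $x < 0$; setting $\alpha = -1$ makes the negative branch $-x$, giving $|x|$. The following minimal Python sketch (illustrative only; the function name leaky_relu is not from the talk) verifies this numerically:

    import numpy as np

    def leaky_relu(x, alpha):
        # Standard leaky ReLU: identity for x >= 0, slope alpha for x < 0.
        return np.where(x >= 0, x, alpha * x)

    x = np.linspace(-3.0, 3.0, 7)
    # With alpha = -1 the negative branch becomes -x, recovering |x|.
    assert np.allclose(leaky_relu(x, -1.0), np.abs(x))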

Date
Tuesday, Feb. 20, 2024, 1:25–2:25 p.m.

Location
Lind Hall 325 or via Zoom

Zoom registration