Normalization effects and mean field theory for deep neural networks

Data Science Seminar

Konstantinos Spiliopoulos (Boston University)

Abstract

We study the effect of normalization on the layers of deep neural networks. A given layer $i$ with $N_{i}$ hidden units is allowed to be normalized by $1/N_{i}^{\gamma_{i}}$ with $\gamma_{i}\in[1/2,1]$, and we study the effect of the choice of the $\gamma_{i}$ on the statistical behavior of the neural network's output (such as its variance) as well as on the test accuracy on the MNIST and CIFAR10 data sets. We find that, in terms of both the variance of the neural network's output and test accuracy, the best choice is to set the $\gamma_{i}$'s equal to one, which is the mean-field scaling. We also find that this is particularly true for the outer layer: the neural network's behavior is more sensitive to the scaling of the outer layer than to the scaling of the inner layers. The mechanism for the mathematical analysis is an asymptotic expansion of the neural network's output together with a corresponding mean-field analysis. An important practical consequence of the analysis is that it provides a systematic and mathematically informed way to choose the learning-rate hyperparameters; such a choice guarantees that the neural network behaves in a statistically robust way as the $N_i$'s grow to infinity.
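To make the scaling concrete, the following is a minimal sketch (in PyTorch; not the speaker's code) of a feedforward network in which the activations of hidden layer $i$ with $N_i$ units are divided by $N_i^{\gamma_i}$, so that the next layer's sum over $N_i$ units carries the $1/N_i^{\gamma_i}$ factor. The class name `ScaledMLP`, the `tanh` activation, and the widths are illustrative assumptions, not details from the talk.

```python
import torch
import torch.nn as nn

class ScaledMLP(nn.Module):
    """Feedforward net where hidden layer i (N_i units) is normalized
    by 1 / N_i**gamma_i, with gamma_i in [1/2, 1]; gamma_i = 1 for all
    i is the mean-field scaling discussed in the abstract."""

    def __init__(self, dims, gammas):
        # dims   = [input_dim, N_1, ..., N_L, output_dim]
        # gammas = [gamma_1, ..., gamma_L], one exponent per hidden layer
        super().__init__()
        assert len(gammas) == len(dims) - 2
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)
        )
        self.gammas = gammas

    def forward(self, x):
        for layer, gamma in zip(self.layers[:-1], self.gammas):
            # Divide layer i's activations by N_i**gamma_i; the last
            # gamma governs the outer-layer scaling, which the abstract
            # identifies as the one the network is most sensitive to.
            x = torch.tanh(layer(x)) / layer.out_features**gamma
        return self.layers[-1](x)

# Example: two hidden layers of width 1000 under mean-field scaling.
net = ScaledMLP([784, 1000, 1000, 10], gammas=[1.0, 1.0])
out = net(torch.randn(32, 784))  # shape (32, 10)
```

Note that under this scaling the gradients also shrink with the layer widths, which is why the analysis ties the choice of the $\gamma_i$'s to a matching, width-dependent choice of learning rate; the exact scaling is part of the talk's analysis and is not reproduced here.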

Start date
Tuesday, Nov. 14, 2023, 1:25 p.m.
End date
Tuesday, Nov. 14, 2023, 2:25 p.m.
Location
Lind Hall 325 and Zoom
