Products of Many Large Random Matrices and Gradients in Deep Neural Networks

Boris Hanin, Mihai Nica

Research output: Contribution to journal › Article › peer-review


Abstract

We study products of random matrices in the regime where the number of terms and the size of the matrices simultaneously tend to infinity. Our main theorem is that the logarithm of the ℓ2 norm of such a product applied to any fixed vector is asymptotically Gaussian. The fluctuations we find can be thought of as a finite temperature correction to the limit in which first the size and then the number of matrices tend to infinity. Depending on the scaling limit considered, the mean and variance of the limiting Gaussian depend only on either the first two or the first four moments of the measure from which matrix entries are drawn. We also obtain explicit error bounds on the moments of the norm and the Kolmogorov-Smirnov distance to a Gaussian. Finally, we apply our result to obtain precise information about the stability of gradients in randomly initialized deep neural networks with ReLU activations. This provides a quantitative measure of the extent to which the exploding and vanishing gradient problem occurs in a fully connected neural network with ReLU activations and a given architecture.
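The central limit behavior described in the abstract can be illustrated numerically. Below is a minimal simulation sketch, not taken from the paper: the Gaussian matrix entries, the 1/sqrt(n) normalization, the choice of fixed unit vector, and the specific values of n, N, and the number of trials are all illustrative assumptions, chosen only to show the kind of statement the theorem makes.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_norm_of_product(n, N, rng):
    """Return log || (1/sqrt(n)) W_N ... W_1 x || for i.i.d. Gaussian W_i
    and a fixed unit vector x (illustrative setup, not the paper's exact one)."""
    v = np.zeros(n)
    v[0] = 1.0  # fixed unit vector x
    log_norm = 0.0
    for _ in range(N):
        W = rng.standard_normal((n, n)) / np.sqrt(n)  # entries with variance 1/n
        v = W @ v
        # Accumulate the log-norm incrementally to avoid numerical over/underflow.
        nv = np.linalg.norm(v)
        log_norm += np.log(nv)
        v /= nv
    return log_norm

n, N, trials = 100, 50, 500  # matrix size, number of factors, number of samples
samples = np.array([log_norm_of_product(n, N, rng) for _ in range(trials)])
print("mean of log-norm:", samples.mean())
print("std  of log-norm:", samples.std())
# A histogram of `samples` should look approximately Gaussian, consistent with
# the asymptotic normality of the log of the l2 norm stated in the abstract.
```

In this regime both the number of factors and the matrix size are large simultaneously, which is the scaling limit the paper studies; the empirical distribution of the log-norm samples serves only as a sanity check of the qualitative claim, not of the paper's precise mean and variance formulas.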

Original language: English (US)
Pages (from-to): 287-322
Number of pages: 36
Journal: Communications in Mathematical Physics
Volume: 376
Issue number: 1
DOIs
State: Published - May 1, 2020
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Statistical and Nonlinear Physics
  • Mathematical Physics
