TY - CONF
T1 - Deep ReLU Networks Preserve Expected Length
AU - Hanin, Boris
AU - Jeong, Ryan
AU - Rolnick, David
N1 - Funding Information:
The authors gratefully acknowledge a range of funding sources that supported this research. BH would like to acknowledge NSF grants DMS-1855684 and DMS-2133806, as well as NSF CAREER grant DMS-2143754 and an ONR MURI on Foundations of Deep Learning. DR would like to acknowledge support from the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grants program and the Canada CIFAR AI Chairs program.
Publisher Copyright:
© 2022 ICLR 2022 - 10th International Conference on Learning Representations. All rights reserved.
PY - 2022
Y1 - 2022
AB - Assessing the complexity of functions computed by a neural network helps us understand how the network will learn and generalize. One natural measure of complexity is how the network distorts length: if the network takes a unit-length curve as input, what is the length of the resulting curve of outputs? It has been widely believed that this length grows exponentially in network depth. We prove that in fact this is not the case: the expected length distortion does not grow with depth, and indeed shrinks slightly, for ReLU networks with standard random initialization. We also generalize this result by proving upper bounds both for higher moments of the length distortion and for the distortion of higher-dimensional volumes. These theoretical results are corroborated by our experiments.
UR - http://www.scopus.com/inward/record.url?scp=85132309710&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85132309710&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85132309710
T2 - 10th International Conference on Learning Representations, ICLR 2022
Y2 - 25 April 2022 through 29 April 2022
ER -