TY - JOUR
T1 - Towards understanding hierarchical learning: Benefits of neural representations
T2 - 34th Conference on Neural Information Processing Systems, NeurIPS 2020
AU - Chen, Minshuo
AU - Bai, Yu
AU - Lee, Jason D.
AU - Zhao, Tuo
AU - Wang, Huan
AU - Xiong, Caiming
AU - Socher, Richard
N1 - Funding Information:
We thank the anonymous reviewers for the suggestions. We thank Song Mei for the discussions about the concentration of long-tailed covariance matrices. JDL acknowledges support of the ARO under MURI Award W911NF-11-1-0303, the Sloan Research Fellowship, and NSF CCF 2002272.
Publisher Copyright:
© 2020 Neural information processing systems foundation. All rights reserved.
PY - 2020
Y1 - 2020
AB - Deep neural networks can empirically perform efficient hierarchical learning, in which the layers learn useful representations of the data. However, how they make use of the intermediate representations is not explained by recent theories that relate them to “shallow learners” such as kernels. In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks and can be advantageous over raw inputs. We consider a fixed, randomly initialized neural network as a representation function fed into another trainable network. When the trainable network is the quadratic Taylor model of a wide two-layer network, we show that neural representations can achieve improved sample complexities compared with the raw input: For learning a low-rank degree-p polynomial (p ≥ 4) in d dimensions, neural representations require only Õ(d^⌈p/2⌉) samples, while the best-known sample complexity upper bound for the raw input is Õ(d^(p-1)). We contrast our result with a lower bound showing that neural representations do not improve over the raw input (in the infinite-width limit) when the trainable network is instead a neural tangent kernel. Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
UR - http://www.scopus.com/inward/record.url?scp=85108441185&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85108441185&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85108441185
SN - 1049-5258
VL - 2020-December
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
Y2 - 6 December 2020 through 12 December 2020
ER -