Abstract
Deep learning has achieved tremendous success in recent years. In simple words, deep learning uses the composition of many nonlinear functions to model the complex dependency between input features and labels. While neural networks have a long history, recent advances have significantly improved their empirical performance in computer vision, natural language processing and other predictive tasks. From the statistical and scientific perspective, it is natural to ask: What is deep learning? What are the new characteristics of deep learning, compared with classical statistical methods? What are the theoretical foundations of deep learning? To answer these questions, we introduce common neural network models (e.g., convolutional neural nets, recurrent neural nets, generative adversarial nets) and training techniques (e.g., stochastic gradient descent, dropout, batch normalization) from a statistical point of view. Along the way, we highlight new characteristics of deep learning (including depth and overparametrization) and explain their practical and theoretical benefits. We also sample recent results on theories of deep learning, many of which are only suggestive. While a complete understanding of deep learning remains elusive, we hope that our perspectives and discussions serve as a stimulus for new statistical research.
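For readers skimming the record, a minimal sketch of the two notions named in the abstract, the composition of nonlinear functions and stochastic gradient descent, is given below. The notation (depth L, weights W_ℓ, biases b_ℓ, activation σ, step sizes η_t) is ours for illustration and is not taken from the paper.

```latex
% Minimal sketch (our notation, not the paper's): a depth-L network as a
% composition of nonlinear maps, trained by stochastic gradient descent.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% A depth-L feed-forward network composes L nonlinear maps from input x to a prediction:
\[
  f_{\theta}(\mathbf{x})
    = \mathbf{W}_L\,\sigma\!\bigl(\mathbf{W}_{L-1}\cdots
      \sigma(\mathbf{W}_1\mathbf{x} + \mathbf{b}_1)\cdots + \mathbf{b}_{L-1}\bigr)
      + \mathbf{b}_L ,
  \qquad \theta = \{\mathbf{W}_\ell, \mathbf{b}_\ell\}_{\ell=1}^{L}.
\]
% Stochastic gradient descent updates the parameters with a mini-batch B_t of examples:
\[
  \theta_{t+1}
    = \theta_t - \eta_t\,\frac{1}{|B_t|}\sum_{i \in B_t}
      \nabla_{\theta}\,\ell\bigl(f_{\theta_t}(\mathbf{x}_i),\, y_i\bigr).
\]
\end{document}
```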
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 264-290 |
| Number of pages | 27 |
| Journal | Statistical Science |
| Volume | 36 |
| Issue number | 2 |
| DOIs | |
| State | Published - May 2021 |
All Science Journal Classification (ASJC) codes
- Statistics and Probability
- General Mathematics
- Statistics, Probability and Uncertainty
Keywords
- Neural networks
- approximation theory
- generalization error
- overparametrization
- stochastic gradient descent