Universal function approximation by deep neural nets with bounded width and ReLU activations

Research output: Contribution to journal › Article › peer-review

160 Scopus citations

Abstract

This article concerns the expressive power of depth in neural nets with ReLU activations and a bounded width. We are particularly interested in the following questions: What is the minimal width w_min(d) so that ReLU nets of width w_min(d) (and arbitrary depth) can approximate any continuous function on the unit cube [0, 1]^d arbitrarily well? For ReLU nets near this minimal width, what can one say about the depth necessary to approximate a given function? We obtain an essentially complete answer to these questions for convex functions. Our approach is based on the observation that, due to the convexity of the ReLU activation, ReLU nets are particularly well suited to represent convex functions. In particular, we prove that ReLU nets with width d + 1 can approximate any continuous convex function of d variables arbitrarily well. These results then give quantitative depth estimates for the rate of approximation of any continuous scalar function on the d-dimensional cube [0, 1]^d by ReLU nets with width d + 3.
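The abstract's central observation, that a convex function can be written as a maximum of affine functions and that such a maximum fits naturally into a narrow ReLU net, can be illustrated with a short sketch. The NumPy code below is an illustrative reconstruction of this idea and not the paper's exact construction: it evaluates max_k (a_k · x + b_k) on [0, 1]^d using only layers of width d + 1 (d channels carry x, on which ReLU acts as the identity, and one extra channel carries a shifted running maximum). The function names and the tangent-plane demo for f(x) = ||x||^2 are hypothetical choices made for this example.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def max_affine_narrow_relu(x, A, b):
    """Evaluate max_k (A[k] @ x + b[k]) for x in [0, 1]^d using only
    width-(d + 1) ReLU layers.

    d channels carry the input x (ReLU is the identity on the nonnegative
    unit cube), and one extra channel carries the running maximum in the
    shifted form relu(max_so_far - current_affine_piece).
    """
    coords = relu(x)                 # layer 0: pass x through unchanged
    prev = A[0] @ coords + b[0]      # value of the first affine piece
    h = 0.0                          # relu(max_so_far - prev) = 0 so far
    for k in range(1, A.shape[0]):
        cur = A[k] @ coords + b[k]
        # Hidden layer k: the extra channel's pre-activation is affine in the
        # carried coordinates and the channel itself.
        h = relu(h + prev - cur)     # = max(ell_0, ..., ell_k) - ell_k
        coords = relu(coords)        # still the identity on [0, 1]^d
        prev = cur
    return h + prev                  # final affine layer: undo the shift

if __name__ == "__main__":
    # Demo: approximate the convex function f(x) = ||x||^2 on [0, 1]^2 by the
    # maximum of its tangent planes 2 p.x - ||p||^2 at a 5 x 5 grid of points p.
    pts = np.array([[i / 4.0, j / 4.0] for i in range(5) for j in range(5)])
    A = 2.0 * pts
    b = -np.sum(pts ** 2, axis=1)
    rng = np.random.default_rng(0)
    for x in rng.uniform(0.0, 1.0, size=(3, 2)):
        print(f"net: {max_affine_narrow_relu(x, A, b):.4f}   f(x): {np.sum(x**2):.4f}")
```

The design choice behind the sketch is that the extra channel stores relu(max_so_far − current affine piece), which is always nonnegative, so the ReLU never discards information; the final linear output layer simply adds the last affine piece back.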

Original language: English (US)
Article number: 992
Journal: Mathematics
Volume: 7
Issue number: 10
DOIs
State: Published - Oct 1 2019
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • General Mathematics

Keywords

  • Approximation theory
  • Deep neural nets
  • ReLU networks

