Kolmogorov width decay and poor approximators in machine learning: shallow neural networks, random feature models and neural tangent kernels

Weinan E, Stephan Wojtowytsch

Research output: Contribution to journal › Article › peer-review

9 Scopus citations

Abstract

We establish a scale separation of Kolmogorov width type between subspaces of a given Banach space under the condition that a sequence of linear maps converges much faster on one of the subspaces. The general technique is then applied to show that reproducing kernel Hilbert spaces are poor L²-approximators for the class of two-layer neural networks in high dimension, and that multi-layer networks with small path norm are poor approximators for certain Lipschitz functions, also in the L²-topology.
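For orientation (background not contained in this record, and stated here in its standard form rather than the paper's own generalization), the Kolmogorov n-width of a set K in a normed space X measures how well K can be approximated by n-dimensional linear subspaces:

    d_n(K, X) = \inf_{\dim V_n \le n} \, \sup_{f \in K} \, \inf_{g \in V_n} \| f - g \|_X,

where the outer infimum ranges over all linear subspaces V_n of X with dimension at most n. The scale-separation result described in the abstract concerns how quickly such widths can decay when the approximating functions come from a reproducing kernel Hilbert space and the target class consists of two-layer (Barron-type) network functions.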

Original language: English (US)
Article number: 5
Journal: Research in the Mathematical Sciences
Volume: 8
Issue number: 1
DOIs
State: Published - Mar 2021

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Mathematics (miscellaneous)
  • Computational Mathematics
  • Applied Mathematics

Keywords

  • Approximation theory
  • Barron space
  • Curse of dimensionality
  • Kolmogorov width
  • Multi-layer network
  • Neural tangent kernel
  • Population risk
  • Random feature model
  • Reproducing kernel Hilbert space
  • Two-layer network
