Maximum principle based algorithms for deep learning

Qianxiao Li, Long Chen, Cheng Tai, E. Weinan

Research output: Contribution to journal › Article › peer-review

13 Scopus citations

Abstract

The continuous dynamical systems approach to deep learning is explored in order to devise alternative frameworks for training algorithms. Training is recast as a control problem, which allows us to formulate necessary optimality conditions in continuous time using Pontryagin's maximum principle (PMP). A modification of the method of successive approximations is then used to solve the PMP, giving rise to an alternative training algorithm for deep learning. This approach has the advantage that rigorous error estimates and convergence results can be established. We also show that it may avoid some pitfalls of gradient-based methods, such as slow convergence on flat landscapes near saddle points. Furthermore, we demonstrate that it achieves a favorable initial per-iteration convergence rate, provided the Hamiltonian maximization can be carried out efficiently, a step that is still in need of improvement. Overall, the approach opens up new avenues for attacking problems associated with deep learning, such as trapping in slow manifolds and the inapplicability of gradient-based methods to discrete trainable variables.
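The abstract's core loop — forward state propagation, backward costate propagation, then layer-wise Hamiltonian maximization — can be sketched for a toy scalar "network". This is a minimal illustrative sketch of the basic method of successive approximations (MSA), not the authors' modified algorithm: the dynamics `x_{t+1} = x_t + tanh(theta_t * x_t)`, the quadratic terminal loss, the grid-search maximization, and all names are assumptions chosen so that plain MSA stabilizes on this toy problem (in general, unmodified MSA can oscillate, which is what motivates the paper's modification).

```python
# Toy MSA sketch (illustrative assumptions, not the paper's implementation):
# layers x_{t+1} = x_t + tanh(theta_t * x_t), terminal loss (x_T - target)^2.
import math

T = 5                                      # number of layers (discrete time steps)
target = 5.5                               # desired terminal state (assumed)
x0 = 0.5                                   # input / initial state (assumed)
grid = [i / 10 for i in range(-20, 21)]    # candidate controls for the argmax

def f(x, th):
    # Per-layer increment of the dynamics.
    return math.tanh(th * x)

def forward(theta):
    # Forward pass: propagate the state through all T layers.
    xs = [x0]
    for t in range(T):
        xs.append(xs[t] + f(xs[t], theta[t]))
    return xs

def loss(theta):
    return (forward(theta)[-1] - target) ** 2

def msa_step(theta):
    # 1. Forward pass with the current controls.
    xs = forward(theta)
    # 2. Backward pass: propagate the costate p_t from p_T = -dLoss/dx_T.
    ps = [0.0] * (T + 1)
    ps[T] = -2.0 * (xs[T] - target)
    for t in reversed(range(T)):
        # d/dx [x + tanh(th*x)] = 1 + th * (1 - tanh(th*x)^2)
        th = theta[t]
        dfdx = 1.0 + th * (1.0 - math.tanh(th * xs[t]) ** 2)
        ps[t] = ps[t + 1] * dfdx
    # 3. Maximize the Hamiltonian H = p_{t+1} * f(x_t, th) layer by layer.
    return [max(grid, key=lambda th, t=t: ps[t + 1] * f(xs[t], th))
            for t in range(T)]

theta = [0.0] * T
for _ in range(10):
    theta = msa_step(theta)
```

Note that the Hamiltonian maximization in step 3 replaces the gradient step of backpropagation entirely, which is why the approach remains applicable when the trainable variables are discrete (here the controls are already restricted to a finite grid).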

Original language: English (US)
Pages (from-to): 1-29
Number of pages: 29
Journal: Journal of Machine Learning Research
Volume: 18
State: Published - Apr 1 2018

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence

Keywords

  • Deep learning
  • Method of successive approximations
  • Optimal control
  • Pontryagin’s maximum principle

