Linearly recurrent autoencoder networks for learning dynamics

Samuel E. Otto, Clarence Worth Rowley

Research output: Contribution to journal › Article › peer-review


Abstract

This paper describes a method for learning low-dimensional approximations of nonlinear dynamical systems, based on neural network approximations of the underlying Koopman operator. Extended Dynamic Mode Decomposition (EDMD) provides a useful data-driven approximation of the Koopman operator for analyzing dynamical systems. This paper addresses a fundamental problem associated with EDMD: a trade-off between the representational capacity of the dictionary and overfitting due to insufficient data. A new neural network architecture combining an autoencoder with linear recurrent dynamics in the encoded state is used to learn a low-dimensional and highly informative Koopman-invariant subspace of observables. A method is also presented for balanced model reduction of overspecified EDMD systems in feature space. Nonlinear reconstruction using partially linear multikernel regression aims to improve reconstruction accuracy from the low-dimensional state when the data have complex but intrinsically low-dimensional structure. The techniques are demonstrated by identifying Koopman eigenfunctions of the unforced Duffing equation, constructing accurate low-dimensional models of an unstable cylinder wake flow, and making short-time predictions of the chaotic Kuramoto-Sivashinsky equation.
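As a rough illustration of the architecture the abstract describes, the sketch below pairs an encoder/decoder with a learned linear operator that advances the latent code one time step, so that z_{t+1} ≈ K z_t in the encoded state. This is a minimal assumption-laden sketch, not the authors' implementation: the class and function names, layer sizes, activation choice, and loss weighting alpha are all illustrative.

```python
# Minimal sketch (not the authors' code) of an autoencoder with linear
# recurrent dynamics in the encoded state. All names and sizes are
# illustrative assumptions.
import torch
import torch.nn as nn

class LinearlyRecurrentAutoencoder(nn.Module):
    def __init__(self, state_dim, latent_dim, hidden=128):
        super().__init__()
        # Encoder maps the full state x_t to a low-dimensional code z_t.
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, latent_dim),
        )
        # Decoder reconstructs the state from the code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )
        # Linear operator advancing the code one step: z_{t+1} ~ K z_t.
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)

    def forward(self, x_t):
        z_t = self.encoder(x_t)
        return self.decoder(z_t), self.K(z_t)

def lran_loss(model, x_t, x_next, alpha=1.0):
    """Reconstruction error plus a penalty enforcing linear latent dynamics."""
    x_rec, z_pred = model(x_t)
    z_next = model.encoder(x_next)
    recon = nn.functional.mse_loss(x_rec, x_t)
    linear = nn.functional.mse_loss(z_pred, z_next)
    return recon + alpha * linear
```

Training such a model on snapshot pairs (x_t, x_{t+1}) encourages the encoder to span a Koopman-invariant subspace, after which the eigendecomposition of K provides approximate Koopman eigenvalues and eigenfunctions.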

Original language: English (US)
Pages (from-to): 558-593
Number of pages: 36
Journal: SIAM Journal on Applied Dynamical Systems
Volume: 18
Issue number: 1
DOIs
State: Published - 2019

All Science Journal Classification (ASJC) codes

  • Analysis
  • Modeling and Simulation

Keywords

  • Data-driven analysis
  • High-dimensional systems
  • Koopman operator
  • Neural networks
  • Nonlinear systems
  • Reduced-order modeling

