Hamiltonian GAN

Research output: Contribution to journal › Conference article › peer-review

Abstract

A growing body of work leverages the Hamiltonian formalism as an inductive bias for physically plausible, neural-network-based video generation. The structure of the Hamiltonian ensures conservation of a learned quantity (e.g., energy) and imposes a phase-space interpretation on the low-dimensional manifold underlying the input video. While this interpretation has the potential to facilitate the integration of learned representations in downstream tasks, existing methods are limited in their applicability because they require a structural prior for the configuration space at design time. In this work, we present a GAN-based video generation pipeline with a learned configuration-space map and a Hamiltonian neural network motion model, which together allow us to learn a representation of the configuration space from data. We train our model with a physics-inspired cyclic-coordinate loss function which encourages a minimal representation of the configuration space and improves interpretability. We demonstrate the efficacy and advantages of our approach on the Hamiltonian Dynamics Suite Toy Physics dataset.
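The conservation property the abstract relies on can be illustrated outside the paper's pipeline. The sketch below (not the authors' code) integrates Hamilton's equations, dq/dt = ∂H/∂p and dp/dt = -∂H/∂q, with a structure-preserving (symplectic Euler) step; a simple quadratic `hamiltonian` stands in for the energy function that a Hamiltonian neural network would learn as an MLP.

```python
# Sketch of the Hamiltonian inductive bias: trajectories that follow
#   dq/dt =  dH/dp,   dp/dt = -dH/dq
# (approximately) conserve H. Here H is a hand-written harmonic
# oscillator; in a Hamiltonian neural network, H would be learned.

def hamiltonian(q, p):
    """Total energy of a unit-mass, unit-stiffness harmonic oscillator."""
    return 0.5 * p**2 + 0.5 * q**2

def symplectic_euler(q, p, dt, steps):
    """Integrate Hamilton's equations with a structure-preserving step."""
    traj = [(q, p)]
    for _ in range(steps):
        p = p - dt * q   # dp/dt = -dH/dq = -q, at the current q
        q = q + dt * p   # dq/dt =  dH/dp =  p, at the updated p
        traj.append((q, p))
    return traj

traj = symplectic_euler(q=1.0, p=0.0, dt=0.01, steps=1000)
energies = [hamiltonian(q, p) for q, p in traj]
print(f"energy drift over 1000 steps: {max(energies) - min(energies):.2e}")
```

A plain (non-symplectic) Euler step would let the energy grow without bound over the same horizon; the bounded drift here is the structural guarantee that motivates Hamiltonian motion models for long video rollouts.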

Original language: English (US)
Pages (from-to): 1662-1674
Number of pages: 13
Journal: Proceedings of Machine Learning Research
Volume: 242
State: Published - 2024
Event: 6th Annual Learning for Dynamics and Control Conference, L4DC 2024 - Oxford, United Kingdom
Duration: Jul 15 2024 - Jul 17 2024

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability

Keywords

  • Dynamics Learning
  • Generative modeling
  • Physics-Informed Machine Learning
  • Structure-Preserving Neural Networks
