Learning invariance with compact transforms

Anna T. Thomas, Albert Gu, Tri Dao, Atri Rudra, Christopher Ré

Research output: Contribution to conference › Paper › peer-review

Abstract

The problem of building machine learning models that admit efficient representations and also capture an appropriate inductive bias for the domain has recently attracted significant interest. Existing work for compressing deep learning pipelines has explored classes of structured matrices that exhibit forms of shift-invariance akin to convolutions. We leverage the displacement rank framework to automatically learn the structured class, allowing for adaptation to the invariances required for a given dataset while preserving asymptotically efficient multiplication and storage. In a setting with a small fixed parameter budget, our broad classes of structured matrices improve final accuracy by 5–7% on standard image classification datasets compared to conventional parameter constraining methods.
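The abstract refers to the displacement rank framework, in which a matrix is stored implicitly through displacement operators and low-rank factors. Below is a minimal, self-contained sketch (not the authors' code) of this idea: a matrix M is represented by operators A, B and factors G, H satisfying A M - M B = G Hᵀ, so only O(nr) parameters are stored instead of O(n²). The choice of circulant shift operators here is an illustrative assumption; the paper learns the structured class (including the operators) rather than fixing it.

```python
# Sketch of a low displacement rank (LDR) parameterization, assuming fixed
# circulant shift operators. Not the authors' implementation.
import numpy as np
from scipy.linalg import solve_sylvester

def shift_matrix(n, f):
    """Unit-f circulant shift Z_f: ones on the subdiagonal, f in the corner."""
    Z = np.zeros((n, n))
    Z[1:, :-1] = np.eye(n - 1)
    Z[0, -1] = f
    return Z

n, r = 8, 2                      # matrix size and displacement rank
A = shift_matrix(n, 1.0)         # assumed displacement operators; with this
B = shift_matrix(n, -1.0)        # fixed choice the class is Toeplitz-like

rng = np.random.default_rng(0)
G = rng.standard_normal((n, r))  # 2nr stored parameters in total
H = rng.standard_normal((n, r))

# Recover the dense matrix from the compact parameterization by solving the
# Sylvester equation A M + M (-B) = G H^T (unique because A and B have
# disjoint spectra: n-th roots of 1 vs. n-th roots of -1).
M = solve_sylvester(A, -B, G @ H.T)

# Check the displacement identity A M - M B = G H^T.
print(np.allclose(A @ M - M @ B, G @ H.T))   # True
```

With r fixed and small, matrix-vector products with M can also be computed in near-linear time from (A, B, G, H), which is the efficiency property the abstract refers to; the dense reconstruction above is only for illustration.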

Original language: English (US)
State: Published - 2018
Externally published: Yes
Event: 6th International Conference on Learning Representations, ICLR 2018 - Vancouver, Canada
Duration: Apr 30, 2018 - May 3, 2018

Conference

Conference: 6th International Conference on Learning Representations, ICLR 2018
Country/Territory: Canada
City: Vancouver
Period: 4/30/18 - 5/3/18

All Science Journal Classification (ASJC) codes

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics
