Consistency Regularization for Variational Auto-Encoders

Samarth Sinha, Adji B. Dieng

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



Variational auto-encoders (vaes) are a powerful approach to unsupervised learning. They enable scalable approximate posterior inference in latent-variable models using variational inference (vi). A vae posits a variational family parameterized by a deep neural network—called an encoder—that takes data as input. This encoder is shared across all the observations, which amortizes the cost of inference. However, the encoder of a vae has the undesirable property that it maps a given observation and a semantics-preserving transformation of it to different latent representations. This "inconsistency" of the encoder lowers the quality of the learned representations, especially for downstream tasks, and also negatively affects generalization. In this paper, we propose a regularization method to enforce consistency in vaes. The idea is to minimize the Kullback-Leibler (kl) divergence between the variational distribution when conditioning on the observation and the variational distribution when conditioning on a random semantics-preserving transformation of this observation. This regularization is applicable to any vae. In our experiments, we apply it to four different vae variants on several benchmark datasets and find that it not only improves the quality of the learned representations but also leads to better generalization. In particular, when applied to the nouveau variational auto-encoder (nvae), our regularization method yields state-of-the-art performance on mnist, cifar-10, and celeba. We also applied our method to 3D data and found it learns representations of superior quality as measured by accuracy on a downstream classification task. Finally, we show our method can even outperform the triplet loss, an advanced and popular contrastive learning-based method for representation learning.
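The regularizer the abstract describes can be sketched concretely. Assuming a standard diagonal-Gaussian encoder, the kl divergence between the variational distribution at an observation and at a transformed observation has a closed form. The sketch below is illustrative only and is not the authors' implementation; `encode`, `transform`, and the weight `lam` are hypothetical stand-ins.

```python
import numpy as np

def kl_diag_gaussians(mu1, logvar1, mu2, logvar2):
    """Closed-form KL( N(mu1, var1) || N(mu2, var2) ) for diagonal
    Gaussians, summed over latent dimensions."""
    var1, var2 = np.exp(logvar1), np.exp(logvar2)
    return 0.5 * np.sum(logvar2 - logvar1 + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def consistency_penalty(encode, x, transform, lam=1.0):
    """Consistency term added to the VAE objective:
    lam * KL( q(z | x) || q(z | t(x)) ), where t is a random
    semantics-preserving transformation (e.g. a small crop or flip).
    `encode` maps an observation to (mean, log-variance)."""
    mu_x, logvar_x = encode(x)            # variational params for x
    mu_t, logvar_t = encode(transform(x)) # variational params for t(x)
    return lam * kl_diag_gaussians(mu_x, logvar_x, mu_t, logvar_t)
```

In training, this penalty would be added to the usual negative ELBO; it vanishes exactly when the encoder assigns identical variational distributions to an observation and its transformation, which is the consistency property being enforced.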

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Editors: Marc'Aurelio Ranzato, Alina Beygelzimer, Yann Dauphin, Percy S. Liang, Jenn Wortman Vaughan
Publisher: Neural Information Processing Systems Foundation
Number of pages: 12
ISBN (Electronic): 9781713845393
State: Published - 2021
Event: 35th Conference on Neural Information Processing Systems, NeurIPS 2021 - Virtual, Online
Duration: Dec 6, 2021 - Dec 14, 2021

Publication series

Name: Advances in Neural Information Processing Systems
ISSN (Print): 1049-5258


Conference: 35th Conference on Neural Information Processing Systems, NeurIPS 2021
City: Virtual, Online

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

