Abstract
Variational autoencoders (VAEs) learn distributions of high-dimensional data. They model data with a deep latent-variable model and then fit the model by maximizing a lower bound of the log marginal likelihood. VAEs can capture complex distributions, but they can also suffer from an issue known as "latent variable collapse," especially if the likelihood model is powerful. Specifically, the lower bound involves an approximate posterior of the latent variables; this posterior "collapses" when it is set equal to the prior, i.e., when the approximate posterior is independent of the data. While VAEs learn good generative models, latent variable collapse prevents them from learning useful representations. In this paper, we propose a simple new way to avoid latent variable collapse by including skip connections in our generative model; these connections enforce strong links between the latent variables and the likelihood function. We study generative skip models both theoretically and empirically. Theoretically, we prove that skip models increase the mutual information between the observations and the inferred latent variables. Empirically, we study images (MNIST and Omniglot) and text (Yahoo). Compared to existing VAE architectures, we show that generative skip models maintain similar predictive performance but lead to less collapse and provide more meaningful representations of the data.
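The bound the abstract refers to is the evidence lower bound (ELBO). The rendering below uses standard VAE notation (θ for the generative-model parameters, φ for the approximate posterior's parameters), which is our assumption rather than notation taken from this page:

```latex
% ELBO: a lower bound on the log marginal likelihood \log p_\theta(x).
\mathcal{L}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x \mid z)\big]
    - \mathrm{KL}\!\big(q_\phi(z \mid x) \,\|\, p(z)\big)
  \le \log p_\theta(x)
% Latent variable collapse: q_\phi(z \mid x) = p(z) for all x, so the KL term
% vanishes and the inferred z carries no information about the data.
```

The skip-connection idea can also be sketched in code. The following is a minimal PyTorch illustration under our own assumptions (the class name `SkipDecoder`, the layer sizes, and the Bernoulli output for binarized images are hypothetical), not the authors' implementation; the essential point it shows is that the latent variable z is re-injected at every decoder layer, so the likelihood keeps a direct dependence on z:

```python
import torch
import torch.nn as nn

class SkipDecoder(nn.Module):
    """Decoder with generative skip connections: z feeds every layer."""

    def __init__(self, z_dim=32, h_dim=256, x_dim=784, n_layers=3):
        super().__init__()
        self.in_layer = nn.Linear(z_dim, h_dim)
        # Each hidden layer sees the previous hidden state *and* z.
        self.hidden = nn.ModuleList(
            [nn.Linear(h_dim + z_dim, h_dim) for _ in range(n_layers)]
        )
        self.out_layer = nn.Linear(h_dim + z_dim, x_dim)

    def forward(self, z):
        h = torch.relu(self.in_layer(z))
        for layer in self.hidden:
            # Skip connection: concatenate z back in at every layer.
            h = torch.relu(layer(torch.cat([h, z], dim=-1)))
        # Logits of a Bernoulli likelihood p(x | z) over binarized pixels.
        return self.out_layer(torch.cat([h, z], dim=-1))
```

In a standard VAE decoder, z enters only at the first layer and deeper layers can learn to ignore it; wiring z into every layer, including the output, is what enforces the strong link between the latent variables and the likelihood that the abstract describes.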
| Original language | English (US) |
|---|---|
| Pages (from-to) | 2397-2405 |
| Number of pages | 9 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 89 |
| State | Published - 2019 |
| Externally published | Yes |
| Event | 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019 - Naha, Japan |
| Duration | Apr 16 2019 → Apr 18 2019 |
All Science Journal Classification (ASJC) codes
- Software
- Control and Systems Engineering
- Statistics and Probability
- Artificial Intelligence