Variational autoencoder
Deep-learning generative model that encodes data in a probabilistic latent representation

In machine learning, a variational autoencoder (VAE) is an artificial neural network architecture introduced by Diederik P. Kingma and Max Welling. It belongs to the families of probabilistic graphical models and variational Bayesian methods, linking an encoder and decoder through a probabilistic latent space. Unlike traditional autoencoders, VAEs map inputs to distributions (e.g., a multivariate Gaussian distribution) rather than to points, which helps to avoid overfitting. While originally designed for unsupervised learning, VAEs have demonstrated effectiveness in both semi-supervised learning and supervised learning, and are typically trained using the reparameterization trick to optimize the encoder and decoder networks together.


Overview of architecture and operation

A variational autoencoder is a generative model with a prior over the latent variables and a noise (observation) distribution. Usually such models are trained using the expectation-maximization meta-algorithm (e.g. probabilistic PCA, (spike & slab) sparse coding). Such a scheme optimizes a lower bound of the data likelihood, which is usually computationally intractable, and in doing so requires the discovery of q-distributions, or variational posteriors. These q-distributions are normally parameterized for each individual data point in a separate optimization process. However, variational autoencoders use a neural network as an amortized approach to jointly optimize across data points. In that way, the same parameters are reused for multiple data points, which can result in massive memory savings. The first neural network takes the data points themselves as input and outputs the parameters of the variational distribution. Because it maps from a known input space to a low-dimensional latent space, it is called the encoder.

The decoder is the second neural network of this model. It is a function that maps from the latent space to the input space, for example as the mean of the noise distribution. It is possible to use another neural network that maps to the variance; however, this can be omitted for simplicity, in which case the variance can be optimized with gradient descent.
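
The pair can be written down concretely as two small networks. The following is a minimal sketch in PyTorch, assuming a diagonal-Gaussian variational posterior; the layer sizes, MLP layout, and variable names are illustrative assumptions rather than part of the original formulation:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an input x to the parameters (mean, log-variance) of q_phi(z|x)."""
    def __init__(self, x_dim=784, h_dim=256, z_dim=20):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)       # mean of the variational posterior
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance (diagonal covariance)

    def forward(self, x):
        h = self.hidden(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    """Maps a latent code z to the mean of the noise distribution p_theta(x|z)."""
    def __init__(self, x_dim=784, h_dim=256, z_dim=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim),            # outputs the mean of p(x|z)
        )

    def forward(self, z):
        return self.net(z)
```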

To optimize this model, one needs to know two terms: the "reconstruction error" and the Kullback–Leibler divergence (KL-D). Both terms are derived from the free energy expression of the probabilistic model, and therefore differ depending on the noise distribution and the assumed prior of the data, here referred to as the p-distribution. For example, a standard VAE task such as IMAGENET is typically assumed to have Gaussian noise, whereas tasks such as binarized MNIST require Bernoulli noise. The KL-D from the free energy expression maximizes the probability mass of the q-distribution that overlaps with the p-distribution, which unfortunately can result in mode-seeking behaviour. The "reconstruction" term is the remainder of the free energy expression, and requires a sampling approximation to compute its expectation value.[8]
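
As a concrete illustration of how the assumed noise distribution changes the reconstruction term, the snippet below contrasts the Gaussian and Bernoulli cases; the tensors `x` and `x_hat` (a batch of inputs and raw decoder outputs) are placeholder assumptions:

```python
import torch
import torch.nn.functional as F

x = torch.rand(16, 784)        # a batch of inputs, values in [0, 1]
x_hat = torch.randn(16, 784)   # raw decoder outputs (means / logits)

# Gaussian noise model (e.g. natural images): the negative log-likelihood is,
# up to an additive constant and scale, a squared-error reconstruction term.
recon_gaussian = F.mse_loss(x_hat, x, reduction='sum')

# Bernoulli noise model (e.g. binarized MNIST): the negative log-likelihood is
# the binary cross-entropy between the decoded probabilities and the data.
recon_bernoulli = F.binary_cross_entropy(torch.sigmoid(x_hat), x, reduction='sum')
```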

More recent approaches replace the Kullback–Leibler divergence (KL-D) with various statistical distances; see the section "Statistical distance VAE variants" below.

Formulation

From the point of view of probabilistic modeling, one wants to maximize the likelihood of the data $x$ under a chosen parameterized probability distribution $p_\theta(x) = p(x|\theta)$. This distribution is usually chosen to be a Gaussian $N(x|\mu,\sigma)$, parameterized by $\mu$ and $\sigma$; as a member of the exponential family, it is easy to work with as a noise distribution. Simple distributions are easy enough to maximize, but a distribution in which a prior is assumed over the latents $z$ results in intractable integrals. Let us find $p_\theta(x)$ by marginalizing over $z$:

$$p_\theta(x) = \int_z p_\theta(x,z)\,dz,$$

where $p_\theta(x,z)$ represents the joint distribution under $p_\theta$ of the observable data $x$ and its latent representation or encoding $z$. According to the chain rule, the equation can be rewritten as

$$p_\theta(x) = \int_z p_\theta(x|z)\,p_\theta(z)\,dz.$$

In the vanilla variational autoencoder, $z$ is usually taken to be a finite-dimensional vector of real numbers, and $p_\theta(x|z)$ to be a Gaussian distribution. Then $p_\theta(x)$ is a mixture of Gaussian distributions.
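
Concretely, sampling from this mixture is done ancestrally: draw $z$ from the prior, then draw $x$ from $p_\theta(x|z)$. A minimal sketch, in which the decoder network and all dimensions are chosen purely for illustration:

```python
import torch
import torch.nn as nn

# A toy decoder standing in for D_theta (an illustrative assumption).
decoder = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 784))

z = torch.randn(64, 20)                  # z ~ p(z) = N(0, I)
x_mean = decoder(z)                      # mean of the Gaussian component p_theta(x|z)
x = x_mean + torch.randn_like(x_mean)    # x ~ N(x_mean, I): ancestral sampling
```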

It is now possible to define the set of relationships between the input data and its latent representation as

  • Prior: $p_\theta(z)$
  • Likelihood: $p_\theta(x|z)$
  • Posterior: $p_\theta(z|x)$

Unfortunately, the computation of $p_\theta(z|x)$ is expensive and in most cases intractable. To make the calculation feasible, it is necessary to introduce a further function that approximates the posterior distribution:

$$q_\phi(z|x) \approx p_\theta(z|x)$$

with $\phi$ defined as the set of real values that parametrize $q$. This is sometimes called amortized inference, since by "investing" in finding a good $q_\phi$, one can later infer $z$ from $x$ quickly without doing any integrals.

In this way, the problem is to find a good probabilistic autoencoder, in which the conditional likelihood distribution $p_\theta(x|z)$ is computed by the probabilistic decoder, and the approximated posterior distribution $q_\phi(z|x)$ is computed by the probabilistic encoder.

Parametrize the encoder as $E_\phi$ and the decoder as $D_\theta$.

Evidence lower bound (ELBO)

Main article: Evidence lower bound

Like many deep learning approaches that use gradient-based optimization, VAEs require a differentiable loss function to update the network weights through backpropagation.

For variational autoencoders, the idea is to jointly optimize the generative model parameters $\theta$ to reduce the reconstruction error between the input and the output, and $\phi$ to make $q_\phi(z|x)$ as close as possible to $p_\theta(z|x)$. Mean squared error and cross entropy are often used as reconstruction losses.

As distance loss between the two distributions, the Kullback–Leibler divergence $D_{KL}(q_\phi(z|x) \parallel p_\theta(z|x))$ is a good choice to squeeze $q_\phi(z|x)$ under $p_\theta(z|x)$.[9][10]

The distance loss just defined is expanded as

$$\begin{aligned} D_{KL}(q_\phi(z|x) \parallel p_\theta(z|x)) &= \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln \frac{q_\phi(z|x)}{p_\theta(z|x)}\right] \\ &= \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln \frac{q_\phi(z|x)\,p_\theta(x)}{p_\theta(x,z)}\right] \\ &= \ln p_\theta(x) + \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln \frac{q_\phi(z|x)}{p_\theta(x,z)}\right] \end{aligned}$$

Now define the evidence lower bound (ELBO):
$$L_{\theta,\phi}(x) := \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln \frac{p_\theta(x,z)}{q_\phi(z|x)}\right] = \ln p_\theta(x) - D_{KL}(q_\phi(\cdot|x) \parallel p_\theta(\cdot|x))$$
Maximizing the ELBO,
$$\theta^*, \phi^* = \underset{\theta,\phi}{\operatorname{argmax}}\, L_{\theta,\phi}(x),$$
is equivalent to simultaneously maximizing $\ln p_\theta(x)$ and minimizing $D_{KL}(q_\phi(z|x) \parallel p_\theta(z|x))$. That is, maximizing the log-likelihood of the observed data, and minimizing the divergence of the approximate posterior $q_\phi(\cdot|x)$ from the exact posterior $p_\theta(\cdot|x)$.

The form given is not very convenient for maximization, but the following equivalent form is:
$$L_{\theta,\phi}(x) = \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln p_\theta(x|z)\right] - D_{KL}(q_\phi(\cdot|x) \parallel p_\theta(\cdot))$$
where $\ln p_\theta(x|z)$ is implemented as $-\frac{1}{2}\|x - D_\theta(z)\|_2^2$, since that is, up to an additive constant, what $x \sim \mathcal{N}(D_\theta(z), I)$ yields. That is, we model the distribution of $x$ conditional on $z$ to be a Gaussian distribution centered on $D_\theta(z)$. The distributions of $q_\phi(z|x)$ and $p_\theta(z)$ are often also chosen to be Gaussians, as $z|x \sim \mathcal{N}(E_\phi(x), \sigma_\phi(x)^2 I)$ and $z \sim \mathcal{N}(0, I)$, with which we obtain, by the formula for the KL divergence of Gaussians:
$$L_{\theta,\phi}(x) = -\frac{1}{2}\mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\|x - D_\theta(z)\|_2^2\right] - \frac{1}{2}\left(N\sigma_\phi(x)^2 + \|E_\phi(x)\|_2^2 - 2N\ln\sigma_\phi(x)\right) + \mathrm{Const}$$
Here $N$ is the dimension of $z$. For a more detailed derivation and more interpretations of the ELBO and its maximization, see its main page.
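
The following is a minimal sketch of this objective as a training loss (the negative ELBO, to be minimized). It uses a per-dimension diagonal posterior variance `exp(logvar)` rather than the single scalar $\sigma_\phi(x)$ of the display above; that slight generalization, and all names, are implementation assumptions:

```python
import torch

def negative_elbo(x, x_hat, mu, logvar):
    """-L_{theta,phi}(x) for a Gaussian decoder N(D_theta(z), I) and a
    diagonal-Gaussian posterior N(mu, diag(exp(logvar))) with prior N(0, I)."""
    # Reconstruction term: 1/2 ||x - D_theta(z)||^2, estimated with one sample z
    # whose decoding is x_hat.
    recon = 0.5 * (x - x_hat).pow(2).sum(dim=-1)
    # Closed-form KL(q(z|x) || p(z)) for diagonal Gaussians:
    # 1/2 * sum(sigma^2 + mu^2 - 1 - log sigma^2).
    kl = 0.5 * (logvar.exp() + mu.pow(2) - 1.0 - logvar).sum(dim=-1)
    return (recon + kl).mean()
```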

Reparameterization

To efficiently search for $\theta^*, \phi^* = \underset{\theta,\phi}{\operatorname{argmax}}\, L_{\theta,\phi}(x)$, the typical method is gradient ascent.

It is straightforward to find
$$\nabla_\theta \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln \frac{p_\theta(x,z)}{q_\phi(z|x)}\right] = \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\nabla_\theta \ln \frac{p_\theta(x,z)}{q_\phi(z|x)}\right]$$
However,
$$\nabla_\phi \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln \frac{p_\theta(x,z)}{q_\phi(z|x)}\right]$$
does not allow one to put the $\nabla_\phi$ inside the expectation, since $\phi$ appears in the probability distribution itself. The reparameterization trick (also known as stochastic backpropagation[11]) bypasses this difficulty.[12][13][14]

The most important example is when $z \sim q_\phi(\cdot|x)$ is normally distributed, as $\mathcal{N}(\mu_\phi(x), \Sigma_\phi(x))$.

This can be reparametrized by letting $\epsilon \sim \mathcal{N}(0, I)$ be a "standard random number generator", and constructing $z$ as $z = \mu_\phi(x) + L_\phi(x)\epsilon$. Here, $L_\phi(x)$ is obtained by the Cholesky decomposition:
$$\Sigma_\phi(x) = L_\phi(x) L_\phi(x)^T$$
Then we have
$$\nabla_\phi \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln \frac{p_\theta(x,z)}{q_\phi(z|x)}\right] = \mathbb{E}_\epsilon\left[\nabla_\phi \ln \frac{p_\theta(x, \mu_\phi(x) + L_\phi(x)\epsilon)}{q_\phi(\mu_\phi(x) + L_\phi(x)\epsilon \mid x)}\right]$$
and so we obtain an unbiased estimator of the gradient, allowing stochastic gradient descent.

Since we reparametrized $z$, we need to find $q_\phi(z|x)$. Let $q_0$ be the probability density function for $\epsilon$; then
$$\ln q_\phi(z|x) = \ln q_0(\epsilon) - \ln|\det(\partial_\epsilon z)|$$
where $\partial_\epsilon z$ is the Jacobian matrix of $z$ with respect to $\epsilon$. Since $z = \mu_\phi(x) + L_\phi(x)\epsilon$, this is
$$\ln q_\phi(z|x) = -\frac{1}{2}\|\epsilon\|^2 - \ln|\det L_\phi(x)| - \frac{n}{2}\ln(2\pi)$$
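
For the diagonal-Gaussian posterior that is most common in practice, $L_\phi(x)$ is simply the elementwise standard deviation, and the trick reduces to the following sketch (names are illustrative):

```python
import torch

def reparameterize(mu, logvar):
    """Draw z ~ N(mu, diag(exp(logvar))) as a differentiable function of (mu, logvar)."""
    std = torch.exp(0.5 * logvar)   # plays the role of L_phi(x) for a diagonal covariance
    eps = torch.randn_like(std)     # epsilon ~ N(0, I), drawn independently of phi
    return mu + std * eps           # z = mu_phi(x) + L_phi(x) * epsilon
```

Because `eps` does not depend on $\phi$, the gradient can be moved inside the expectation over $\epsilon$, and automatic differentiation propagates it through `mu` and `std`.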

Variations

Many applications and extensions of variational autoencoders have been used to adapt the architecture to other domains and to improve its performance.

$\beta$-VAE is an implementation with a weighted Kullback–Leibler divergence term to automatically discover and interpret factorised latent representations. With this implementation, it is possible to force manifold disentanglement for $\beta$ values greater than one. This architecture can discover disentangled latent factors without supervision.[15][16]
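
In implementation terms this amounts to a single scalar weight $\beta$ on the KL term of the loss. A sketch under the same Gaussian assumptions as before (the value $\beta = 4$ and all names are only examples):

```python
def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction + beta * KL(q(z|x) || N(0, I)).
    beta = 1 recovers the standard VAE; beta > 1 pushes the posterior towards
    the factorised prior, encouraging disentangled latent factors."""
    recon = 0.5 * (x - x_hat).pow(2).sum(dim=-1)
    kl = 0.5 * (logvar.exp() + mu.pow(2) - 1.0 - logvar).sum(dim=-1)
    return (recon + beta * kl).mean()
```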

The conditional VAE (CVAE) inserts label information in the latent space to force a deterministic constrained representation of the learned data.[17]
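
One common way to realize this conditioning (an illustrative choice, not the only one) is to concatenate a one-hot label vector to both the encoder input and the latent code fed to the decoder; the encoder here is assumed to return the posterior mean and log-variance:

```python
import torch

def cvae_forward(encoder, decoder, x, y_onehot):
    """Conditional VAE pass: condition both q(z|x, y) and p(x|z, y) on the label y."""
    mu, logvar = encoder(torch.cat([x, y_onehot], dim=-1))   # encoder sized for x_dim + num_classes
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterized sample of z
    x_hat = decoder(torch.cat([z, y_onehot], dim=-1))        # decoder sized for z_dim + num_classes
    return x_hat, mu, logvar
```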

Some structures directly deal with the quality of the generated samples[18][19] or implement more than one latent space to further improve the representation learning.

Some architectures mix VAE and generative adversarial networks to obtain hybrid models.[20][21][22]

It is not necessary to use gradients to update the encoder. In fact, the encoder is not necessary for the generative model.[23]

Statistical distance VAE variants

After the initial work of Diederik P. Kingma and Max Welling,[24] several procedures were proposed to formulate the operation of the VAE in a more abstract way. In these approaches the loss function is composed of two parts:

  • the usual reconstruction error part, which seeks to ensure that the encoder-then-decoder mapping $x \mapsto D_\theta(E_\phi(x))$ is as close to the identity map as possible; the sampling is done at run time from the empirical distribution $\mathbb{P}^{real}$ of objects available (e.g., for MNIST or IMAGENET this will be the empirical probability law of all images in the dataset). This gives the term $\mathbb{E}_{x \sim \mathbb{P}^{real}}\left[\|x - D_\theta(E_\phi(x))\|_2^2\right]$.
  • a variational part that ensures that, when the empirical distribution $\mathbb{P}^{real}$ is passed through the encoder $E_\phi$, we recover the target distribution, denoted here $\mu(dz)$, which is usually taken to be a multivariate normal distribution. We will denote by $E_\phi \sharp \mathbb{P}^{real}$ this pushforward measure, which in practice is just the empirical distribution obtained by passing all dataset objects through the encoder $E_\phi$. In order to make sure that $E_\phi \sharp \mathbb{P}^{real}$ is close to the target $\mu(dz)$, a statistical distance $d$ is invoked, and the term $d\left(\mu(dz), E_\phi \sharp \mathbb{P}^{real}\right)^2$ is added to the loss.

We obtain the final formula for the loss:
$$L_{\theta,\phi} = \mathbb{E}_{x \sim \mathbb{P}^{real}}\left[\|x - D_\theta(E_\phi(x))\|_2^2\right] + d\left(\mu(dz), E_\phi \sharp \mathbb{P}^{real}\right)^2$$
The statistical distance $d$ must have certain properties depending on the type of algorithm used to minimize this loss function. For example, it has to be expressible as an expectation if it is to be optimized by a stochastic optimization algorithm. Several distances can be chosen, and this has given rise to several flavors of VAEs (a minimal implementation sketch follows the list below):

  • the sliced Wasserstein distance used by S. Kolouri, et al. in their VAE[25]
  • the energy distance implemented in the Radon Sobolev Variational Auto-Encoder[26]
  • the Maximum Mean Discrepancy distance used in the MMD-VAE[27]
  • the Wasserstein distance used in the WAEs[28]
  • kernel-based distances used in the Kernelized Variational Autoencoder (K-VAE)[29]
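
As a concrete instance of this family, the sketch below uses a squared maximum mean discrepancy with a Gaussian (RBF) kernel as the statistical distance $d$, in the spirit of the MMD-VAE; the deterministic encoder, the kernel bandwidth, and all names are illustrative assumptions:

```python
import torch

def rbf_mmd2(a, b, bandwidth=1.0):
    """Biased sample estimate of the squared maximum mean discrepancy between a and b."""
    def kernel(u, v):
        d2 = torch.cdist(u, v).pow(2)            # pairwise squared Euclidean distances
        return torch.exp(-d2 / (2.0 * bandwidth ** 2))
    return kernel(a, a).mean() + kernel(b, b).mean() - 2.0 * kernel(a, b).mean()

def statistical_distance_vae_loss(encoder, decoder, x):
    """Reconstruction + d(target prior, pushforward of the data through the encoder)^2."""
    z = encoder(x)                               # deterministic code E_phi(x)
    recon = (x - decoder(z)).pow(2).sum(dim=-1).mean()
    z_prior = torch.randn_like(z)                # samples from the target mu(dz) = N(0, I)
    return recon + rbf_mmd2(z, z_prior)
```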

References

  1. Kingma, Diederik P.; Welling, Max (2022-12-10). "Auto-Encoding Variational Bayes". arXiv:1312.6114 [stat.ML].

  2. Pinheiro Cinelli, Lucas; et al. (2021). "Variational Autoencoder". Variational Methods for Machine Learning with Applications to Deep Networks. Springer. pp. 111–149. doi:10.1007/978-3-030-70679-1_5. ISBN 978-3-030-70681-4.

  3. Dilokthanakul, Nat; Mediano, Pedro A. M.; Garnelo, Marta; Lee, Matthew C. H.; Salimbeni, Hugh; Arulkumaran, Kai; Shanahan, Murray (2017-01-13). "Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders". arXiv:1611.02648 [cs.LG].

  4. Hsu, Wei-Ning; Zhang, Yu; Glass, James (December 2017). "Unsupervised domain adaptation for robust speech recognition via variational autoencoder-based data augmentation". 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). pp. 16–23. arXiv:1707.06265. doi:10.1109/ASRU.2017.8268911. ISBN 978-1-5090-4788-8.

  5. Ehsan Abbasnejad, M.; Dick, Anthony; van den Hengel, Anton (2017). Infinite Variational Autoencoder for Semi-Supervised Learning. pp. 5888–5897. https://openaccess.thecvf.com/content_cvpr_2017/html/Abbasnejad_Infinite_Variational_Autoencoder_CVPR_2017_paper.html

  6. Xu, Weidi; Sun, Haoze; Deng, Chao; Tan, Ying (2017-02-12). "Variational Autoencoder for Semi-Supervised Text Classification". Proceedings of the AAAI Conference on Artificial Intelligence. 31 (1). doi:10.1609/aaai.v31i1.10966. https://ojs.aaai.org/index.php/AAAI/article/view/10966

  7. Kameoka, Hirokazu; Li, Li; Inoue, Shota; Makino, Shoji (2019-09-01). "Supervised Determined Source Separation with Multichannel Variational Autoencoder". Neural Computation. 31 (9): 1891–1914. doi:10.1162/neco_a_01217. PMID 31335290. https://direct.mit.edu/neco/article/31/9/1891/8494/Supervised-Determined-Source-Separation-with

  8. Kingma, Diederik P.; Welling, Max (2013-12-20). "Auto-Encoding Variational Bayes". arXiv:1312.6114 [stat.ML].

  9. Kingma, Diederik P.; Welling, Max (2013-12-20). "Auto-Encoding Variational Bayes". arXiv:1312.6114 [stat.ML].

  10. "From Autoencoder to Beta-VAE". Lil'Log. 2018-08-12. https://lilianweng.github.io/lil-log/2018/08/12/from-autoencoder-to-beta-vae.html

  11. Rezende, Danilo Jimenez; Mohamed, Shakir; Wierstra, Daan (2014-06-18). "Stochastic Backpropagation and Approximate Inference in Deep Generative Models". International Conference on Machine Learning. PMLR: 1278–1286. arXiv:1401.4082. https://proceedings.mlr.press/v32/rezende14.html

  12. Kingma, Diederik P.; Welling, Max (2013-12-20). "Auto-Encoding Variational Bayes". arXiv:1312.6114 [stat.ML].

  13. Bengio, Yoshua; Courville, Aaron; Vincent, Pascal (2013). "Representation Learning: A Review and New Perspectives". IEEE Transactions on Pattern Analysis and Machine Intelligence. 35 (8): 1798–1828. arXiv:1206.5538. doi:10.1109/TPAMI.2013.50. PMID 23787338. https://ieeexplore.ieee.org/document/6472238

  14. Kingma, Diederik P.; Rezende, Danilo J.; Mohamed, Shakir; Welling, Max (2014-10-31). "Semi-Supervised Learning with Deep Generative Models". arXiv:1406.5298 [cs.LG].

  15. Higgins, Irina; Matthey, Loic; Pal, Arka; Burgess, Christopher; Glorot, Xavier; Botvinick, Matthew; Mohamed, Shakir; Lerchner, Alexander (2016-11-04). beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. NeurIPS. https://openreview.net/forum?id=Sy2fzU9gl

  16. Burgess, Christopher P.; Higgins, Irina; Pal, Arka; Matthey, Loic; Watters, Nick; Desjardins, Guillaume; Lerchner, Alexander (2018-04-10). "Understanding disentangling in β-VAE". arXiv:1804.03599 [stat.ML].

  17. Sohn, Kihyuk; Lee, Honglak; Yan, Xinchen (2015-01-01). Learning Structured Output Representation using Deep Conditional Generative Models (PDF). NeurIPS. https://proceedings.neurips.cc/paper/2015/file/8d55a249e6baa5c06772297520da2051-Paper.pdf

  18. Dai, Bin; Wipf, David (2019-10-30). "Diagnosing and Enhancing VAE Models". arXiv:1903.05789 [cs.LG].

  19. Dorta, Garoe; Vicente, Sara; Agapito, Lourdes; Campbell, Neill D. F.; Simpson, Ivor (2018-07-31). "Training VAEs Under Structured Residuals". arXiv:1804.01050 [stat.ML].

  20. Larsen, Anders Boesen Lindbo; Sønderby, Søren Kaae; Larochelle, Hugo; Winther, Ole (2016-06-11). "Autoencoding beyond pixels using a learned similarity metric". International Conference on Machine Learning. PMLR: 1558–1566. arXiv:1512.09300. http://proceedings.mlr.press/v48/larsen16.html

  21. Bao, Jianmin; Chen, Dong; Wen, Fang; Li, Houqiang; Hua, Gang (2017). "CVAE-GAN: Fine-Grained Image Generation Through Asymmetric Training". pp. 2745–2754. arXiv:1703.10155 [cs.CV].

  22. Gao, Rui; Hou, Xingsong; Qin, Jie; Chen, Jiaxin; Liu, Li; Zhu, Fan; Zhang, Zhao; Shao, Ling (2020). "Zero-VAE-GAN: Generating Unseen Features for Generalized and Transductive Zero-Shot Learning". IEEE Transactions on Image Processing. 29: 3665–3680. doi:10.1109/TIP.2020.2964429. PMID 31940538. https://ieeexplore.ieee.org/document/8957359

  23. Drefs, J.; Guiraud, E.; Panagiotou, F.; Lücke, J. (2023). "Direct evolutionary optimization of variational autoencoders with binary latents". Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Lecture Notes in Computer Science. Vol. 13715. Springer Nature Switzerland. pp. 357–372. doi:10.1007/978-3-031-26409-2_22. ISBN 978-3-031-26408-5.

  24. Kingma, Diederik P.; Welling, Max (2022-12-10). "Auto-Encoding Variational Bayes". arXiv:1312.6114 [stat.ML].

  25. Kolouri, Soheil; Pope, Phillip E.; Martin, Charles E.; Rohde, Gustavo K. (2019). "Sliced Wasserstein Auto-Encoders". International Conference on Learning Representations. https://openreview.net/forum?id=H1xaJn05FQ

  26. Turinici, Gabriel (2021). "Radon-Sobolev Variational Auto-Encoders". Neural Networks. 141: 294–305. arXiv:1911.13135. doi:10.1016/j.neunet.2021.04.018. PMID 33933889. https://www.sciencedirect.com/science/article/pii/S0893608021001556

  27. Gretton, A.; Li, Y.; Swersky, K.; Zemel, R.; Turner, R. (2017). "A Polya Contagion Model for Networks". IEEE Transactions on Control of Network Systems. 5 (4): 1998–2010. arXiv:1705.02239. doi:10.1109/TCNS.2017.2781467.

  28. Tolstikhin, I.; Bousquet, O.; Gelly, S.; Schölkopf, B. (2018). "Wasserstein Auto-Encoders". arXiv:1711.01558 [stat.ML].

  29. Louizos, C.; Shi, X.; Swersky, K.; Li, Y.; Welling, M. (2019). "Kernelized Variational Autoencoders". arXiv:1901.02401 [astro-ph.CO].