Discovering variational autoencoders 

Data management

Target audience: Students, PhD students
Format: Video capsule
Languages: French, English


by Alexandros Kalousis

In this capsule we give a high-level view of Variational Autoencoders (VAEs), a family of generative models in which an encoder maps instances to a latent space and a decoder maps samples from the latent space back to the original input space. This encoding-decoding architecture enables several interesting applications, such as conditional generation and style transfer. In addition, the presence of a decoder makes it easy to incorporate domain knowledge, such as physical laws, grounding the semantics of the latent space in real-world entities. We conclude with a small example on gait modelling.
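For readers who want to see what this encoder-decoder structure looks like concretely, a minimal VAE sketch in PyTorch follows. The layer sizes, variable names, and loss formulation are illustrative assumptions and are not drawn from the capsule itself.

```python
# Minimal VAE sketch (illustrative; dimensions and names are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F


class VAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        # Encoder: maps an instance x to the parameters of q(z | x).
        self.enc = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.log_var = nn.Linear(256, latent_dim)
        # Decoder: maps a latent sample z back to the input space.
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, input_dim)
        )

    def forward(self, x):
        h = self.enc(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterisation trick: sample z while keeping gradients.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return self.dec(z), mu, log_var


def vae_loss(x, x_hat, mu, log_var):
    # Reconstruction term plus KL divergence to a standard normal prior.
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl
```

Sampling from the prior and passing the draws through the decoder yields new instances, which is what makes the conditional-generation and style-transfer applications mentioned above possible.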


Prerequisite skills

  • Not specified

Skills worked on

  • Exploitation (level D)