An Instability in Variational Methods for Learning Topic Models

Presented by: 
Andrea Montanari, Stanford University
Date: 
Tuesday 16th January 2018 - 09:00 to 09:45
Venue: 
INI Seminar Room 1
Abstract: 
Topic models are extremely useful for extracting latent degrees of freedom from large unlabeled datasets. Variational Bayes algorithms are the approach most commonly used by practitioners to learn topic models. Their appeal lies in the promise of reducing Bayesian inference to an optimization problem. I will show that, even within an idealized Bayesian scenario, variational methods display an instability that can lead to misleading results. [Based on joint work with Behrooz Ghorbani and Hamid Javadi]
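To make the optimization framing concrete, here is a minimal sketch of fitting a topic model with batch variational Bayes, using scikit-learn's LatentDirichletAllocation. The toy corpus, the number of topics, and the seed are illustrative assumptions, not the setup studied in the talk.

```python
# Minimal sketch (not the speaker's code): learning a topic model with
# variational Bayes via scikit-learn's LatentDirichletAllocation.
# The toy corpus and all parameter values below are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "topic models extract latent structure from unlabeled text",
    "variational bayes reduces posterior inference to optimization",
    "the variational objective is non-convex and can be unstable",
]

# Bag-of-words document-term matrix.
X = CountVectorizer().fit_transform(docs)

# learning_method="batch" runs batch variational Bayes: it maximizes a
# lower bound on the marginal likelihood (the ELBO). Because the ELBO is
# non-convex, different initializations (random_state) can converge to
# different local optima, one way an instability like the one in the
# talk can surface in practice.
lda = LatentDirichletAllocation(n_components=2,
                                learning_method="batch",
                                random_state=0)
doc_topic = lda.fit_transform(X)  # per-document topic weights
print(np.round(doc_topic, 3))
```

Refitting with a different random_state and comparing the resulting doc_topic matrices is a quick way to see how sensitive the non-convex variational objective can be to initialization.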