variational inference

(55 minutes to learn)


In most probabilistic models of interest, it is intractable to compute posterior marginals or normalizing constants exactly. Variational inference is a framework for approximating both: it treats inference as an optimization problem, in which we search for a distribution (or a representation resembling a distribution) that is as close as possible to the true posterior according to some measure, most commonly the KL divergence.
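The "inference as optimization" idea can be illustrated with a deliberately tiny sketch (not from the source): we fit a Gaussian variational distribution q(z) = N(m, s²) to an unnormalized Gaussian target by gradient ascent on the evidence lower bound (ELBO). Because both distributions are Gaussian here, the expectations in the ELBO have closed forms and the optimum recovers the target exactly, which makes the example easy to verify; real applications use Monte Carlo gradient estimates instead.

```python
import math

# Illustrative sketch (assumptions: Gaussian target, closed-form ELBO).
# Unnormalized target posterior: log p(z) = -(z - mu)^2 / (2 sigma^2) + const.
# Variational family: q(z) = N(m, s^2). The ELBO is
#   ELBO(m, s) = E_q[log p(z)] + H(q)
#              = -((m - mu)^2 + s^2) / (2 sigma^2) + log s + const,
# and we maximize it by gradient ascent on (m, log s).

mu, sigma = 2.0, 0.5      # parameters of the true posterior
m, log_s = 0.0, 0.0       # variational parameters (log s keeps s positive)
lr = 0.05                 # step size

for _ in range(2000):
    s = math.exp(log_s)
    grad_m = -(m - mu) / sigma**2          # d ELBO / d m
    grad_log_s = 1.0 - s**2 / sigma**2     # d ELBO / d log s
    m += lr * grad_m
    log_s += lr * grad_log_s

print(round(m, 3), round(math.exp(log_s), 3))  # converges to (mu, sigma)
```

Since the family contains the target, the KL divergence is driven to zero and the fit is exact; with a non-Gaussian target the same procedure would instead find the closest Gaussian in KL.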


This concept has the prerequisites:

Core resources (read/watch one of the following)


Supplemental resources (the following are optional, but you may find them useful)


See also