inference in MRFs
(1.4 hours to learn)
Summary
One reason we build graphical models is so that we can perform inference, i.e. answer queries about the distribution. The most common queries include: (1) finding the marginal distribution of one or several nodes, (2) finding the most likely joint assignment, and (3) computing the partition function. Queries (1) and (3) are closely related, since both require summing the unnormalized probability over assignments to the remaining variables. While exact inference is intractable in the general case, there are powerful approximate inference algorithms, as well as interesting classes of tractable models.
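To make the three queries concrete, here is a minimal brute-force sketch on a hypothetical three-node binary chain MRF (the potential values are chosen arbitrarily for illustration). Enumeration is only feasible because the model is tiny; the algorithms listed below exist precisely because this sum blows up exponentially:

```python
import itertools
import numpy as np

# A tiny chain MRF x1 - x2 - x3 over binary variables, with one
# pairwise potential per edge (illustrative values only).
psi_12 = np.array([[10.0, 1.0], [1.0, 10.0]])  # favors x1 == x2
psi_23 = np.array([[5.0, 1.0], [1.0, 5.0]])    # favors x2 == x3

def unnorm(x1, x2, x3):
    """Unnormalized probability: product of the edge potentials."""
    return psi_12[x1, x2] * psi_23[x2, x3]

states = list(itertools.product([0, 1], repeat=3))

# (3) Partition function: sum of unnormalized probabilities over all states.
Z = sum(unnorm(*s) for s in states)

# (1) Marginal of x1: sum out x2 and x3, then normalize by Z.
p_x1 = np.array([sum(unnorm(v, x2, x3) for x2 in (0, 1) for x3 in (0, 1))
                 for v in (0, 1)]) / Z

# (2) Most likely joint assignment (MAP): argmax over all states.
map_state = max(states, key=lambda s: unnorm(*s))
```

Note how (1) and (3) both reduce to sums of `unnorm` over assignments, which is why they are closely related.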
Context
This concept has the prerequisites:
Core resources (read/watch one of the following)
(free)
→ Coursera: Probabilistic Graphical Models (2013)
An online course on probabilistic graphical models.
Other notes:
- Click on "Preview" to see the videos.
See also
- Under widely held assumptions, there is no efficient exact inference algorithm for graphical models.
- We can perform inference in Bayes nets by converting them to MRFs.
- Here are some algorithms for exact inference:
- variable elimination, a conceptually simple one
- belief propagation, an extension of variable elimination which reuses computations. This has two forms:
- sum-product, for computing marginals
- max-product, for computing the most likely assignment
- loopy belief propagation, which applies the BP update rules on non-tree graphs
- Markov chain Monte Carlo, a sampling-based method
- variational inference, which tries to find a tractable approximation to the posterior
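The computational savings of variable elimination can be sketched on a hypothetical binary chain (potential values generated randomly for illustration): each elimination step folds one variable into a running message, so a chain of n nodes costs O(n·k²) instead of the O(kⁿ) of brute-force enumeration:

```python
import itertools
import numpy as np

# Variable elimination on a binary chain MRF x1 - x2 - x3 - x4,
# with one pairwise potential per edge (arbitrary illustrative values).
rng = np.random.default_rng(0)
psis = [rng.uniform(0.5, 2.0, size=(2, 2)) for _ in range(3)]  # edges (i, i+1)

# Eliminate x1, x2, x3 in order: each matrix-vector product sums out
# one variable, reusing the partial sums computed so far.
msg = np.ones(2)
for psi in psis:
    msg = msg @ psi          # msg[j] = sum_i msg[i] * psi[i, j]

Z = msg.sum()                # partition function
p_last = msg / Z             # marginal of the last node, x4

# Brute-force check, feasible only on this tiny example.
def unnorm(xs):
    return np.prod([psis[i][xs[i], xs[i + 1]] for i in range(3)])

Z_bf = sum(unnorm(xs) for xs in itertools.product([0, 1], repeat=4))
```

Sum-product belief propagation generalizes exactly this message-passing trick from chains to trees; replacing the sums with maxes gives max-product.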