inference in MRFs
(1.4 hours to learn)
One reason we build graphical models is so we can perform inference, i.e., ask questions about the distribution. The most common queries include: (1) finding the marginal distribution of one or several nodes, (2) finding the most likely joint assignment (the MAP assignment), and (3) computing the partition function. Queries (1) and (3) are closely related, since computing a marginal requires normalizing by the partition function. While exact inference is intractable in the general case, there are powerful approximate inference algorithms, as well as interesting classes of tractable models.
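As a toy illustration of these three queries, here is brute-force inference by enumeration on a tiny chain-structured pairwise MRF. The potentials are made-up numbers chosen purely for illustration; enumeration only works because the model has just three binary variables.

```python
import itertools
import numpy as np

# A tiny pairwise MRF over three binary variables in a chain: x0 - x1 - x2.
# These potential tables are made-up numbers, purely for illustration.
psi_01 = np.array([[3.0, 1.0], [1.0, 3.0]])  # favors x0 == x1
psi_12 = np.array([[3.0, 1.0], [1.0, 3.0]])  # favors x1 == x2

def unnormalized(x):
    """Product of edge potentials for a joint assignment x = (x0, x1, x2)."""
    x0, x1, x2 = x
    return psi_01[x0, x1] * psi_12[x1, x2]

states = list(itertools.product([0, 1], repeat=3))

# Query (3): the partition function is the sum of unnormalized scores
# over all 2^3 joint assignments.
Z = sum(unnormalized(x) for x in states)

# Query (1): a marginal, e.g. P(x0 = 0), sums the unnormalized scores of
# all consistent assignments and normalizes by Z.
p_x0_0 = sum(unnormalized(x) for x in states if x[0] == 0) / Z

# Query (2): the most likely joint assignment (MAP) maximizes the
# unnormalized score; normalization does not change the argmax.
x_map = max(states, key=unnormalized)
```

Note how queries (1) and (3) share the same sum-over-states structure, which is why they are closely related; enumeration costs O(2^n), which is exactly what the inference algorithms below try to avoid.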
This concept has the prerequisites:
Core resources (read/watch one of the following)
→ Coursera: Probabilistic Graphical Models (2013)
An online course on probabilistic graphical models.
- Click on "Preview" to see the videos.
- Under widely held assumptions, there is no efficient exact inference algorithm for graphical models.
- We can perform inference in Bayes nets by converting them to MRFs.
- Here are some algorithms for exact inference:
- variable elimination, a conceptually simple one
- belief propagation, an extension of variable elimination which reuses computations; it is exact on trees
- And some algorithms for approximate inference:
- loopy belief propagation, which applies the BP update rules on non-tree graphs
- Markov chain Monte Carlo, a sampling-based method
- variational inference, which tries to find a tractable approximation to the posterior
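To make the contrast with brute-force enumeration concrete, here is a minimal sketch of variable elimination on a chain MRF, assuming binary variables and the same made-up pairwise potential on every edge. Summing out one variable at a time produces a "message" (a factor over the neighboring variable), so the cost is linear in the chain length rather than exponential.

```python
import numpy as np

# Variable elimination on a chain MRF x0 - x1 - ... - x_{n-1} with binary
# variables. The shared pairwise potential is a made-up example.
n = 5
psi = np.array([[3.0, 1.0], [1.0, 3.0]])  # potential on every edge

# To compute the marginal of x0, eliminate variables from the far end of
# the chain: summing out x_{n-1} yields a factor (message) over x_{n-2},
# and so on down the chain.
message = np.ones(2)  # trivial message into the last variable
for _ in range(n - 1):
    # message[x_{j-1}] = sum_{x_j} psi[x_{j-1}, x_j] * message[x_j]
    message = psi @ message

# The final factor over x0 is its unnormalized marginal; normalize it.
marginal_x0 = message / message.sum()
```

Each elimination step costs O(k^2) for k states per variable, so the whole chain takes O(n k^2) work instead of the O(k^n) of enumeration. Belief propagation generalizes this by caching such messages so that the marginals of all nodes can be computed in one pass over the tree.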