loopy belief propagation
(1.8 hours to learn)
The sum-product and max-product algorithms give exact answers on tree-structured graphical models, but applying the same update rules to a general graph often gives reasonable approximate results. This is known as loopy belief propagation, a widely used approximate inference algorithm in coding theory and low-level vision.
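To make the idea concrete, here is a minimal sketch of sum-product loopy BP on a small pairwise MRF, assuming numpy and binary variables; the variable names, potentials, and graph are illustrative choices, not from any particular source. The graph is a single loop of three variables.

```python
# Minimal sketch of loopy (sum-product) belief propagation on a pairwise
# MRF; all names and numbers here are illustrative assumptions.
import numpy as np

# Graph: a single loop over three binary variables, X0 - X1 - X2 - X0.
edges = [(0, 1), (1, 2), (2, 0)]
n_vars, n_states = 3, 2

# Unary potentials phi_i(x_i) and a shared pairwise potential
# psi(x_i, x_j); psi favors agreement between neighbors.
unary = np.array([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5]])
psi = np.array([[2.0, 1.0], [1.0, 2.0]])

# messages[(i, j)] is the message from variable i to variable j,
# initialized uniformly. On a tree these updates are exact; on a
# loopy graph we simply iterate them until (hopefully) they converge.
messages = {(i, j): np.ones(n_states) / n_states
            for a, b in edges for (i, j) in [(a, b), (b, a)]}

for sweep in range(50):
    max_diff = 0.0
    for (i, j) in list(messages):
        # Product of the unary potential and all incoming messages
        # except the one from j (the standard sum-product update).
        incoming = unary[i].copy()
        for (k, l) in messages:
            if l == i and k != j:
                incoming *= messages[(k, l)]
        # m_{i->j}(x_j) = sum_{x_i} psi(x_i, x_j) * incoming(x_i)
        new_msg = psi.T @ incoming
        new_msg /= new_msg.sum()  # normalize for numerical stability
        max_diff = max(max_diff, np.abs(new_msg - messages[(i, j)]).max())
        messages[(i, j)] = new_msg
    if max_diff < 1e-8:  # stop once messages stop changing
        break

# Approximate marginals: unary potential times all incoming messages.
for i in range(n_vars):
    belief = unary[i].copy()
    for (k, l) in messages:
        if l == i:
            belief *= messages[(k, l)]
    belief /= belief.sum()
    print(f"P(X{i}) ~ {belief}")
```

Since this example graph contains only a single loop, the iteration is in fact guaranteed to converge (see the notes below).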
This concept has the prerequisites:
Core resources (read/watch one of the following)
→ Machine Learning: a Probabilistic Perspective
A very comprehensive graduate-level machine learning textbook.
Location: Sections 22.1-22.2.5, pages 767-774
Supplemental resources (the following are optional, but you may find them useful)
→ Coursera: Probabilistic Graphical Models (2013)
→ Probabilistic Graphical Models: Principles and Techniques
A very comprehensive textbook for a graduate-level course on probabilistic AI.
Location: Sections 11.3-11.3.3 (pages 391-401) and boxes 11.B and 11.C (pages 407-411)
- This uses the cluster graph formalism, which is more general than the factor graph version, but has more cumbersome notation.
- The Brouwer fixed point theorem implies that loopy BP has at least one fixed point, though the plain updates need not converge to it; see the damping sketch after this list.
- Loopy BP can be viewed as a [variational inference](variational_inference) algorithm which tries to minimize the Bethe free energy.
- In the case of Gaussian graphical models, if loopy BP converges, the posterior means are exact (the variances generally are not).
- Loopy BP is guaranteed to converge in graphs with a single loop.
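Because convergence is not guaranteed in general, a common heuristic is to damp the message updates, mixing each new message with the old one. A minimal sketch, assuming the message representation from the example above; the function name and `alpha` parameter are illustrative:

```python
# Hedged sketch of a damped message update; alpha=1 recovers plain BP,
# smaller alpha slows the updates and often tames oscillation.
def damped_update(old_msg, new_msg, alpha=0.5):
    mixed = alpha * new_msg + (1 - alpha) * old_msg
    return mixed / mixed.sum()  # renormalize after mixing
```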