loopy BP as variational inference
(2.8 hours to learn)
Loopy belief propagation sounds like a hack, but it can be interpreted as a variational inference algorithm: its message updates are fixed point equations for an approximate variational problem (the Bethe approximation), in which both the energy functional and the marginal polytope are approximated. While this analysis doesn't lead to any strong guarantees, it is the basis for generalizations of loopy BP which have stronger guarantees.
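The fixed-point view can be made concrete with a minimal sum-product implementation. This is only a sketch on a made-up three-variable loop with binary variables; the potentials and variable names are invented for illustration. A fixed point of these message updates corresponds to a stationary point of the Bethe approximation described above.

```python
import numpy as np

# Toy pairwise MRF whose graph is a single loop: binary variables 0-1-2-0.
# psi[(i, j)][xi, xj] is the pairwise potential (made-up attractive values).
edges = [(0, 1), (1, 2), (2, 0)]
psi = {e: np.array([[2.0, 1.0], [1.0, 2.0]]) for e in edges}

# Directed messages m[(i, j)](x_j), initialized uniform.
msgs = {}
for i, j in edges:
    msgs[(i, j)] = np.ones(2) / 2
    msgs[(j, i)] = np.ones(2) / 2

def neighbors(i):
    return [j for (a, j) in msgs if a == i]

# Iterate the sum-product updates; a fixed point of these equations is a
# stationary point of the Bethe free energy.
for _ in range(50):
    new = {}
    for i, j in msgs:
        pot = psi[(i, j)] if (i, j) in psi else psi[(j, i)].T
        incoming = np.ones(2)
        for k in neighbors(i):
            if k != j:                      # exclude the message from j
                incoming *= msgs[(k, i)]
        m = pot.T @ incoming                # sum over x_i
        new[(i, j)] = m / m.sum()
    msgs = new

# Bethe (pseudo-)marginal of variable 0: product of its incoming messages.
b0 = np.ones(2)
for k in neighbors(0):
    b0 *= msgs[(k, 0)]
b0 /= b0.sum()
print(b0)  # symmetric potentials, so this is [0.5, 0.5]
```

Because the potentials here are symmetric under flipping both states, the pseudo-marginal comes out uniform; with asymmetric potentials the same updates produce the usual (approximate) loopy BP beliefs.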
This concept has the prerequisites:
- loopy belief propagation
- variational inference
- Lagrange multipliers (Lagrange multipliers are needed to derive the update rules.)
Core resources (read/watch one of the following)
→ Graphical models, exponential families, and variational inference (2008)
An in-depth review of exact and approximate inference methods for graphical models.
Location: Sections 4.1-4.1.4, pages 76-91
Supplemental resources (the following are optional, but you may find them useful)
→ Machine Learning: a Probabilistic Perspective
A very comprehensive graduate-level machine learning textbook.
Location: Sections 22.3-22.3.5, pages 776-782
- exponential families
- mean field approximation
→ Probabilistic Graphical Models: Principles and Techniques
A very comprehensive textbook for a graduate-level course on probabilistic AI.
Location: Sections 11.2 (pages 386-390), 11.3.5 (pages 404-407) and 11.3.6 (pages 411-414)
- junction trees
- If loopy BP converges on a Gaussian graphical model, the means it computes are exact (though the variances generally are not).
- Loopy BP is guaranteed to converge for a graph with a single loop.
- Tree-reweighted belief propagation is an algorithm inspired by the same ideas, but where the approximation to the KL divergence is convex and gives an upper bound on the log partition function.
- Some other inference algorithms based on variational principles:
- expectation propagation, which approximates BP messages in terms of expectations
- mean field approximation, where different variables are approximated as independent in the posterior
- variational Bayes, a general framework for posterior inference in Bayesian models
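For contrast with loopy BP's Bethe objective, the mean field approximation mentioned above can be sketched in a few lines: it restricts the approximating distribution to a fully factorized family and does coordinate ascent on the variational objective. The model and potentials below are made up for illustration.

```python
import numpy as np

# Same kind of toy loop MRF, but with strongly attractive (made-up) potentials.
edges = [(0, 1), (1, 2), (2, 0)]
log_psi = {e: np.log(np.array([[10.0, 1.0], [1.0, 10.0]])) for e in edges}

# Fully factorized approximation q(x) = q_0(x_0) q_1(x_1) q_2(x_2),
# initialized away from the symmetric point so the updates pick a mode.
q = np.tile([0.9, 0.1], (3, 1))

# Coordinate ascent: q_i(x_i) ∝ exp( sum over neighbors j of
# E_{q_j}[log psi_ij(x_i, x_j)] ).
for _ in range(100):
    for i in range(3):
        field = np.zeros(2)
        for (a, b), lp in log_psi.items():
            if a == i:
                field += lp @ q[b]
            elif b == i:
                field += lp.T @ q[a]
        qi = np.exp(field - field.max())   # subtract max for stability
        q[i] = qi / qi.sum()

print(q[0])  # heavily concentrated on state 0
```

Unlike loopy BP, these updates are guaranteed to converge (each step increases the same objective), but the factorized family cannot represent the symmetric bimodal posterior, so the approximation collapses onto one mode.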