Markov random fields
(2 hours to learn)
Summary
Markov random fields (MRFs) are a class of probabilistic models that encode the model's structure as an undirected graph: two variables are connected by an edge if they directly influence each other. MRFs are useful for domains that can be described in terms of "soft constraints" between variables. An MRF can be characterized equivalently by the factorization of its joint distribution or by its conditional independence properties.
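As a minimal sketch of these two characterizations, the following (with hypothetical potential values chosen for illustration) builds a three-variable chain MRF X1 - X2 - X3 over binary variables, where the joint factorizes over edges as p(x1, x2, x3) = (1/Z) * phi(x1, x2) * phi(x2, x3), and checks by brute-force enumeration that the graph's conditional independence (X1 independent of X3 given X2) holds:

```python
import itertools

def phi(a, b):
    # A "soft constraint": neighboring variables prefer to agree.
    # (Hypothetical values; any positive potentials would do.)
    return 2.0 if a == b else 1.0

states = [0, 1]

# Partition function: normalize the product of edge potentials.
Z = sum(phi(x1, x2) * phi(x2, x3)
        for x1, x2, x3 in itertools.product(states, repeat=3))

def joint(x1, x2, x3):
    return phi(x1, x2) * phi(x2, x3) / Z

def marginal(fixed):
    # Sum the joint over all states consistent with the fixed values,
    # given as {variable index: value}.
    return sum(joint(x1, x2, x3)
               for x1, x2, x3 in itertools.product(states, repeat=3)
               if all((x1, x2, x3)[i] == v for i, v in fixed.items()))

# Check the conditional independence implied by the chain structure:
# p(x1, x3 | x2) = p(x1 | x2) * p(x3 | x2), e.g. at x1=1, x2=0, x3=1.
p_x2 = marginal({1: 0})
lhs = marginal({0: 1, 1: 0, 2: 1}) / p_x2
rhs = (marginal({0: 1, 1: 0}) / p_x2) * (marginal({2: 1, 1: 0}) / p_x2)
assert abs(lhs - rhs) < 1e-12
```

Brute-force enumeration like this only works for tiny models, but it makes the link between the edge-wise factorization and the independence structure concrete.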
Context
This concept has the prerequisites:
- random variables (MRFs are a way of organizing information about random variables.)
- conditional probability (MRFs are used to reason about conditional probability.)
- conditional independence (MRFs can be characterized in terms of conditional independence properties.)
Core resources (read/watch one of the following)
-Free-
→ Coursera: Probabilistic Graphical Models (2013)
An online course on probabilistic graphical models.
Other notes:
- Click on "Preview" to see the videos.
-Paid-
→ Probabilistic Graphical Models: Principles and Techniques
A very comprehensive textbook for a graduate-level course on probabilistic AI.
Location:
Sections 4.1-4.3.1, pages 103-117
→ Pattern Recognition and Machine Learning
A textbook for a graduate machine learning course, with a focus on Bayesian methods.
Location:
Sections 8.3-8.3.3, pages 383-390
Supplemental resources (the following are optional, but you may find them useful)
-Paid-
→ Machine Learning: a Probabilistic Perspective
A very comprehensive graduate-level machine learning textbook.
See also
- Factor graphs provide a more fine-grained representation of the factorizations of Boltzmann distributions.
- The Hammersley-Clifford theorem shows that MRFs can be characterized in terms of conditional independencies.
- Bayes nets are another graphical model formalism good for capturing generative processes and causal structure.
- Neither Bayes nets nor MRFs are strictly more powerful than the other.