Markov random fields
(2 hours to learn)
Markov random fields (MRFs) are probabilistic models that encode the model's structure as an undirected graph: two variables are connected by an edge if they directly influence each other. MRFs are useful for domains best described in terms of "soft constraints" between variables. An MRF can be characterized equivalently by the factorization of its joint distribution or by its conditional independence properties.
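The factorization view can be made concrete with a toy example. The sketch below (not from any of the resources; the chain structure and potential values are made up for illustration) defines a pairwise MRF over three binary variables A - B - C, where each edge potential softly prefers that neighbors agree:

```python
import itertools

# Hypothetical edge potential: favors agreement between neighbors,
# but does not force it -- a "soft constraint".
def edge_potential(x, y):
    return 2.0 if x == y else 1.0

edges = [(0, 1), (1, 2)]  # chain A-B, B-C; no direct A-C edge

def unnormalized_p(assignment):
    # The joint distribution factorizes as a product of potentials,
    # one per edge of the undirected graph.
    p = 1.0
    for i, j in edges:
        p *= edge_potential(assignment[i], assignment[j])
    return p

# Partition function Z: sum of the unnormalized measure over all
# 2^3 assignments (brute force is fine at this scale).
Z = sum(unnormalized_p(x) for x in itertools.product([0, 1], repeat=3))

def p(assignment):
    return unnormalized_p(assignment) / Z

# The all-agree states (0,0,0) and (1,1,1) get the highest probability
# under these potentials, since they satisfy both soft constraints.
print(p((0, 0, 0)), p((0, 1, 0)))
```

Note that the potentials are arbitrary nonnegative functions, not probabilities; normalization happens only once, globally, through Z.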
This concept has the prerequisites:
Core resources (read/watch one of the following)
→ Probabilistic Graphical Models: Principles and Techniques
A very comprehensive textbook for a graduate-level course on probabilistic AI.
Location: Sections 4.1-4.3.1, pages 103-117
→ Pattern Recognition and Machine Learning
A textbook for a graduate machine learning course, with a focus on Bayesian methods.
Location: Sections 8.3-8.3.3, pages 383-390
Supplemental resources (the following are optional, but you may find them useful)
- Factor graphs provide a more fine-grained representation of the factorizations of Boltzmann distributions.
- The Hammersley-Clifford theorem shows that, for positive distributions, the factorization and conditional independence characterizations of MRFs are equivalent.
- Bayes nets are another graphical model formalism, well suited to capturing generative processes and causal structure.
- Neither Bayes nets nor MRFs are strictly more powerful than the other.
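The equivalence between graph separation and conditional independence can be checked by brute force on a small example. The sketch below (model and potential values are invented for illustration) builds a chain MRF A - B - C and verifies numerically that A ⟂ C | B holds, while A and C are marginally dependent:

```python
import itertools

# Hypothetical edge potential for a chain MRF A - B - C.
def phi(x, y):
    return 3.0 if x == y else 1.0

def joint(a, b, c):
    # Unnormalized joint: one factor per edge of the chain.
    return phi(a, b) * phi(b, c)

states = [0, 1]
Z = sum(joint(a, b, c) for a, b, c in itertools.product(states, repeat=3))

def p(a=None, b=None, c=None):
    # Probability of a partial assignment, summing out the
    # unspecified variables (brute-force marginalization).
    total = 0.0
    for aa, bb, cc in itertools.product(states, repeat=3):
        if ((a is None or aa == a) and (b is None or bb == b)
                and (c is None or cc == c)):
            total += joint(aa, bb, cc)
    return total / Z

# B separates A from C in the graph, so P(a, c | b) should equal
# P(a | b) * P(c | b) for every assignment.
for a, b, c in itertools.product(states, repeat=3):
    lhs = p(a=a, b=b, c=c) / p(b=b)
    rhs = (p(a=a, b=b) / p(b=b)) * (p(b=b, c=c) / p(b=b))
    assert abs(lhs - rhs) < 1e-12

# Without conditioning on B, information flows along the chain,
# so A and C are (marginally) dependent.
assert abs(p(a=0, c=0) - p(a=0) * p(c=0)) > 1e-6
```

This is the easy direction (factorization implies the Markov properties); the Hammersley-Clifford theorem supplies the converse for strictly positive distributions.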