Bayesian model comparison
(2.5 hours to learn)
The framework of Bayesian model comparison evaluates probabilistic models based on the marginal likelihood: the probability a model assigns to a dataset with all of its parameters marginalized out. Marginalizing over the parameters implements a sort of "Occam's razor" effect. Marginal likelihoods can also be used to compute a posterior over model classes using Bayes' rule.
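In standard notation (a summary sketch, not drawn from the specific resources listed below): for a model class $M$ with parameters $\theta$ and data $D$, the marginal likelihood is
    $p(D \mid M) = \int p(D \mid \theta, M)\, p(\theta \mid M)\, d\theta$,
and Bayes' rule then yields a posterior over model classes:
    $p(M \mid D) = \frac{p(D \mid M)\, p(M)}{\sum_{M'} p(D \mid M')\, p(M')}$.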
The learning goals are:
- Know what the marginal likelihood of a model refers to
- Motivate the marginal likelihood in terms of Bayes factors
- Understand the basis for the "Bayesian Occam's razor" effect (hint: it's not primarily a result of assigning lower prior probability to models with more parameters, as many people believe)
- Derive the Bayes factor for a simple example, e.g. a beta-Bernoulli model (see the sketch after this list)
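The following is a minimal Python sketch of the beta-Bernoulli example; the function names and the choice of a point-null comparison model with theta = 0.5 are illustrative assumptions, not taken from the resources below. It computes the log marginal likelihood of a sequence of coin flips under a Beta(a, b) prior and the resulting Bayes factor against the point null. The second example shows the Occam's razor effect: the simpler model is favored even though the flexible model can also fit the data.

    import numpy as np
    from scipy.special import betaln

    def log_marginal_beta_bernoulli(k, n, a=1.0, b=1.0):
        # Log marginal likelihood of a fixed sequence of n flips with k heads
        # under a Beta(a, b) prior on the heads probability theta.
        # (The binomial coefficient is omitted; it cancels in the Bayes factor.)
        return betaln(a + k, b + n - k) - betaln(a, b)

    def log_bayes_factor(k, n, a=1.0, b=1.0):
        # Log Bayes factor comparing the beta-Bernoulli model against a
        # point-null model that fixes theta = 0.5.
        log_m1 = log_marginal_beta_bernoulli(k, n, a, b)
        log_m0 = n * np.log(0.5)
        return log_m1 - log_m0

    print(np.exp(log_bayes_factor(9, 10)))  # ~9.3: evidence for the flexible model
    print(np.exp(log_bayes_factor(5, 10)))  # ~0.37: the simpler point null is favored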
Core resources (read/watch one of the following)
→ Machine Learning: a Probabilistic Perspective
A very comprehensive graduate-level machine learning textbook.
Location: Section 5.3, pages 155-165
Supplemental resources (the following are optional, but you may find them useful)
→ Pattern Recognition and Machine Learning
A textbook for a graduate machine learning course, with a focus on Bayesian methods.
Location: Section 3.4, pages 161-165