Bayesian model comparison

(2.5 hours to learn)


The framework of Bayesian model comparison evaluates probabilistic models based on the marginal likelihood, or the probability they assign a dataset with all the parameters marginalized out. The marginalization of model parameters implements a sort of "Occam's razor" effect. Marginal likelihoods can also be used to compute a posterior over model classes using Bayes' rule.
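The posterior over model classes described above can be sketched in a few lines. This is a minimal illustration (not from any particular library): given each model's marginal likelihood p(D | M_i) and a prior p(M_i), Bayes' rule gives p(M_i | D) ∝ p(D | M_i) p(M_i). The numbers used in the usage example are hypothetical.

```python
def model_posterior(marginal_likelihoods, priors):
    """Posterior over model classes by Bayes' rule:
        p(M_i | D) ∝ p(D | M_i) p(M_i),
    where p(D | M_i) is the marginal likelihood of model i
    (the probability of the data with parameters integrated out)."""
    unnormalized = [ml * p for ml, p in zip(marginal_likelihoods, priors)]
    z = sum(unnormalized)  # normalizing constant: p(D)
    return [u / z for u in unnormalized]

# Hypothetical marginal likelihoods for two models under equal priors:
posterior = model_posterior([0.002, 0.001], [0.5, 0.5])
# With equal priors, the posterior odds posterior[0] / posterior[1]
# reduce to the Bayes factor p(D | M_1) / p(D | M_2) = 2.
```

Note that only the ratio of marginal likelihoods matters when the model priors are equal, which is why Bayes factors are the usual currency of comparison.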


This concept has the following learning goals:


  • Know what the marginal likelihood of a model refers to
  • Motivate the marginal likelihood in terms of Bayes factors
  • Understand the basis for the "Bayesian Occam's razor" effect (hint: it's not primarily a result of assigning lower prior probability to models with more parameters, as many people believe)
  • Derive the Bayes factor for a simple example (e.g. a beta-Bernoulli model)
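The last goal above can be checked numerically. The sketch below (an assumption about how one might set it up, not taken from the resources listed) uses the standard closed form for the beta-Bernoulli marginal likelihood, p(D) = B(a + h, b + t) / B(a, b) for h heads and t tails, and compares a flexible model with an unknown bias against a point model that fixes the bias at 0.5:

```python
import math

def beta_bernoulli_marginal(heads, tails, a=1.0, b=1.0):
    """Marginal likelihood of a Bernoulli dataset under a Beta(a, b)
    prior on the success probability theta, with theta integrated out:
        p(D) = B(a + heads, b + tails) / B(a, b),
    where B is the Beta function (computed here in log space)."""
    log_beta = lambda x, y: math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return math.exp(log_beta(a + heads, b + tails) - log_beta(a, b))

# Hypothetical data: 8 heads, 2 tails.
heads, tails = 8, 2
m_flexible = beta_bernoulli_marginal(heads, tails)  # theta ~ Beta(1, 1), marginalized
m_point = 0.5 ** (heads + tails)                    # theta fixed at 0.5
bayes_factor = m_flexible / m_point                 # > 1 favors the flexible model
```

Trying a balanced dataset (e.g. 5 heads, 5 tails) instead flips the preference toward the point model, even though the flexible model contains it as a special case: by spreading its probability mass over many possible datasets, the flexible model assigns less to any one of them. That is the "Bayesian Occam's razor" effect at work, with no explicit penalty on the number of parameters.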

Core resources (read/watch one of the following)


Information Theory, Inference, and Learning Algorithms
A graduate-level textbook on machine learning and information theory.
Author: David MacKay


Supplemental resources (the following are optional, but you may find them useful)


See also