Bayesian linear regression
(2 hours to learn)
By interpreting linear regression as a Bayesian model, we can automatically infer the prior variance and the noise variance, and make calibrated predictions. Bayesian linear regression is a useful component in fancier probabilistic models.
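For reference, here is the model in one standard notation (following Bishop; the symbols alpha, beta, and phi are notational assumptions, since the summary above does not fix them). The prior and likelihood are

    p(\mathbf{w}) = \mathcal{N}(\mathbf{w} \mid \mathbf{0},\, \alpha^{-1}\mathbf{I}), \qquad
    p(t \mid \mathbf{x}, \mathbf{w}) = \mathcal{N}(t \mid \mathbf{w}^\top \boldsymbol{\phi}(\mathbf{x}),\, \beta^{-1}),

the posterior over the weights is Gaussian,

    p(\mathbf{w} \mid \mathbf{t}) = \mathcal{N}(\mathbf{w} \mid \mathbf{m}_N, \mathbf{S}_N), \qquad
    \mathbf{m}_N = \beta\, \mathbf{S}_N \boldsymbol{\Phi}^\top \mathbf{t}, \qquad
    \mathbf{S}_N^{-1} = \alpha\mathbf{I} + \beta\, \boldsymbol{\Phi}^\top \boldsymbol{\Phi},

and the posterior predictive at a test input is

    p(t_* \mid \mathbf{x}_*, \mathbf{t}) = \mathcal{N}\!\big(t_* \mid \mathbf{m}_N^\top \boldsymbol{\phi}(\mathbf{x}_*),\ \beta^{-1} + \boldsymbol{\phi}(\mathbf{x}_*)^\top \mathbf{S}_N\, \boldsymbol{\phi}(\mathbf{x}_*)\big).

The predictive variance decomposes into observation noise (beta^{-1}) plus parameter uncertainty, which is what makes the predictions calibrated.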
Goals:
- Know the form of the Bayesian linear regression model
- Visualize the prior, evidence, and posterior
- Derive the predictive distribution
- Visualize the posterior predictive distribution
- Be able to infer the variance parameters (with the evidence approximation or a conjugate prior); see the NumPy sketch after this list
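Since the goals above are computational, a minimal NumPy sketch may help fix ideas. It uses Bishop's notation (alpha = prior precision, beta = noise precision, Phi = design matrix); the starting values and iteration count in the evidence loop are arbitrary choices, not prescribed by any of the resources below.

    import numpy as np

    def posterior(Phi, t, alpha, beta):
        # Posterior over weights: N(m_N, S_N), with
        # S_N^{-1} = alpha*I + beta*Phi^T Phi and m_N = beta * S_N Phi^T t
        S_N_inv = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
        S_N = np.linalg.inv(S_N_inv)
        m_N = beta * S_N @ Phi.T @ t
        return m_N, S_N

    def predictive(phi_star, m_N, S_N, beta):
        # Predictive mean m_N^T phi and variance = noise (1/beta)
        # plus parameter uncertainty phi^T S_N phi
        return phi_star @ m_N, 1.0 / beta + phi_star @ S_N @ phi_star

    def evidence_approximation(Phi, t, alpha=1.0, beta=1.0, n_iter=50):
        # Re-estimate alpha and beta by maximizing the marginal likelihood;
        # gamma is the effective number of well-determined parameters.
        N = Phi.shape[0]
        eig = np.linalg.eigvalsh(Phi.T @ Phi)  # eigenvalues of Phi^T Phi
        for _ in range(n_iter):
            m_N, _ = posterior(Phi, t, alpha, beta)
            lam = beta * eig                   # eigenvalues of beta*Phi^T Phi
            gamma = np.sum(lam / (lam + alpha))
            alpha = gamma / (m_N @ m_N)
            beta = (N - gamma) / np.sum((t - Phi @ m_N) ** 2)
        return alpha, beta

    # Example: noisy line, feature map (1, x)
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 50)
    t = 0.5 + 2.0 * x + rng.normal(0.0, 0.2, 50)
    Phi = np.column_stack([np.ones_like(x), x])
    alpha, beta = evidence_approximation(Phi, t)
    m_N, S_N = posterior(Phi, t, alpha, beta)
    mean, var = predictive(np.array([1.0, 0.3]), m_N, S_N, beta)

Plotting mean ± 2·sqrt(var) across a grid of test inputs gives the posterior predictive visualization mentioned in the goals.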
Core resources (read/watch one of the following)
→ Pattern Recognition and Machine Learning
A textbook for a graduate machine learning course, with a focus on Bayesian methods.
Location: Section 3.3, pages 152-161
- Additional dependency: the evidence approximation
Supplemental resources (the following are optional, but you may find them useful)
→ Machine Learning: a Probabilistic Perspective
A very comprehensive graduate-level machine learning textbook.
Location: Sections 7.6-7.6.2, pages 231-234
- Gaussian process regression is a nonparametric analogue of Bayesian linear regression which uses kernels.
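To make that connection concrete (a standard identity, e.g. Bishop Section 6.4, not something stated in the resource above): marginalizing out the weights under the prior w ~ N(0, alpha^{-1} I) leaves the function values jointly Gaussian with covariance

    k(\mathbf{x}, \mathbf{x}') = \operatorname{Cov}\big[\mathbf{w}^\top \boldsymbol{\phi}(\mathbf{x}),\ \mathbf{w}^\top \boldsymbol{\phi}(\mathbf{x}')\big] = \alpha^{-1}\, \boldsymbol{\phi}(\mathbf{x})^\top \boldsymbol{\phi}(\mathbf{x}'),

so Bayesian linear regression is a Gaussian process with this finite-rank kernel; GP regression generalizes it by working with a kernel k directly, with no explicit feature map.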