By formulating PCA as a Bayesian model, we can automatically choose a latent dimensionality by maximizing the (approximate) marginal likelihood of the model.
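As a quick illustration of what this buys us, here is a minimal Python sketch using scikit-learn, whose PCA implementation accepts n_components='mle': Minka's criterion, a Laplace-approximation-based estimate of the marginal likelihood, is then used to pick the dimensionality. The toy data and sizes below are made up for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy data: 500 points in 20 dimensions generated from 5 latent directions
# plus isotropic noise (sizes chosen arbitrarily for illustration).
rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 5))
W = rng.standard_normal((5, 20))
X = Z @ W + 0.1 * rng.standard_normal((500, 20))

# n_components='mle' selects the dimensionality by approximately maximizing
# the marginal likelihood (Minka's Laplace-approximation-based criterion).
pca = PCA(n_components="mle", svd_solver="full")
pca.fit(X)
print("Chosen latent dimensionality:", pca.n_components_)  # typically 5 here
```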
This concept has the prerequisites:
- probabilistic PCA (Bayesian PCA is an elaboration of probabilistic PCA.)
- Bayesian linear regression (Bayesian PCA is based on similar ideas to Bayesian linear regression.)
- the evidence approximation (The evidence approximation can be used to select the dimensionality.)
- Bayesian parameter estimation: multivariate Gaussians (The same ideas are required for modeling the variance parameters.)
- the Laplace approximation (The Laplace approximation can be used to integrate out the parameter matrix.)
- Know the definition of the Bayesian PCA model (see the sketch after this list)
- Understand how it can be used to select the dimensionality of the latent space
- Know of a way to approximate the marginal likelihood (e.g. the evidence approximation)
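As a sketch of the definition referred to above (following Bishop's formulation), Bayesian PCA keeps the probabilistic PCA likelihood and places an independent zero-mean Gaussian prior on each column $\mathbf{w}_i$ of the loading matrix $\mathbf{W}$, governed by its own precision $\alpha_i$:

```latex
% Probabilistic PCA likelihood and latent prior
p(\mathbf{x}_n \mid \mathbf{z}_n, \mathbf{W}, \boldsymbol{\mu}, \sigma^2)
    = \mathcal{N}\!\left(\mathbf{x}_n \mid \mathbf{W}\mathbf{z}_n + \boldsymbol{\mu},\; \sigma^2 \mathbf{I}\right),
\qquad
p(\mathbf{z}_n) = \mathcal{N}(\mathbf{z}_n \mid \mathbf{0}, \mathbf{I})

% ARD prior: one precision \alpha_i per column of W
p(\mathbf{W} \mid \boldsymbol{\alpha})
    = \prod_{i=1}^{M} \left(\frac{\alpha_i}{2\pi}\right)^{D/2}
      \exp\!\left(-\tfrac{1}{2}\,\alpha_i\, \mathbf{w}_i^{\top}\mathbf{w}_i\right)

% Evidence-approximation re-estimate of each precision
\alpha_i = \frac{D}{\mathbf{w}_i^{\top}\mathbf{w}_i}
```

Maximizing the approximate marginal likelihood over the $\alpha_i$ drives the precisions of unneeded columns toward infinity, so those columns of $\mathbf{W}$ collapse to zero and the number of surviving columns gives the effective latent dimensionality.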
Core resources (we're sorry, we haven't finished tracking down resources for this concept yet)
Supplemental resources (the following are optional, but you may find them useful)
→ Pattern Recognition and Machine Learning
A textbook for a graduate machine learning course, with a focus on Bayesian methods.
Location: Section 12.2.3, pages 580-583
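As a concrete companion to that section, here is a rough numpy sketch of the EM updates for Bayesian PCA with the ARD prior; the function name, initialization, and convergence details are our own choices, so treat it as an illustration rather than a reference implementation.

```python
import numpy as np

def bayesian_pca(X, n_iter=200):
    """EM for Bayesian PCA with an ARD prior over the columns of W.

    Rough sketch following PRML Section 12.2.3: each column w_i of the
    loading matrix has its own precision alpha_i; columns whose alpha_i
    blows up are pruned, which selects the latent dimensionality.
    """
    N, D = X.shape
    M = D - 1                          # start from the maximal latent dimension
    mu = X.mean(axis=0)
    Xc = X - mu

    rng = np.random.default_rng(0)
    W = 0.1 * rng.standard_normal((D, M))
    sigma2 = 1.0
    alpha = np.ones(M)

    for _ in range(n_iter):
        # E-step: posterior moments of the latent variables z_n
        Minv = np.linalg.inv(W.T @ W + sigma2 * np.eye(M))
        Ez = Xc @ W @ Minv                        # rows are E[z_n]
        Ezz = N * sigma2 * Minv + Ez.T @ Ez       # sum_n E[z_n z_n^T]

        # M-step: MAP update of W with the ARD penalty
        W = (Xc.T @ Ez) @ np.linalg.inv(Ezz + sigma2 * np.diag(alpha))

        # Update the isotropic noise variance
        sigma2 = (np.sum(Xc ** 2)
                  - 2.0 * np.sum(Ez * (Xc @ W))
                  + np.trace(Ezz @ W.T @ W)) / (N * D)

        # Evidence-approximation update of the ARD precisions
        alpha = D / np.maximum(np.sum(W ** 2, axis=0), 1e-12)

    # Columns with negligible norm have been switched off by the prior
    effective_dim = int(np.sum(np.sum(W ** 2, axis=0) > 1e-8))
    return W, sigma2, alpha, effective_dim
```

Running this on data generated from a small number of latent directions (as in the scikit-learn example above) should leave roughly that many columns of W with non-negligible norm.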
- Probabilistic matrix factorization (PMF) is another Bayesian formulation of PCA, used for predicting missing entries of a matrix.
- We can perform inference in this model using Gibbs sampling.
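For reference, here is a sketch of the PMF model (following Salakhutdinov and Mnih): each observed entry $R_{ij}$ is Gaussian around the inner product of a latent user vector $\mathbf{u}_i$ and item vector $\mathbf{v}_j$, with zero-mean Gaussian priors on those vectors.

```latex
p(\mathbf{R} \mid \mathbf{U}, \mathbf{V}, \sigma^2)
    = \prod_{(i,j)\,\text{observed}} \mathcal{N}\!\left(R_{ij} \mid \mathbf{u}_i^{\top}\mathbf{v}_j,\; \sigma^2\right)

p(\mathbf{U} \mid \sigma_U^2) = \prod_i \mathcal{N}(\mathbf{u}_i \mid \mathbf{0},\, \sigma_U^2 \mathbf{I}),
\qquad
p(\mathbf{V} \mid \sigma_V^2) = \prod_j \mathcal{N}(\mathbf{v}_j \mid \mathbf{0},\, \sigma_V^2 \mathbf{I})
```

The fully Bayesian variant additionally places priors over the hyperparameters and integrates over $\mathbf{U}$ and $\mathbf{V}$ with Gibbs sampling, which is how missing entries are predicted.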