# Bayesian PCA

## Summary

By formulating PCA as a Bayesian model, we can automatically choose the latent dimensionality by maximizing the (approximate) marginal likelihood of the model.
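As a sketch of the model (following Bishop's formulation; the notation below is standard but restated here rather than taken from this page): the likelihood is that of probabilistic PCA,

```latex
p(\mathbf{x} \mid \mathbf{z}, \mathbf{W}, \boldsymbol{\mu}, \sigma^2)
  = \mathcal{N}\!\left(\mathbf{x} \mid \mathbf{W}\mathbf{z} + \boldsymbol{\mu},\ \sigma^2 \mathbf{I}\right),
\qquad
\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}),
```

and Bayesian PCA additionally places an automatic relevance determination (ARD) prior over the columns $\mathbf{w}_i$ of $\mathbf{W}$:

```latex
p(\mathbf{W} \mid \boldsymbol{\alpha})
  = \prod_{i=1}^{q} \mathcal{N}\!\left(\mathbf{w}_i \mid \mathbf{0},\ \alpha_i^{-1} \mathbf{I}\right).
```

Maximizing the (approximate) marginal likelihood over the precisions $\alpha_i$ drives $\alpha_i \to \infty$ for superfluous columns, effectively pruning them and selecting the latent dimensionality.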

## Context

This concept has the prerequisites:

- probabilistic PCA (Bayesian PCA is an elaboration of probabilistic PCA.)
- Bayesian linear regression (Bayesian PCA is based on similar ideas to Bayesian linear regression.)
- the evidence approximation (The evidence approximation can be used to select the dimensionality.)
- Bayesian parameter estimation: multivariate Gaussians (The same ideas are required for modeling the variance parameters.)
- the Laplace approximation (The Laplace approximation can be used to integrate out the parameter matrix.)

## Goals

- Know the definition of the Bayesian PCA model

- Understand how it can be used to select the dimensionality of the latent space

- Know a way to approximate the marginal likelihood (e.g. the evidence approximation)
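As a minimal illustration of dimensionality selection by approximate marginal likelihood, the sketch below fits probabilistic PCA in closed form (via the eigenvalues of the sample covariance) for each candidate latent dimension and scores it with a BIC-style penalty, which can be viewed as a crude Laplace approximation to the log marginal likelihood. The synthetic data, the parameter count, and the variable names are assumptions of this demo, not part of the page above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with a true latent dimensionality of 3 (an assumption of this demo)
N, D, q_true = 500, 10, 3
W_true = rng.normal(size=(D, q_true)) * 2.0
Z = rng.normal(size=(N, q_true))
X = Z @ W_true.T + 0.5 * rng.normal(size=(N, D))

# Eigenvalues of the sample covariance: the sufficient statistics for PPCA's MLE
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / N
evals = np.sort(np.linalg.eigvalsh(S))[::-1]

def approx_log_evidence(q):
    """Maximized PPCA log-likelihood for latent dim q, minus a BIC-style
    penalty (a rough Laplace approximation to the log marginal likelihood)."""
    # PPCA MLE: noise variance is the mean of the discarded eigenvalues,
    # and at the optimum tr(C^{-1} S) = D, giving the closed form below.
    sigma2 = evals[q:].mean()
    loglik = -0.5 * N * (D * np.log(2 * np.pi)
                         + np.sum(np.log(evals[:q]))
                         + (D - q) * np.log(sigma2)
                         + D)
    # Rough count of free parameters: W up to rotation, plus the noise variance
    n_params = D * q - q * (q - 1) / 2 + 1
    return loglik - 0.5 * n_params * np.log(N)

scores = {q: approx_log_evidence(q) for q in range(1, D)}
best_q = max(scores, key=scores.get)
print("selected latent dimensionality:", best_q)
```

A fuller treatment would use the evidence approximation or the ARD updates from Bishop's Section 12.2.3 rather than a BIC penalty, but the selection principle, penalizing the maximized likelihood to approximate the evidence, is the same.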

## Core resources (we're sorry, we haven't finished tracking down resources for this concept yet)

## Supplemental resources (the following are optional, but you may find them useful)

## -Paid-

→ Pattern Recognition and Machine Learning

A textbook for a graduate machine learning course, with a focus on Bayesian methods.

Location:
Section 12.2.3, pages 580-583

## See also

- Probabilistic matrix factorization (PMF) is another Bayesian formulation of PCA, used for predicting missing entries of a matrix.
- We can perform inference in this model using Gibbs sampling.