principal component analysis (proof)
(45 minutes to learn)
The proof that principal component analysis (PCA) finds the subspace that maximizes the projected variance and, equivalently, minimizes the reconstruction error.
This concept has the prerequisites:
- principal component analysis
- variational characterization of eigenvalues (Justifying PCA depends on the formulation of eigenvalues as an optimization problem.)
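The two characterizations above can be checked numerically: the top eigenvector of the sample covariance matrix achieves at least as much projected variance as any other unit direction, and projecting onto it leaves at most as much reconstruction error. A minimal NumPy sketch (synthetic data and variable names are illustrative, not from the resources):

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D data, centered (PCA assumes centered data)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
X = X - X.mean(axis=0)

# Eigendecomposition of the sample covariance matrix
S = X.T @ X / len(X)
eigvals, eigvecs = np.linalg.eigh(S)   # eigenvalues in ascending order
u1 = eigvecs[:, -1]                    # top principal direction

# Variational characterization: u1 maximizes projected variance
# over all unit vectors, so a random unit vector v does no better.
v = rng.normal(size=2)
v /= np.linalg.norm(v)
var_u1 = np.var(X @ u1)
var_v = np.var(X @ v)

# Equivalently, projecting onto u1 minimizes reconstruction error:
# mean squared distance between each point and its 1-D projection.
err_u1 = np.mean(np.sum((X - np.outer(X @ u1, u1)) ** 2, axis=1))
err_v = np.mean(np.sum((X - np.outer(X @ v, v)) ** 2, axis=1))

print(var_u1 >= var_v)   # variance is maximized along u1
print(err_u1 <= err_v)   # reconstruction error is minimized along u1
```

Note that the two inequalities are linked: total variance equals projected variance plus mean reconstruction error, so maximizing one is the same as minimizing the other.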
Core resources (read/watch one of the following)
→ Pattern Recognition and Machine Learning
A textbook for a graduate machine learning course, with a focus on Bayesian methods.
Location: Sections 12.1.1-12.1.2, pages 561-565
→ Machine Learning: A Probabilistic Perspective
A very comprehensive graduate-level machine learning textbook.
Location: Section 12.2.2, pages 389-391