Fisher's linear discriminant
(1.1 hours to learn)
Fisher's linear discriminant is a technique for visualizing high-dimensional data belonging to multiple classes by projecting it onto a low-dimensional subspace. The subspace is chosen to maximize the ratio of between-class to within-class variance.
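To make the idea concrete, here is a minimal NumPy sketch of the two-class case. The data, variable names, and class means used here are illustrative assumptions, not taken from the readings; in the two-class case the optimal direction reduces to w ∝ S_W⁻¹(m₁ − m₂).

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic Gaussian classes in 2D (illustrative data)
X1 = rng.normal([0.0, 0.0], 1.0, size=(100, 2))
X2 = rng.normal([3.0, 3.0], 1.0, size=(100, 2))

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)

# Within-class scatter: sum of the per-class centered scatter matrices
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

# Fisher direction: w is proportional to Sw^{-1} (m1 - m2);
# solve the linear system rather than inverting Sw explicitly
w = np.linalg.solve(Sw, m1 - m2)
w /= np.linalg.norm(w)

# Project both classes onto the 1D discriminant direction
z1, z2 = X1 @ w, X2 @ w
```

After projection, the class means of `z1` and `z2` are far apart relative to the within-class spread, which is exactly the ratio the method maximizes.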
This concept has the prerequisites:
- projection onto a subspace (FDA is an algorithm for projection onto a low-dimensional subspace.)
- Gaussian discriminant analysis (FDA is a visualization technique based on GDA.)
- eigenvalues and eigenvectors (FDA is a generalized eigenvalue problem.)
- optimization problems (FDA is formulated as an optimization problem.)
- covariance matrices (FDA is formulated in terms of covariance matrices.)
Goals:
- Derive the subspace that maximizes the ratio of between-class to within-class variance.
- Explain why projecting onto this subspace might give better classification results than applying GDA in the original space.
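As a pointer for the derivation above, the standard two-class setup (consistent with the treatment in the readings, though the notation here is one common choice) is:

```latex
J(\mathbf{w}) = \frac{\mathbf{w}^\top S_B \mathbf{w}}{\mathbf{w}^\top S_W \mathbf{w}},
\qquad
S_B = (\mathbf{m}_2 - \mathbf{m}_1)(\mathbf{m}_2 - \mathbf{m}_1)^\top,
\qquad
S_W = \sum_{k} \sum_{n \in \mathcal{C}_k} (\mathbf{x}_n - \mathbf{m}_k)(\mathbf{x}_n - \mathbf{m}_k)^\top.
```

Setting the gradient of $J$ to zero gives the generalized eigenvalue problem $S_B \mathbf{w} = \lambda S_W \mathbf{w}$; in the multi-class case the projection is spanned by the top eigenvectors of $S_W^{-1} S_B$, and in the two-class case this collapses to $\mathbf{w} \propto S_W^{-1}(\mathbf{m}_2 - \mathbf{m}_1)$.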
Core resources (read/watch one of the following)
→ The Elements of Statistical Learning
A graduate-level statistical learning textbook with a focus on frequentist methods.
Location: Section 4.3.3, "Reduced-rank linear discriminant analysis," pages 113-119
→ Pattern Recognition and Machine Learning
A textbook for a graduate machine learning course, with a focus on Bayesian methods.
Location: Section 4.1.4, "Fisher's linear discriminant," pages 186-189
- Other methods for projecting data onto a low-dimensional subspace include principal component analysis (PCA).