# Fisher's linear discriminant

(1.1 hours to learn)

## Summary

Fisher's linear discriminant is a technique for visualizing high-dimensional data belonging to multiple classes by projecting it onto a low-dimensional subspace. The subspace is chosen to maximize the ratio of between-class to within-class variance.
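As a concrete illustration of the idea, here is a minimal sketch in Python/NumPy (the function name and interface are my own, not from any resource above): it builds the within-class and between-class scatter matrices and takes the top eigenvectors of \(S_W^{-1} S_B\) as projection directions.

```python
import numpy as np

def fisher_discriminant(X, y, n_components=1):
    """Project X onto the directions maximizing between-class
    to within-class variance (Fisher's criterion)."""
    classes = np.unique(y)
    mean_total = X.mean(axis=0)
    n_features = X.shape[1]
    S_W = np.zeros((n_features, n_features))  # within-class scatter
    S_B = np.zeros((n_features, n_features))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        S_W += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - mean_total).reshape(-1, 1)
        S_B += len(Xc) * (diff @ diff.T)
    # The optimal directions solve the generalized eigenvalue
    # problem S_B w = lambda * S_W w; equivalently, they are the
    # top eigenvectors of pinv(S_W) @ S_B.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(S_W) @ S_B)
    order = np.argsort(eigvals.real)[::-1]
    W = eigvecs[:, order[:n_components]].real
    return X @ W
```

For K classes, at most K - 1 eigenvalues are nonzero, so the useful projection has at most K - 1 dimensions.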

## Context

This concept has the prerequisites:

- projection onto a subspace (FDA is an algorithm for projection onto a low-dimensional subspace.)
- Gaussian discriminant analysis (FDA is a visualization technique based on GDA.)
- eigenvalues and eigenvectors (FDA is a generalized eigenvalue problem.)
- optimization problems (FDA is formulated as an optimization problem.)
- covariance matrices (FDA is formulated in terms of covariance matrices.)

## Goals

- Derive the subspace which maximizes the ratio of between-class to within-class variance.

- Explain why this projection might give better classification results than GDA in the original space.
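For the first goal, a sketch of the two-class case (following the standard derivation, e.g. as in the resources below): with class means \(\mathbf{m}_1, \mathbf{m}_2\), Fisher's criterion for a projection direction \(\mathbf{w}\) is

```latex
J(\mathbf{w}) = \frac{\mathbf{w}^\top S_B \mathbf{w}}{\mathbf{w}^\top S_W \mathbf{w}},
\qquad
S_B = (\mathbf{m}_2 - \mathbf{m}_1)(\mathbf{m}_2 - \mathbf{m}_1)^\top,
\qquad
S_W = \sum_{k=1}^{2} \sum_{n \in \mathcal{C}_k} (\mathbf{x}_n - \mathbf{m}_k)(\mathbf{x}_n - \mathbf{m}_k)^\top.
```

Setting the gradient of \(J\) to zero yields the generalized eigenvalue problem \(S_B \mathbf{w} = \lambda S_W \mathbf{w}\); since \(S_B \mathbf{w}\) always points along \(\mathbf{m}_2 - \mathbf{m}_1\), the solution is \(\mathbf{w} \propto S_W^{-1}(\mathbf{m}_2 - \mathbf{m}_1)\).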

## Core resources (read/watch one of the following)

### Free

→ The Elements of Statistical Learning

A graduate-level statistical learning textbook with a focus on frequentist methods.

Location:
Section 4.3.3, "Reduced-rank linear discriminant analysis," pages 113-119

### Paid

→ Pattern Recognition and Machine Learning

A textbook for a graduate machine learning course, with a focus on Bayesian methods.

Location:
Section 4.1.4, "Fisher's linear discriminant," pages 186-189

## See also

- Other methods exist for projecting data onto a low-dimensional subspace.
- Often we can get a better visualization of the data using a nonlinear embedding.
- If the goal is classification rather than visualization, consider other algorithms more directly geared towards the task.