independent component analysis

(1.1 hours to learn)


Independent component analysis (ICA) is a latent variable model in which the observations are modeled as linear combinations of independent latent variables (sources), typically drawn from a heavy-tailed, non-Gaussian distribution; non-Gaussianity is what makes the mixing identifiable. Common uses include source separation and sparse dictionary learning.
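The generative model can be sketched in a few lines of NumPy. This is an illustrative example (the sources, mixing matrix, and shapes are all made up here): independent heavy-tailed sources are drawn, then mixed linearly to produce the observations.

```python
import numpy as np

# Sketch of the ICA generative model: observations X are linear
# mixtures of independent, heavy-tailed latent sources S.
rng = np.random.default_rng(0)

n_sources, n_samples = 2, 1000

# Heavy-tailed (Laplace) latent sources, one row per source.
S = rng.laplace(size=(n_sources, n_samples))

# An arbitrary mixing matrix (assumed square and invertible here).
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])

# Each column of X is one observed mixture of the sources.
X = A @ S
print(X.shape)  # (2, 1000)
```

Fitting ICA amounts to inverting this process: estimating an unmixing matrix that recovers (a permutation and rescaling of) the rows of S from X alone.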


This concept has the prerequisites:

Core resources (read/watch one of the following)


Information Theory, Inference, and Learning Algorithms
Author: David J.C. MacKay


Stanford's Machine Learning lecture notes
Lecture notes for Stanford's machine learning course, aimed at graduate and advanced undergraduate students.
Author: Andrew Y. Ng


Supplemental resources (the following are optional, but you may find them useful)


The Elements of Statistical Learning
A graduate-level statistical learning textbook with a focus on frequentist methods.
Authors: Trevor Hastie, Robert Tibshirani, Jerome Friedman
Additional dependencies:
  • differential entropy


See also

  • Some other techniques for learning meaningful representations of data:
    • manifold learning, where we try to embed points in a low-dimensional space where similar points are close together
    • sparse coding, a generative model similar to ICA, but which gives an overcomplete representation (i.e. larger than the input representation)
  • ICA is often used for learning representations of:
  • FastICA is an efficient algorithm for fitting ICA models.
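A minimal FastICA-style fitting procedure can be sketched as follows. This is an assumed, simplified implementation (whitening by eigendecomposition, tanh nonlinearity, symmetric decorrelation); production FastICA implementations add further refinements.

```python
import numpy as np

def fastica(X, n_iter=200, tol=1e-6, seed=0):
    """Sketch of FastICA: X is (n_components, n_samples) of observed
    mixtures; returns estimated independent sources (up to permutation
    and sign)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape

    # Center each observed signal.
    X = X - X.mean(axis=1, keepdims=True)

    # Whiten: rotate and rescale so components are uncorrelated
    # with unit variance.
    cov = X @ X.T / m
    d, E = np.linalg.eigh(cov)
    Z = (E @ np.diag(1.0 / np.sqrt(d)) @ E.T) @ X

    # Random initial unmixing matrix.
    W = rng.standard_normal((n, n))

    for _ in range(n_iter):
        WZ = W @ Z
        G = np.tanh(WZ)          # contrast nonlinearity g
        g_prime = 1.0 - G ** 2   # its derivative g'

        # Fixed-point update: E[g(w'z) z] - E[g'(w'z)] w, per row.
        W_new = (G @ Z.T) / m - np.diag(g_prime.mean(axis=1)) @ W

        # Symmetric decorrelation: W <- (W W^T)^{-1/2} W via SVD.
        U, _, Vt = np.linalg.svd(W_new)
        W_new = U @ Vt

        # Converged when rows stop rotating (|diag(W_new W^T)| -> 1).
        if np.max(np.abs(np.abs(np.diag(W_new @ W.T)) - 1.0)) < tol:
            W = W_new
            break
        W = W_new

    return W @ Z
```

On data generated from the linear ICA model with non-Gaussian sources, the returned rows match the true sources up to permutation, sign, and scale.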