linear regression

(2.3 hours to learn)


Linear regression is an algorithm for learning to predict a real-valued "target" variable as a linear function of one or more real-valued "input" variables. It is one of the most widely used statistical learning algorithms, and with care it can be made to work very well in practice. Because it has a closed-form solution, we can exactly analyze many properties of linear regression that can only be approximated for other models. This makes it a useful starting point for understanding many other statistical learning algorithms.
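The closed-form solution mentioned above is given by the normal equations: the weight vector minimizing squared error is w = (XᵀX)⁻¹Xᵀy. As a minimal sketch (using NumPy on synthetic data; the coefficient values are illustrative assumptions, not from the source):

```python
import numpy as np

# Synthetic data: targets are a linear function of the inputs plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # 100 examples, 2 input variables
true_w = np.array([3.0, -2.0])           # assumed "true" weights for illustration
y = X @ true_w + 1.0 + 0.1 * rng.normal(size=100)

# Append a column of ones so the model also learns an intercept term.
Xb = np.hstack([X, np.ones((100, 1))])

# Normal equations: w = (X^T X)^{-1} X^T y.
# np.linalg.lstsq solves the same least-squares problem more stably
# than forming and inverting X^T X explicitly.
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(w)  # approximately [3.0, -2.0, 1.0]
```

Because the noise is small relative to the number of examples, the fitted weights land close to the ones used to generate the data.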


This concept has the prerequisites:

Core resources (read/watch one of the following)


Stanford's Machine Learning lecture notes
Lecture notes for Stanford's machine learning course, aimed at graduate and advanced undergraduate students.
Author: Andrew Y. Ng
Coursera: Machine Learning (2013)
An online machine learning course aimed at a broad audience.
  • Lecture sequence "Linear regression with one variable"
  • Lecture sequence "Linear regression with multiple variables": "Multiple features" up through "Features and polynomial regression"
Author: Andrew Y. Ng
Other notes:
  • Click on "Preview" to see the videos.

Supplemental resources (the following are optional, but you may find them useful)


The Elements of Statistical Learning
A graduate-level statistical learning textbook with a focus on frequentist methods.
Authors: Trevor Hastie, Robert Tibshirani, Jerome Friedman
Coursera: Neural Networks for Machine Learning (2012)
An online course by Geoff Hinton, who invented many of the core ideas behind neural nets and deep learning.
  • Lecture "Learning the weight of a linear neuron"
  • Lecture "The error surface of a linear neuron"
Author: Geoffrey E. Hinton


See also