early stopping

(30 minutes to learn)


Early stopping is a technique for controlling overfitting in machine learning models, especially neural networks, by halting training before the weights have fully converged, typically once performance on a held-out validation set stops improving.
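The validation-based criterion above can be sketched as a simple "patience" rule: track the best validation loss seen so far, and stop once it has not improved for a fixed number of epochs. This is a minimal illustration; the class name and parameters (`patience`, `min_delta`) are illustrative choices, not from the source.

```python
class EarlyStopping:
    """Stop training when validation loss has not improved for `patience` epochs."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to wait after the last improvement
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best_loss = float("inf")
        self.best_epoch = None
        self.epochs_without_improvement = 0

    def step(self, epoch, val_loss):
        """Record this epoch's validation loss; return True if training should stop."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.best_epoch = epoch
            self.epochs_without_improvement = 0
        else:
            self.epochs_without_improvement += 1
        return self.epochs_without_improvement >= self.patience


# Synthetic validation losses: the model improves, then begins to overfit.
val_losses = [1.0, 0.8, 0.6, 0.55, 0.56, 0.58, 0.61, 0.65]

stopper = EarlyStopping(patience=3)
for epoch, loss in enumerate(val_losses):
    if stopper.step(epoch, loss):
        print(f"stopping at epoch {epoch}; best validation loss was at epoch {stopper.best_epoch}")
        break
```

In practice one also saves a checkpoint of the weights whenever the best validation loss improves, and restores that checkpoint after stopping, so the final model is the one that generalized best rather than the one from the last epoch.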


This concept has the prerequisites:

Core resources (read/watch one of the following)


Coursera: Neural Networks for Machine Learning (2012)
An online course by Geoffrey Hinton, who pioneered many of the core ideas behind neural nets and deep learning.
Location: Lecture "Overview of ways to improve generalization"
Author: Geoffrey E. Hinton


See also

  • Other strategies for controlling overfitting in feed-forward neural nets include: