(30 minutes to learn)
Early stopping is a technique for controlling overfitting in machine learning models, especially neural networks, by halting training before the weights have fully converged. Typically, training is stopped once performance on a held-out validation set stops improving.
This concept has the prerequisites:
- backpropagation (Early stopping is commonly applied to the backpropagation algorithm.)
- generalization (Early stopping is meant to improve generalization performance.)
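The stopping rule described above can be sketched as a simple patience loop. This is a minimal illustration, not any particular library's API: `train_step` and `val_loss_fn` are hypothetical callables supplied by the caller, and training halts once the validation loss fails to improve for `patience` consecutive epochs.

```python
def train_with_early_stopping(train_step, val_loss_fn, max_epochs=100, patience=5):
    """train_step() performs one epoch of training; val_loss_fn() returns
    the current validation loss. Both are assumed interfaces for this sketch."""
    best_loss = float("inf")
    best_epoch = 0
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_step()
        loss = val_loss_fn()
        if loss < best_loss:
            best_loss = loss
            best_epoch = epoch
            epochs_without_improvement = 0
            # in a real setting, snapshot the model weights here
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # validation loss has plateaued: stop training
    return best_epoch, best_loss

# Simulated validation curve: improves, then overfits and worsens.
losses = iter([1.0, 0.8, 0.6, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8])
best_epoch, best_loss = train_with_early_stopping(
    lambda: None, lambda: next(losses), max_epochs=10, patience=3)
# training stops at epoch 6; best model was at epoch 3 with loss 0.5
```

In practice one would also restore the snapshotted best weights rather than keeping those from the final epoch.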
Core resources (read/watch one of the following)
→ Coursera: Neural Networks for Machine Learning (2012)
An online course by Geoff Hinton, who invented many of the core ideas behind neural nets and deep learning.
Location: Lecture "Overview of ways to improve generalization"
→ Pattern Recognition and Machine Learning
A textbook for a graduate machine learning course, with a focus on Bayesian methods.
Location: Section 5.5.2, pages 259-261
- Other strategies for controlling overfitting in feed-forward neural nets include:
- Weight decay, a form of $L_2$ regularization
- Tikhonov regularization, which rewards invariance to noise in the inputs
- Tangent propagation, which rewards invariance to irrelevant transformations of the inputs such as translation and scaling
- Generative pre-training , which improves generalization by encouraging solutions which also reflect the data distribution
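To contrast with early stopping, the first alternative above (weight decay) can be sketched in a single gradient step. The $L_2$ penalty $\frac{\lambda}{2}\|w\|^2$ contributes $\lambda w$ to the gradient, shrinking each weight toward zero on every update. The function name and parameters here are illustrative:

```python
def sgd_step_with_weight_decay(w, grad, lr=0.1, weight_decay=0.01):
    # Gradient descent on loss + (weight_decay / 2) * ||w||^2:
    # the penalty's gradient is weight_decay * w, added to the data gradient.
    return [wi - lr * (gi + weight_decay * wi) for wi, gi in zip(w, grad)]

w = [1.0, -2.0]
# With a zero data gradient, the decay term alone shrinks the weights.
w = sgd_step_with_weight_decay(w, [0.0, 0.0])
# w is now [0.999, -1.998]
```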