Tangent propagation
Summary
Tangent propagation is a technique for regularizing neural nets so that their outputs are invariant to known transformations of the input (e.g., small translations or rotations of an image). It adds a penalty on the directional derivative of the network's outputs along the tangent directions of those transformations, so that applying a small transformation to an input produces only a small change in the output.
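To make the penalty concrete, here is a minimal sketch in JAX (not the original implementation). All names here (`mlp`, `shift`, `transformation_tangent`, `lam`) are illustrative assumptions: a toy network, a periodic 1-D translation standing in for whatever transformation you want invariance to, and a penalty weight. The tangent vector is estimated by finite differences, and the penalty is the squared Jacobian-vector product of the network outputs along that tangent.

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    """Toy two-layer network mapping a 1-D signal to class scores."""
    w1, b1, w2, b2 = params
    h = jnp.tanh(x @ w1 + b1)
    return h @ w2 + b2

def shift(x, a):
    """Periodic continuous translation of a 1-D signal via linear
    interpolation; stands in for whatever transformation (image
    rotation, scaling, ...) the outputs should be invariant to."""
    n = x.shape[0]
    idx = (jnp.arange(n, dtype=jnp.float32) - a) % n
    lo = jnp.floor(idx).astype(jnp.int32)
    hi = (lo + 1) % n
    w = idx - jnp.floor(idx)
    return (1.0 - w) * x[lo] + w * x[hi]

def transformation_tangent(transform, x, eps=1e-3):
    """Finite-difference estimate of the tangent vector
    t = d/da transform(x, a) at a = 0."""
    return (transform(x, eps) - transform(x, -eps)) / (2.0 * eps)

def loss(params, x, y, lam=0.1):
    # Ordinary task loss (squared error here, for simplicity).
    task = jnp.sum((mlp(params, x) - y) ** 2)
    # Tangent propagation penalty: the Jacobian-vector product
    # measures how much the output moves when the input slides
    # along the transformation's tangent direction.
    t = transformation_tangent(shift, x)
    _, out_change = jax.jvp(lambda inp: mlp(params, inp), (x,), (t,))
    return task + lam * jnp.sum(out_change ** 2)

# Usage: gradients of the regularized loss, e.g. for SGD.
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = (jax.random.normal(k1, (16, 8)) * 0.1, jnp.zeros(8),
          jax.random.normal(k2, (8, 3)) * 0.1, jnp.zeros(3))
x = jnp.sin(jnp.linspace(0.0, 3.0, 16))
y = jnp.array([1.0, 0.0, 0.0])
grads = jax.grad(loss)(params, x, y)
```

The finite-difference estimate of the tangent is a common practical choice when the transformation has no convenient closed-form derivative; if it does, the exact tangent vector can be used instead.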
Context
This concept has the prerequisites:
- backpropagation (The tangent propagation penalty and its gradient are computed with forward and backward passes analogous to those of backpropagation.)
- learning invariances in neural nets (Tangent propagation is a technique for learning invariances in neural nets.)
Core resources (we're sorry, we haven't finished tracking down resources for this concept yet)
Supplemental resources (the following are optional, but you may find them useful)
Paid:
→ Pattern Recognition and Machine Learning (Christopher M. Bishop)
A textbook for a graduate machine learning course, with a focus on Bayesian methods.
Location: Section 5.5.4, pages 263-265
See also
- Some other strategies for learning invariances:
- building it explicitly into the architecture, as in convolutional nets
- augmenting the training set with warped examples
- Tikhonov regularization, which penalizes instability with respect to noise