(3.4 hours to learn)
Hopfield networks are a kind of recurrent neural network that implements an associative (content-addressable) memory: stored patterns can be retrieved from partial or corrupted versions of themselves. The behavior of the network can be modeled in terms of minimizing an energy function.
This concept has no prerequisites.
- Know the rules for:
  - learning the weights of a Hopfield net
  - finding a low-energy configuration
- Understand why the dynamics of the network can be described in terms of an energy function.
- What happens if you try to store too many memories in a Hopfield net?
- Know how Hopfield nets can be applied to optimization problems (e.g. image interpretation, Traveling Salesman)
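The first two goals above can be sketched in a few lines of NumPy. This is a minimal illustration, not taken from any of the listed resources: it stores patterns with the Hebbian rule (w_ij proportional to the sum of x_i·x_j over patterns, zero diagonal), then recovers a pattern from a corrupted copy via asynchronous threshold updates, each of which can only lower (or preserve) the energy E(s) = -½ sᵀWs.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: W = (1/N) * sum of outer products, with zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)
    return W

def energy(W, s):
    """E(s) = -1/2 s^T W s; asynchronous updates never increase this."""
    return -0.5 * s @ W @ s

def recall(W, s, steps=200, seed=0):
    """Asynchronously flip one randomly chosen unit at a time to its
    locally optimal state: s_i = sign(sum_j w_ij s_j)."""
    rng = np.random.default_rng(seed)
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store one pattern, then recover it from a copy with one flipped bit.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1
restored = recall(W, noisy)
```

Storing too many patterns (roughly beyond 0.14·N random patterns for N units) makes the memories interfere, which is the capacity question raised above.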
Core resources (read/watch one of the following)
→ Coursera: Neural Networks for Machine Learning (2012)
An online course by Geoff Hinton, who invented many of the core ideas behind neural nets and deep learning.
- Skim the introductory lectures for a general motivation of neural nets.
→ Information Theory, Inference, and Learning Algorithms
A graduate-level textbook on machine learning and information theory.
- The argument about variational energy and the mathematical analysis of Section 42.7 are optional.
- Boltzmann machines (and in particular, [restricted Boltzmann machines (RBMs)](restricted_boltzmann_machines)) are a modern probabilistic analogue of Hopfield nets.
- The mean field approximation updates in an Ising model have a similar form to Hopfield nets.
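To make the similarity concrete, here is a hedged side-by-side sketch (my own illustration, with arbitrary random couplings J): the mean-field update for an Ising model is m_i = tanh(β Σ_j J_ij m_j), while the Hopfield update is s_i = sign(Σ_j w_ij s_j), i.e. the same form with a hard threshold in place of the soft one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Arbitrary symmetric couplings with zero diagonal (illustration only).
J = rng.standard_normal((n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)

s = rng.choice([-1.0, 1.0], size=n)  # Hopfield: hard binary states
m = s.copy()                          # mean field: soft states in (-1, 1)
beta = 1.0                            # inverse temperature

for _ in range(50):
    i = rng.integers(n)
    s[i] = 1.0 if J[i] @ s >= 0 else -1.0  # hard threshold: sign(.)
    m[i] = np.tanh(beta * (J[i] @ m))      # soft threshold: tanh(beta * .)
```

As beta grows, tanh(beta·x) approaches sign(x), so the Hopfield update can be read as the zero-temperature limit of the mean-field update.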