# feed-forward neural nets

(2 hours to learn)

## Summary

Feed-forward neural networks are a supervised learning architecture built from neuron-like "units," each of which computes a simple function of its inputs. Because layers of such units can be stacked, neural nets can learn complex nonlinear functions of their inputs.
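The summary above can be sketched in a few lines of NumPy: each unit applies an affine map followed by a nonlinearity, and stacking layers composes these simple functions into a nonlinear one. This is only an illustrative sketch (the weights are random, not trained), and the `relu` nonlinearity is one common choice among several.

```python
import numpy as np

def relu(z):
    # Elementwise nonlinearity; without it, stacked layers would
    # collapse into a single linear map.
    return np.maximum(0.0, z)

def feed_forward(x, layers):
    """Forward pass through a feed-forward net.

    `layers` is a list of (W, b) weight/bias pairs; each unit computes
    a simple function (affine map plus nonlinearity) of its inputs.
    """
    h = x
    for W, b in layers[:-1]:
        h = relu(W @ h + b)   # hidden layers: nonlinear units
    W, b = layers[-1]
    return W @ h + b          # linear output layer

# A tiny net mapping 3 inputs through 4 hidden units to 1 output
# (weights are random placeholders, purely for illustration).
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),
          (rng.standard_normal((1, 4)), np.zeros(1))]
y = feed_forward(np.array([1.0, -2.0, 0.5]), layers)
```

Training such a net means adjusting the `(W, b)` pairs, typically by gradient descent via backpropagation (see the links below).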

## Context

This concept has the prerequisites:

- basis function expansions (Feed-forward neural nets can be seen as an adaptive basis function expansion.)

## Core resources (read/watch one of the following)

## -Free-

→ Coursera: Machine Learning (2013)

An online machine learning course aimed at a broad audience.

Location:
Lecture series "Neural networks: representation"

Other notes:

- Click on "Preview" to see the videos.

## -Paid-

→ Pattern Recognition and Machine Learning

A textbook for a graduate machine learning course, with a focus on Bayesian methods.

Location:
Section 5.1, pages 227-232

→ Artificial Intelligence: a Modern Approach

A textbook giving a broad overview of all of AI.

Location:
Section 20.5, pages 736-748

## Supplemental resources (the following are optional, but you may find them useful)

## -Free-

→ Coursera: Machine Learning

An online machine learning course aimed at advanced undergraduates.

Location:
Lecture "Multilayer perceptrons"

Additional dependencies:

- gradient descent
- perceptron algorithm

Other notes:

- Click on "Preview" to see the videos.

→ The Elements of Statistical Learning

A graduate-level statistical learning textbook with a focus on frequentist methods.

## See also

- Neural nets are a form of distributed representation.
- Neural nets can be trained using an algorithm called backpropagation.
- Some examples of neural net architectures:
  - convolutional nets, an architecture for vision problems where the weights are replicated across an image
  - Boltzmann machines, a kind of neural net used for density modeling
  - deep belief nets, which are used for learning multilayer representations
  - recurrent neural nets, which implement a form of memory over time

- Connectionist psychology uses neural nets to model human cognition.
- We can theoretically analyze the representational capacity of neural nets.