# gradient

(1.1 hours to learn)

## Summary

The gradient of a function of several variables points in the direction of steepest increase at a given point. Its entries are the partial derivatives of the function, and it is widely used in optimization.
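As a concrete sketch of the summary above, the gradient can be approximated numerically by taking a central finite difference in each coordinate (the example function and step size here are illustrative assumptions, not from the resources below):

```python
import numpy as np

def grad(f, x, h=1e-6):
    """Approximate the gradient of f at x by central finite differences:
    the i-th entry is (f(x + h*e_i) - f(x - h*e_i)) / (2h)."""
    x = np.asarray(x, dtype=float)
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# For f(x, y) = x^2 + 3y, the exact gradient is (2x, 3).
f = lambda v: v[0] ** 2 + 3 * v[1]
print(grad(f, [1.0, 2.0]))  # close to [2.0, 3.0]
```

Each entry of the result is one partial derivative, matching the definition in the summary.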

## Context

This concept has the prerequisites:

- linear approximation (The gradient gives another way to represent the linear approximation to a function.)
- partial derivatives (The gradient is computed in terms of partial derivatives.)
- dot product (The linear approximation is given in terms of a dot product with the gradient.)
- functions of several variables (The gradient is typically applied to functions of several variables.)
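The prerequisites fit together in one formula: the linear approximation is f(x + h) ≈ f(x) + ∇f(x) · h, a dot product with the gradient. A quick numerical check (the function and point chosen here are illustrative assumptions):

```python
import numpy as np

def f(v):
    return v[0] ** 2 + np.sin(v[1])

def grad_f(v):
    # exact partial derivatives of f
    return np.array([2 * v[0], np.cos(v[1])])

x = np.array([1.0, 0.5])
h = np.array([0.01, -0.02])  # a small displacement

exact = f(x + h)
linear = f(x) + grad_f(x) @ h  # dot product with the gradient
print(exact, linear)  # the two values agree closely for small h
```

The approximation error shrinks quadratically as h shrinks, which is what "linear approximation" promises.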

## Core resources (read/watch one of the following)

## -Free-

→ MIT OpenCourseWare: Multivariable Calculus (2010)

Video lectures for MIT's introductory multivariable calculus class.

## -Paid-

→ Multivariable Calculus

An introductory multivariable calculus textbook.

Location:
Section 13.8, "Directional derivatives and the gradient vector," up to "The gradient vector as normal vector," pages 907-913

→ Multivariable Mathematics

A textbook on linear algebra and multivariable calculus with proofs.

Location:
Section 3.4, "The gradient," pages 104-106

Other notes:

- See section 3.1 for the definition of directional derivatives.

## Supplemental resources (the following are optional, but you may find them useful)

## -Free-

→ Khan Academy: Calculus

## See also

- Gradient ascent, or 'hill climbing', is a simple and widely used optimization algorithm.
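Gradient ascent can be sketched in a few lines: repeatedly step in the direction of the gradient, which by the summary above is the direction of steepest increase (the step size, iteration count, and objective below are illustrative assumptions):

```python
import numpy as np

def gradient_ascent(grad, x0, lr=0.1, steps=100):
    """Hill climbing: repeatedly move a small step in the gradient direction."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + lr * grad(x)
    return x

# Maximize f(x, y) = -(x - 1)^2 - (y + 2)^2, whose maximum is at (1, -2).
g = lambda v: np.array([-2 * (v[0] - 1), -2 * (v[1] + 2)])
print(gradient_ascent(g, [0.0, 0.0]))  # converges toward [1.0, -2.0]
```

For minimization, the same loop with a negated step is gradient descent.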