(9 hours to learn)
The Lagrange dual of a convex optimization problem is another convex optimization problem where the optimization variables are the Lagrange multipliers of the original problem. It leads to surprising relationships between seemingly different optimization problems. Duality is commonly used in approximation algorithms, since constraining the dual corresponds to relaxing the original problem.
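As a sketch of the standard construction (notation follows the common convention, e.g. as in Boyd and Vandenberghe's Convex Optimization): for a primal problem of minimizing f_0(x) subject to f_i(x) ≤ 0, the dual problem maximizes the Lagrange dual function over the multipliers:

```latex
% Lagrangian of the primal problem  \min f_0(x) \text{ s.t. } f_i(x) \le 0,\ i = 1, \dots, m
L(x, \lambda) = f_0(x) + \sum_{i=1}^{m} \lambda_i f_i(x)

% Lagrange dual function (always concave, even if the primal is not convex)
g(\lambda) = \inf_x \, L(x, \lambda)

% Dual problem: maximize the dual function over nonnegative multipliers
\max_{\lambda \succeq 0} \; g(\lambda)

% Weak duality: the dual optimum d^\star lower-bounds the primal optimum p^\star
d^\star \le p^\star
```

Weak duality holds for any problem; under convexity plus a constraint qualification such as Slater's condition, strong duality (d* = p*) holds as well.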
This concept has the prerequisites:
Core resources (read/watch one of the following)
→ Convex Optimization
A graduate-level textbook on convex optimization.
- See Section 3.3 for the definition of the convex conjugate and Section 4.3 for the definition of a linear program.
- The KKT conditions characterize the optimal solutions to a convex optimization problem.
- Applications in machine learning and statistics:
- Support vector machines (SVMs), a classification algorithm, use duality in their optimization.
- Variational inference, where an intractable distribution is approximated with a tractable one, can be thought of in terms of convex duality.
- The relationship between min-cut and max-flow is an example of Lagrange duality.
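The min-cut/max-flow equality can be verified directly on a small instance. The sketch below (a hypothetical toy network, not from the text) computes the maximum flow with the Edmonds–Karp algorithm and the minimum cut by brute-force enumeration of s-t cuts; the two values coincide, as Lagrange duality of the underlying linear programs guarantees:

```python
from collections import deque
from itertools import combinations

def max_flow(cap, s, t):
    # Edmonds-Karp: repeatedly augment along shortest residual paths (BFS).
    n = len(cap)
    residual = [row[:] for row in cap]
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path left
            return flow
        # find the bottleneck capacity along the path
        bottleneck = float('inf')
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        # push flow and update residual capacities
        v = t
        while v != s:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

def min_cut(cap, s, t):
    # Brute force: enumerate every s-t cut and return the cheapest.
    n = len(cap)
    others = [v for v in range(n) if v not in (s, t)]
    best = float('inf')
    for r in range(len(others) + 1):
        for subset in combinations(others, r):
            S = {s, *subset}
            cost = sum(cap[u][v] for u in S for v in range(n) if v not in S)
            best = min(best, cost)
    return best

# toy network: node 0 = source, node 3 = sink (capacities are made up)
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3), min_cut(cap, 0, 3))  # prints: 5 5
```

The max-flow LP and the min-cut LP are duals of one another, so the equality of the two printed values is an instance of strong duality rather than a coincidence of this network.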