SVM optimality conditions
(1.1 hours to learn)
Using Lagrange duality, we can formulate a set of conditions that characterize the optimal solution to the SVM objective. These conditions show that the weight vector is a linear combination of a (hopefully small) subset of the training points, those for which the margin constraint is tight.
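The support-vector expansion described above can be checked numerically. The sketch below (assuming scikit-learn is available; the dataset and parameters are arbitrary choices for illustration) fits a linear SVM and verifies that the weight vector equals the sum of dual coefficients times the support vectors, and that only a subset of the training points appear in that sum.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Toy two-class dataset (arbitrary illustration parameters)
X, y = make_blobs(n_samples=100, centers=2, random_state=0)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# dual_coef_ stores alpha_i * y_i for the support vectors only,
# so the primal weight vector is their combination with the support vectors:
w_from_dual = clf.dual_coef_ @ clf.support_vectors_

assert np.allclose(w_from_dual, clf.coef_)
print(len(clf.support_vectors_), "support vectors out of", len(X), "training points")
```

The assertion confirms the optimality condition w = Σᵢ αᵢ yᵢ xᵢ, where the sum runs only over points whose margin constraint is tight (αᵢ > 0).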
This concept has the prerequisites:
Core resources (read/watch one of the following)
→ Stanford's Machine Learning lecture notes
Lecture notes for Stanford's machine learning course, aimed at graduate and advanced undergraduate students.
Supplemental resources (the following are optional, but you may find them useful)
→ The Elements of Statistical Learning
A graduate-level statistical learning textbook with a focus on frequentist methods.
→ Pattern Recognition and Machine Learning
A textbook for a graduate machine learning course, with a focus on Bayesian methods.
Location: Section 7.1, up to 7.1.1, pages 326-331