soft margin SVM

(50 minutes to learn)

Summary

The standard SVM objective function, which maximizes the margin, is only well defined when the training set is linearly separable. The soft margin SVM adds flexibility by allowing some of the training points to be misclassified or to fall within the margin. In addition to handling non-separable training sets, it can also be more robust to outliers or mislabeled data.
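As a sketch of the idea, the soft margin objective is commonly written with slack variables xi_i that measure how far each point violates its margin, and a penalty parameter C that trades off margin width against violations (notation here follows the standard formulation, as in the resources listed below):

\min_{w,\, b,\, \xi} \;\; \frac{1}{2}\|w\|^2 \;+\; C \sum_{i=1}^{n} \xi_i
\quad \text{subject to} \quad y_i \left( w^\top x_i + b \right) \ge 1 - \xi_i, \qquad \xi_i \ge 0 \;\; \text{for all } i.

Taking C very large recovers behavior close to the hard-margin SVM, while smaller C tolerates more margin violations in exchange for a wider margin.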

Context

This concept has the prerequisites:

Core resources (read/watch one of the following)

-Free-

Stanford's Machine Learning lecture notes
Lecture notes for Stanford's machine learning course, aimed at graduate and advanced undergraduate students.
Author: Andrew Y. Ng

-Paid-

Supplemental resources (the following are optional, but you may find them useful)

-Free-

The Elements of Statistical Learning
A graduate-level statistical learning textbook with a focus on frequentist methods.
Authors: Trevor Hastie, Robert Tibshirani, Jerome Friedman

-Paid-

See also

-No Additional Notes-