# support vector regression

(50 minutes to learn)

## Summary

Support vector regression (SVR) is an analogue of the support vector machine (SVM) used for regression rather than classification. The loss function is the epsilon-insensitive loss, which ignores errors smaller than some margin and penalizes larger errors linearly. As with SVMs, the main selling point is the ability to represent complex nonlinear functions in terms of kernels evaluated at a small number of training examples.
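The epsilon-insensitive loss described above can be sketched in a few lines; the function name and default margin here are illustrative choices, not from any particular library:

```python
def epsilon_insensitive_loss(y_true, y_pred, epsilon=0.1):
    """Epsilon-insensitive loss: errors within epsilon of the target
    cost nothing; larger errors are penalized linearly."""
    return [max(abs(t - p) - epsilon, 0.0) for t, p in zip(y_true, y_pred)]

# Small errors (|1.0 - 1.05| < 0.1) incur zero loss;
# larger errors grow linearly beyond the margin.
print(epsilon_insensitive_loss([1.0, 2.0, 3.0], [1.05, 2.5, 2.0]))
# → [0.0, 0.4, 0.9]
```

Because the loss is exactly zero inside the margin, only training examples lying on or outside the epsilon-tube contribute to the solution; these are the support vectors, mirroring the sparsity of SVM classifiers.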

## Context

This concept has the prerequisites:

- the support vector machine
- linear regression
- SVM optimality conditions (The SVM optimality conditions are a necessary context for understanding the behavior of the solutions.)

## Core resources (read/watch one of the following)

## -Free-

→ The Elements of Statistical Learning

A graduate-level statistical learning textbook with a focus on frequentist methods.

## -Paid-

→ Pattern Recognition and Machine Learning

A textbook for a graduate machine learning course, with a focus on Bayesian methods.

Location:
Section 7.1.4, pages 339-343

## Supplemental resources (the following are optional, but you may find them useful)

## -Paid-

→ Machine Learning: a Probabilistic Perspective

A very comprehensive graduate-level machine learning textbook.

Location:
Section 14.5.1, pages 497-498

## See also

-No Additional Notes-