basis function expansions
(1 hour to learn)
A basis function expansion augments or replaces the attributes of a dataset with transformations of those attributes. For instance, given an input attribute X, a basis function expansion could map it to three features: 1, X, and X^2 (a "polynomial basis"). This mapping allows various learning algorithms and statistical procedures to capture nonlinear trends in the data while still using linear models on the transformed attributes. For instance, combining a polynomial basis with linear regression lets linear regression find polynomial (nonlinear) trends in the data; this is commonly called "polynomial regression." The process of selecting the particular mapping (basis functions) is often referred to as "feature engineering."
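The polynomial-basis example above can be sketched in a few lines of numpy. This is a minimal illustration, not a reference implementation: the data-generating coefficients and noise level are made up for the example.

```python
import numpy as np

# Synthetic data from a quadratic trend (coefficients chosen for illustration).
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 2.0 - 3.0 * x + 1.5 * x**2 + rng.normal(0, 0.1, size=x.shape)

# Basis function expansion: map each input x to the features [1, x, x^2].
Phi = np.vstack([np.ones_like(x), x, x**2]).T

# Ordinary linear regression on the expanded features = polynomial regression.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(w)  # should lie close to the true coefficients [2, -3, 1.5]
```

Note that the model is still linear in the weights w; the nonlinearity lives entirely in the fixed feature map.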
This concept has the prerequisites:
Core resources (read/watch one of the following)
→ The Elements of Statistical Learning
A graduate-level statistical learning textbook with a focus on frequentist methods.
Location: Section 5.1 provides a good overview, and Section 5.2 provides a detailed explanation of common basis functions (piecewise polynomials and splines)
- the remainder of Chapter 5 provides a good reference for using spline and wavelet basis functions
Supplemental resources (the following are optional, but you may find them useful)
→ Mathematical Monk Tutorials
- watch from 3:20 for an overview of using basis functions with linear regression
- Often, fixed basis function expansions do not give us enough flexibility to model the nonlinear structure we're interested in. More powerful methods include:
- Neural networks, which allow the basis functions to be adapted to the data.
- Kernels, which are a way of implicitly representing a very high-dimensional (possibly infinite-dimensional) feature expansion in terms of a kernel function between data points.
- Representation learning, an area of machine learning that tries to learn high-level feature representations automatically from raw data.
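The kernel idea in the list above can be made concrete with a small sketch: a kernel function computes the inner product of an implicit feature expansion without ever constructing that expansion. The specific correspondence shown (a degree-2 polynomial kernel on scalar inputs and its explicit feature map) is a standard textbook identity, used here purely for illustration.

```python
import numpy as np

def phi(x):
    # Explicit degree-2 polynomial feature expansion of a scalar input:
    # phi(x) = [1, sqrt(2)*x, x^2].
    return np.array([1.0, np.sqrt(2) * x, x**2])

def poly_kernel(x, z):
    # Implicit version: k(x, z) = (1 + x*z)^2 equals phi(x) . phi(z),
    # so the expansion never needs to be formed explicitly.
    return (1.0 + x * z) ** 2

x, z = 0.7, -1.3
# The two computations agree:
assert np.isclose(phi(x) @ phi(z), poly_kernel(x, z))
```

For higher-degree kernels, or inputs with many attributes, the implicit feature space grows combinatorially (or is infinite-dimensional, as with the RBF kernel), which is exactly why the kernel trick is useful.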