(1.1 hours to learn)
AdaBoost is an example of a boosting algorithm, where the goal is to take a "weak classifier" (one which performs slightly above chance) and make it into a "strong classifier" (one which performs well on the training set). It is widely used in data mining, especially in conjunction with decision trees, because of its simplicity and effectiveness.
-this concept has no prerequisites-
- Understand the steps in the AdaBoost algorithm
- Be aware of the underlying motivation: taking a "weak classifier" that performs slightly above chance and producing a "strong classifier" that classifies the whole training set correctly
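To make the steps concrete, here is a rough sketch of AdaBoost in plain Python, using decision stumps (one-feature threshold rules) as the weak classifier. The function and variable names are illustrative, not taken from any of the resources below; labels are assumed to be in {-1, +1}.

```python
import math

def train_stump(X, y, w):
    """Find the decision stump with lowest weighted error.

    Returns (feature index, threshold, polarity, weighted error)."""
    best, best_err = None, float("inf")
    for j in range(len(X[0])):
        for thr in sorted({x[j] for x in X}):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi[j] >= thr else -pol) != yi)
                if err < best_err:
                    best_err, best = err, (j, thr, pol)
    return best + (best_err,)

def adaboost(X, y, n_rounds=10):
    """AdaBoost with decision stumps; labels y must be in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n                            # start with uniform weights
    ensemble = []                                # (feature, threshold, polarity, alpha)
    for _ in range(n_rounds):
        j, thr, pol, err = train_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)    # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)  # vote weight of this weak learner
        preds = [pol if x[j] >= thr else -pol for x in X]
        # Reweight: misclassified points gain weight, correct ones lose it,
        # so the next round focuses on the hard examples.
        w = [wi * math.exp(-alpha * yi * pi) for wi, yi, pi in zip(w, y, preds)]
        total = sum(w)
        w = [wi / total for wi in w]
        ensemble.append((j, thr, pol, alpha))
    return ensemble

def predict(ensemble, x):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(alpha * (pol if x[j] >= thr else -pol)
                for j, thr, pol, alpha in ensemble)
    return 1 if score >= 0 else -1
```

For example, on six 1-D points labeled [-1, -1, -1, 1, 1, 1], a few rounds of boosting recover the obvious threshold rule; each weak learner on its own only needs to beat chance on the current weighting.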
Core resources (read/watch one of the following)
→ The Elements of Statistical Learning
A graduate-level statistical learning textbook with a focus on frequentist methods.
- Read the introductory chapters if you're not familiar with the basic machine learning setup.
→ Coursera: Machine Learning
→ Artificial Intelligence: a Modern Approach
A textbook giving a broad overview of all of AI.
Location: Section 18.4, "Ensemble learning," pages 664-668
- Read 18.1-18.2 if you're not familiar with the basic machine learning setup, and skim 18.3 to learn about decision trees.
Supplemental resources (the following are optional, but you may find them useful)
→ Pattern Recognition and Machine Learning
A textbook for a graduate machine learning course, with a focus on Bayesian methods.
Location: Section 14.3, "Boosting," not including 14.3.1, pages 657-659