MAP parameter estimation
(35 minutes to learn)
In Bayesian parameter estimation, unless the prior is specially chosen (e.g. conjugate to the likelihood), there is often no analytical way to integrate out the model parameters. In such cases, maximum a posteriori (MAP) estimation is a common approximation: we choose the parameters that maximize the posterior. While computationally convenient, MAP has two drawbacks: it is not invariant to reparameterization, and the MAP estimate may not be typical of the posterior.
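As a sketch of both points, consider a Beta-Bernoulli coin-flipping model (the prior hyperparameters and data counts below are illustrative, not from any particular source). The posterior is available in closed form, so we can compare the MAP estimate with the posterior mean, and also see that maximizing the posterior density of a reparameterized variable (here the log-odds) gives a different answer:

```python
import math

a, b = 2.0, 2.0      # Beta(a, b) prior on the coin bias theta (illustrative)
h, t = 7, 3          # observed heads and tails (illustrative)

# Posterior is Beta(a + h, b + t). The MAP estimate is its mode:
theta_map = (a + h - 1) / (a + h + b + t - 2)    # = 2/3 here
theta_mean = (a + h) / (a + h + b + t)           # posterior mean, ~0.643

# Non-invariance to reparameterization: maximize the posterior density of
# phi = logit(theta) instead. The change of variables introduces a Jacobian
# factor |dtheta/dphi| = theta * (1 - theta), which shifts the maximum.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def log_post_phi(phi):
    th = sigmoid(phi)
    log_beta = (a + h - 1) * math.log(th) + (b + t - 1) * math.log(1 - th)
    return log_beta + math.log(th * (1 - th))    # Jacobian term

# Crude grid search for the mode in phi-space over [-5, 5].
phi_map = max((i * 1e-4 for i in range(-50000, 50001)), key=log_post_phi)
theta_from_phi_map = sigmoid(phi_map)            # ~0.643, not 2/3

print(theta_map, theta_mean, theta_from_phi_map)
```

The maximizer in phi-space maps back to the posterior mean of theta rather than its mode, showing that "the MAP estimate" depends on the parameterization chosen.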
This concept has the prerequisites:
Core resources (read/watch one of the following)
→ Machine Learning: a Probabilistic Perspective
A very comprehensive graduate-level machine learning textbook.
Location: Section 5.2.1, pages 149-152
→ Probabilistic Graphical Models: Principles and Techniques
A very comprehensive textbook for a graduate-level course on probabilistic AI.
Location: Section 17.4.4, pages 751-754