# MAP parameter estimation

(35 minutes to learn)

## Summary

In Bayesian parameter estimation, unless the prior is specially chosen (e.g. a conjugate prior), there is often no analytical way to integrate out the model parameters. In these cases, maximum a posteriori (MAP) estimation is a common approximation, where we choose the parameters which maximize the posterior. While computationally convenient, it has the drawbacks that it is not invariant to reparameterization, and that the MAP estimate may not be typical of the posterior.
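As an illustrative sketch (not from the readings below), the following maximizes the unnormalized log posterior for a coin's bias under a Beta prior, where the Beta-Bernoulli model happens to have a closed-form MAP estimate to check against. The data and hyperparameters are made up for illustration; the last lines show the non-invariance to reparameterization mentioned above, by computing the MAP estimate in logit space instead.

```python
import numpy as np

# Hypothetical example: MAP estimate of a coin's bias theta,
# Bernoulli likelihood with a Beta(a, b) prior.
a, b = 2.0, 2.0                              # prior hyperparameters (assumed)
data = np.array([1, 1, 1, 0, 1, 0, 1, 1])   # made-up coin flips
k, n = data.sum(), len(data)

# Unnormalized log posterior: log-likelihood + log-prior (constants dropped).
theta = np.linspace(1e-6, 1 - 1e-6, 100001)
log_post = ((k + a - 1) * np.log(theta)
            + (n - k + b - 1) * np.log(1 - theta))

theta_map = theta[np.argmax(log_post)]        # numerical MAP estimate
theta_closed = (k + a - 1) / (n + a + b - 2)  # Beta-Bernoulli closed form

# Non-invariance to reparameterization: under phi = logit(theta), the density
# picks up a Jacobian term theta * (1 - theta), shifting the maximizer.
log_post_phi = log_post + np.log(theta) + np.log(1 - theta)
theta_from_phi_map = theta[np.argmax(log_post_phi)]  # differs from theta_map
```

Maximizing the density over `theta` and over `phi` gives different answers for the same posterior, which is exactly the reparameterization issue: the MAP estimate depends on the coordinate system, unlike the full posterior distribution.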

## Context

This concept has the prerequisites:

- Bayesian parameter estimation (MAP estimation is an approximation to Bayesian parameter estimation.)
- optimization problems (MAP estimation is an optimization problem.)

## Core resources (read/watch one of the following)

## -Paid-

→ Machine Learning: a Probabilistic Perspective

A very comprehensive graduate-level machine learning textbook.

Location:
Section 5.2.1, pages 149-152

→ Probabilistic Graphical Models: Principles and Techniques

A very comprehensive textbook for a graduate-level course on probabilistic AI.

Location:
Section 17.4.4, pages 751-754

## See also

-No Additional Notes-