In Bayesian parameter estimation, uninformative priors are a way of making minimal assumptions about the model. They are commonly chosen to be invariant to certain transformations, such as translation or scaling. While uninformative priors are often improper, they can still lead to proper posterior distributions, and therefore still support posterior inference.
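A minimal numerical sketch of the last point, using assumed example numbers: for a Gaussian likelihood with known variance, the improper flat prior p(mu) ∝ 1 yields the proper posterior N(sample mean, sigma²/n), which is also the limit of a conjugate Normal prior as its variance grows.

```python
import numpy as np

# Sketch (assumed setup): data from a Gaussian with known variance sigma2.
# Under the improper flat prior p(mu) ∝ 1, the posterior for mu is proper:
# N(sample mean, sigma2 / n). We check it as the limit of a conjugate
# Normal(0, tau2) prior as tau2 -> infinity.
rng = np.random.default_rng(0)
sigma2 = 4.0
x = rng.normal(loc=2.0, scale=np.sqrt(sigma2), size=50)
n, xbar = len(x), x.mean()

def conjugate_posterior(tau2):
    """Posterior mean and variance of mu under a N(0, tau2) prior."""
    post_var = 1.0 / (n / sigma2 + 1.0 / tau2)
    post_mean = post_var * (n * xbar / sigma2)
    return post_mean, post_var

# Flat-prior posterior: mean = xbar, variance = sigma2 / n.
flat_mean, flat_var = xbar, sigma2 / n
wide_mean, wide_var = conjugate_posterior(tau2=1e12)

print(abs(wide_mean - flat_mean))  # vanishes as the prior flattens
print(abs(wide_var - flat_var))
```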
This concept has the prerequisites:
- Bayesian parameter estimation (Uninformative priors are normally used for Bayesian parameter estimation.)
- Gaussian distribution (The Gaussian distribution is an instructive example.)
- gamma distribution (A degenerate gamma distribution often serves as an uninformative prior.)
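To illustrate the last prerequisite with assumed example numbers: a Gamma(a, b) prior on a Gaussian's precision updates conjugately to Gamma(a + n/2, b + S/2), and the degenerate limit a, b → 0 (the improper prior p(lambda) ∝ 1/lambda) still leaves a proper posterior.

```python
import numpy as np

# Sketch (hypothetical numbers): Gamma(a, b) prior on the precision lambda
# of a Gaussian with known mean mu. The conjugate update is
#   posterior = Gamma(a + n/2, b + S/2),  S = sum((x - mu)**2).
# As a, b -> 0 the prior becomes the improper p(lambda) ∝ 1/lambda,
# yet the posterior Gamma(n/2, S/2) remains proper.
rng = np.random.default_rng(1)
mu, true_prec = 0.0, 2.0
x = rng.normal(mu, 1.0 / np.sqrt(true_prec), size=100)
n = len(x)
S = np.sum((x - mu) ** 2)

def posterior_params(a, b):
    """Conjugate gamma posterior parameters for the precision."""
    return a + n / 2.0, b + S / 2.0

a_lim, b_lim = posterior_params(0.0, 0.0)    # degenerate-prior limit
a_eps, b_eps = posterior_params(1e-8, 1e-8)  # nearly-degenerate prior

# The posterior mean of lambda is a/b; both agree closely.
print(a_lim / b_lim, a_eps / b_eps)
```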
Core resources (we're sorry, we haven't finished tracking down resources for this concept yet)
Supplemental resources (the following are optional, but you may find them useful)
→ Pattern Recognition and Machine Learning
A textbook for a graduate machine learning course, with a focus on Bayesian methods.
Location: Section 2.4.3, pages 117-120
→ Mathematical Statistics and Data Analysis
An undergraduate statistics textbook.
Location: Section 8.6.1, "Further remarks on priors," pages 294-296
- Jeffreys priors are a general way to construct uninformative priors.
- Weakly informative priors keep the flexibility of uninformative priors while avoiding the problems caused by improper priors.
- Improper priors can't be used when computing model evidence in Bayesian model comparison, because the marginal likelihood inherits the prior's arbitrary normalization constant.
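The Jeffreys construction mentioned above can be sketched concretely for a Bernoulli(theta) likelihood: the Fisher information is I(theta) = 1/(theta(1 - theta)), so the Jeffreys prior ∝ sqrt(I(theta)) is the Beta(1/2, 1/2) kernel. The check below is an illustrative sketch, not part of the original page.

```python
import numpy as np

# Jeffreys prior for a Bernoulli(theta) likelihood.
# Fisher information: I(theta) = E[(d/dtheta log p(x|theta))^2]
#                              = 1 / (theta * (1 - theta)).
# Jeffreys prior ∝ sqrt(I(theta)) = theta^(-1/2) * (1-theta)^(-1/2),
# i.e. the Beta(1/2, 1/2) density up to its normalizing constant pi.
def score_squared_expectation(theta):
    # x in {0, 1}; score = d/dtheta [x*log(theta) + (1-x)*log(1-theta)]
    score1 = 1.0 / theta            # score at x = 1
    score0 = -1.0 / (1.0 - theta)   # score at x = 0
    return theta * score1**2 + (1 - theta) * score0**2

thetas = np.linspace(0.1, 0.9, 9)
fisher_closed = 1.0 / (thetas * (1.0 - thetas))
fisher_direct = np.array([score_squared_expectation(t) for t in thetas])

jeffreys = np.sqrt(fisher_closed)                # unnormalized Jeffreys prior
beta_half = thetas**-0.5 * (1 - thetas)**-0.5    # Beta(1/2, 1/2) kernel

print(np.allclose(fisher_closed, fisher_direct))
print(np.allclose(jeffreys, beta_half))
```

Note that this Jeffreys prior is proper (it normalizes to pi), unlike the translation- and scale-invariant priors discussed above.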