# uninformative priors

## Summary

In Bayesian parameter estimation, uninformative priors are a way of making minimal assumptions about the model. They are commonly chosen to be invariant to certain transformations, such as translation or scaling. While uninformative priors are often improper (they do not integrate to 1), they can still lead to proper posterior distributions, and can therefore be used for posterior inference.
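The improper-prior-but-proper-posterior phenomenon can be checked numerically. The sketch below is an illustrative example (the Gaussian setup and all numbers are assumptions, not from the text): a flat, improper prior p(μ) ∝ 1 on the mean of a Gaussian with known standard deviation yields a proper posterior, N(sample mean, σ²/n).

```python
import numpy as np

# Illustrative example (assumed setup): flat improper prior p(mu) ∝ 1
# on a Gaussian mean with known sigma.
rng = np.random.default_rng(0)
sigma = 2.0
data = rng.normal(loc=5.0, scale=sigma, size=50)
n, xbar = len(data), data.mean()

# Unnormalized posterior on a grid: likelihood times the constant prior.
mu = np.linspace(xbar - 5.0, xbar + 5.0, 2001)
log_post = -0.5 * n * (mu - xbar) ** 2 / sigma ** 2  # mu-free terms dropped
post = np.exp(log_post - log_post.max())
dmu = mu[1] - mu[0]
post /= post.sum() * dmu  # normalizer is finite: the posterior is proper

# Compare against the known analytic posterior N(xbar, sigma^2 / n).
analytic = np.exp(-0.5 * (mu - xbar) ** 2 / (sigma ** 2 / n))
analytic /= np.sqrt(2.0 * np.pi * sigma ** 2 / n)
print(np.max(np.abs(post - analytic)))  # small: the two curves agree
```

The key point is that the prior contributes only a constant factor, so the posterior is proportional to the likelihood, which integrates to a finite value in μ here.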

## Context

This concept has the prerequisites:

- Bayesian parameter estimation (Uninformative priors are normally used for Bayesian parameter estimation.)
- Gaussian distribution (The Gaussian distribution is an instructive example.)
- gamma distribution (A degenerate gamma distribution often serves as an uninformative prior.)

## Core resources (we're sorry, we haven't finished tracking down resources for this concept yet)

## Supplemental resources (the following are optional, but you may find them useful)

### Paid

→ Pattern Recognition and Machine Learning

A textbook for a graduate machine learning course, with a focus on Bayesian methods.

Location:
Section 2.4.3, pages 117-120

→ Mathematical Statistics and Data Analysis

An undergraduate statistics textbook.

Location:
Section 8.6.1, "Further remarks on priors," pages 294-296

## See also

- Jeffreys priors are a general way to construct uninformative priors.
- Weakly informative priors keep the flexibility of uninformative priors while avoiding the problems caused by improper priors.
- Improper priors can't be used when computing model evidence in Bayesian model comparison.
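As a small sanity check of the Jeffreys construction mentioned above (the Bernoulli case is this sketch's own choice of example, not from the text): for a Bernoulli likelihood the Fisher information is I(θ) = 1/(θ(1−θ)), so the Jeffreys prior √I(θ) ∝ θ^(−1/2)(1−θ)^(−1/2) is a Beta(1/2, 1/2) distribution, which, unlike many uninformative priors, is actually proper.

```python
import numpy as np

# Jeffreys prior for a Bernoulli parameter theta: proportional to the
# square root of the Fisher information, sqrt(1 / (theta * (1 - theta))).
N = 1_000_000
theta = (np.arange(N) + 0.5) / N          # midpoint grid on (0, 1)
unnormalized = theta ** -0.5 * (1.0 - theta) ** -0.5

# Midpoint-rule integral over (0, 1). The normalizing constant is the
# Beta function B(1/2, 1/2) = pi, i.e. finite, so this prior is proper.
z = np.mean(unnormalized)
print(z)  # close to pi
```

The midpoint grid deliberately avoids the endpoints 0 and 1, where the density diverges; the singularities are integrable, which is exactly why the normalizer stays finite.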