Uninformative priors


In Bayesian parameter estimation, uninformative priors encode minimal assumptions about the model. They are commonly chosen to be invariant under certain transformations, such as translation or scaling. Although uninformative priors are often improper (they do not integrate to a finite value), they can still yield proper posterior distributions, and therefore remain usable for posterior inference.
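To make the last point concrete, here is a minimal numerical sketch (all names and numbers are illustrative): an improper flat prior on the mean of a Gaussian with known variance still produces a proper, normalizable posterior.

```python
import numpy as np

# Illustrative setup: observe n draws from N(mu, sigma^2) with sigma known,
# and place the improper flat prior p(mu) ∝ 1, which is invariant under
# translation mu -> mu + c.
rng = np.random.default_rng(0)
sigma, n = 2.0, 50
data = rng.normal(loc=1.5, scale=sigma, size=n)

# With a flat prior, the posterior is proportional to the likelihood alone.
mu_grid = np.linspace(-5.0, 8.0, 10001)
log_lik = -0.5 * ((data[:, None] - mu_grid[None, :]) ** 2).sum(axis=0) / sigma**2
post = np.exp(log_lik - log_lik.max())

# The likelihood integrates to a finite value over mu, so even though the
# prior was improper, normalizing yields a proper posterior: N(xbar, sigma^2/n).
dmu = mu_grid[1] - mu_grid[0]
post /= post.sum() * dmu
post_mean = (mu_grid * post).sum() * dmu  # should be close to data.mean()
```

Analytically, the posterior here is Normal with mean equal to the sample mean and variance sigma^2/n, so the grid-based posterior mean should match the sample mean closely.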


This concept has the prerequisites:

Core resources (we're sorry, we haven't finished tracking down resources for this concept yet)

Supplemental resources (the following are optional, but you may find them useful)


See also

  • Jeffreys priors are a general way to construct uninformative priors.
  • Weakly informative priors keep the flexibility of uninformative priors while avoiding the problems caused by improper priors.
  • Improper priors can't be used when computing model evidence in Bayesian model comparison.
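As a small illustration of the first bullet above, the Jeffreys prior for a Bernoulli parameter theta is proportional to theta^(-1/2) (1 - theta)^(-1/2), i.e. a Beta(1/2, 1/2) distribution. The sketch below (a hypothetical numerical check, relying on standard Beta-Bernoulli conjugacy) verifies on a grid that the resulting posterior after k successes in n trials matches the conjugate Beta(k + 1/2, n - k + 1/2) answer.

```python
import numpy as np

# Jeffreys prior for Bernoulli: p(theta) ∝ theta^(-1/2) (1 - theta)^(-1/2).
# After k successes in n trials, conjugacy gives Beta(k + 1/2, n - k + 1/2).
k, n = 7, 10
theta = np.linspace(1e-6, 1.0 - 1e-6, 100001)

# Unnormalized log-posterior: Jeffreys prior times binomial likelihood.
log_post = (k - 0.5) * np.log(theta) + (n - k - 0.5) * np.log1p(-theta)
post = np.exp(log_post - log_post.max())

dt = theta[1] - theta[0]
post /= post.sum() * dt
post_mean = (theta * post).sum() * dt

# The Beta(k + 1/2, n - k + 1/2) posterior has mean (k + 1/2) / (n + 1).
analytic_mean = (k + 0.5) / (n + 1)
```

Note that in this case the Jeffreys prior is itself proper; for location and scale parameters, by contrast, the translation- and scale-invariant priors are improper.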