Cramer-Rao bound
(1.1 hours to learn)
Summary
The Cramer-Rao bound gives a lower bound, in terms of the Fisher information, on the variance of any unbiased estimator of the parameters of a probability distribution. It is used to show that the maximum likelihood estimator is asymptotically efficient.
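In symbols, the scalar-parameter form of the bound (the case treated first in both core resources; the texts also cover more general versions) says that for n i.i.d. observations with per-observation Fisher information I(θ), any unbiased estimator satisfies:

```latex
% Cramer-Rao bound for a single parameter \theta, unbiased estimator \hat\theta,
% n i.i.d. observations, Fisher information I(\theta) per observation.
\[
  \operatorname{Var}_\theta\bigl(\hat\theta\bigr) \;\ge\; \frac{1}{n\, I(\theta)},
  \qquad
  I(\theta) \;=\; \mathbb{E}_\theta\!\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{\!2}\right].
\]
```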
Context
This concept has the prerequisites:
- Fisher information (The bound is given in terms of Fisher information.)
- covariance (The proof of the theorem uses properties of covariance.)
- partial derivatives (The proof of the theorem uses partial derivatives.)
Goals
- Prove the Cramer-Rao theorem, which bounds the variance of any unbiased estimator of model parameters.
- Use the result to compute the asymptotic relative efficiency of an estimator (a numerical sketch follows this list).
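As an illustration of the second goal, here is a minimal Monte Carlo sketch (not taken from the listed resources; the mean-vs-median comparison is a standard textbook example chosen for illustration). It compares the sample mean and sample median as estimators of the mean of a normal distribution: the mean attains the Cramer-Rao bound σ²/n, while the median has asymptotic variance πσ²/(2n), so its asymptotic relative efficiency is 2/π ≈ 0.64.

```python
# Monte Carlo sketch: efficiency of sample mean vs. sample median for Normal(mu, sigma^2).
# The Cramer-Rao bound for estimating mu is sigma^2 / n.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0
n, trials = 1_000, 5_000          # sample size per trial, number of trials

samples = rng.normal(mu, sigma, size=(trials, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

cr_bound = sigma**2 / n           # Cramer-Rao lower bound for the variance of an unbiased estimator of mu
var_mean = means.var()            # should be close to the bound (the mean is efficient)
var_median = medians.var()        # should be roughly (pi/2) times the bound

print(f"CR bound          : {cr_bound:.6f}")
print(f"Var(sample mean)  : {var_mean:.6f}")
print(f"Var(sample median): {var_median:.6f}")
print(f"Efficiency of median vs. mean: {var_mean / var_median:.3f}  (theory: {2/np.pi:.3f})")
```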
Core resources (read/watch one of the following)
-Paid-
→ Probability and Statistics
An introductory textbook on probability theory and statistics.
Location:
Section 8.8, "Fisher information," subsections "The information inequality" and "Efficient estimators," pages 518-522
→ Mathematical Statistics and Data Analysis
An undergraduate statistics textbook.
Location:
Section 8.7, "Efficiency and the Cramer-Rao lower bound," pages 298-305
See also
- The bound implies that maximum likelihood estimation is asymptotically efficient.
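Asymptotic efficiency here means that, under the usual regularity conditions, the MLE's limiting variance attains the bound:

```latex
\[
  \sqrt{n}\,\bigl(\hat\theta_{\mathrm{MLE}} - \theta\bigr)
  \;\xrightarrow{\;d\;}\;
  \mathcal{N}\!\bigl(0,\; I(\theta)^{-1}\bigr).
\]
```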