# Bayes' rule

(2 hours to learn)

## Summary

Bayes' rule is a formula for combining prior beliefs with observed evidence to obtain a "posterior" distribution. It is central to Bayesian statistics, where one infers a posterior over the parameters of a statistical model given the observed data.
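In symbols (a standard statement, where $H$ denotes a hypothesis and $E$ the observed evidence), the rule and its one-line derivation from the definition of conditional probability are:

```latex
% Definition of conditional probability, applied in both directions:
%   P(H \cap E) = P(H \mid E)\,P(E) = P(E \mid H)\,P(H)
% Dividing through by P(E) gives Bayes' rule:
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
```

Here $P(H)$ is the prior, $P(E \mid H)$ the likelihood of the evidence, and $P(H \mid E)$ the posterior.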

## Context

This concept has the prerequisites:

- conditional probability (Bayes' rule follows from the definition of conditional probability.)

## Goals

- Know the statement of Bayes' rule

- Be able to use it to combine prior information with observed evidence

- Derive Bayes' rule from the definition of conditional probability

- Know the terminology: prior, posterior

- Be able to reason intuitively about Bayes' rule in terms of odds ratios
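The goals above can be exercised on a small worked example. The numbers below (a condition with 1% prevalence, a test with a 90% detection rate and a 5% false-positive rate) are purely illustrative:

```python
# Prior: P(H), the probability of the condition before seeing any evidence.
p_h = 0.01
# Likelihood: P(E | H), probability of a positive test given the condition.
p_e_given_h = 0.90
# False-positive rate: P(E | not H).
p_e_given_not_h = 0.05

# Total probability of the evidence:
#   P(E) = P(E | H) P(H) + P(E | not H) P(not H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' rule: P(H | E) = P(E | H) P(H) / P(E)
posterior = p_e_given_h * p_h / p_e
print(f"posterior = {posterior:.3f}")  # about 0.154

# Odds-ratio form: posterior odds = prior odds * likelihood ratio.
prior_odds = p_h / (1 - p_h)
likelihood_ratio = p_e_given_h / p_e_given_not_h
posterior_odds = prior_odds * likelihood_ratio
# Converting odds back to a probability recovers the same posterior.
print(f"posterior (odds form) = {posterior_odds / (1 + posterior_odds):.3f}")
```

Note how a positive result raises the probability of the condition from 1% to only about 15%: the evidence shifts the prior, but the low base rate still dominates. The odds form makes this transparent, since the likelihood ratio of 18 multiplies prior odds of roughly 1:99.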

## Core resources (read/watch one of the following)

### Free

→ Mathematical Monk: Probability Primer (2011)

Online videos on probability theory.

Other notes:

- This uses the measure theoretic notion of probability, but should still be accessible without that background. Refer to Lecture 1.S for unfamiliar terms.

### Paid

→ A First Course in Probability

An introductory probability textbook.

Location:
Section 3.3, "Bayes' Formula," pages 72-87

→ Probability and Statistics

An introductory textbook on probability theory and statistics.

Location:
Section 2.3, "Bayes' Theorem," pages 66-77

## Supplemental resources (the following are optional, but you may find them useful)

### Free

→ BerkeleyX: Introduction to Statistics: Probability

## See also

- Bayes nets are a framework for sophisticated probabilistic reasoning about many variables of interest, using tools such as Bayes' rule.
- Bayesian statistics is a branch of statistics inspired by Bayesian reasoning.