# variable elimination

(1.7 hours to learn)

## Summary

Variable elimination is a simple algorithm for marginalization and partition function computation in graphical models. It is based on interchanging sums and products in the definitions of marginals or partition functions. While it produces exact answers, the complexity blows up exponentially in the worst case.
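To make the sum/product interchange concrete, here is a minimal sketch on a made-up three-variable chain MRF p(x1, x2, x3) ∝ φ12(x1, x2) φ23(x2, x3) with binary variables. The factor tables and the elimination order (x1, then x2) are illustrative choices, not taken from any of the resources below.

```python
import numpy as np

# Toy chain MRF: p(x1, x2, x3) proportional to phi12(x1, x2) * phi23(x2, x3).
# All variables are binary; the factor values are arbitrary for illustration.
phi12 = np.array([[4.0, 1.0],
                  [1.0, 2.0]])   # phi12[x1, x2]
phi23 = np.array([[3.0, 2.0],
                  [1.0, 3.0]])   # phi23[x2, x3]

# Eliminate x1: tau2[x2] = sum_{x1} phi12[x1, x2].
# This is the sum "pushed inside" past the factor phi23, which
# does not mention x1.
tau2 = phi12.sum(axis=0)

# Eliminate x2: tau3[x3] = sum_{x2} tau2[x2] * phi23[x2, x3].
tau3 = tau2 @ phi23

# Summing the final message gives the partition function Z,
# and normalizing gives the exact marginal p(x3).
Z = tau3.sum()
p_x3 = tau3 / Z

# Brute-force check against the full joint table: the answers agree,
# but the joint table has 2^3 entries versus the small intermediate
# factors above -- this gap is what blows up exponentially in general.
joint = phi12[:, :, None] * phi23[None, :, :]
assert np.allclose(joint.sum(), Z)
assert np.allclose(joint.sum(axis=(0, 1)) / joint.sum(), p_x3)
```

On a chain, each intermediate factor stays small; on densely connected graphs, eliminating a variable can create a factor over all of its neighbors, which is the source of the worst-case exponential cost.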

## Context

This concept has the prerequisites:

- Markov random fields (variable elimination is an MRF inference algorithm.)
- Bayesian networks (variable elimination is an inference algorithm for Bayesian networks.)

## Goals

- Does the order of elimination matter for directed graphical models? Undirected graphical models?

- Is it usually necessary to keep track of the partition function when performing variable elimination in an undirected graphical model?

## Core resources (read/watch one of the following)

## -Free-

→ Coursera: Probabilistic Graphical Models (2013)

An online course on probabilistic graphical models.

Other notes:

- Click on "Preview" to see the videos.

## -Paid-

→ Machine Learning: a Probabilistic Perspective

A very comprehensive graduate-level machine learning textbook.

Location:
Section 20.3, pages 714-720

→ Artificial Intelligence: a Modern Approach

A textbook giving a broad overview of all of AI.

Location:
Section 14.4, pages 504-510

## Supplemental resources (the following are optional, but you may find them useful)

## -Paid-

→ Probabilistic Graphical Models: Principles and Techniques

A very comprehensive textbook for a graduate-level course on probabilistic AI.

Location:
Sections 9.2-9.4.2, pages 292-310

## See also

- If we want to perform multiple queries, we go through lots of redundant calculations. Belief propagation gives a way of reusing the computations.
- In Gaussian graphical models, variable elimination is [equivalent](gaussian_variable_elimination_as_gaussian_elimination) to [Gaussian elimination](gaussian_elimination).