Bayesian networks

(2.1 hours to learn)

Summary

Bayesian networks are a graphical formalism for representing the structure of a probabilistic model, i.e. the ways in which the random variables may depend on each other. Intuitively, they are good at representing domains with a causal structure, and the edges in the graph determine which variables directly influence which other variables. They can be equivalently viewed as representing a factorization structure of the joint probability distribution, or as encoding a set of conditional independence assumptions about the distribution.
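
The factorization view can be made concrete in a few lines of code. The sketch below is a made-up illustration (not taken from any of the resources listed on this page): it uses the classic "sprinkler" network Cloudy → Sprinkler, Cloudy → Rain, {Sprinkler, Rain} → WetGrass, whose joint distribution factorizes as P(C, S, R, W) = P(C) P(S|C) P(R|C) P(W|S, R). All conditional probability values are arbitrary choices for demonstration.

    from itertools import product

    p_cloudy = 0.5                                        # P(C = true)
    p_sprinkler = {True: 0.1, False: 0.5}                 # P(S = true | C)
    p_rain = {True: 0.8, False: 0.2}                      # P(R = true | C)
    p_wet = {(True, True): 0.99, (True, False): 0.90,     # P(W = true | S, R)
             (False, True): 0.90, (False, False): 0.00}

    def bern(p, value):
        """Probability that a binary variable with P(true) = p takes `value`."""
        return p if value else 1.0 - p

    def joint(c, s, r, w):
        """P(C=c, S=s, R=r, W=w) as a product of the local conditional distributions."""
        return (bern(p_cloudy, c)
                * bern(p_sprinkler[c], s)
                * bern(p_rain[c], r)
                * bern(p_wet[(s, r)], w))

    # Sanity check: the factorized joint sums to 1 over all 16 assignments.
    total = sum(joint(*assignment) for assignment in product([True, False], repeat=4))
    print(round(total, 10))  # -> 1.0

Because each factor is a normalized conditional distribution, their product automatically defines a valid joint distribution, which is what the final sanity check verifies.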

Context

This concept has the prerequisites:

Core resources (read/watch one of the following)

-Free-

Coursera: Machine Learning
An online machine learning course aimed at advanced undergraduates.
Author: Pedro Domingos
Other notes:
  • Click on "Preview" to see the videos.
Coursera: Probabilistic Graphical Models (2013)
An online course on probabilistic graphical models.
Author: Daphne Koller
Other notes:
  • Click on "Preview" to see the videos.
Artificial Intelligence II (IIT video lectures)
Author: Pallab Dasgupta
Other notes:
  • A YouTube comment from user "SiddharthBhaiVideos" provides a nice outline of the lecture.



See also

  • Bayes nets are closely related to Markov random fields (MRFs), a graphical model formalism which is good at representing soft constraints between variables. Neither formalism is strictly more powerful than the other.
  • Bayes nets can also be characterized in terms of their conditional independencies, which can be read off the graph structure using d-separation (see the sketch after this list).
  • The Bayes net representation assumes all the relevant information about the problem is contained in the joint distribution over all the variables. Causal networks are a closely related formalism which captures more details about the causal relationships.
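
As a hedged illustration of the d-separation point above (toy numbers, not drawn from the listed resources), the snippet below builds the chain X → Y → Z and verifies numerically that conditioning on Y makes X and Z independent, which is exactly the conditional independence d-separation reads off the blocked path.

    from itertools import product

    p_x = 0.3                                # P(X = true)
    p_y_given_x = {True: 0.9, False: 0.2}    # P(Y = true | X)
    p_z_given_y = {True: 0.7, False: 0.1}    # P(Z = true | Y)

    def bern(p, value):
        return p if value else 1.0 - p

    def joint(x, y, z):
        # Chain factorization: P(X, Y, Z) = P(X) P(Y | X) P(Z | Y)
        return bern(p_x, x) * bern(p_y_given_x[x], y) * bern(p_z_given_y[y], z)

    def prob_z_true(given):
        """P(Z = true | evidence), computed by brute-force summation over the joint."""
        numerator = denominator = 0.0
        for x, y, z in product([True, False], repeat=3):
            assignment = {"x": x, "y": y, "z": z}
            if all(assignment[name] == value for name, value in given.items()):
                denominator += joint(x, y, z)
                if z:
                    numerator += joint(x, y, z)
        return numerator / denominator

    # Conditioning additionally on X does not change P(Z | Y): X is independent
    # of Z given Y, as d-separation predicts for the blocked chain.
    print(prob_z_true({"y": True}))             # -> 0.7 (up to float rounding)
    print(prob_z_true({"x": True, "y": True}))  # -> 0.7
    print(prob_z_true({"x": False, "y": True})) # -> 0.7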