Here is an overview of the topics covered in MIT's probabilistic graphical models course, 6.438. If you're a student taking the class, you may find this a helpful source of additional readings. If you're not taking the class but want to learn about graphical models, this can help you identify some of the key topics. This roadmap corresponds to how the class was taught in the fall of 2011 (the semester I TA'ed it), and the class has probably changed since then.

### Lecture 1: Introduction, overview, preliminaries

* Nothing specifically for this lecture, but you may want to learn about [conditional independence](conditional_independence) now, since it gets used a lot early on in the course.

### Lecture 2: Directed probabilistic graphical models

* [Bayesian networks](bayesian_networks), or Bayes nets, known in 438-land as directed graphical models
* [d-separation](d_separation), a way of analyzing conditional independence structure in Bayes nets
* [Bayes Ball](bayes_ball), an efficient algorithm for computing Bayes net conditional independencies. While the course uses Bayes Ball to find conditional independencies, you may find it more intuitive to think directly in terms of the d-separation rules, as in the previous item.

### Lecture 3: Undirected graphs

* [Markov random fields (MRFs)](markov_random_fields), also known as undirected graphical models

### Lecture 4: Factor graphs; generating and converting graphs

* [factor graphs](factor_graphs). Note that factor graphs and undirected graphical models are two different ways to represent the structure of Boltzmann distributions; the only real difference is that factor graphs are a more fine-grained notation.
* [converting between graphical models](converting_between_graphical_models)

### Lecture 5: Perfect maps, chordal graphs, Markov chains, trees

* Nothing to go with this lecture, sorry.
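The Bayes net and d-separation lectures above can be made concrete with a tiny numerical check. The sketch below is illustrative only (the three-node chain, its CPT values, and the function names are made up, not from the course): it builds the joint distribution of a chain X → Z → Y from its conditional probability tables and verifies by brute-force enumeration that X ⊥ Y | Z, as d-separation predicts for a chain with its middle node observed.

```python
import itertools

# Hypothetical CPTs for a three-node chain Bayes net X -> Z -> Y (all binary).
p_x = {0: 0.6, 1: 0.4}
p_z_given_x = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
p_y_given_z = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}

def joint(x, z, y):
    # Bayes net factorization: p(x, z, y) = p(x) p(z|x) p(y|z).
    return p_x[x] * p_z_given_x[x][z] * p_y_given_z[z][y]

def marginal(**fixed):
    # Sum the joint over all assignments consistent with `fixed`.
    total = 0.0
    for x, z, y in itertools.product([0, 1], repeat=3):
        a = {'x': x, 'z': z, 'y': y}
        if all(a[k] == v for k, v in fixed.items()):
            total += joint(x, z, y)
    return total

# d-separation says X and Y are independent given Z in a chain; check that
# p(x, y | z) = p(x | z) p(y | z) for every assignment.
for z in [0, 1]:
    for x in [0, 1]:
        for y in [0, 1]:
            lhs = marginal(x=x, y=y, z=z) / marginal(z=z)
            rhs = (marginal(x=x, z=z) / marginal(z=z)) * \
                  (marginal(y=y, z=z) / marginal(z=z))
            assert abs(lhs - rhs) < 1e-12
```

The same enumeration trick works for checking any conditional independence claim on a small discrete network, though it is exponential in the number of variables, which is exactly why the later inference lectures matter.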
### Lecture 6: Gaussian graphical models

* [multivariate Gaussian distribution](multivariate_gaussian_distribution)
* [information form for multivariate Gaussians](multivariate_gaussians_information_form)
* [Gaussian MRFs](gaussian_mrfs)
* [linear-Gaussian models](linear_gaussian_models), or Gaussian Bayes nets

### Lecture 7: Inference on graphs: elimination algorithm

* [elimination algorithm](variable_elimination)

### Lecture 8: Inference on trees: sum-product algorithm

* [sum-product algorithm](sum_product_on_trees). Unfortunately, different sources differ in which version of this algorithm they present. Most of them use the factor graph version, which is covered in a later lecture. Koller and Friedman jump straight to the junction tree (clique tree) version, which is the most general, but it can be a lot to take in all at once. Start with whichever you like, and it should make the other versions easier to understand.

### Lecture 9: Example: forward-backward algorithm

* [hidden Markov models](hidden_markov_models)
* [forward-backward algorithm](forward_backward_algorithm)
* [HMM inference as a special case of belief propagation](hmm_inference_as_bp). This one covers MAP inference as well, which doesn't appear until a later lecture.

### Lecture 10: Sum-product algorithm with factor graphs

* See the references for lecture 8, since some of them use factor graphs.

### Lecture 11: MAP estimation and min-sum algorithm

* [the max-product algorithm](max_product_on_trees). (Note that max-product, max-sum, and min-sum are all basically the same algorithm.)
* [the Viterbi algorithm](viterbi_algorithm), the special case of max-product applied to HMMs
* [HMM inference as a special case of belief propagation](hmm_inference_as_bp)
* If you're feeling rusty on linear algebra, now is a good time to brush up, since the Gaussian inference lectures will make heavy use of it.
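As a concrete companion to the forward-backward lecture, here is a minimal sketch on a two-state, two-symbol HMM (the transition, emission, and initial-state numbers are made up for illustration). It runs the forward and backward message passes and combines them into the posterior state marginals p(z_t | x_{1:T}):

```python
# Hypothetical two-state HMM parameters (illustrative, not from the course).
pi = [0.5, 0.5]                      # initial state distribution
A = [[0.9, 0.1], [0.2, 0.8]]         # A[i][j] = p(z_{t+1}=j | z_t=i)
B = [[0.8, 0.2], [0.3, 0.7]]         # B[i][k] = p(x_t=k | z_t=i)

def forward_backward(obs):
    T, S = len(obs), len(pi)
    # Forward pass: alpha[t][i] = p(x_1..x_t, z_t=i).
    # (Unnormalized; fine for short chains, but long chains need rescaling.)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(S)]]
    for t in range(1, T):
        alpha.append([B[j][obs[t]] *
                      sum(alpha[-1][i] * A[i][j] for i in range(S))
                      for j in range(S)])
    # Backward pass: beta[t][i] = p(x_{t+1}..x_T | z_t=i).
    beta = [[1.0] * S for _ in range(T)]
    for t in range(T - 2, -1, -1):
        beta[t] = [sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                       for j in range(S))
                   for i in range(S)]
    # Combine and normalize: gamma[t][i] = p(z_t=i | x_{1:T}).
    gamma = []
    for t in range(T):
        un = [alpha[t][i] * beta[t][i] for i in range(S)]
        z = sum(un)
        gamma.append([u / z for u in un])
    return gamma

gamma = forward_backward([0, 0, 1, 1])
```

Each forward message is exactly a sum-product message passed left to right along the chain, and each backward message is the corresponding right-to-left message, which is the point of the "HMM inference as belief propagation" reading.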
### Lecture 12: Inference with Gaussian graphical models

* [Gaussian belief propagation](gaussian_bp_on_trees)
* [connection between Gaussian inference and variable elimination](gaussian_variable_elimination_as_gaussian_elimination)
* Note that these nodes have quite a few linear algebra dependencies. You may want to review those before the lecture so that the derivations will make sense.

### Lecture 13: Example: Kalman filtering and smoothing

* [Kalman filter](kalman_filter), and [derivation](kalman_filter_derivation)
* [Kalman smoother](kalman_smoother)
* viewing Kalman smoothing as a [special case of forward-backward](kalman_as_forward_backward)

### Lecture 14: Junction tree algorithm

* [junction trees](junction_trees)

### Lecture 15: Loopy belief propagation, part 1

* [loopy BP](loopy_belief_propagation)

### Lecture 16: Loopy belief propagation, part 2

* [basics of variational inference](variational_inference)
* [variational interpretation of loopy BP](loopy_bp_as_variational)

### Lecture 17: Variational inference

* [mean field approximation](mean_field)

### Lecture 18: Sampling by Markov chain Monte Carlo

* [Gibbs sampling](gibbs_sampling)
* [Metropolis-Hastings](metropolis_hastings)

### Lecture 19: Approximate inference by particle methods

* [importance sampling](importance_sampling)
* [particle filter](particle_filter)

### Lecture 20: Parameter estimation in directed graphs

* [maximum likelihood](maximum_likelihood)
* [learning Bayes net parameters](bayes_net_parameter_learning)
* [Bayesian parameter estimation](bayesian_parameter_estimation)
* [Bayesian estimation of Bayes net parameters](bayesian_estimation_bayes_net_params)

### Lecture 21: Learning structure in directed graphs

* [Bayes net structure learning](bayes_net_structure_learning)
* [Chow-Liu trees](chow_liu_trees)

### Lecture 22: Modeling from partial observations

* [EM algorithm](expectation_maximization)
* [learning Bayes nets with missing data](learning_bayes_nets_missing_data)

### Lecture 23: Learning undirected graphical models

* [MRF parameter learning](mrf_parameter_learning)

### Lecture 24: Learning exponential family models

* [exponential families](exponential_families)
* [maximum likelihood in exponential families](maximum_likelihood_in_exponential_families)
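To make the MCMC lecture concrete, here is a minimal Gibbs-sampling sketch for a two-node pairwise MRF with {0, 1}-valued variables (the parameter values and function names are illustrative, not from the course). Each step resamples one variable from its conditional distribution given the other; for this Boltzmann-style model, p(x_i = 1 | x_other) is a logistic function of the neighbor's value, which only involves the local terms touching x_i, exactly the Markov blanket property from the MRF lectures.

```python
import math
import random

# Hypothetical pairwise MRF: p(x1, x2) is proportional to
# exp(theta[0]*x1 + theta[1]*x2 + w*x1*x2), with x_i in {0, 1}.
theta = [0.5, -0.3]
w = 1.0

def conditional_prob_one(i, other):
    # p(x_i = 1 | x_other): only the terms involving x_i survive,
    # giving a logistic (sigmoid) of the local "field".
    logit = theta[i] + w * other
    return 1.0 / (1.0 + math.exp(-logit))

def gibbs(n_samples, burn_in=500, seed=0):
    rng = random.Random(seed)
    x = [0, 0]
    samples = []
    for it in range(burn_in + n_samples):
        # Sweep over the variables, resampling each from its conditional.
        for i in range(2):
            x[i] = 1 if rng.random() < conditional_prob_one(i, x[1 - i]) else 0
        if it >= burn_in:
            samples.append(tuple(x))
    return samples

samples = gibbs(5000)
```

Because this toy model has only four joint states, the exact marginals can be computed by enumeration and compared against the empirical frequencies from the sampler, which is a useful sanity check before running Gibbs on models where enumeration is infeasible.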