### Lecture 1: Introduction, overview, preliminaries
  
* Nothing specifically for this lecture, but you may want to learn about [conditional independence](http://www.metacademy.org/graphs/concepts/conditional_independence#lfocus=conditional_independence) now, since that gets used a lot early on in the course.
  
### Lecture 2: Directed probabilistic graphical models
  
* [Bayesian networks](http://www.metacademy.org/graphs/concepts/bayesian_networks#lfocus=bayesian_networks), or Bayes nets, known in 438-land as directed graphical models
* [d-separation](http://www.metacademy.org/graphs/concepts/d_separation#lfocus=d_separation), a way of analyzing conditional independence structure in Bayes nets (a small code sketch of the graph-separation test follows this list)
* [Bayes Ball](http://www.metacademy.org/graphs/concepts/bayes_ball#lfocus=bayes_ball), an efficient algorithm for computing Bayes net conditional independencies. Note that while the course uses Bayes Ball to find conditional independencies, you may find it more intuitive to think directly in terms of the d-separation rules, as in the previous item.
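
To play with this concretely, here is a small Python sketch (not from the course) of the standard graph-separation test for d-separation: restrict the DAG to the ancestors of the query variables, moralize it, delete the conditioning set, and check whether the endpoints are still connected. The `parents` dictionary, the function names, and the v-structure example are all invented for illustration.

```python
# Sketch: test whether X is d-separated from Y given Z in a DAG by
# (1) restricting to ancestors of X, Y, Z, (2) moralizing, (3) deleting Z,
# and (4) checking whether X and Y are still connected.
from collections import deque

def ancestors(nodes, parents):
    seen, stack = set(nodes), list(nodes)
    while stack:
        node = stack.pop()
        for p in parents.get(node, []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(x, y, z, parents):
    keep = ancestors({x, y} | set(z), parents)
    # Moralize: undirected edges between each node and its parents,
    # and between co-parents of a common child.
    adj = {n: set() for n in keep}
    for child in keep:
        ps = [p for p in parents.get(child, []) if p in keep]
        for p in ps:
            adj[child].add(p); adj[p].add(child)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j]); adj[ps[j]].add(ps[i])
    # Remove the conditioning set, then search for a path from x to y.
    blocked = set(z)
    frontier, seen = deque([x]), {x}
    while frontier:
        n = frontier.popleft()
        for m in adj[n]:
            if m not in seen and m not in blocked:
                seen.add(m); frontier.append(m)
    return y not in seen

# Classic v-structure a -> c <- b: a and b are marginally independent,
# but become dependent once c is observed.
parents = {'c': ['a', 'b']}
print(d_separated('a', 'b', [], parents))     # True
print(d_separated('a', 'b', ['c'], parents))  # False
```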
  
### Lecture 3: Undirected graphs
  
* [Markov random fields (MRFs)](http://www.metacademy.org/graphs/concepts/markov_random_fields#lfocus=markov_random_fields), also known as undirected graphical models
  
### Lecture 4: Factor graphs; generating and converting graphs
  
* [factor graphs](http://www.metacademy.org/graphs/concepts/factor_graphs#lfocus=factor_graphs). Note that factor graphs and undirected graphical models are two different ways to represent the structure of Boltzmann distributions, and the only real difference is that factor graphs are a more fine-grained notation.
* [converting between graphical models](http://www.metacademy.org/graphs/concepts/converting_between_graphical_models#lfocus=converting_between_graphical_models)
  
### Lecture 5: Perfect maps, chordal graphs, Markov chains, trees
  
* Nothing to go with this lecture, sorry.
  
### Lecture 6: Gaussian graphical models
  
* [multivariate Gaussian distribution](http://www.metacademy.org/graphs/concepts/multivariate_gaussian_distribution#lfocus=multivariate_gaussian_distribution)
* [information form for multivariate Gaussians](http://www.metacademy.org/graphs/concepts/multivariate_gaussians_information_form#lfocus=multivariate_gaussians_information_form) (a quick conversion sketch follows this list)
* [Gaussian MRFs](http://www.metacademy.org/graphs/concepts/gaussian_mrfs#lfocus=gaussian_mrfs)
* [linear-Gaussian models](http://www.metacademy.org/graphs/concepts/linear_gaussian_models#lfocus=linear_gaussian_models), or Gaussian Bayes nets
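
A quick numpy sketch of moving between moment form and information form (the precision matrix J is the inverse covariance, and h = J mu); the numbers are made up and this isn't course code:

```python
# Sketch: converting a multivariate Gaussian between moment form (mu, Sigma)
# and information form (h, J), where J = inv(Sigma) and h = J @ mu.
import numpy as np

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

J = np.linalg.inv(Sigma)   # precision (information) matrix
h = J @ mu                 # information (potential) vector

# Going back recovers the moment parameters.
Sigma_back = np.linalg.inv(J)
mu_back = Sigma_back @ h
print(np.allclose(Sigma, Sigma_back), np.allclose(mu, mu_back))
```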
  
### Lecture 7: Inference on graphs: elimination algorithm
  
* [elimination algorithm](http://www.metacademy.org/graphs/concepts/variable_elimination#lfocus=variable_elimination)
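
To make the elimination idea concrete, here is a toy Python sketch (invented factor tables, not course code) that sums variables out one at a time on a three-node chain and checks the result against brute-force marginalization:

```python
# Sketch: variable elimination on the chain x1 - x2 - x3 with
# p(x1, x2, x3) proportional to f12(x1, x2) * f23(x2, x3).
# Eliminate x1, then x2, to get the marginal over x3.
import numpy as np

rng = np.random.default_rng(0)
f12 = rng.random((2, 3))              # factor over (x1, x2)
f23 = rng.random((3, 2))              # factor over (x2, x3)

m2 = f12.sum(axis=0)                  # eliminate x1: message over x2
m3 = (m2[:, None] * f23).sum(axis=0)  # eliminate x2: message over x3
p_x3 = m3 / m3.sum()                  # normalize

# Sanity check against summing the full joint table.
joint = f12[:, :, None] * f23[None, :, :]
print(np.allclose(p_x3, joint.sum(axis=(0, 1)) / joint.sum()))
```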
  
### Lecture 8: Inference on trees: sum-product algorithm
  
* [sum-product algorithm](http://www.metacademy.org/graphs/concepts/sum_product_on_trees#lfocus=sum_product_on_trees). Unfortunately, different sources differ in which version of this algorithm they present. Most of them use the factor graph version, which is covered in a later lecture. Koller and Friedman jump straight to the junction tree (clique tree) version, which is the most general, but it can be a lot to take in all at once. Start with whichever you like, and it should make the other versions easier to understand.
  
### Lecture 9: Example: forward-backward algorithm
  
* [hidden Markov models](http://www.metacademy.org/graphs/concepts/hidden_markov_models#lfocus=hidden_markov_models)
* [forward-backward algorithm](http://www.metacademy.org/graphs/concepts/forward_backward_algorithm#lfocus=forward_backward_algorithm) (a small numpy sketch of the recursions follows this list)
* [HMM inference as a special case of belief propagation](http://www.metacademy.org/graphs/concepts/hmm_inference_as_bp#lfocus=hmm_inference_as_bp). This one covers MAP inference as well, which doesn't appear until a later lecture.
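
If it helps to see the recursions spelled out, here is a minimal numpy sketch of forward-backward for a small discrete HMM; the transition and emission matrices are invented, and the per-step rescaling is just for numerical stability (this is not code from the course):

```python
# Sketch: forward-backward for a discrete HMM, returning the posterior
# marginals p(z_t | x_{1:T}). Alphas and betas are rescaled each step.
import numpy as np

pi = np.array([0.6, 0.4])            # initial state distribution
A = np.array([[0.7, 0.3],            # A[i, j] = p(z_{t+1}=j | z_t=i)
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],            # B[i, k] = p(x_t=k | z_t=i)
              [0.3, 0.7]])
obs = [0, 0, 1, 0]                   # observed symbols x_1..x_T

T, S = len(obs), len(pi)
alpha = np.zeros((T, S))
beta = np.ones((T, S))

alpha[0] = pi * B[:, obs[0]]
alpha[0] /= alpha[0].sum()
for t in range(1, T):                # forward pass
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    alpha[t] /= alpha[t].sum()
for t in range(T - 2, -1, -1):       # backward pass
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    beta[t] /= beta[t].sum()

gamma = alpha * beta                 # unnormalized posteriors
gamma /= gamma.sum(axis=1, keepdims=True)
print(gamma)                         # each row sums to one
```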
  
### Lecture 10: Sum-product algorithm with factor graphs
  
* See the references for lecture 8, since some of them use factor graphs.
  
### Lecture 11: MAP estimation and min-sum algorithm
  
* [the max-product algorithm](http://www.metacademy.org/graphs/concepts/max_product_on_trees#lfocus=max_product_on_trees) (Note that max-product, max-sum, and min-sum are all basically the same algorithm.)
* [the Viterbi algorithm](http://www.metacademy.org/graphs/concepts/viterbi_algorithm#lfocus=viterbi_algorithm), the special case of max-product applied to HMMs (sketched in code below)
* [HMM inference as a special case of belief propagation](http://www.metacademy.org/graphs/concepts/hmm_inference_as_bp#lfocus=hmm_inference_as_bp)
* If you're feeling rusty on linear algebra, now is a good time to brush up, since the Gaussian inference lectures will make heavy use of it.
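
For comparison with forward-backward, here is a small Viterbi sketch (max-sum in log space) for the same kind of discrete HMM, again with invented numbers rather than anything from the course:

```python
# Sketch: Viterbi (max-product in log space, i.e. max-sum) for a discrete HMM.
# Returns the single most probable state sequence given the observations.
import numpy as np

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],
              [0.3, 0.7]])
obs = [0, 0, 1, 1]

T, S = len(obs), len(pi)
log_delta = np.zeros((T, S))
backptr = np.zeros((T, S), dtype=int)

log_delta[0] = np.log(pi) + np.log(B[:, obs[0]])
for t in range(1, T):
    scores = log_delta[t - 1][:, None] + np.log(A)   # scores[i, j]
    backptr[t] = scores.argmax(axis=0)
    log_delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])

# Trace back the argmaxes to recover the MAP path.
path = [int(log_delta[-1].argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(backptr[t][path[-1]]))
print(path[::-1])
```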
  
### Lecture 12: Inference with Gaussian graphical models
  
* [Gaussian belief propagation](http://www.metacademy.org/graphs/concepts/gaussian_bp_on_trees#lfocus=gaussian_bp_on_trees)
* [connection between Gaussian inference and variable elimination](http://www.metacademy.org/graphs/concepts/gaussian_variable_elimination_as_gaussian_elimination#lfocus=gaussian_variable_elimination_as_gaussian_elimination) (a scalar-block sketch follows this list)
* Note that these nodes have quite a few linear algebra dependencies. You may want to review those before the lecture, so that the derivations will make sense.
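
To see the Gaussian-elimination connection concretely, here is a tiny sketch of marginalizing one variable out of a two-variable Gaussian in information form; the elimination step is exactly a Schur complement, and the numbers are made up:

```python
# Sketch: marginalizing out x2 from a joint Gaussian in information form (h, J).
# Eliminating x2 is the Schur complement step
#   J' = J11 - J12 * inv(J22) * J21,   h' = h1 - J12 * inv(J22) * h2,
# the same arithmetic as one step of Gaussian elimination on the linear system.
import numpy as np

J = np.array([[3.0, 1.0],
              [1.0, 2.0]])
h = np.array([1.0, -1.0])

J11, J12, J21, J22 = J[0, 0], J[0, 1], J[1, 0], J[1, 1]
J_marg = J11 - J12 * (1.0 / J22) * J21
h_marg = h[0] - J12 * (1.0 / J22) * h[1]

# Check against moment form, where marginalizing just drops rows/columns.
Sigma = np.linalg.inv(J)
mu = Sigma @ h
print(np.isclose(1.0 / J_marg, Sigma[0, 0]), np.isclose(h_marg / J_marg, mu[0]))
```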
  
### Lecture 13: Example: Kalman filtering and smoothing

* [Kalman filter](http://www.metacademy.org/graphs/concepts/kalman_filter#lfocus=kalman_filter), and [derivation](http://www.metacademy.org/graphs/concepts/kalman_filter_derivation#lfocus=kalman_filter_derivation) (the predict/update step is sketched in code after this list)
* [Kalman smoother](http://www.metacademy.org/graphs/concepts/kalman_smoother#lfocus=kalman_smoother)
* Viewing Kalman smoothing as a [special case of forward-backward](http://www.metacademy.org/graphs/concepts/kalman_as_forward_backward#lfocus=kalman_as_forward_backward)
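
If you want the filter in executable form, here is a minimal numpy sketch of one predict/update cycle of the Kalman filter for an invented constant-velocity model; this is not code from the course:

```python
# Sketch: one predict/update cycle of the Kalman filter for
#   x_t = F x_{t-1} + noise (cov Q),   y_t = H x_t + noise (cov R).
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity dynamics
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])               # we observe position only
R = np.array([[0.25]])

mu, P = np.zeros(2), np.eye(2)           # current posterior mean / covariance
y = np.array([1.2])                      # new measurement

# Predict.
mu_pred = F @ mu
P_pred = F @ P @ F.T + Q

# Update.
S = H @ P_pred @ H.T + R                 # innovation covariance
K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
mu = mu_pred + K @ (y - H @ mu_pred)
P = (np.eye(2) - K @ H) @ P_pred

print(mu, np.diag(P))
```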

### Lecture 14: Junction tree algorithm

* [junction trees](http://www.metacademy.org/graphs/concepts/junction_trees#lfocus=junction_trees)

### Lecture 15: Loopy belief propagation, part 1

* [loopy BP](http://www.metacademy.org/graphs/concepts/loopy_belief_propagation#lfocus=loopy_belief_propagation)

### Lecture 16: Loopy belief propagation, part 2

* [basics of variational inference](http://www.metacademy.org/graphs/concepts/variational_inference#lfocus=variational_inference)
* [variational interpretation of loopy BP](http://www.metacademy.org/graphs/concepts/loopy_bp_as_variational#lfocus=loopy_bp_as_variational)

### Lecture 17: Variational inference

* [mean field approximation](http://www.metacademy.org/graphs/concepts/mean_field#lfocus=mean_field)
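
One concrete instance of the mean field idea is the naive mean field update for an Ising model, m_i <- tanh(h_i + sum_j J_ij m_j). Here is a toy sketch with invented couplings, not course code:

```python
# Sketch: naive mean field for an Ising model p(x) proportional to
# exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j), with x_i in {-1, +1}.
# Coordinate updates: m_i <- tanh(h_i + sum_j J_ij m_j).
import numpy as np

rng = np.random.default_rng(0)
n = 5
J = 0.3 * rng.standard_normal((n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)                 # no self-couplings
h = 0.1 * rng.standard_normal(n)

m = np.zeros(n)                          # mean field estimates of E[x_i]
for _ in range(100):
    for i in range(n):
        m[i] = np.tanh(h[i] + J[i] @ m)

print(m)                                 # approximate marginal means
```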

### Lecture 18: Sampling by Markov chain Monte Carlo

* [Gibbs sampling](http://www.metacademy.org/graphs/concepts/gibbs_sampling#lfocus=gibbs_sampling)
* [Metropolis-Hastings](http://www.metacademy.org/graphs/concepts/metropolis_hastings#lfocus=metropolis_hastings)
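
Here is a minimal random-walk Metropolis-Hastings sketch with a made-up bimodal target, just to show the accept/reject step; none of this comes from the course materials:

```python
# Sketch: random-walk Metropolis-Hastings targeting an unnormalized density.
# With a symmetric proposal, the acceptance ratio is just p(x') / p(x).
import numpy as np

def log_target(x):
    # Unnormalized log-density: an equal-weight mixture of two Gaussian kernels.
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(10000):
    proposal = x + rng.normal(scale=1.0)          # symmetric random walk
    log_accept = log_target(proposal) - log_target(x)
    if np.log(rng.random()) < log_accept:         # accept with prob min(1, ratio)
        x = proposal
    samples.append(x)

print(np.mean(samples), np.std(samples))          # chain should visit both modes
```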

### Lecture 19: Approximate inference by particle methods

* [importance sampling](http://www.metacademy.org/graphs/concepts/importance_sampling#lfocus=importance_sampling) (a short self-normalized sketch follows this list)
* particle filter (TODO)
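
And here is a short sketch of self-normalized importance sampling with an invented target and proposal (again, not course code):

```python
# Sketch: self-normalized importance sampling. Estimate E_p[f(x)] by drawing
# from a proposal q and weighting each sample by w = p(x) / q(x); an
# unnormalized p is fine because the weights get normalized at the end.
import numpy as np

rng = np.random.default_rng(0)

def p_unnorm(x):                          # unnormalized target: N(3, 1) kernel
    return np.exp(-0.5 * (x - 3.0) ** 2)

xs = rng.normal(0.0, 3.0, size=100000)    # proposal q = N(0, 9)
q = np.exp(-0.5 * (xs / 3.0) ** 2) / 3.0  # q density (shared constants cancel)
w = p_unnorm(xs) / q
w /= w.sum()

print(np.sum(w * xs))                     # estimate of E_p[x], close to 3
```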

### Lecture 20: Parameter estimation in directed graphs

* [maximum likelihood](http://www.metacademy.org/graphs/concepts/maximum_likelihood#lfocus=maximum_likelihood)
* [learning Bayes net parameters](http://www.metacademy.org/graphs/concepts/bayes_net_parameter_learning#lfocus=bayes_net_parameter_learning) (a counting sketch follows this list)
* [Bayesian parameter estimation](http://www.metacademy.org/graphs/concepts/bayesian_parameter_estimation#lfocus=bayesian_parameter_estimation)
* [Bayesian estimation of Bayes net parameters](http://www.metacademy.org/graphs/concepts/bayesian_estimation_bayes_net_params#lfocus=bayesian_estimation_bayes_net_params)
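
With fully observed data, maximum likelihood for a Bayes net decomposes into one counting problem per CPT. Here is a toy sketch (invented data, hypothetical variable names) for a single binary child with one binary parent:

```python
# Sketch: ML estimation of the CPT p(child | parent) from complete data is
# just counting and normalizing each parent configuration separately.
from collections import Counter

data = [(0, 0), (0, 1), (0, 0), (1, 1), (1, 1), (1, 0), (0, 0)]  # (parent, child)
counts = Counter(data)

for parent in (0, 1):
    total = sum(counts[(parent, c)] for c in (0, 1))
    cpt_row = [counts[(parent, c)] / total for c in (0, 1)]
    print(f"p(child | parent={parent}) =", cpt_row)
```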

### Lecture 21: Learning structure in directed graphs

* [Bayes net structure learning](http://www.metacademy.org/graphs/concepts/bayes_net_structure_learning#lfocus=bayes_net_structure_learning)
* [Chow-Liu trees](http://www.metacademy.org/graphs/concepts/chow_liu_trees#lfocus=chow_liu_trees)

### Lecture 22: Modeling from partial observations

* [EM algorithm](http://www.metacademy.org/graphs/concepts/expectation_maximization#lfocus=expectation_maximization) (a small Gaussian-mixture sketch follows this list)
* [learning Bayes nets with missing data](http://www.metacademy.org/graphs/concepts/learning_bayes_nets_missing_data#lfocus=learning_bayes_nets_missing_data)
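
Here is a compact sketch of EM for a two-component 1-D Gaussian mixture, just to show the shape of the E-step and M-step; the data and initialization are invented, and this is not code from the course:

```python
# Sketch: EM for a two-component 1-D Gaussian mixture.
# E-step: responsibilities r[n, k]; M-step: weighted re-estimation.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 300)])

pi = np.array([0.5, 0.5])        # mixing weights
mu = np.array([-1.0, 1.0])       # component means
var = np.array([1.0, 1.0])       # component variances

for _ in range(50):
    # E-step: posterior probability of each component for each point.
    logp = (-0.5 * (x[:, None] - mu) ** 2 / var
            - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update parameters from the weighted data.
    Nk = r.sum(axis=0)
    pi = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk

print(pi, mu, var)               # should recover roughly 0.4/0.6, -2/3, 1/1
```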

### Lecture 23: Learning undirected graphical models

* [MRF parameter learning](http://www.metacademy.org/graphs/concepts/mrf_parameter_learning#lfocus=mrf_parameter_learning)

### Lecture 24: Learning exponential family models

* [exponential families](http://www.metacademy.org/graphs/concepts/exponential_families#lfocus=exponential_families)
* [maximum likelihood in exponential families](http://www.metacademy.org/graphs/concepts/maximum_likelihood_in_exponential_families#lfocus=maximum_likelihood_in_exponential_families)
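
As a reminder of what maximum likelihood looks like in an exponential family (matching expected sufficient statistics to their empirical averages), here is the Bernoulli case as a tiny sketch with made-up data:

```python
# Sketch: ML in an exponential family matches expected sufficient statistics
# to their empirical averages. For a Bernoulli with sufficient statistic x,
# that means E[x] = data mean, so the natural parameter is the log-odds.
import numpy as np

data = np.array([1, 0, 1, 1, 0, 1, 1, 0])
mean_stat = data.mean()                        # empirical E[x]
theta = np.log(mean_stat / (1 - mean_stat))    # natural parameter (logit)
print(mean_stat, theta)
```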