This roadmap follows Prof. Jordan's lectures/textbook.
  
  
  ## Conditional Independence and Factorization
  
* Much of our early discussion focused on [[conditional independence]] in the context of [directed graphical models (Bayes nets)](Bayesian networks) and [undirected graphical models (Markov random fields, or MRFs)](Markov random fields).
* We can use the [[Bayes Ball]] algorithm to determine conditional independencies in Bayes nets.
* We can use simple [reachability algorithms](http://en.wikipedia.org/wiki/Reachability) to determine conditional independencies in MRFs (a minimal sketch of this check follows the list).
* We briefly discussed [[factor graphs]], which provide a more fine-grained representation of the independencies in an MRF.
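
To make the reachability idea concrete, here is a minimal sketch (my own illustration, not course code) that tests whether two nodes of an MRF are conditionally independent given a set Z by deleting the nodes in Z and checking reachability with breadth-first search; the dictionary-of-neighbor-sets graph representation and the toy chain at the bottom are assumptions made for the example.

```python
from collections import deque

def independent_given(graph, x, y, conditioned):
    """In an MRF, X is independent of Y given Z iff every path from x to y
    passes through a node of Z. Equivalently: remove the nodes in Z and check
    that y is unreachable from x."""
    blocked = set(conditioned)
    frontier, visited = deque([x]), {x}
    while frontier:
        node = frontier.popleft()
        if node == y:
            return False                      # unblocked path found: dependent
        for nbr in graph[node]:
            if nbr not in visited and nbr not in blocked:
                visited.add(nbr)
                frontier.append(nbr)
    return True                               # y unreachable: independent

# Toy chain MRF A - B - C: A and C are separated by B, but not marginally independent.
chain = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
print(independent_given(chain, "A", "C", {"B"}))   # True
print(independent_given(chain, "A", "C", set()))   # False
```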
  
  ## Exact Inference
* The [[variable elimination]] algorithm is based on interchanging sums and products in the definitions of marginals or partition functions, but it can perform many redundant calculations.
* [The sum-product algorithm](sum_product_on_trees) is a belief propagation algorithm based on dynamic programming. Its advantage over naive variable elimination is that it reuses computations to obtain marginals for all nodes in the graph (see the chain sketch after this list).
* [[junction trees]] generalize the sum-product algorithm to arbitrary graphs by grouping variables together into cliques such that the cliques form a tree.
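
As a concrete illustration of message reuse, here is a minimal sum-product sketch (my own, not course code) for a chain of discrete variables with pairwise potentials: one forward and one backward sweep of messages suffice to read off every node marginal, which is exactly the saving over re-running variable elimination once per node. The potential values in the toy example at the bottom are assumptions.

```python
import numpy as np

def chain_marginals(node_pots, edge_pots):
    """Sum-product on a chain x_1 - x_2 - ... - x_n.
    node_pots: list of n vectors, node_pots[i][k] = psi_i(x_i = k).
    edge_pots: list of n-1 matrices, edge_pots[i][j, k] = psi(x_i = j, x_{i+1} = k).
    Returns a list of normalized marginals, one per node."""
    n = len(node_pots)
    fwd = [np.ones_like(node_pots[i]) for i in range(n)]   # messages passed rightward
    bwd = [np.ones_like(node_pots[i]) for i in range(n)]   # messages passed leftward
    for i in range(1, n):                                  # forward pass
        fwd[i] = edge_pots[i - 1].T @ (node_pots[i - 1] * fwd[i - 1])
    for i in range(n - 2, -1, -1):                         # backward pass
        bwd[i] = edge_pots[i] @ (node_pots[i + 1] * bwd[i + 1])
    marginals = []
    for i in range(n):                                     # combine incoming messages
        belief = node_pots[i] * fwd[i] * bwd[i]
        marginals.append(belief / belief.sum())
    return marginals

# Three binary variables; the pairwise potential favors agreement between neighbors.
agree = np.array([[2.0, 1.0], [1.0, 2.0]])
print(chain_marginals([np.ones(2), np.array([0.9, 0.1]), np.ones(2)], [agree, agree]))
```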
  
  
  ## Sampling-based inference
* [[rejection sampling]] is a Monte Carlo method for drawing samples from a potentially complex distribution p(x) by proposing from a simpler distribution q(x) and accepting or rejecting each proposal.
* [[importance sampling]] is a way of estimating expectations under an intractable distribution p by sampling from a tractable distribution q and reweighting the samples according to the ratio of the probabilities (minimal sketches of both methods follow this list).
* We discussed some standard [[Markov chain Monte Carlo]] (MCMC) methods, each also sketched after the list:
** The [[Metropolis-Hastings algorithm]] is a very general method for approximately sampling from a distribution p by defining a Markov chain that has p as its stationary distribution.
** [[Gibbs sampling]] is an MCMC algorithm in which each random variable is iteratively resampled from its conditional distribution given the remaining variables; it can be viewed [as a special case of Metropolis-Hastings](gibbs_as_mh).
** We also touched on assessing [[MCMC convergence]].
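
A minimal rejection-sampling sketch; the target (a Beta(2, 5) density), the uniform proposal, and the envelope constant M are choices made for this example, not anything specific from the course:

```python
import numpy as np

rng = np.random.default_rng(0)

def target_pdf(x):
    """Beta(2, 5) density on [0, 1]: 30 * x * (1 - x)^4."""
    return 30.0 * x * (1.0 - x) ** 4

def rejection_sample(n, M=2.5):
    """Sample from the Beta(2, 5) target using a Uniform(0, 1) proposal q(x) = 1.
    A candidate x ~ q is accepted with probability p(x) / (M * q(x)); the envelope
    constant must satisfy p(x) <= M * q(x), and M = 2.5 works since this density
    peaks at about 2.46."""
    samples = []
    while len(samples) < n:
        x = rng.uniform()
        if rng.uniform() < target_pdf(x) / M:      # q(x) = 1 on [0, 1]
            samples.append(x)                       # accept the candidate
    return np.array(samples)

draws = rejection_sample(5000)
print(draws.mean())   # Beta(2, 5) has mean 2 / 7, roughly 0.286
```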
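A matching importance-sampling sketch, estimating E_p[x^2] under the same Beta(2, 5) target by drawing from a uniform proposal and reweighting by p(x)/q(x); the target, proposal, and test function are again assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

def importance_estimate(f, n=50000):
    """Estimate E_p[f(x)] for the Beta(2, 5) target p by sampling from a
    Uniform(0, 1) proposal q and weighting each sample by w = p(x) / q(x).
    The self-normalized form below also works when p is known only up to
    a normalizing constant."""
    x = rng.uniform(size=n)                        # samples from q
    w = 30.0 * x * (1.0 - x) ** 4                  # p(x) / q(x), with q(x) = 1
    return np.sum(w * f(x)) / np.sum(w)            # self-normalized estimator

print(importance_estimate(lambda x: x ** 2))       # exact value is 3 / 28, roughly 0.107
```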
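A random-walk Metropolis-Hastings sketch for a one-dimensional target; the target density (a two-component Gaussian mixture) and the step size are assumptions for the example. Because the Gaussian proposal is symmetric, the Hastings correction cancels and the acceptance ratio reduces to p(x')/p(x):

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    """Unnormalized log density of an equal mixture of N(-2, 1) and N(2, 1)."""
    return np.logaddexp(-0.5 * (x + 2.0) ** 2, -0.5 * (x - 2.0) ** 2)

def metropolis_hastings(n_steps=20000, step=1.0, x0=0.0):
    """Random-walk Metropolis-Hastings: propose x' = x + Normal(0, step^2) and
    accept with probability min(1, p(x') / p(x))."""
    chain = np.empty(n_steps)
    x = x0
    for t in range(n_steps):
        proposal = x + step * rng.normal()
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal                            # accept the move
        chain[t] = x                                # otherwise keep the old state
    return chain

samples = metropolis_hastings()
print(samples[5000:].mean(), samples[5000:].std())  # roughly 0 and a bit over 2
```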
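Finally, a Gibbs-sampling sketch for a zero-mean bivariate Gaussian, chosen because both conditional distributions are available in closed form; the correlation value is an assumption for the example:

```python
import numpy as np

rng = np.random.default_rng(3)

def gibbs_bivariate_gaussian(n_steps=10000, rho=0.8):
    """Gibbs sampling for a zero-mean, unit-variance bivariate Gaussian with
    correlation rho. Each coordinate is resampled from its exact conditional
    given the other: x1 | x2 ~ Normal(rho * x2, 1 - rho^2), and symmetrically."""
    samples = np.empty((n_steps, 2))
    x1, x2 = 0.0, 0.0
    sd = np.sqrt(1.0 - rho ** 2)
    for t in range(n_steps):
        x1 = rng.normal(rho * x2, sd)               # resample x1 | x2
        x2 = rng.normal(rho * x1, sd)               # resample x2 | x1
        samples[t] = (x1, x2)
    return samples

draws = gibbs_bivariate_gaussian()
print(np.corrcoef(draws[1000:].T)[0, 1])            # should be close to rho = 0.8
```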