Bayesian estimation of Bayes net parameters
(1.5 hours to learn)
Bayesian parameter estimation techniques can be applied to learning Bayes net parameters. This leads to more stable estimates in situations with limited data and can improve generalization performance.
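To make this concrete, here is the standard Dirichlet-multinomial case (the notation here is illustrative, not taken from the resources below). If the CPT row $\theta_{X \mid u}$ for a node $X$ given parent assignment $u$ has a $\mathrm{Dir}(\alpha_{x_1 \mid u}, \ldots, \alpha_{x_K \mid u})$ prior and the fully observed data $\mathcal{D}$ contain counts $N[x, u]$, then

$$
p(\theta_{X \mid u} \mid \mathcal{D}) = \mathrm{Dir}\bigl(\alpha_{x_1 \mid u} + N[x_1, u], \ldots, \alpha_{x_K \mid u} + N[x_K, u]\bigr),
\qquad
p(X = x \mid u, \mathcal{D}) = \frac{\alpha_{x \mid u} + N[x, u]}{\sum_{x'} \bigl(\alpha_{x' \mid u} + N[x', u]\bigr)}.
$$

The pseudocounts $\alpha$ smooth the estimates, which is why they behave better than maximum likelihood when the observed counts are small.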
Goals:
- Derive the formulas for Bayesian estimation of Bayes net parameters, and for the predictive distribution over new data, in the simplest case where all variables are fully observed.
- In particular, see why posterior inference and prediction can both decompose into independent problems associated with each CPT. What has to be true of the prior for the problem to decompose this way? (See the sketch after this list.)
- Be able to represent the Bayesian parameter estimation problem itself as a Bayes net, i.e. build a Bayes net where the parameters and data are represented as separate sets of variables. This ties together the problems of learning and inference.
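A minimal code sketch of this computation, assuming a toy two-node network Rain → WetGrass with binary variables, a symmetric Dirichlet prior on each CPT row, and fully observed data (all names and the pseudocount value are illustrative assumptions, not taken from the resources below):

```python
from collections import Counter
from itertools import product

# Toy fully observed data for a two-node network Rain -> WetGrass (illustrative).
data = [
    {"Rain": 1, "WetGrass": 1},
    {"Rain": 0, "WetGrass": 0},
    {"Rain": 1, "WetGrass": 1},
    {"Rain": 0, "WetGrass": 1},
]

structure = {"Rain": [], "WetGrass": ["Rain"]}   # node -> list of parents
values = {"Rain": [0, 1], "WetGrass": [0, 1]}    # node -> possible values
alpha = 1.0  # symmetric Dirichlet pseudocount (a modeling assumption)

def posterior_counts(node, parents):
    """Sufficient statistics N[x, u]; with independent Dirichlet priors,
    the posterior for each CPT row is just Dirichlet(alpha + counts)."""
    counts = Counter()
    for row in data:
        u = tuple(row[p] for p in parents)
        counts[(row[node], u)] += 1
    return counts

def predictive_cpt(node):
    """Posterior predictive p(X = x | parents = u, data): normalized pseudocounts.
    Because the prior factorizes over CPT rows, each row is estimated independently."""
    parents = structure[node]
    counts = posterior_counts(node, parents)
    cpt = {}
    for u in product(*(values[p] for p in parents)):
        row_total = sum(counts[(x, u)] + alpha for x in values[node])
        for x in values[node]:
            cpt[(x, u)] = (counts[(x, u)] + alpha) / row_total
    return cpt

for node in structure:
    print(node, predictive_cpt(node))
```

Because the prior factorizes over CPT rows (parameter independence), each row's posterior and predictive distribution depend only on that row's counts, which is the decomposition referred to above.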
Core resources (read/watch one of the following)
→ Coursera: Probabilistic Graphical Models (2013)
An online course on probabilistic graphical models.
- Click on "Preview" to see the videos.
→ Probabilistic Graphical Models: Principles and Techniques
A very comprehensive textbook for a graduate-level course on probabilistic AI.
Location: Section 17.4, "Bayesian parameter estimation in Bayesian networks," not including 17.4.1, "MAP estimation," pages 741-751
- These techniques can also be used to learn Bayes net structure, i.e. the pattern of nodes and edges.