Two random variables X and Y are conditionally independent given a random variable Z if they are independent in the conditional distribution given Z. Conditional independence is a central notion in probabilistic modeling, because a model's conditional independence assumptions often lead to tractable algorithms for inference and learning in that model.
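In symbols, writing $X \perp Y \mid Z$ for conditional independence, the definition says the conditional joint distribution factorizes:

```latex
X \perp Y \mid Z
\quad\Longleftrightarrow\quad
p(x, y \mid z) = p(x \mid z)\, p(y \mid z)
\quad \text{for all } x, y \text{ and all } z \text{ with } p(z) > 0.
```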
The goals for this concept are:
- Know the definition of conditional independence
- Give examples showing that conditional independence does not imply independence, and vice versa (see the sketch after this list)
- It's likely that you're seeing this page because you want to learn about graphical models. If so, don't bother memorizing the rules of conditional independence; you'll get more intuition and practice with them when you learn about graphical models. Just convince yourself that the basic properties make intuitive sense.
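As a quick illustration of the second goal, here is a minimal Monte Carlo sketch (plain Python, using a toy distribution chosen for illustration) demonstrating both directions: a common cause Z makes X and Y independent given Z yet marginally dependent, while setting Z = X XOR Y makes two independent coin flips dependent once Z is observed.

```python
import random

random.seed(0)
n = 200_000

# Direction 1: X and Y are conditionally independent given Z, yet
# marginally dependent. Z is a common cause: given Z, X and Y are
# independent coin flips whose bias depends on Z (toy numbers).
def common_cause_sample():
    z = random.randint(0, 1)
    p = 0.9 if z else 0.1
    x = int(random.random() < p)
    y = int(random.random() < p)
    return x, y, z

data = [common_cause_sample() for _ in range(n)]
p_x1 = sum(x for x, _, _ in data) / n
p_y1 = sum(y for _, y, _ in data) / n
p_x1y1 = sum(x & y for x, y, _ in data) / n
# ~0.25 vs ~0.41: marginally dependent.
print(f"marginal:  P(X=1)P(Y=1) = {p_x1 * p_y1:.3f}   "
      f"P(X=1,Y=1) = {p_x1y1:.3f}")

# Direction 2: X and Y are independent fair coins, yet dependent
# given Z = X XOR Y (conditioning on Z = 0 forces X = Y).
coins = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(n)]
z0 = [(x, y) for x, y in coins if x ^ y == 0]
p_x1_z0 = sum(x for x, _ in z0) / len(z0)
p_y1_z0 = sum(y for _, y in z0) / len(z0)
p_x1y1_z0 = sum(x & y for x, y in z0) / len(z0)
# ~0.25 vs ~0.5: dependent given Z.
print(f"given Z=0: P(X=1)P(Y=1) = {p_x1_z0 * p_y1_z0:.3f}   "
      f"P(X=1,Y=1) = {p_x1y1_z0:.3f}")
```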
Core resources (we're sorry, we haven't finished tracking down resources for this concept yet)
Supplemental resources (the following are optional, but you may find them useful)
- Conditional independence is fundamental to probabilistic graphical models, including Bayesian networks and Markov random fields.