hidden Markov models

(1.5 hours to learn)


Hidden Markov models (HMMs) are a kind of probabilistic model widely used in speech and language processing. A discrete latent state evolves over time as a Markov chain, and each observation depends stochastically on the current latent state. HMMs are popular because they support efficient exact inference algorithms, such as the forward-backward algorithm for computing marginals and the Viterbi algorithm for finding the most likely state sequence.
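To make the idea of efficient exact inference concrete, here is a minimal sketch of the forward algorithm, which computes the probability of an observation sequence by summing over all latent state paths in O(TK^2) time rather than O(K^T). The toy parameters (a two-state weather model with three possible observations) are illustrative assumptions, not taken from any of the resources below.

```python
import numpy as np

# Toy HMM (assumed for illustration).
# States: 0 = Rainy, 1 = Sunny; observations: 0 = walk, 1 = shop, 2 = clean.
pi = np.array([0.6, 0.4])          # initial state distribution
A = np.array([[0.7, 0.3],          # transition probabilities A[i, j] = p(z_t = j | z_{t-1} = i)
              [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5],     # emission probabilities B[k, x] = p(x_t = x | z_t = k)
              [0.6, 0.3, 0.1]])

def forward(obs):
    """Return p(obs) by the forward recursion, summing over latent paths."""
    alpha = pi * B[:, obs[0]]              # alpha_1(k) = pi_k * p(x_1 | z_1 = k)
    for x in obs[1:]:
        alpha = (alpha @ A) * B[:, x]      # marginalize out the previous state
    return alpha.sum()

print(forward([0, 1, 2]))
```

The recursion carries forward a length-K vector of joint probabilities `alpha`, which is what makes inference tractable; the same dynamic-programming structure underlies the Viterbi algorithm, with the sum replaced by a max.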


This concept has the prerequisites:

Core resources (read/watch one of the following)


A Revealing Introduction to Hidden Markov Models
Author: Mark Stamp

Mathematical Monk: Machine Learning (2011)

A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition
Author: Lawrence R. Rabiner


Supplemental resources (the following are optional, but you may find them useful)


See also