---
layout: post
title: Contents
---
These notes form a concise introductory course on probabilistic graphical models{% include sidenote.html id="note-pgm" note="Probabilistic graphical models are a subfield of machine learning that studies how to describe and reason about the world in terms of probabilities." %}. They are based on Stanford CS228 and were written by Volodymyr Kuleshov and Stefano Ermon, with the help of many students and course staff. {% include marginnote.html id='mn-construction' note='The notes are still under construction! Although we have written up most of the material, you will probably find several typos. If you do, please let us know, or submit a pull request with your fixes to our GitHub repository.'%} You, too, can help make these notes better by submitting your improvements via GitHub.
This course starts by introducing probabilistic graphical models from the very basics and concludes by explaining from first principles the variational auto-encoder, an important probabilistic model that is also one of the most influential recent results in deep learning.
- Introduction: What is probabilistic graphical modeling? Overview of the course.
- Review of probability theory: Probability distributions. Conditional probability. Random variables (under construction).
- Examples of real-world applications: Image denoising. RNA structure prediction. Syntactic analysis of sentences. Optical character recognition (under construction).
- Bayesian networks: Definitions. Representations via directed graphs. Independencies in directed models.
- Markov random fields: Undirected vs. directed models. Independencies in undirected models. Conditional random fields.
- Variable elimination: The inference problem. Variable elimination. Complexity of inference.
- Belief propagation: The junction tree algorithm. Exact inference in arbitrary graphs. Loopy belief propagation.
- MAP inference: Max-sum message passing. Graph cuts. Linear programming relaxations. Dual decomposition.
- Sampling-based inference: Monte Carlo sampling. Forward sampling. Rejection sampling. Importance sampling. Markov chain Monte Carlo. Applications in inference.
- Variational inference: Variational lower bounds. Mean field. Marginal polytope and its relaxations.
- Learning in directed models: Maximum likelihood estimation. Learning theory basics. Maximum likelihood estimators for Bayesian networks.
- Learning in undirected models: Exponential families. Maximum likelihood estimation with gradient descent. Learning in CRFs.
- Learning in latent variable models: Latent variable models. Gaussian mixture models. Expectation maximization.
- Bayesian learning: Bayesian paradigm. Conjugate priors. Examples (under construction).
- Structure learning: Chow-Liu algorithm. Akaike information criterion. Bayesian information criterion. Bayesian structure learning (under construction).
- The variational autoencoder: Deep generative models. The reparametrization trick. Learning latent visual representations.
- List of further readings: Structured support vector machines. Bayesian non-parametrics.