This course will cover the basics of Bayesian inference, modeling, and computing algorithms. It will begin with an introduction to Bayesian statistics, estimation, and inference. We will learn to work with normal and non-normal approximations to likelihoods and posteriors. Then, for more complicated posteriors and likelihoods, we will learn how to apply Bayesian computing and optimization algorithms, including data augmentation, Markov chain Monte Carlo (MCMC), and sequential Monte Carlo methods.
Multiple modeling examples (hierarchical linear models, generalized linear mixed models, and state space models) will be used throughout the course, and the techniques discussed will be illustrated with many real-life examples.
There will be four homework assignments, and students will be expected to complete a data analysis project by the end of the course, concluding with one final in-class presentation.
A variety of programming languages (e.g., Python, MATLAB, R, C) can be used to implement the algorithms and carry out the data analyses. R will be used for the prototype code presented in class.
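To give a sense of the kind of prototype code involved, here is a minimal random-walk Metropolis sampler, one of the MCMC methods mentioned above, sketched in Python rather than R. The target density, step size, and iteration count are illustrative assumptions, not course material:

```python
import math
import random

def metropolis(log_target, x0, n_iter=5000, step=1.0, seed=42):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and accept
    with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_iter):
        proposal = x + rng.gauss(0.0, step)
        # Compare on the log scale for numerical stability.
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Illustrative target: a standard normal, via its log density up to a constant.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(draws) / len(draws)  # should be close to 0 for this target
```

The same algorithm translates almost line for line into R, MATLAB, or C, which is why any of those languages is fine for the assignments.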
Previous coursework in probability and statistics (statistical modeling), as well as programming experience, is required.
If you wish to learn R, Coursera offers a four-week introductory course on R programming taught by Prof. Roger Peng of Johns Hopkins University.
I will rely on my notes and several books in this course, all of which are available for free (as electronic versions) through the CU Library system:
The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation, by Christian P. Robert
Bayesian Core: A Practical Approach to Computational Bayesian Statistics, by Jean-Michel Marin and Christian P. Robert
Bayesian Essentials with R, by Jean-Michel Marin and Christian P. Robert
Introducing Monte Carlo Methods with R, by Christian P. Robert and George Casella
Monte Carlo Statistical Methods, by Christian P. Robert and George Casella