SC1: BAYESIAN COMPUTATION VIA MONTE CARLO METHODS: A BRIEF INTRODUCTION
Author: Hedibert Freitas Lopes (Insper, Brazil)
This short course will present a review of basic Bayesian ideas via sampling importance resampling (SIR), with examples of linear regressions with Gaussian or Student-t errors and Gaussian or double-exponential priors for the coefficients. It will then present the Gibbs sampler and the Metropolis-Hastings sampler, with examples of forward filtering, backward sampling (FFBS) algorithms for conditionally linear dynamic models and of a single-move Metropolis-Hastings algorithm for a simple version of the stochastic volatility model. The course will then move to particle filters for nonlinear and non-Gaussian dynamic models, with examples of the bootstrap filter, used to derive an approximation to the likelihood of the static parameters of a simple version of the stochastic volatility model, and of the particle learning filter for fully sequential learning of states and parameters. The course will also briefly cover other advanced topics, such as reversible jump MCMC, Hamiltonian Monte Carlo, adaptive Monte Carlo, approximate Bayesian computation and likelihood-free methods.
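The sampling importance resampling idea mentioned above can be sketched in a few lines. The following is a minimal, hypothetical example (not course material): it approximates the posterior of a normal mean under a conjugate N(0, 1) prior, where the known analytic posterior mean provides a check; all data and settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: y_i ~ N(theta, 1), with prior theta ~ N(0, 1).
y = rng.normal(2.0, 1.0, size=50)

# 1. Draw candidate thetas from the prior (used here as the proposal).
M = 100_000
theta = rng.normal(0.0, 1.0, size=M)

# 2. Importance weights proportional to the likelihood p(y | theta).
loglik = -0.5 * ((y[:, None] - theta[None, :]) ** 2).sum(axis=0)
w = np.exp(loglik - loglik.max())
w /= w.sum()

# 3. Resample with probabilities w to obtain approximate posterior draws.
post = rng.choice(theta, size=10_000, replace=True, p=w)

# Conjugate check: the exact posterior is N(n*ybar/(n+1), 1/(n+1)).
n, ybar = y.size, y.mean()
exact_mean = n * ybar / (n + 1)
```

With a proposal this far from the posterior, most weight concentrates on a small fraction of the draws; in practice one monitors the effective sample size before trusting the resampled output.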
SC2: BAYESIAN TIME SERIES ANALYSIS AND FORECASTING WITH DYNAMIC MODELS
Author: Raquel Prado (University of California Santa Cruz, USA)
This short course covers models and methods for time series analysis using Bayesian dynamic models. The main focus will be on dynamic linear models (DLMs). Model building, as well as Bayesian inference and forecasting within the class of univariate DLMs, will be discussed in detail. The use of these models for time series analysis and forecasting will be illustrated using a wide range of examples from neuroscience, environmetrics and econometrics. Extensions to more sophisticated dynamic models, as well as MCMC methods for inference in such modeling settings, will also be discussed.
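As a taste of the DLM machinery the course covers, here is a minimal sketch of the forward (Kalman) filtering recursions for the simplest univariate DLM, the local-level model; the model, variances, and simulated data are illustrative assumptions, not course material.

```python
import numpy as np

def local_level_filter(y, V, W, m0=0.0, C0=1e6):
    """Kalman filter for the local-level DLM:
         y_t     = theta_t + v_t,        v_t ~ N(0, V)
         theta_t = theta_{t-1} + w_t,    w_t ~ N(0, W)
    Returns filtered means and variances of theta_t | y_{1:t}."""
    m, C = m0, C0
    means, variances = [], []
    for yt in y:
        a, R = m, C + W          # prior at time t: theta_t | y_{1:t-1} ~ N(a, R)
        f, Q = a, R + V          # one-step forecast: y_t | y_{1:t-1} ~ N(f, Q)
        A = R / Q                # adaptive coefficient (Kalman gain)
        m = a + A * (yt - f)     # posterior mean update
        C = R - A * A * Q        # posterior variance update
        means.append(m)
        variances.append(C)
    return np.array(means), np.array(variances)

# Illustrative simulation: a random-walk state observed with noise.
rng = np.random.default_rng(1)
theta = np.cumsum(rng.normal(0, 0.5, 100))
y = theta + rng.normal(0, 1.0, 100)
m, C = local_level_filter(y, V=1.0, W=0.25)
```

The filtered means shrink the noisy observations toward the forecast, so they track the latent state more closely than the raw data do.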
SC3: EMPIRICAL BAYESIAN METHODS IN PERSONALIZED MEDICINE
Author: Francisco J. Diaz (University of Kansas, USA)
Considerable research suggests that a combination of empirical Bayes prediction and regression models with random effects can establish a solid paradigm for the mathematics and statistics of personalized medicine research and practice, especially in the treatment of chronic diseases. In this approach, the distribution of the random effects is viewed as an objective prior distribution. This prior does not represent subjective knowledge about patient population parameters; rather, it represents the distribution of unmeasured but real-world random variables of clinical and biological relevance. Generalized linear mixed models include components that describe patient populations as a whole (the fixed effects) and, simultaneously, components that describe patients as individuals (the random effects). This highlights the importance of predicting the random effects appropriately for personalized medicine purposes. In this course, we explain how the empirical Bayesian approach is used to predict the random effects for specific patients and show applications to measuring individual benefits of medical treatments, patient subgroup identification and pharmacotherapy individualization.
2. Empirical Bayesian versus subjective Bayesian.
3. Review of regression models with random effects.
4. Best linear unbiased predictors (BLUPs).
5. The connections between the BLUPs and empirical Bayesian prediction.
6. Application: prediction of individual benefits of medical treatments.
6.1. Linear mixed model with normal response.
6.2. Logistic regression model with random effects.
7. Application: Identification of subgroups of patients who would benefit from a specific treatment.
8. Application: empirical Bayesian feedback in drug dosage individualization.
9. Comparison of empirical Bayesian predictors with predictors based on quadratic inference functions.
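To illustrate the BLUP-based prediction in items 4–5 above, here is a minimal sketch of empirical Bayes prediction of patient-specific random effects in a one-way random-effects model; the model, variance components, and simulated data are illustrative assumptions (variance components are taken as known here, whereas in practice they would be estimated, e.g. by REML).

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative one-way random-effects model:
#   y_ij = mu + b_i + e_ij,  b_i ~ N(0, tau2),  e_ij ~ N(0, sigma2)
mu, tau2, sigma2 = 10.0, 1.0, 4.0
n_patients, n_obs = 200, 2
b = rng.normal(0, np.sqrt(tau2), n_patients)
y = mu + b[:, None] + rng.normal(0, np.sqrt(sigma2), (n_patients, n_obs))

# Empirical Bayes (BLUP) prediction of each patient's random effect:
# shrink the patient-specific deviation toward zero by the factor
# n*tau2 / (n*tau2 + sigma2).
ybar = y.mean(axis=1)
mu_hat = ybar.mean()
shrink = n_obs * tau2 / (n_obs * tau2 + sigma2)
b_hat = shrink * (ybar - mu_hat)

# The shrunken predictor should beat the raw per-patient deviation.
mse_blup = np.mean((b_hat - b) ** 2)
mse_raw = np.mean((ybar - mu_hat - b) ** 2)
```

With few observations per patient and large residual variance, the shrinkage factor is small and borrowing strength across patients pays off markedly, which is the core of the empirical Bayesian argument for individualized prediction.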