The premise of approximate MCMC in Bayesian deep learning (University of Warwick)


This talk identifies several characteristics of approximate MCMC in Bayesian deep learning and proposes an approximate sampling algorithm for neural networks. By analogy to sampling data minibatches from large datasets, the algorithm samples parameter subgroups from high-dimensional neural network parameter spaces. While the advantages of minibatch MCMC have been discussed in the literature, blocked Gibbs sampling has received less research attention in Bayesian deep learning.
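The blocked-sampling idea can be sketched as Metropolis-within-Gibbs over parameter subgroups: at each iteration, one block of parameters is perturbed while the rest are held fixed, mirroring how minibatch MCMC subsamples data. The toy model, block partition, and step sizes below are illustrative assumptions, not the talk's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data for a tiny one-layer network (assumed for illustration)
X = rng.normal(size=(50, 2))
y = np.tanh(X @ np.array([1.0, -2.0])) + 0.1 * rng.normal(size=50)

def log_posterior(theta):
    # Model: y ~ Normal(a * tanh(X @ w), 0.1^2), standard normal prior on theta
    w, a = theta[:2], theta[2]
    pred = a * np.tanh(X @ w)
    log_lik = -0.5 * np.sum((y - pred) ** 2) / 0.1 ** 2
    log_prior = -0.5 * np.sum(theta ** 2)
    return log_lik + log_prior

def blocked_mh_gibbs(theta0, blocks, n_iter=2000, step=0.05):
    """Metropolis-within-Gibbs: update one parameter subgroup at a time,
    conditioning on the remaining parameters (the analogue of drawing
    data minibatches in minibatch MCMC)."""
    theta = theta0.copy()
    logp = log_posterior(theta)
    samples = []
    for _ in range(n_iter):
        for block in blocks:  # cycle through parameter subgroups
            prop = theta.copy()
            prop[block] += step * rng.normal(size=len(block))
            logp_prop = log_posterior(prop)
            # Standard Metropolis accept/reject for this block
            if np.log(rng.uniform()) < logp_prop - logp:
                theta, logp = prop, logp_prop
        samples.append(theta.copy())
    return np.array(samples)

# Two subgroups: the hidden weights and the output scale
blocks = [[0, 1], [2]]
samples = blocked_mh_gibbs(np.zeros(3), blocks)
post_mean = samples[1000:].mean(axis=0)  # discard burn-in
```

Each full sweep costs as many posterior evaluations as there are blocks, but every individual proposal moves only a low-dimensional subgroup, which keeps acceptance rates workable in high-dimensional parameter spaces.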

Sep 8, 2022 9:10 AM — 9:35 AM
University of Warwick, Department of Statistics
Coventry, CV4 7AL
Theodore Papamarkou
Reader in the mathematics of data science

My research interests centre on probabilistic machine learning.