The premise of approximate MCMC in Bayesian deep learning (University of Warwick)

Abstract

This talk identifies several characteristics of approximate MCMC in Bayesian deep learning and proposes an approximate sampling algorithm for neural networks. By analogy to sampling data minibatches from large datasets, it proposes sampling parameter subgroups from high-dimensional neural network parameter spaces. While the advantages of minibatch MCMC have been discussed in the literature, blocked Gibbs sampling has received less research attention in Bayesian deep learning.
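To make the analogy concrete, here is a minimal sketch of a Metropolis-within-Gibbs sampler that pairs data minibatching with blocked updates over parameter subgroups. It is not the talk's actual algorithm: the toy model, block partition, proposal scale, and all names are illustrative assumptions. Note that evaluating the acceptance ratio on a minibatch is itself an approximation, in keeping with the approximate-MCMC theme.

```python
# Illustrative sketch only: blocked (Gibbs-style) parameter updates
# combined with data minibatching, via Metropolis-within-Gibbs.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumption: stands in for a real dataset).
X = rng.normal(size=(256, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=256)

def log_post(theta, xb, yb, scale):
    """Unnormalized log-posterior on a minibatch: Gaussian prior plus
    minibatch log-likelihood rescaled to the full dataset size."""
    w1 = theta[:20].reshape(2, 10)   # hidden-layer weights
    b1 = theta[20:30]                # hidden-layer biases
    w2 = theta[30:40]                # output weights
    b2 = theta[40]                   # output bias
    pred = np.tanh(xb @ w1 + b1) @ w2 + b2
    loglik = -0.5 * np.sum((yb - pred) ** 2)
    logprior = -0.5 * np.sum(theta ** 2)
    return scale * loglik + logprior

n_params = 41
theta = 0.1 * rng.normal(size=n_params)
blocks = np.array_split(np.arange(n_params), 4)  # parameter subgroups
batch = 64
scale = X.shape[0] / batch  # rescale minibatch likelihood to full data

samples = []
for it in range(2000):
    # Sample a data minibatch, by analogy to minibatch MCMC.
    idx = rng.choice(X.shape[0], size=batch, replace=False)
    xb, yb = X[idx], y[idx]
    # Update one parameter block at a time, holding the rest fixed.
    for block in blocks:
        prop = theta.copy()
        prop[block] += 0.02 * rng.normal(size=block.size)
        log_alpha = log_post(prop, xb, yb, scale) - log_post(theta, xb, yb, scale)
        if np.log(rng.uniform()) < log_alpha:
            theta = prop
    samples.append(theta.copy())
```

The point of the sketch is structural: the outer loop subsamples data, while the inner loop subsamples the parameter space one block at a time, so each accept/reject step touches only a subgroup of the network's parameters.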

Date
Sep 8, 2022, 9:10–9:35 AM
Location
University of Warwick, Department of Statistics
Coventry, CV4 7AL
Theodore Papamarkou
Professor in maths of data science

Knowing is not enough, one must compute.