Seminar: "Recipes for Bayesian Deep Learning"

STATISTICS SEMINAR SERIES 2024-2025

Speaker: Dimitrios Milios, Postdoctoral Fellow, EURECOM, FR

Recipes for Bayesian Deep Learning

TROIAS AMPHITHEATER

ABSTRACT

Bayesian inference offers a robust framework for machine learning, in which learning is cast as the transformation of a prior belief into a posterior distribution. Applying traditional Bayesian methods in a deep learning context, however, poses significant challenges due to the nonlinear nature of neural network models. In this talk, I will discuss a recent line of work that investigates both theoretical and practical aspects of Bayesian Neural Networks (BNNs). The problem of posterior sampling is examined first: sampling can be performed by simulating Hamiltonian stochastic differential equations (SDEs). We analyze the impact of numerical errors arising from both time discretization and noisy gradient estimates, and our convergence results show that this sampling approach is numerically stable as well as scalable. The second work focuses on the choice of prior distribution for deep models. It is well known that nonlinearities can induce hard-to-interpret behavior in the functional output of a BNN. This pathology is addressed by aligning neural network priors with functional priors defined through Gaussian Processes (GPs). An extensive experimental campaign shows that GP-aligned Bayesian neural networks consistently deliver large performance improvements. Building on these ideas, we extend SDE-based posterior sampling and prior alignment to a class of unsupervised models, Bayesian Auto-Encoders (BAEs), which provide a representation learning approach with uncertainty quantification.
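
To give a flavor of the kind of sampler the abstract refers to, below is a minimal sketch, not the speaker's implementation, of posterior sampling by simulating a Hamiltonian SDE with noisy gradients (in the spirit of stochastic-gradient HMC). The toy standard-normal target, the gradient-noise level, the step size and the friction coefficient are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def grad_neg_log_post(theta, noise_scale=0.1):
    # Gradient of -log p(theta | data) for a toy standard-normal target,
    # perturbed with Gaussian noise to mimic minibatch gradient estimates.
    return theta + noise_scale * rng.normal(size=theta.shape)

def hamiltonian_sde_sampler(theta0, n_steps=10000, step=0.05, friction=1.0):
    # Euler-Maruyama discretization of the Hamiltonian (underdamped Langevin) SDE:
    #   d theta = v dt
    #   d v     = -grad U(theta) dt - friction * v dt + sqrt(2 * friction) dW
    theta = theta0.copy()
    v = np.zeros_like(theta)
    samples = []
    for _ in range(n_steps):
        v += (-grad_neg_log_post(theta) - friction * v) * step \
             + np.sqrt(2.0 * friction * step) * rng.normal(size=theta.shape)
        theta += v * step
        samples.append(theta.copy())
    return np.array(samples)

samples = hamiltonian_sde_sampler(np.zeros(2))
burn = len(samples) // 4
print("posterior mean ~", samples[burn:].mean(axis=0))  # close to 0 for this target
print("posterior std  ~", samples[burn:].std(axis=0))   # close to 1 for this target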
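
The prior-alignment idea can be sketched in a similarly rough way: grid-search the weight-prior scale of a toy one-hidden-layer network so that its prior function values resemble draws from an RBF-kernel GP prior, comparing pooled function values with a one-dimensional Wasserstein distance. The architecture, kernel, candidate scales and distance below are illustrative assumptions and only approximate the distance-over-functions optimization used in the actual work.

import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 20)[:, None]          # measurement points
n_draws, width = 2000, 100                   # prior samples, hidden units

def bnn_prior_draws(weight_scale):
    # Function values at x for draws from a N(0, weight_scale^2) prior
    # over the weights of a one-hidden-layer tanh network.
    w1 = weight_scale * rng.normal(size=(n_draws, 1, width))
    b1 = weight_scale * rng.normal(size=(n_draws, 1, width))
    w2 = weight_scale * rng.normal(size=(n_draws, width, 1)) / np.sqrt(width)
    h = np.tanh(x[None] @ w1 + b1)           # (n_draws, 20, width)
    return (h @ w2)[..., 0]                  # (n_draws, 20)

def gp_prior_draws(lengthscale=1.0, variance=1.0):
    # Draws from a zero-mean GP prior with an RBF kernel, evaluated at x.
    d2 = (x - x.T) ** 2
    K = variance * np.exp(-0.5 * d2 / lengthscale**2) + 1e-8 * np.eye(len(x))
    return rng.multivariate_normal(np.zeros(len(x)), K, size=n_draws)

gp_vals = gp_prior_draws().ravel()
for s in [0.5, 1.0, 2.0, 4.0]:
    d = wasserstein_distance(bnn_prior_draws(s).ravel(), gp_vals)
    print(f"weight scale {s:.1f}: 1-D Wasserstein distance {d:.3f}")
# The scale with the smallest distance gives the best-aligned BNN prior
# among the candidates.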

Event Date: 
Monday, March 24, 2025 - 12:00