Make Me a BNN: A Simple Strategy for Estimating Bayesian Uncertainty from Pre-trained Models
Gianni Franchi
Olivier Laurent
Maxence Leguery
Andrei Bursuc
Andrea Pilzer
Angela Yao
[Paper]
[GitHub]
Illustration of the training process for the ABNN. The procedure begins with training a single DNN ω_MAP, followed by architectural adjustments to transform it into an ABNN. The final step involves fine-tuning the ABNN model.

Abstract

Deep Neural Networks (DNNs) are powerful tools for various computer vision tasks, yet they often struggle with reliable uncertainty quantification, a critical requirement for real-world applications. Bayesian Neural Networks (BNNs) are equipped for uncertainty estimation but cannot scale to large DNNs, where they are highly unstable to train. To address this challenge, we introduce the Adaptable Bayesian Neural Network (ABNN), a simple and scalable strategy to seamlessly transform DNNs into BNNs in a post-hoc manner with minimal computational and training overhead. ABNN preserves the main predictive properties of DNNs while enhancing their uncertainty quantification abilities through simple BNN adaptation layers (attached to normalization layers) and a few fine-tuning steps on pre-trained models. We conduct extensive experiments across multiple datasets for image classification and semantic segmentation tasks, and our results demonstrate that ABNN achieves state-of-the-art performance without the computational budget typically associated with ensemble methods.
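To make the adaptation step concrete, here is a minimal PyTorch-style sketch, not the official implementation (see the GitHub repository for that). It assumes a backbone built on BatchNorm2d; the names BayesianNorm2d and make_abnn are illustrative, and the noise parameterization (a multiplicative Gaussian perturbation of the normalization scale) only loosely follows the paper's description.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianNorm2d(nn.Module):
    """ABNN-style Bayesian adaptation of a pre-trained BatchNorm2d (sketch).

    The frozen running statistics of the original layer are kept, and the
    scale parameter gamma is perturbed with multiplicative Gaussian noise,
    so each forward pass behaves like one posterior sample.
    """

    def __init__(self, bn: nn.BatchNorm2d):
        super().__init__()
        self.bn = bn  # reuse pre-trained statistics and affine parameters

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        eps = torch.randn_like(self.bn.weight)      # eps ~ N(0, 1)
        noisy_gamma = self.bn.weight * (1.0 + eps)  # gamma * (1 + eps)
        return F.batch_norm(
            x,
            self.bn.running_mean,
            self.bn.running_var,
            weight=noisy_gamma,
            bias=self.bn.bias,
            training=False,  # keep the pre-trained (MAP) statistics
            eps=self.bn.eps,
        )


def make_abnn(model: nn.Module) -> nn.Module:
    """Recursively swap every BatchNorm2d for its Bayesian counterpart."""
    for name, child in model.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(model, name, BayesianNorm2d(child))
        else:
            make_abnn(child)
    return model
```

After a few fine-tuning steps of the normalization parameters, uncertainty estimates follow from averaging several stochastic forward passes, e.g. `torch.stack([model(x).softmax(-1) for _ in range(M)]).mean(0)` for M posterior samples.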


Talk


[Slides]



Paper and Supplementary Material

G. Franchi, O. Laurent, M. Leguery, A. Bursuc, A. Pilzer, A. Yao
Make Me a BNN: A Simple Strategy for Estimating Bayesian Uncertainty from Pre-trained Models.
In IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), 2024.
(hosted on arXiv)


[Bibtex]


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.