Temperature Optimization for Bayesian Deep Learning

📅 2024-10-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
In Bayesian deep learning, the cold posterior effect (CPE) shows that tempering the posterior to a colder temperature often improves predictive performance, yet there is no systematic way to select the optimal temperature: existing grid-search approaches are computationally inefficient. Method: the authors propose the first end-to-end, data-driven temperature optimization framework, treating the temperature as a differentiable model parameter and learning it via gradient-based maximization of the test log-predictive density. Contribution/Results: the method exposes a divergence in temperature preferences between generalized Bayesian inference (which emphasizes uncertainty calibration and robustness) and standard Bayesian deep learning practice (which focuses on predictive accuracy). Experiments on regression and classification tasks show predictive performance comparable to grid search at substantially lower computational cost. Moreover, the optimal temperature is empirically shown to depend critically on the evaluation metric, supporting task-adaptive temperature selection.

📝 Abstract
The Cold Posterior Effect (CPE) is a phenomenon in Bayesian Deep Learning (BDL), where tempering the posterior to a cold temperature often improves the predictive performance of the posterior predictive distribution (PPD). Although the term 'CPE' suggests colder temperatures are inherently better, the BDL community increasingly recognizes that this is not always the case. Despite this, there remains no systematic method for finding the optimal temperature beyond grid search. In this work, we propose a data-driven approach to select the temperature that maximizes test log-predictive density, treating the temperature as a model parameter and estimating it directly from the data. We empirically demonstrate that our method performs comparably to grid search, at a fraction of the cost, across both regression and classification tasks. Finally, we highlight the differing perspectives on CPE between the BDL and Generalized Bayes communities: while the former primarily focuses on predictive performance of the PPD, the latter emphasizes calibrated uncertainty and robustness to model misspecification; these distinct objectives lead to different temperature preferences.
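The core idea, treating temperature as a differentiable parameter and tuning it by gradient descent on held-out log-predictive density, can be illustrated with a toy sketch. This is not the paper's implementation: below is a simplified tempered-softmax classifier, where `nll_and_grad`, `fit_temperature`, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def nll_and_grad(logits, labels, log_T):
    """Held-out negative log-predictive density of a tempered softmax,
    p = softmax(logits / T) with T = exp(log_T), and its gradient in log_T."""
    T = np.exp(log_T)
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)                 # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    n = len(labels)
    nll = -np.mean(np.log(p[np.arange(n), labels] + 1e-12))
    # Chain rule: d(-log p_y)/d(1/T) = E_p[logits] - logits_y,
    # and d(1/T)/d(log T) = -1/T.
    d_nll_d_invT = np.mean((p * logits).sum(axis=1)
                           - logits[np.arange(n), labels])
    return nll, d_nll_d_invT * (-1.0 / T)

def fit_temperature(logits, labels, lr=0.1, steps=300):
    """Gradient descent on log T (keeps T > 0), starting from T = 1."""
    log_T = 0.0
    for _ in range(steps):
        _, g = nll_and_grad(logits, labels, log_T)
        log_T -= lr * g
    return np.exp(log_T)
```

Optimizing log T rather than T itself enforces positivity without constraints; a single scalar descent like this replaces a full grid of retrained or re-evaluated posteriors, which is where the cost saving over grid search comes from.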
Problem

Research questions and friction points this paper is trying to address.

Optimizing temperature for Bayesian Deep Learning performance
Addressing Cold Posterior Effect inconsistency in BDL
Data-driven temperature selection replacing grid search
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-driven temperature selection method
Optimizes test log-predictive density
Reduces cost compared to grid search
Kenyon Ng
School of Mathematics and Statistics, University of Melbourne
Christopher van der Heide
Department of Electrical and Electronic Engineering, University of Melbourne
Liam Hodgkinson
University of Melbourne
probabilistic machine learning, deep learning theory
Susan Wei
Monash University
Statistics