🤖 AI Summary
In Bayesian deep learning, the cold posterior effect (CPE) refers to the observation that tempering the posterior to temperatures below one often improves predictive performance, yet there is no systematic way to select the optimal temperature; existing grid-search approaches are computationally expensive. Method: We propose the first end-to-end, data-driven temperature optimization framework: the temperature is treated as a differentiable model parameter and optimized by gradient-based maximization of the test log predictive density. Contribution/Results: Our method exposes a divergence in temperature preferences between generalized Bayesian inference, which emphasizes uncertainty calibration and robustness, and standard Bayesian deep learning practice, which focuses on predictive accuracy. Experiments across regression and classification tasks show that our approach matches the predictive performance of grid search at a fraction of the computational cost. Moreover, we empirically verify that the optimal temperature depends critically on the evaluation metric, supporting task-adaptive temperature selection.
📝 Abstract
The Cold Posterior Effect (CPE) is a phenomenon in Bayesian Deep Learning (BDL) in which tempering the posterior to a cold temperature often improves the predictive performance of the posterior predictive distribution (PPD). Although the term 'CPE' suggests that colder temperatures are inherently better, the BDL community increasingly recognizes that this is not always the case. Despite this, there remains no systematic method for finding the optimal temperature beyond grid search. In this work, we propose a data-driven approach that selects the temperature maximizing the test log predictive density, treating the temperature as a model parameter and estimating it directly from the data. We empirically demonstrate that our method performs comparably to grid search, at a fraction of the cost, across both regression and classification tasks. Finally, we highlight the differing perspectives on the CPE held by the BDL and Generalized Bayes communities: while the former primarily focuses on the predictive performance of the PPD, the latter emphasizes calibrated uncertainty and robustness to model misspecification; these distinct objectives lead to different temperature preferences.
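The core idea of treating the temperature as a parameter optimized against held-out log predictive density can be illustrated on a toy model. The sketch below is not the paper's actual method (which targets deep networks); it assumes a conjugate Gaussian mean model with known noise variance, where tempering the likelihood by 1/T simply rescales the noise variance by T, so the tempered posterior predictive and its held-out log density are available in closed form. The temperature is then tuned by gradient ascent in log T (a finite-difference gradient stands in for autodiff); all function names and hyperparameters here are illustrative choices, not from the paper.

```python
import numpy as np

def tempered_lpd(T, train, test, sigma2=1.0, prior_var=10.0):
    """Held-out log predictive density of the tempered posterior.

    Model: x ~ N(theta, sigma2), prior theta ~ N(0, prior_var).
    Tempered posterior p(theta | D) ∝ p(D | theta)^(1/T) p(theta);
    for a Gaussian likelihood this rescales the noise variance to T * sigma2.
    """
    n = len(train)
    post_prec = 1.0 / prior_var + n / (T * sigma2)
    post_var = 1.0 / post_prec
    post_mean = post_var * train.sum() / (T * sigma2)
    # Posterior predictive is Gaussian: N(post_mean, sigma2 + post_var).
    pred_var = sigma2 + post_var
    return np.mean(-0.5 * np.log(2 * np.pi * pred_var)
                   - 0.5 * (test - post_mean) ** 2 / pred_var)

def optimize_temperature(train, test, T0=1.0, lr=0.1, steps=200, eps=1e-4):
    """Gradient ascent on log T (keeps T > 0) to maximize held-out LPD."""
    log_T = np.log(T0)
    for _ in range(steps):
        # Central finite difference in log-temperature space.
        g = (tempered_lpd(np.exp(log_T + eps), train, test)
             - tempered_lpd(np.exp(log_T - eps), train, test)) / (2 * eps)
        log_T += lr * g
    return np.exp(log_T)
```

Optimizing in log T is a common reparameterization for positive scale parameters; in a deep-learning setting the same objective would instead be differentiated through a Monte Carlo estimate of the PPD.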