The Well-Tempered Classifier: Some Elementary Properties of Temperature Scaling

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of rigorous theoretical analysis of temperature scaling, particularly regarding its role in classifier calibration and diversity control in large language models (LLMs). We systematically investigate the fundamental properties of temperature scaling in probabilistic models and offer two novel characterizations: first, as an information projection onto a family of fixed-entropy models; second, as the unique linear scaling method that preserves hard predictions. Leveraging tools from information geometry and entropy analysis, we prove that increasing temperature universally elevates predictive uncertainty (entropy) in classification models. However, we challenge the prevailing assumption that higher temperatures necessarily enhance output diversity in LLMs. Our findings establish a solid theoretical foundation for understanding and applying temperature scaling across diverse modeling contexts.
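The summary's core claim for classification is that raising the temperature raises predictive entropy while leaving the model otherwise intact. A minimal sketch of temperature-scaled softmax illustrates this (the function names and example logits here are illustrative, not from the paper):

```python
import numpy as np

def tempered_softmax(logits, T):
    """Temperature scaling: softmax applied to logits / T."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def entropy(p):
    """Shannon entropy in nats."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

logits = np.array([2.0, 1.0, 0.1])
# Entropy grows as T grows: T < 1 sharpens, T > 1 flattens the distribution.
ents = [entropy(tempered_softmax(logits, T)) for T in (0.5, 1.0, 2.0)]
assert ents[0] < ents[1] < ents[2]
```

At T → 0 the distribution approaches a one-hot vector (entropy → 0); at T → ∞ it approaches the uniform distribution (maximum entropy), consistent with the monotonicity result the summary describes.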

📝 Abstract
Temperature scaling is a simple method that allows one to control the uncertainty of probabilistic models. It is mostly used in two contexts: improving the calibration of classifiers and tuning the stochasticity of large language models (LLMs). In both cases, temperature scaling is the most popular method for the job. Despite its popularity, a rigorous theoretical analysis of the properties of temperature scaling has remained elusive. We investigate here some of these properties. For classification, we show that increasing the temperature increases the uncertainty in the model in a very general sense (and in particular increases its entropy). However, for LLMs, we challenge the common claim that increasing temperature increases diversity. Furthermore, we introduce two new characterisations of temperature scaling. The first one is geometric: the tempered model is shown to be the information projection of the original model onto the set of models with a given entropy. The second characterisation clarifies the role of temperature scaling as a submodel of more general linear scalers such as matrix scaling and Dirichlet calibration: we show that temperature scaling is the only linear scaler that does not change the hard predictions of the model.
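The abstract's second characterisation states that, among linear scalers, only temperature scaling leaves the hard (argmax) prediction unchanged. A small sketch makes the contrast concrete; the diagonal matrix `W` below is just one illustrative linear scaler, not an example from the paper:

```python
import numpy as np

def tempered_softmax(logits, T):
    """Temperature scaling: softmax applied to logits / T."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = np.array([1.5, 0.2, -0.3])
base = np.argmax(tempered_softmax(logits, 1.0))

# Dividing all logits by the same T > 0 is monotone, so the argmax never moves.
assert all(np.argmax(tempered_softmax(logits, T)) == base
           for T in (0.1, 0.7, 3.0, 10.0))

# A more general linear scaler (here a per-class rescaling, a special case of
# matrix scaling) can flip the hard prediction.
W = np.diag([0.1, 2.0, 1.0])
assert np.argmax(tempered_softmax(W @ logits, 1.0)) != base
```

Because softmax is invariant under shifts and monotone under positive scalar division of the logits, the ranking of classes survives any temperature; a per-class or cross-class linear map has no such guarantee, which is the distinction the abstract draws.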
Problem

Research questions and friction points this paper is trying to address.

temperature scaling
model calibration
uncertainty
large language models
probabilistic models
Innovation

Methods, ideas, or system contributions that make the work stand out.

temperature scaling
model calibration
information projection
entropy
linear scaler