🤖 AI Summary
To address the critical need for efficient and reliable uncertainty estimation in industrial deployments of LLM-based judges, this paper proposes a plug-and-play, fine-tuning-free linear probing method. It extracts features directly from the hidden states of reasoning-oriented judge models and trains a lightweight linear head under a Brier score loss to calibrate uncertainty. This work is the first to adopt the Brier score as the probe training objective, yielding strong cross-domain generalization and safety-aware, conservative uncertainty estimates. Experiments demonstrate substantial improvements in calibration over existing methods while reducing computational overhead by approximately 10×. The approach is validated across diverse tasks, including reasoning, mathematics, factual consistency, code generation, and human preference ranking. It achieves higher accuracy on high-confidence predictions and lower false-positive rates, making it particularly suitable for safety-critical applications.
📝 Abstract
As LLM-based judges become integral to industry applications, obtaining well-calibrated uncertainty estimates efficiently has become critical for production deployment. However, existing techniques, such as verbalized confidence and multi-generation methods, are often either poorly calibrated or computationally expensive. We introduce linear probes trained with a Brier score-based loss to provide calibrated uncertainty estimates from reasoning judges' hidden states, requiring no additional model training. We evaluate our approach on both objective tasks (reasoning, mathematics, factuality, coding) and subjective human preference judgments. Our results demonstrate that probes achieve superior calibration compared to existing methods with $\approx 10\times$ computational savings, generalize robustly to unseen evaluation domains, and deliver higher accuracy on high-confidence predictions. However, probes produce conservative estimates that underperform on easier datasets but may benefit safety-critical deployments prioritizing low false-positive rates. Overall, our work demonstrates that interpretability-based uncertainty estimation provides a practical and scalable plug-and-play solution for LLM judges in production.
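To make the core idea concrete, here is a minimal sketch of training a linear probe under a Brier score loss. This is an illustrative reconstruction, not the authors' code: the hidden states are synthetic stand-ins for a judge model's activations, and the dimensions, learning rate, and epoch count are arbitrary assumptions. The Brier loss here is the mean squared error between the probe's predicted probability and the binary correctness label of the judge's verdict.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_brier_probe(H, y, lr=0.1, epochs=500):
    """Fit a linear head (w, b) minimizing the Brier score:
    mean((sigmoid(H @ w + b) - y)^2)."""
    n, d = H.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(H @ w + b)
        # Gradient of the Brier loss through the sigmoid
        g = 2.0 * (p - y) * p * (1.0 - p) / n
        w -= lr * (H.T @ g)
        b -= lr * g.sum()
    return w, b

# Synthetic stand-in for judge hidden states: correct verdicts (y=1)
# and incorrect verdicts (y=0) drawn from shifted Gaussians.
d = 16
H = np.vstack([rng.normal(0.5, 1.0, size=(200, d)),
               rng.normal(-0.5, 1.0, size=(200, d))])
y = np.concatenate([np.ones(200), np.zeros(200)])

w, b = train_brier_probe(H, y)
p = sigmoid(H @ w + b)
brier = np.mean((p - y) ** 2)  # lower is better calibrated
```

At inference time the probe adds only a single dot product per judgment, which is the source of the computational savings over multi-generation uncertainty methods: no extra forward passes through the judge are needed.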