When Models Know When They Do Not Know: Calibration, Cascading, and Cleaning

📅 2026-01-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of enabling vision and language models to reliably recognize their own uncertainty, a key factor in system reliability and inference efficiency. The authors propose a training-free, general-purpose calibration method that aligns model confidence scores with true predictive uncertainty, and for the first time unify the calibrated confidence estimates to guide both model cascade routing and multi-expert data cleaning. Leveraging confidence comparison and calibration-aware routing, the approach identifies mislabeled samples on ImageNet and MMLU benchmarks, while cascading a large and a small model nearly matches the accuracy of the strong model alone at lower inference cost, and cascading two comparable models surpasses either one.
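The page does not spell out the calibration step itself, so the following is a minimal sketch assuming temperature scaling, a standard training-free calibration technique in which a single scalar T is fit on held-out validation logits while the model's weights stay frozen. The function names, toy data, and search bounds are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar


def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def fit_temperature(val_logits, val_labels):
    """Fit a single scalar temperature T by minimizing NLL on validation data."""
    n = len(val_labels)

    def nll(T):
        probs = softmax(val_logits, T)
        return -np.mean(np.log(probs[np.arange(n), val_labels] + 1e-12))

    return minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x


# Toy usage: overconfident logits whose labels match the argmax only ~70% of
# the time, so the fitted T should come out above 1 (softening the probabilities).
rng = np.random.default_rng(0)
logits = 4.0 * rng.normal(size=(1000, 10))
labels = np.where(rng.random(1000) < 0.7,
                  logits.argmax(axis=-1),
                  rng.integers(0, 10, size=1000))
T = fit_temperature(logits, labels)
confidence = softmax(logits, T).max(axis=-1)  # calibrated confidence per sample
```

Because T is fit once on the validation set and reused at test time, this setup mirrors the abstract's observation that a model calibrated on validation data stays calibrated on a held-out test set; the per-sample calibrated confidence is then the quantity compared downstream.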

📝 Abstract
When a model knows when it does not know, many possibilities emerge. The first question is how to enable a model to recognize that it does not know. A promising approach is to use confidence, computed from the model's internal signals, to reflect its ignorance. Prior work in specific domains has shown that calibration can provide reliable confidence estimates. In this work, we propose a simple, effective, and universal training-free method that applies to both vision and language models, performing model calibration, cascading, and data cleaning to better exploit a model's ability to recognize when it does not know. We first highlight two key empirical observations: higher confidence corresponds to higher accuracy within a single model, and models calibrated on the validation set remain calibrated on a held-out test set. These findings empirically establish the reliability and comparability of calibrated confidence. Building on this, we introduce two applications: (1) model cascading with calibrated advantage routing and (2) data cleaning based on model ensemble. Using the routing signal derived from the comparability of calibrated confidences, we cascade large and small models to improve efficiency with almost no compromise in accuracy, and we further cascade two models of comparable scale to achieve performance beyond either model alone. Leveraging multiple experts and their calibrated confidences, we design a simple yet effective data-cleaning method that balances precision and detection rate to identify mislabeled samples in ImageNet and Massive Multitask Language Understanding (MMLU) datasets. Our results demonstrate that enabling models to recognize when they do not know is a practical step toward more efficient, reliable, and trustworthy AI.
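The cascading idea in the abstract can be made concrete with a short sketch. The routing rule below is an assumption based on the description of calibrated advantage routing, not the authors' released code: the small model answers whenever its calibrated confidence clears a threshold, and only the remaining hard inputs reach the large model. `small_model`, `large_model`, and `tau` are hypothetical names.

```python
import numpy as np


def cascade_predict(x, small_model, large_model, tau=0.8):
    """Answer with the small model when its calibrated confidence clears tau,
    otherwise escalate the input to the large model."""
    probs = np.asarray(small_model(x))   # calibrated class probabilities
    if probs.max() >= tau:
        return int(probs.argmax())       # cheap path: small model is confident
    probs = np.asarray(large_model(x))   # expensive path: hard example
    return int(probs.argmax())
```

Because calibrated confidences are comparable across models (the paper's second empirical observation), `tau` can be tuned on the validation set to trade accuracy against inference cost, and the same comparability is what allows two comparable-scale models to be cascaded for accuracy beyond either alone.

Similarly, the multi-expert data-cleaning step can be sketched as a simple agreement filter, again with assumed names (`expert_probs`, `conf_min`): a sample is flagged when every calibrated expert is confident, the experts agree with one another, and their shared prediction contradicts the dataset label.

```python
import numpy as np


def flag_mislabeled(expert_probs, labels, conf_min=0.9):
    """expert_probs: (n_experts, n_samples, n_classes) calibrated probabilities.
    labels: (n_samples,) dataset labels. Returns a boolean mask of suspects."""
    preds = expert_probs.argmax(axis=-1)             # (n_experts, n_samples)
    confs = expert_probs.max(axis=-1)                # (n_experts, n_samples)
    all_confident = (confs >= conf_min).all(axis=0)  # every expert is sure
    unanimous = (preds == preds[0]).all(axis=0)      # experts agree...
    disagrees_with_label = preds[0] != labels        # ...but not with the label
    return all_confident & unanimous & disagrees_with_label
```

Raising `conf_min` increases precision at the expense of detection rate, which is the balance the abstract describes for identifying mislabeled ImageNet and MMLU samples.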
Problem

Research questions and friction points this paper is trying to address.

calibration
confidence
model uncertainty
data cleaning
model cascading
Innovation

Methods, ideas, or system contributions that make the work stand out.

model calibration
confidence estimation
model cascading
data cleaning
trustworthy AI
Chenjie Hao
University of California, Davis
Weyl Lu
University of California, Davis
Yuko Ishiwaka
SoftBank Corp.
Zengyi Li
Aizip
Weier Wan
Stanford University
computer-in-memory, AI accelerator, energy-efficient hardware system, non-volatile memory, neural network compression
Yubei Chen
UC Davis | Aizip.ai
Unsupervised Learning, World Models, Science 4 AI