🤖 AI Summary
Deep learning models often exhibit overconfidence under distributional shift, and existing post-hoc calibration methods merely reshape predictions without fundamentally addressing this issue. This paper proposes GUIDE, a retraining-free evidential meta-model framework that freezes the backbone network, identifies salient internal features in a calibration stage, and constructs a noise-driven curriculum that explicitly teaches the model *when* it is uncertain and *how* to quantify that uncertainty. GUIDE introduces no architectural or parametric modifications to the original model and requires no manual intermediate-layer selection; it learns an uncertainty representation solely from calibration data. Evaluated across multiple benchmarks, GUIDE improves out-of-distribution detection and adversarial attack detection by roughly 77% and 80%, respectively, substantially surpassing current state-of-the-art methods while preserving in-distribution performance, model integrity, and deployment compatibility.
📝 Abstract
Reliable uncertainty quantification remains a major obstacle to deploying deep learning models under distributional shift. Existing post-hoc approaches that retrofit pretrained models either inherit their misplaced confidence or merely reshape predictions, without teaching the model when to be uncertain. We introduce GUIDE, a lightweight evidential meta-model approach that attaches to a frozen deep learning model and explicitly learns how and when to be uncertain. GUIDE identifies salient internal features via a calibration stage, then uses these features to construct a noise-driven curriculum that teaches the model how and when to express uncertainty. GUIDE requires no retraining of, architectural modifications to, or manual intermediate-layer selection for the base model, ensuring broad applicability and minimal user intervention. The resulting model avoids distilling overconfidence from the base model, improves out-of-distribution detection by ~77% and adversarial attack detection by ~80%, while preserving in-distribution performance. Across diverse benchmarks, GUIDE consistently outperforms state-of-the-art approaches, evidencing the need to actively guide uncertainty in order to close the gap between predictive confidence and reliability.
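To make the pipeline concrete, here is a minimal toy sketch of the three ingredients the abstract describes: a frozen backbone, a saliency-based selection of sensitive features, a noise-driven curriculum that corrupts the most salient inputs first, and a Dirichlet-based evidential uncertainty score. All names and the linear "backbone" are hypothetical stand-ins for illustration; this is not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "backbone": a fixed linear-tanh feature extractor
# standing in for a pretrained network whose weights are never updated.
W_frozen = rng.normal(size=(8, 4))

def backbone_features(x):
    return np.tanh(x @ W_frozen)

def saliency(x):
    # Crude saliency proxy: finite-difference sensitivity of the frozen
    # features to each input dimension (|d feature / d x_i| summed over
    # feature outputs). The paper's saliency analysis is more involved.
    eps = 1e-4
    base = backbone_features(x)
    grads = np.zeros_like(x)
    for i in range(x.shape[-1]):
        x_p = x.copy()
        x_p[..., i] += eps
        grads[..., i] = np.abs(backbone_features(x_p) - base).sum(-1) / eps
    return grads

def noisy_curriculum(x, severity):
    # Noise-driven curriculum: perturb the most salient input dimensions
    # first, with noise magnitude growing with `severity` in [0, 1].
    if severity <= 0:
        return x
    s = saliency(x)
    mask = s >= np.quantile(s, 1.0 - severity)
    return x + mask * rng.normal(scale=severity, size=x.shape)

def dirichlet_uncertainty(evidence):
    # Evidential head output: alpha = evidence + 1; vacuity u = K / sum(alpha).
    # Zero evidence gives u = 1 (maximal uncertainty); more evidence lowers u.
    alpha = evidence + 1.0
    return alpha.shape[-1] / alpha.sum(-1)
```

A meta-model trained this way would see clean samples (low severity) labeled as confident and heavily corrupted ones labeled as uncertain, so it learns to emit low evidence exactly where the backbone's features break down.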