Uncertainty-Driven Reliability: Selective Prediction and Trustworthy Deployment in Modern Machine Learning

📅 2025-08-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Machine learning models are often insufficiently reliable in high-stakes settings, particularly under distribution shift or adversarial manipulation of uncertainty signals. Method: This thesis introduces a lightweight, post-hoc selective prediction framework that lets models abstain from predicting under high uncertainty while preserving differential privacy (DP). It estimates uncertainty by ensembling intermediate training checkpoints, integrates DP analysis, calibration auditing, and verifiable inference, and establishes a finite-sample decomposition of the selective classification gap. Contribution/Results: The framework identifies and quantifies five distinct error sources affecting selective prediction, and it detects and mitigates uncertainty-manipulation attacks, markedly improving uncertainty ranking quality and reliability assessment. Experiments across multiple tasks demonstrate state-of-the-art selective prediction performance, robustness under DP constraints, effective mitigation of the privacy-uncertainty trade-off, and improved trustworthiness and adversarial resilience.
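The core mechanism described above, averaging the predictive distributions of intermediate training checkpoints and abstaining when the resulting confidence is low, can be sketched as follows. This is a minimal illustration, not the thesis's implementation; the array shapes and the max-probability confidence proxy are assumptions for the example.

```python
import numpy as np

def checkpoint_ensemble_predict(checkpoint_probs):
    """Average softmax outputs across intermediate training checkpoints.

    checkpoint_probs: array of shape (n_checkpoints, n_samples, n_classes),
    one slice per saved checkpoint. Returns class predictions and a
    confidence score (max mean probability) per sample.
    """
    mean_probs = checkpoint_probs.mean(axis=0)   # ensemble predictive distribution
    predictions = mean_probs.argmax(axis=1)      # ensemble class decision
    confidence = mean_probs.max(axis=1)          # simple uncertainty proxy
    return predictions, confidence

def selective_predict(predictions, confidence, threshold):
    """Abstain (label -1) whenever ensemble confidence falls below threshold."""
    return np.where(confidence >= threshold, predictions, -1)
```

Because the checkpoints are by-products of a single training run, this costs far less than training a deep ensemble, and it leaves the model's architecture and loss untouched, which is what makes the approach post-hoc and DP-compatible.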

📝 Abstract
Machine learning (ML) systems are increasingly deployed in high-stakes domains where reliability is paramount. This thesis investigates how uncertainty estimation can enhance the safety and trustworthiness of ML, focusing on selective prediction -- where models abstain when confidence is low. We first show that a model's training trajectory contains rich uncertainty signals that can be exploited without altering its architecture or loss. By ensembling predictions from intermediate checkpoints, we propose a lightweight, post-hoc abstention method that works across tasks, avoids the cost of deep ensembles, and achieves state-of-the-art selective prediction performance. Crucially, this approach is fully compatible with differential privacy (DP), allowing us to study how privacy noise affects uncertainty quality. We find that while many methods degrade under DP, our trajectory-based approach remains robust, and we introduce a framework for isolating the privacy-uncertainty trade-off. Next, we develop a finite-sample decomposition of the selective classification gap -- the deviation from the oracle accuracy-coverage curve -- identifying five interpretable error sources and clarifying which interventions can close the gap. This explains why calibration alone cannot fix ranking errors, motivating methods that improve uncertainty ordering. Finally, we show that uncertainty signals can be adversarially manipulated to hide errors or deny service while maintaining high accuracy, and we design defenses combining calibration audits with verifiable inference. Together, these contributions advance reliable ML by improving, evaluating, and safeguarding uncertainty estimation, enabling models that not only make accurate predictions -- but also know when to say "I do not know".
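The selective classification gap in the abstract is measured against the accuracy-coverage curve: sort samples by confidence, and at each coverage level report accuracy on the retained most-confident fraction. A minimal sketch of the empirical curve, assuming per-sample confidence scores and correctness indicators are available (this is a standard construction, not code from the thesis):

```python
import numpy as np

def accuracy_coverage_curve(confidence, correct):
    """Sweep abstention thresholds from strict to permissive.

    At coverage k/n, the k most confident predictions are kept and
    accuracy is computed on that retained set. An oracle that ranks
    all errors last traces the upper envelope of this curve.
    """
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=float)
    order = np.argsort(-confidence)                  # most confident first
    correct_sorted = correct[order]
    n = len(correct_sorted)
    coverage = np.arange(1, n + 1) / n
    accuracy = np.cumsum(correct_sorted) / np.arange(1, n + 1)
    return coverage, accuracy
```

This construction also makes the abstract's point concrete: recalibrating confidence values monotonically leaves `order` unchanged, so calibration alone cannot repair a bad ranking of errors.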
Problem

Research questions and friction points this paper is trying to address.

Enhancing ML reliability via uncertainty-driven selective prediction
Analyzing privacy-uncertainty trade-offs in differentially private ML
Defending against adversarial manipulation of uncertainty signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-hoc ensembling of predictions from intermediate training checkpoints
Framework for isolating the privacy-uncertainty trade-off
Defenses combining calibration audits with verifiable inference