Calibrated Decomposition of Aleatoric and Epistemic Uncertainty in Deep Features for Inference-Time Adaptation

📅 2025-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing uncertainty estimators conflate aleatoric and epistemic uncertainty into a single confidence score, hindering dynamic allocation of computational resources and model-adaptive inference. This paper proposes the first lightweight, sampling-free, inference-time framework that disentangles the two uncertainty types directly in deep feature space. Aleatoric uncertainty is estimated with a regularized global density model, while epistemic uncertainty is decomposed into three empirically orthogonal components: local support insufficiency, manifold spectral collapse, and cross-layer feature inconsistency. A distribution-free conformal calibration procedure, requiring no additional forward passes, then produces tight prediction intervals. Evaluated on MOT17, the method achieves roughly 60% computational savings with negligible accuracy degradation and improves those savings by 13.6 percentage points over a total-uncertainty baseline, substantially enhancing self-regulation in visual inference.
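The two estimators in the summary can be sketched in simplified form. Everything below is an illustrative stand-in: a regularized Gaussian plays the role of the "regularized global density model" for the aleatoric term, and a k-nearest-neighbor distance stands in for the local-support component of the epistemic term (the spectral-collapse and cross-layer components are omitted); the paper's exact models are not specified in this summary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical deep features: random vectors stand in for a backbone's
# embeddings of the training set.
train_feats = rng.normal(size=(500, 16))

# Aleatoric proxy: squared Mahalanobis distance under a regularized global
# Gaussian fit to the training features (an assumed, simplified density model).
mu = train_feats.mean(axis=0)
cov = np.cov(train_feats, rowvar=False) + 1e-3 * np.eye(16)  # regularization
cov_inv = np.linalg.inv(cov)

def aleatoric(x):
    d = x - mu
    return float(d @ cov_inv @ d)

# Epistemic proxy: local support deficiency, measured as the mean distance
# to the k nearest training features in feature space.
def epistemic_local_support(x, k=10):
    dists = np.linalg.norm(train_feats - x, axis=1)
    return float(np.sort(dists)[:k].mean())

x_in = train_feats[0]   # well-supported, in-distribution feature
x_out = x_in + 8.0      # shifted feature with poor local support
assert epistemic_local_support(x_out) > epistemic_local_support(x_in)
```

A sample far from the training manifold scores high on the epistemic proxy even if its density under the global model is moderate, which is the intuition behind keeping the two signals separate.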

📝 Abstract
Most estimators collapse all uncertainty modes into a single confidence score, preventing reliable reasoning about when to allocate more compute or adjust inference. We introduce Uncertainty-Guided Inference-Time Selection, a lightweight inference-time framework that disentangles aleatoric (data-driven) and epistemic (model-driven) uncertainty directly in deep feature space. Aleatoric uncertainty is estimated using a regularized global density model, while epistemic uncertainty is formed from three complementary components that capture local support deficiency, manifold spectral collapse, and cross-layer feature inconsistency. These components are empirically orthogonal and require no sampling, no ensembling, and no additional forward passes. We integrate the decomposed uncertainty into a distribution-free conformal calibration procedure that yields significantly tighter prediction intervals at matched coverage. Using these components for uncertainty-guided adaptive model selection reduces compute by approximately 60 percent on MOT17 with negligible accuracy loss, enabling practical self-regulating visual inference. Additionally, our ablation results show that the proposed orthogonal uncertainty decomposition consistently yields higher computational savings across all MOT17 sequences, improving margins by 13.6 percentage points over the total-uncertainty baseline.
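The calibration step in the abstract can be illustrated with a minimal split-conformal sketch on toy data: a per-sample uncertainty scale normalizes the residuals, and a quantile of the normalized scores on a held-out calibration set yields intervals with finite-sample marginal coverage. The toy model, the noise scale, and all numbers below are hypothetical, not the paper's actual components.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D regression with heteroscedastic noise.
x_cal = rng.uniform(0, 1, size=200)
y_cal = np.sin(2 * np.pi * x_cal) + rng.normal(scale=0.1 + 0.3 * x_cal)

pred = np.sin(2 * np.pi * x_cal)   # point predictions on the calibration set
sigma = 0.1 + 0.3 * x_cal          # assumed per-sample uncertainty estimate

# Split conformal with uncertainty-normalized nonconformity scores:
# the (1 - alpha) quantile of |y - pred| / sigma gives intervals
# pred +/- q * sigma with ~(1 - alpha) marginal coverage.
alpha = 0.1
scores = np.abs(y_cal - pred) / sigma
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

lower, upper = pred - q * sigma, pred + q * sigma
coverage = np.mean((y_cal >= lower) & (y_cal <= upper))
```

A better uncertainty scale concentrates the normalized scores, lowering the quantile `q` and tightening the intervals at the same coverage, which is the mechanism behind the abstract's "tighter intervals at matched coverage" claim.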
Problem

Research questions and friction points this paper is trying to address.

How to disentangle aleatoric and epistemic uncertainty directly in deep feature space
How to guide model selection with uncertainty estimates so that compute costs drop without sacrificing accuracy
How to calibrate decomposed uncertainty into tighter prediction intervals at matched coverage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Disentangles aleatoric and epistemic uncertainty in features
Uses three orthogonal components without sampling or ensembling
Integrates uncertainty into conformal calibration for tighter intervals