When three experiments are better than two: Avoiding intractable correlated aleatoric uncertainty by leveraging a novel bias--variance tradeoff

📅 2025-09-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper addresses performance degradation in realistic batch active learning scenarios caused by correlated heteroskedastic aleatoric uncertainty. Methodologically, it uses the bias–variance tradeoff to write the expected mean squared error between a model distribution and a ground-truth random variable as the sum of an epistemic (variance) term, the squared bias, and an aleatoric noise term, and proposes active learning strategies that directly reduce the bias between experimental rounds, for model systems both with and without noise. It further introduces a cobias–covariance relationship that leverages historical data in a quadratic manner and naturally yields a batching mechanism through an eigendecomposition strategy. Empirically, the difference-based method with a quadratic estimator outperforms canonical baselines, including BALD and Least Confidence, in batched settings, mitigating the performance deterioration induced by correlated aleatoric uncertainty.

📝 Abstract
Real-world experimental scenarios are characterized by the presence of heteroskedastic aleatoric uncertainty, and this uncertainty can be correlated in batched settings. The bias--variance tradeoff can be used to write the expected mean squared error between a model distribution and a ground-truth random variable as the sum of an epistemic uncertainty term, the bias squared, and an aleatoric uncertainty term. We leverage this relationship to propose novel active learning strategies that directly reduce the bias between experimental rounds, considering model systems both with and without noise. Finally, we investigate methods to leverage historical data in a quadratic manner through the use of a novel cobias--covariance relationship, which naturally proposes a mechanism for batching through an eigendecomposition strategy. When our difference-based method leveraging the cobias--covariance relationship is utilized in a batched setting (with a quadratic estimator), we outperform a number of canonical methods including BALD and Least Confidence.
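The decomposition the abstract relies on states that, for a model prediction treated as a random variable f and an independent ground-truth random variable Y, the expected squared error splits as E[(f − Y)²] = Var(f) + (E[f] − E[Y])² + Var(Y), i.e. epistemic variance plus squared bias plus aleatoric variance. The following is a minimal Monte Carlo sketch of that identity; the distributions and their parameters are illustrative assumptions, not the paper's model system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: the model's prediction f is a random variable (e.g.
# spread across an ensemble), and the ground truth Y carries aleatoric noise.
n = 200_000
f = rng.normal(loc=1.2, scale=0.3, size=n)  # model distribution: mean 1.2, std 0.3
y = rng.normal(loc=1.0, scale=0.5, size=n)  # ground truth:        mean 1.0, std 0.5

mse = np.mean((f - y) ** 2)

epistemic = f.var()                     # epistemic term: Var(f)
bias_sq = (f.mean() - y.mean()) ** 2    # squared bias:   (E[f] - E[Y])^2
aleatoric = y.var()                     # aleatoric term: Var(Y)

# E[(f - Y)^2] = Var(f) + bias^2 + Var(Y), assuming f and Y independent.
# Here the three terms are roughly 0.09 + 0.04 + 0.25.
print(mse, epistemic + bias_sq + aleatoric)
```

Bias-targeting acquisition strategies act on the middle term, which is why the tradeoff can be "leveraged" rather than merely diagnosed.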
Problem

Research questions and friction points this paper is trying to address.

Reducing correlated aleatoric uncertainty in batched experiments
Proposing active learning strategies to minimize bias
Leveraging historical data through cobias-covariance relationship
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel active learning strategies reducing bias
Cobias-covariance relationship for historical data
Eigendecomposition strategy for batched settings
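The batching idea above can be sketched as follows: build a symmetric matrix over the candidate pool whose entries couple points through a cobias/covariance-style term, then let its leading eigenvector weight candidates by their contribution to the dominant correlated-uncertainty direction. This is a hedged illustration under assumed names and a random stand-in matrix, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical acquisition matrix over a candidate pool: diagonals act as
# per-point scores, off-diagonals couple points (a cobias/covariance-style
# term). A random PSD matrix stands in for the real construction.
n_pool, batch_size = 50, 5
A = rng.normal(size=(n_pool, n_pool))
A = A @ A.T / n_pool                    # symmetric positive semi-definite

# Eigendecomposition: eigh returns eigenvalues in ascending order, so the
# last eigenvector spans the dominant direction of correlated uncertainty.
eigvals, eigvecs = np.linalg.eigh(A)
top = eigvecs[:, -1]

# Propose the batch as the candidates with the largest weight magnitudes
# along that direction, so the batch targets the correlation jointly
# rather than scoring each point in isolation.
batch = np.argsort(np.abs(top))[-batch_size:][::-1]
print(batch)
```

Selecting along an eigenvector, rather than greedily by per-point score, is what lets a batch account for correlations between its members.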