Beyond the Mean: Distribution-Aware Loss Functions for Bimodal Regression

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Standard regression methods typically assume unimodal Gaussian noise, leading to mean collapse and poor uncertainty quantification when the predictive distribution is bimodal. To address this limitation, this work proposes a family of distribution-aware loss functions that integrate Wasserstein and Cramér distances into a normalized RMSE formulation. The approach enables standard deep regression models to stably recover bimodal distributions without relying on mixture density networks (MDNs). It achieves a favorable balance between training stability and distributional modeling capacity, reducing Jensen–Shannon divergence by 45% on complex bimodal datasets while preserving competitive mean squared error, and it shows markedly better distributional fidelity and robustness than MDN-based alternatives.
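The mean-collapse failure mode the summary describes can be reproduced in a few lines. The sketch below is not from the paper; it simply shows that the MSE-optimal constant prediction for a bimodal target is the sample mean, which lands in the low-density valley between the two modes:

```python
import numpy as np

# Hypothetical bimodal targets: two equally likely modes near -1 and +1.
rng = np.random.default_rng(0)
y = np.concatenate([
    rng.normal(-1.0, 0.1, 500),   # mode near -1
    rng.normal(+1.0, 0.1, 500),   # mode near +1
])

# The MSE-optimal constant prediction is the sample mean ...
y_hat = y.mean()

# ... which falls between the modes, far from every observed cluster.
print(round(y_hat, 2))                 # close to 0.0
dist_to_modes = min(abs(y_hat + 1.0), abs(y_hat - 1.0))
print(dist_to_modes)                   # roughly 1 unit from each mode
```

This is the "mean collapse" that a distribution-aware loss is meant to prevent: the model is accurate on average yet almost never near an actual observation.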

📝 Abstract
Despite the strong predictive performance achieved by machine learning models across many application domains, assessing their trustworthiness through reliable estimates of predictive confidence remains a critical challenge. This issue arises in scenarios where the likelihood of error inferred from learned representations follows a bimodal distribution, resulting from the coexistence of confident and ambiguous predictions. Standard regression approaches often struggle to adequately express this predictive uncertainty, as they implicitly assume unimodal Gaussian noise, leading to mean-collapse behavior in such settings. Although Mixture Density Networks (MDNs) can represent multimodal distributions, they suffer from severe optimization instability. We propose a family of distribution-aware loss functions integrating normalized RMSE with Wasserstein and Cramér distances. When applied to standard deep regression models, our approach recovers bimodal distributions without the volatility of mixture models. Across four experimental stages, the proposed Wasserstein loss establishes a new Pareto efficiency frontier: it matches the stability of standard regression losses such as MSE on unimodal tasks while reducing Jensen–Shannon divergence by 45% on complex bimodal datasets. Our framework strictly dominates MDNs in both fidelity and robustness, offering a reliable tool for aleatoric uncertainty estimation in trustworthy AI systems.
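The abstract does not give the exact formulation, but the ingredients it names (normalized RMSE plus a Wasserstein term) can be sketched as follows. The weighting `lam`, the normalization by target standard deviation, and the equal-sample-size Wasserstein estimator are all assumptions for illustration, not the paper's definitions:

```python
import numpy as np

def wasserstein_1d(pred, target):
    """Empirical 1-Wasserstein distance between equal-sized 1-D samples:
    the mean absolute difference of the sorted samples. (A Cramér-style
    term would instead compare the empirical CDFs in squared L2.)"""
    return np.mean(np.abs(np.sort(pred) - np.sort(target)))

def distribution_aware_loss(pred, target, lam=1.0):
    """Illustrative combination (not the paper's exact loss):
    normalized RMSE for pointwise accuracy plus a Wasserstein term
    penalizing mismatch between predicted and observed distributions.
    `lam` (assumed) trades off the two terms."""
    rmse = np.sqrt(np.mean((pred - target) ** 2))
    nrmse = rmse / (np.std(target) + 1e-8)   # normalize by target spread
    return nrmse + lam * wasserstein_1d(pred, target)

# A bimodal target and two candidate prediction sets:
target = np.concatenate([np.full(50, -1.0), np.full(50, 1.0)])
collapsed = np.zeros(100)                                      # mean-collapsed
bimodal = np.concatenate([np.full(50, -1.05), np.full(50, 0.95)])

# The distributional term strongly favors the bimodal predictions.
print(distribution_aware_loss(collapsed, target))  # ≈ 2.0
print(distribution_aware_loss(bimodal, target))    # ≈ 0.1
```

The point of the second term is visible in the example: plain (normalized) RMSE already prefers the bimodal predictions here, but the Wasserstein term additionally penalizes any prediction set whose overall distribution collapses toward the mean, even when its average error looks acceptable.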
Problem

Research questions and friction points this paper is trying to address.

bimodal regression
predictive uncertainty
distribution-aware loss
aleatoric uncertainty
mean-collapse
Innovation

Methods, ideas, or system contributions that make the work stand out.

distribution-aware loss
bimodal regression
Wasserstein distance
aleatoric uncertainty
Pareto efficiency
Abolfazl Mohammadi-Seif
Department of Engineering, Universitat Pompeu Fabra, Roc Boronat St., 08018 Barcelona, Spain.
Carlos Soares
Department of Engineering, University of Porto, Praça de Gomes Teixeira, 4099-002 Porto, Portugal.
Rita P. Ribeiro
Faculty of Sciences, University of Porto and INESC TEC
Imbalanced Domain Learning, Anomaly Detection, Explainable AI, AI for Social Good
Ricardo Baeza-Yates
KTH - Univ. Pompeu Fabra - Univ. de Chile; Sweden, Spain & Chile
Responsible AI, Information retrieval, Web search, Web mining, Algorithms and data structures