🤖 AI Summary
Standard regression methods often assume unimodal Gaussian noise, leading to mean collapse and poor uncertainty quantification when the predictive distribution is bimodal. To address this limitation, this work proposes a distribution-aware loss function that integrates Wasserstein and Cramér distances into a normalized RMSE formulation. This approach enables standard deep regression models to stably recover bimodal distributions without relying on mixture density networks (MDNs). The method achieves a favorable balance between training stability and distributional modeling capacity, reducing the Jensen–Shannon divergence by 45% on complex bimodal datasets while preserving competitive mean squared error performance. Moreover, it demonstrates significantly improved distributional fidelity and robustness compared to MDN-based alternatives.
📝 Abstract
Despite the strong predictive performance achieved by machine learning models across many application domains, assessing their trustworthiness through reliable estimates of predictive confidence remains a critical challenge. This issue arises in scenarios where the likelihood of error inferred from learned representations follows a bimodal distribution, resulting from the coexistence of confident and ambiguous predictions. Standard regression approaches often struggle to express this predictive uncertainty adequately, as they implicitly assume unimodal Gaussian noise, leading to mean-collapse behavior in such settings. Although Mixture Density Networks (MDNs) can represent multimodal distributions, they suffer from severe optimization instability. We propose a family of distribution-aware loss functions integrating normalized RMSE with Wasserstein and Cramér distances. When applied to standard deep regression models, our approach recovers bimodal distributions without the volatility of mixture models. Across four experimental stages, our results show that the proposed Wasserstein loss establishes a new Pareto efficiency frontier: matching the stability of standard regression losses such as MSE on unimodal tasks while reducing the Jensen–Shannon divergence by 45% on complex bimodal datasets. Our framework strictly dominates MDNs in both fidelity and robustness, offering a reliable tool for aleatoric uncertainty estimation in trustworthy AI systems.
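The abstract does not spell out the exact loss formulation, but the core idea (a normalized RMSE term combined with a distributional distance) can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: it uses the closed-form 1-D Wasserstein-1 distance between empirical batch distributions (mean absolute difference of sorted samples), and the weighting hyperparameter `lam` and epsilon `eps` are hypothetical names introduced here for clarity.

```python
import numpy as np

def wasserstein_1d(pred, target):
    # Closed-form Wasserstein-1 distance between two 1-D empirical
    # distributions of equal size: mean |difference| of sorted samples.
    return np.mean(np.abs(np.sort(pred) - np.sort(target)))

def distribution_aware_loss(pred, target, lam=0.5, eps=1e-8):
    # Pointwise fit: RMSE normalized by the target standard deviation,
    # so the term is scale-free across datasets.
    nrmse = np.sqrt(np.mean((pred - target) ** 2)) / (np.std(target) + eps)
    # Distributional fit: batch-level Wasserstein-1 term that penalizes
    # mean collapse, since a collapsed unimodal prediction cannot match
    # a bimodal target distribution.
    w1 = wasserstein_1d(pred, target)
    return nrmse + lam * w1
```

Note how the two terms pull in different directions: the NRMSE term alone is minimized by the conditional mean (mean collapse), while the Wasserstein term is minimized only when the predicted batch reproduces the target distribution's shape, including both modes.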