Fairness-Aware Multi-view Evidential Learning with Adaptive Prior

📅 2025-08-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies a prevalent class bias in multi-view evidential learning: existing methods over-assign evidence to majority classes under imbalanced data distributions, yielding unreliable uncertainty estimates. To address this, the authors formalize the Biased Evidential Multi-view Learning (BEML) problem and propose a fairness-aware framework (FAML) with three key components: (i) an adaptive prior modeled from training dynamics to mitigate initial bias; (ii) a class-wise evidence variance fairness constraint that enforces balanced evidence allocation across classes; and (iii) an opinion alignment mechanism in the multi-view fusion stage to promote consistent, mutually supportive evidence across views. The approach integrates deep evidential learning, adaptive regularization, and variance calibration. Evaluated on five real-world multi-view datasets, the framework reports an average classification accuracy gain of 1.8%, a 32% reduction in expected calibration error (ECE), and substantially more balanced evidence distributions.
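The class-wise evidence variance constraint can be illustrated with a minimal sketch. The function below is hypothetical (the paper's exact loss is not given here): it averages, for each class, the evidence a model assigns to the true class over that class's samples, then penalizes the variance of those per-class means so that no class systematically accumulates more evidence than the others.

```python
import numpy as np

def evidence_variance_penalty(evidence, labels, num_classes):
    """Hypothetical fairness penalty: variance of mean per-class evidence.

    evidence: (N, K) non-negative evidence per sample and class.
    labels:   (N,) ground-truth class indices.
    """
    evidence = np.asarray(evidence, dtype=float)
    labels = np.asarray(labels)
    # Mean evidence assigned to class c, over the samples whose label is c.
    class_means = np.array([
        evidence[labels == c, c].mean() if np.any(labels == c) else 0.0
        for c in range(num_classes)
    ])
    # Low variance across classes = balanced evidence allocation.
    return class_means.var()

# Balanced allocation incurs no penalty; a majority-biased one does:
balanced = evidence_variance_penalty(
    np.array([[5.0, 1.0], [1.0, 5.0]]), np.array([0, 1]), 2)
skewed = evidence_variance_penalty(
    np.array([[9.0, 1.0], [1.0, 2.0]]), np.array([0, 1]), 2)
```

In training, a term like this would be added to the evidential loss with a weighting coefficient, nudging the network away from piling evidence onto data-rich classes.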

📝 Abstract
Multi-view evidential learning aims to integrate information from multiple views to improve prediction performance and provide trustworthy uncertainty estimation. Most previous methods assume that view-specific evidence learning is naturally reliable. However, in practice, the evidence learning process tends to be biased. Through empirical analysis on real-world data, we reveal that samples tend to be assigned more evidence to support data-rich classes, thereby leading to unreliable uncertainty estimation in predictions. This motivates us to delve into a new Biased Evidential Multi-view Learning (BEML) problem. To this end, we propose Fairness-Aware Multi-view Evidential Learning (FAML). FAML first introduces an adaptive prior based on training trajectory, which acts as a regularization strategy to flexibly calibrate the biased evidence learning process. Furthermore, we explicitly incorporate a fairness constraint based on class-wise evidence variance to promote balanced evidence allocation. In the multi-view fusion stage, we propose an opinion alignment mechanism to mitigate view-specific bias across views, thereby encouraging the integration of consistent and mutually supportive evidence. Extensive experiments on five real-world multi-view datasets demonstrate that FAML achieves more balanced evidence allocation and improves both prediction performance and the reliability of uncertainty estimation compared to state-of-the-art methods.
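To see why biased evidence corrupts uncertainty estimation, recall the standard subjective-logic mapping used in evidential deep learning: per-class evidence e_k parameterizes a Dirichlet via alpha_k = e_k + 1, with belief masses b_k = e_k / S and uncertainty u = K / S, where S is the Dirichlet strength. The sketch below (a generic illustration, not the paper's code) shows that a sample whose evidence piles onto a majority class receives a low uncertainty score even if that evidence is spurious.

```python
import numpy as np

def dirichlet_opinion(evidence):
    """Map non-negative per-class evidence to a subjective-logic opinion.

    Returns (belief masses b_k, overall uncertainty u) with
    alpha_k = e_k + 1, S = sum(alpha), b_k = e_k / S, u = K / S,
    so that sum(b) + u == 1.
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.shape[0]
    S = evidence.sum() + K          # Dirichlet strength
    belief = evidence / S
    uncertainty = K / S
    return belief, uncertainty

# Evidence concentrated on one (majority) class yields low uncertainty,
# while evenly spread evidence keeps uncertainty high:
b_biased, u_biased = dirichlet_opinion([18.0, 1.0, 1.0])
b_flat, u_flat = dirichlet_opinion([2.0, 2.0, 2.0])
```

If the large evidence on class 0 stems from class imbalance rather than genuine signal, the low `u_biased` is exactly the unreliable uncertainty estimate the abstract describes.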
Problem

Research questions and friction points this paper is trying to address.

Addressing biased evidence learning in multi-view classification
Mitigating unreliable uncertainty estimation from imbalanced evidence allocation
Reducing view-specific biases during multi-view evidence integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive prior calibration for biased evidence
Fairness constraint using class-wise evidence variance
Opinion alignment mechanism for multi-view bias mitigation
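The fusion stage these bullets refer to can be sketched with the simplest evidential combination rule, cumulative fusion by evidence accumulation. This is only a stand-in: the paper's opinion alignment mechanism additionally mitigates view-specific bias before fusing, which the sketch below does not attempt.

```python
import numpy as np

def fuse_views(view_evidences):
    """Simplified cumulative fusion of per-view Dirichlet evidence.

    view_evidences: (V, K) non-negative evidence from V views.
    Sums evidence across views, then forms the fused opinion
    (belief b_k = e_k / S, uncertainty u = K / S with S = sum(e) + K).
    """
    total = np.sum(np.asarray(view_evidences, dtype=float), axis=0)
    K = total.shape[-1]
    S = total.sum() + K
    return total / S, K / S

# Two views agreeing on class 0 reinforce belief and shrink uncertainty
# relative to either view alone:
b_fused, u_fused = fuse_views([[4.0, 1.0, 0.0], [3.0, 1.0, 1.0]])
_, u_single = fuse_views([[4.0, 1.0, 0.0]])
```

Plain accumulation treats every view as equally trustworthy, which is precisely where view-specific bias leaks into the fused opinion; an alignment step that reconciles views before combining them addresses that weakness.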