Navigating Conflicting Views: Harnessing Trust for Learning

📅 2024-06-03
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address unreliable decision-making caused by inter-view information conflicts in multi-view classification, this paper proposes an instance-level dynamic discounting fusion method grounded in computational trust. Departing from the conventional assumption that all views are equally important and strictly aligned, it introduces a probability-sensitive trust evaluation mechanism and a new multi-view consistency metric, Multi-View Agreement with Ground Truth (MVAGT), to enable heterogeneous, view-specific trust modeling. The method integrates evidence theory with uncertainty-aware prediction to adaptively weight the contribution of each view. Evaluated on six real-world datasets, the approach achieves significant improvements in Top-1 accuracy and AUC-ROC, and Fleiss' Kappa and MVAGT analyses confirm that it mitigates view conflicts, enhancing model robustness and decision reliability.
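The core idea of trust-discounted evidential fusion can be illustrated with a minimal sketch. The paper's exact discounting operator and trust-evaluation mechanism are not reproduced here; this toy version simply scales each view's Dirichlet evidence by a trust score before summing across views, so a conflicting but less-trusted view contributes less to the final decision.

```python
import numpy as np

def discount_evidence(evidence, trust):
    """Scale a view's Dirichlet evidence by its trust score in [0, 1].

    Hypothetical sketch: simple multiplicative discounting stands in for
    the paper's probability-sensitive trust discounting mechanism.
    """
    return trust * np.asarray(evidence, dtype=float)

def fuse_views(view_evidence, trusts):
    """Fuse per-view evidence by summing trust-discounted evidence."""
    fused = np.zeros(len(view_evidence[0]), dtype=float)
    for evidence, trust in zip(view_evidence, trusts):
        fused += discount_evidence(evidence, trust)
    return fused

# Two views over 3 classes; view 2 conflicts with view 1 but is less trusted.
v1 = [8.0, 1.0, 1.0]
v2 = [1.0, 6.0, 1.0]
fused = fuse_views([v1, v2], trusts=[0.9, 0.3])
pred = int(np.argmax(fused))  # class 0 wins because view 1 is trusted more
```

With equal trust the two views would conflict sharply; weighting by trust resolves the conflict in favor of the more reliable view, which is the behavior the paper's instance-wise discounting aims for.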

📝 Abstract
Resolving conflicts is essential to make the decisions of multi-view classification more reliable. Much research has been conducted on learning consistent informative representations among different views, assuming that all views are identically important and strictly aligned. However, real-world multi-view data may not always conform to these assumptions, as some views may express distinct information. To address this issue, we develop a computational trust-based discounting method to enhance the existing trustworthy framework in scenarios where conflicts between different views may arise. Its belief fusion process considers the trustworthiness of predictions made by individual views via an instance-wise probability-sensitive trust discounting mechanism. We evaluate our method on six real-world datasets, using Top-1 Accuracy, AUC-ROC for Uncertainty-Aware Prediction, Fleiss' Kappa, and a new metric called Multi-View Agreement with Ground Truth that takes into consideration the ground truth labels. The experimental results show that computational trust can effectively resolve conflicts, paving the way for more reliable multi-view classification models in real-world applications.
Problem

Research questions and friction points this paper is trying to address.

Resolving conflicts in multi-view classification decisions
Addressing non-identical importance and misalignment of views
Enhancing trustworthiness framework with computational trust method
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trust-based discounting method for conflict resolution
Probability-sensitive trust discounting mechanism
Multi-view agreement metric with ground truth
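The agreement metric named above can be sketched as follows. The paper's exact MVAGT formula is not given here, so this is a hypothetical reading: per instance, the fraction of views whose prediction matches the ground-truth label, averaged over all instances.

```python
def mvagt(view_predictions, labels):
    """Hypothetical Multi-View Agreement with Ground Truth (MVAGT) sketch.

    view_predictions: one list of per-instance predicted labels per view.
    labels: ground-truth labels, one per instance.
    Returns the mean, over instances, of the fraction of views that
    predict the true label. The paper's actual definition may differ.
    """
    n_views = len(view_predictions)
    n_instances = len(labels)
    total = 0.0
    for i, y in enumerate(labels):
        # Count how many views agree with the ground truth on instance i.
        agreeing = sum(preds[i] == y for preds in view_predictions)
        total += agreeing / n_views
    return total / n_instances

# 3 views, 2 instances: each instance has 2 of 3 views correct.
preds = [[0, 1], [0, 0], [1, 1]]
labels = [0, 1]
score = mvagt(preds, labels)  # 2/3
```

Unlike Fleiss' Kappa, which measures agreement among views regardless of correctness, this formulation rewards agreement only when it coincides with the ground truth, matching the metric's stated motivation.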
Jueqing Lu, Monash University (Machine Learning)
Lan Du, Monash University
W. Buntine, VinUniversity
M. Jung, Monash University
Joanna Dipnall, Monash University
Belinda Gabbe, Monash University