Reliable Disentanglement Multi-view Learning Against View Adversarial Attacks

📅 2025-05-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the collapse of trustworthiness in multi-view learning under adversarial perturbations, this paper proposes Reliable Disentanglement Multi-view Learning (RDML). The method introduces an evidence-guided view disentanglement mechanism, novel in multi-view learning, that interpretably separates adversarial perturbations from clean content and robustly fuses the views via feature recalibration and view-level evidential attention. Crucially, uncertainty modeling is explicitly integrated into the disentanglement process, strengthening the model's ability to both detect and suppress view-level attacks. Extensive experiments on multiple benchmark datasets demonstrate that RDML significantly outperforms state-of-the-art methods under diverse multi-view adversarial attacks, while maintaining high classification accuracy and well-calibrated confidence estimates. These results establish RDML as a new paradigm for trustworthy multi-view learning in security-critical applications.

📝 Abstract
Recently, trustworthy multi-view learning has attracted extensive attention because evidence learning can provide reliable uncertainty estimation to enhance the credibility of multi-view predictions. Existing trusted multi-view learning methods implicitly assume that multi-view data are secure. In practice, however, in safety-sensitive applications such as autonomous driving and security monitoring, multi-view data often face threats from adversarial perturbations that deceive or disrupt multi-view learning models. This inevitably leads to the adversarial unreliability problem (AUP) in trusted multi-view learning. To overcome this problem, we propose a novel multi-view learning framework, namely Reliable Disentanglement Multi-view Learning (RDML). Specifically, we first propose evidential disentanglement learning to decompose each view into clean and adversarial parts under the guidance of the corresponding evidence, which is extracted by a pretrained evidence extractor. Then, we employ a feature recalibration module to mitigate the negative impact of adversarial perturbations and extract potentially informative features from them. Finally, to further ignore the irreparable adversarial interferences, a view-level evidential attention mechanism is designed. Extensive experiments on multi-view classification tasks with adversarial attacks show that our RDML outperforms state-of-the-art multi-view learning methods by a relatively large margin.
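The "evidence" guiding the disentanglement follows the standard evidential deep learning formulation (subjective logic over a Dirichlet distribution), in which non-negative per-class evidence yields both belief masses and an explicit uncertainty mass. As a minimal sketch of that standard mapping (not RDML's own implementation; function and variable names here are illustrative):

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """Map non-negative per-class evidence to belief masses and an
    uncertainty mass via subjective logic (standard evidential learning).

    evidence: array-like of shape (K,), K = number of classes.
    Returns (belief, uncertainty); belief.sum() + uncertainty == 1.
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    alpha = evidence + 1.0   # Dirichlet concentration parameters
    S = alpha.sum()          # total Dirichlet strength
    belief = evidence / S    # per-class belief mass
    uncertainty = K / S      # uncertainty mass (high when evidence is scarce)
    return belief, uncertainty

# Strong evidence for class 0 -> low uncertainty
b, u = dirichlet_uncertainty([90.0, 5.0, 5.0])
# No evidence at all -> maximal uncertainty (u == 1)
b0, u0 = dirichlet_uncertainty([0.0, 0.0, 0.0])
```

A view corrupted by adversarial perturbations tends to produce low evidence and hence high uncertainty, which is what makes evidence a usable signal for separating clean from adversarial content.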
Problem

Research questions and friction points this paper is trying to address.

Enhancing multi-view learning reliability against adversarial attacks
Decomposing views into clean and adversarial parts using evidence
Mitigating adversarial perturbations via feature recalibration and attention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evidential disentanglement learning decomposes each view into clean and adversarial parts
A feature recalibration module mitigates adversarial perturbations and recovers informative features
View-level evidential attention down-weights irreparable adversarial interference
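The view-level attention idea above can be sketched with a simplified stand-in: weight each view inversely to its evidential uncertainty, so that heavily perturbed (low-evidence) views contribute less to the fused prediction. This is a hedged illustration of the general principle, not the paper's exact mechanism; all names are illustrative:

```python
import numpy as np

def fuse_views_by_uncertainty(view_evidence):
    """Fuse per-view evidence, weighting each view by its confidence
    (1 - subjective-logic uncertainty). Simplified illustration only.

    view_evidence: list of (K,) non-negative evidence vectors, one per view.
    Returns (fused_evidence, weights).
    """
    views = [np.asarray(e, dtype=float) for e in view_evidence]
    K = views[0].size
    # Per-view uncertainty: u = K / (sum(evidence) + K)
    u = np.array([K / (e.sum() + K) for e in views])
    conf = 1.0 - u                    # confidence of each view
    weights = conf / conf.sum()       # normalize into attention weights
    fused = sum(w * e for w, e in zip(weights, views))
    return fused, weights

# A clean, confident view dominates a heavily perturbed (low-evidence) one
fused, w = fuse_views_by_uncertainty([[80.0, 2.0, 2.0], [1.0, 1.0, 1.0]])
```

In this toy case the first view receives the larger weight, so the fused evidence still favors class 0 despite the uninformative second view.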
Xuyang Wang
Australian National University
Generative Modeling · 3D Vision · Deep Learning
Siyuan Duan
College of Computer Science, Sichuan University
Qizhi Li
College of Computer Science, Sichuan University
Guiduo Duan
Laboratory of Intelligent Collaborative Computing, University of Electronic Science and Technology of China
Yuan Sun
College of Computer Science, Sichuan University
Dezhong Peng
Sichuan University
Multi-modal Learning · Multimedia Analysis · Neural Network