🤖 AI Summary
To address the collapse of trustworthiness in multi-view learning under adversarial perturbations, this paper proposes Reliable Disentanglement Multi-view Learning (RDML). The method introduces an evidence-guided view disentanglement mechanism, novel in multi-view learning, that separates adversarial perturbations from clean content in an interpretable way and fuses views robustly via feature recalibration and view-level evidential attention. Crucially, uncertainty modeling is explicitly integrated into the disentanglement process, strengthening the model's ability to detect and suppress view-level attacks. Extensive experiments on multiple benchmark datasets demonstrate that RDML significantly outperforms state-of-the-art methods under diverse multi-view adversarial attacks while maintaining high classification accuracy and well-calibrated confidence estimates. These results position RDML as a new paradigm for trustworthy multi-view learning in security-critical applications.
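For context, the uncertainty estimates mentioned above come from evidential learning, where non-negative class evidence parameterizes a Dirichlet distribution that yields a closed-form uncertainty. The sketch below shows this standard evidential deep learning formulation; the function name and tensor shapes are illustrative, not the paper's code.

```python
import torch

def dirichlet_belief_and_uncertainty(evidence: torch.Tensor):
    """Standard evidential-learning quantities (illustrative sketch).

    evidence: non-negative tensor of shape (batch, num_classes),
    typically produced by a softplus/ReLU head on a backbone network.
    """
    num_classes = evidence.shape[-1]
    alpha = evidence + 1.0                      # Dirichlet concentration: alpha_k = e_k + 1
    strength = alpha.sum(dim=-1, keepdim=True)  # Dirichlet strength: S = sum_k alpha_k
    belief = evidence / strength                # belief mass per class: b_k = e_k / S
    uncertainty = num_classes / strength        # overall uncertainty: u = K / S
    return belief, uncertainty                  # by construction, sum_k b_k + u = 1
```

A sample with little accumulated evidence has small Dirichlet strength and hence high uncertainty, which is exactly the signal a trusted multi-view model can use to discount an attacked view.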
📝 Abstract
Recently, trustworthy multi-view learning has attracted extensive attention because evidential learning can provide reliable uncertainty estimation to enhance the credibility of multi-view predictions. Existing trusted multi-view learning methods implicitly assume that multi-view data is secure. In practice, however, in safety-sensitive applications such as autonomous driving and security monitoring, multi-view data often faces threats from adversarial perturbations that deceive or disrupt multi-view learning models. This inevitably leads to the adversarial unreliability problem (AUP) in trusted multi-view learning. To overcome this problem, we propose a novel multi-view learning framework, namely Reliable Disentanglement Multi-view Learning (RDML). Specifically, we first propose evidential disentanglement learning to decompose each view into a clean part and an adversarial part, guided by the corresponding evidence extracted by a pretrained evidence extractor. Then, we employ a feature recalibration module to mitigate the negative impact of adversarial perturbations and recover potentially informative features from them. Finally, to further discount irreparable adversarial interference, a view-level evidential attention mechanism is designed. Extensive experiments on multi-view classification tasks under adversarial attacks show that RDML outperforms state-of-the-art multi-view learning methods by a considerable margin.
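Pulling the three stages together, here is a minimal PyTorch sketch of the pipeline the abstract describes: evidence-guided disentanglement, feature recalibration, and view-level evidential attention. Every architectural choice below (the linear modules, the uncertainty-based gating, the residual recalibration) is an assumption made for illustration only; the paper's actual modules and losses may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RDMLSketch(nn.Module):
    """Illustrative pipeline only; not the authors' implementation.

    Stage 1: per-view evidence (a stand-in for the pretrained evidence
             extractor) guides a split into clean and adversarial parts.
    Stage 2: a recalibration module recovers usable signal from the
             adversarial part.
    Stage 3: views are fused with attention weights derived from
             evidential uncertainty, downweighting attacked views.
    """

    def __init__(self, view_dims, num_classes, hidden=128):
        super().__init__()
        self.evidence_nets = nn.ModuleList([nn.Linear(d, num_classes) for d in view_dims])
        self.disentanglers = nn.ModuleList([nn.Linear(d, 2 * hidden) for d in view_dims])
        self.recalibrator = nn.Linear(hidden, hidden)
        self.classifier = nn.Linear(hidden, num_classes)
        self.num_classes = num_classes

    def forward(self, views):  # views: list of (batch, d_v) tensors
        features, confidences = [], []
        for x, ev_net, dis in zip(views, self.evidence_nets, self.disentanglers):
            evidence = F.softplus(ev_net(x))                    # non-negative evidence e_k
            u = self.num_classes / (evidence + 1.0).sum(-1, keepdim=True)  # u = K / S
            clean, adv = dis(x).chunk(2, dim=-1)                # two candidate subspaces
            clean = (1.0 - u) * clean                           # trust the clean part when u is low
            repaired = clean + torch.relu(self.recalibrator(u * adv))  # salvage the rest
            features.append(repaired)
            confidences.append(1.0 - u)                         # per-view confidence
        w = torch.softmax(torch.cat(confidences, dim=-1), dim=-1)  # view-level attention
        fused = sum(w[:, i : i + 1] * f for i, f in enumerate(features))
        return self.classifier(fused)
```

For example, `RDMLSketch([64, 32], num_classes=10)([x1, x2])` would fuse a 64-dimensional and a 32-dimensional view into 10-class logits, with heavily perturbed (high-uncertainty) views contributing less to the fused representation.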