🤖 AI Summary
Multimodal Entity Alignment (MMEA) often suffers from visual modality bias, leading to significant performance degradation when image similarity is low. To address this, we propose CDMEA, the first counterfactual debiasing framework for MMEA grounded in causal inference. CDMEA explicitly suppresses the direct causal effect of the visual modality by subtracting the natural direct effect from the total effect, thereby strengthening the synergistic indirect effect between graph structure and vision. Methodologically, it integrates causal intervention, counterfactual reasoning, and multimodal graph neural networks to achieve modality-fair fusion, moving beyond conventional feature concatenation and heuristic weighting. Evaluated on nine benchmark datasets, CDMEA consistently outperforms 14 state-of-the-art methods, with particularly pronounced gains under low-image-similarity, high-noise, and low-resource settings.
📝 Abstract
Multi-Modal Entity Alignment (MMEA), a critical information retrieval task, aims to retrieve equivalent entities from different Multi-Modal Knowledge Graphs (MMKGs). Existing studies have explored various fusion paradigms and consistency constraints to improve the alignment of equivalent entities, while overlooking that the visual modality may not always contribute positively. Empirically, entities with low-similarity images usually yield unsatisfactory performance, highlighting the risk of over-relying on visual features. We argue that the model can become biased toward the visual modality, effectively reducing entity alignment to a shortcut image-matching task. To address this, we propose a counterfactual debiasing framework for MMEA, termed CDMEA, which investigates visual modality bias from a causal perspective. Our approach leverages both the visual and graph modalities to enhance MMEA while suppressing the direct causal effect of the visual modality on model predictions. By estimating the Total Effect (TE) of both modalities and excluding the Natural Direct Effect (NDE) of the visual modality, we ensure that the model predicts based on the Total Indirect Effect (TIE), effectively utilizing both modalities and reducing visual modality bias. Extensive experiments on 9 benchmark datasets show that CDMEA outperforms 14 state-of-the-art methods, especially in low-similarity, high-noise, and low-resource data scenarios.
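The TE/NDE/TIE decomposition described above implies a simple inference-time recipe: score candidates with the full fused model, then subtract the score obtained when only the visual branch carries information. A minimal sketch of this idea, not the paper's actual implementation (all function and variable names here are hypothetical, and `alpha` is an assumed debiasing-strength hyperparameter):

```python
import numpy as np

def tie_scores(fused, visual_only, alpha=1.0):
    """Counterfactual debiasing at inference time.

    TE  = f(g, v)  - f(g*, v*)   # total effect of both modalities
    NDE = f(g*, v) - f(g*, v*)   # natural direct effect of vision alone
    TIE = TE - NDE = f(g, v) - f(g*, v)

    The counterfactual reference term f(g*, v*) cancels, so the debiased
    prediction reduces to the fused score minus the visual-only score.
    `alpha` scales how strongly the visual direct effect is removed.
    """
    return np.asarray(fused) - alpha * np.asarray(visual_only)

# Toy similarity matrices: rows = source entities, cols = candidates.
# Entity 0's true match is candidate 1, but a deceptively similar
# image inflates the fused score for candidate 0.
fused       = np.array([[0.60, 0.50],
                        [0.30, 0.80]])
visual_only = np.array([[0.55, 0.10],
                        [0.05, 0.20]])

debiased = tie_scores(fused, visual_only)
print(debiased.argmax(axis=1))  # the visual shortcut for entity 0 is removed
```

With the raw fused scores, entity 0 would be matched to candidate 0 purely on image similarity; after subtracting the visual-only branch, both entities align to their structurally supported candidates.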