🤖 AI Summary
This work addresses the challenge of redundant cross-modal representations in multimodal recommendation, where excessive overlap between modalities (particularly visual and textual) hinders the effective exploitation of complementary information and can even degrade performance when additional modalities are introduced. To this end, the authors propose CLEAR, a method that explicitly models the cross-modal covariance between visual and textual embeddings to identify and suppress redundant shared subspaces. By projecting representations onto the null space of these redundant components, CLEAR preserves modality-specific information while enabling de-redundant fusion. Notably, CLEAR is plug-and-play and requires no architectural modifications to existing models. Extensive experiments on three public datasets show that CLEAR consistently and significantly improves a range of state-of-the-art multimodal recommendation models, confirming its effectiveness and generalizability.
📝 Abstract
Multimodal recommendation has emerged as an effective paradigm for enhancing collaborative filtering by incorporating heterogeneous content modalities. Existing multimodal recommenders predominantly focus on reinforcing cross-modal consistency to facilitate multimodal fusion. However, we observe that multimodal representations often exhibit substantial cross-modal redundancy, where dominant shared components overlap across modalities. Such redundancy can limit the effective utilization of complementary information, explaining why incorporating additional modalities does not always yield performance improvements. In this work, we propose CLEAR, a lightweight and plug-and-play cross-modal de-redundancy approach for multimodal recommendation. Rather than enforcing stronger cross-modal alignment, CLEAR explicitly characterizes the redundant shared subspace across modalities by modeling cross-modal covariance between visual and textual representations. By identifying dominant shared directions via singular value decomposition and projecting multimodal features onto the complementary null space, CLEAR reshapes the multimodal representation space by suppressing redundant cross-modal components while preserving modality-specific information. This subspace-level projection implicitly regulates representation learning dynamics, preventing the model from repeatedly amplifying redundant shared semantics during training. Notably, CLEAR can be seamlessly integrated into existing multimodal recommenders without modifying their architectures or training objectives. Extensive experiments on three public benchmark datasets demonstrate that explicitly reducing cross-modal redundancy consistently improves recommendation performance across a wide range of multimodal recommendation models.
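The covariance-SVD-projection pipeline described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under assumptions not stated above: the function name `clear_projection`, the mean-centering step, and the choice of `k` dominant shared directions are all hypothetical, not details taken from the paper.

```python
import numpy as np

def clear_projection(V, T, k=4):
    """Sketch of a CLEAR-style de-redundancy step (hypothetical details).

    V, T : (n_items, d) visual and textual item embeddings.
    k    : number of dominant shared directions to suppress (assumed hyperparameter).
    Returns de-redundant embeddings (V', T').
    """
    n = V.shape[0]
    # Cross-modal covariance between (centered) visual and textual features.
    Vc = V - V.mean(axis=0, keepdims=True)
    Tc = T - T.mean(axis=0, keepdims=True)
    C = Vc.T @ Tc / n                      # (d, d)

    # Identify dominant shared directions via SVD of the covariance.
    U, S, Wt = np.linalg.svd(C)            # C = U @ diag(S) @ Wt
    Uk = U[:, :k]                          # top-k left singular vectors (visual side)
    Wk = Wt[:k, :].T                       # top-k right singular vectors (textual side)

    # Project each modality onto the null space of its redundant subspace:
    # P = I - B B^T removes the component spanned by the orthonormal basis B.
    P_v = np.eye(V.shape[1]) - Uk @ Uk.T
    P_t = np.eye(T.shape[1]) - Wk @ Wk.T
    return V @ P_v, T @ P_t
```

After the projection, the visual embeddings carry no component along the top-k left singular vectors of the cross-modal covariance (and symmetrically for text), which is one concrete reading of "suppressing redundant cross-modal components while preserving modality-specific information."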