Refining Contrastive Learning and Homography Relations for Multi-Modal Recommendation

📅 2025-08-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address data sparsity in multi-modal recommendation (which leads to noisy representations, insufficient disentanglement of modality-specific features, and inadequate modeling of homogeneous graph relations), this paper proposes REARM, a framework integrating fine-grained contrastive learning with collaborative multi-homogeneous-graph modeling. Methodologically, REARM introduces a meta-network-guided contrastive learning framework with explicit orthogonality constraints to disentangle modal-shared and modal-unique features; it further constructs user-interest and item co-occurrence homogeneous graphs to jointly capture deep structural semantics in user-item interactions. Technically, REARM unifies graph neural networks, multi-graph fusion, and interpretable contrastive mechanisms. Extensive experiments on three real-world datasets demonstrate significant improvements over state-of-the-art methods. Visualization analyses confirm effective feature disentanglement, and the source code is publicly available.

📝 Abstract
Multi-modal recommender systems focus on utilizing rich modal information (i.e., images and textual descriptions) of items to improve recommendation performance. Current methods have achieved remarkable success thanks to the powerful structure-modeling capability of graph neural networks. However, these methods are often hindered by sparse data in real-world scenarios. Although contrastive learning and homography (i.e., homogeneous graphs) are employed to address the data-sparsity challenge, existing methods still suffer from two main limitations: 1) simple multi-modal feature contrasts fail to produce effective representations, causing noisy modal-shared features and loss of valuable information in modal-unique features; 2) the lack of exploration of the homograph relations between user interests and item co-occurrence results in incomplete mining of user-item interplay. To address these limitations, we propose a novel framework for REfining multi-modAl contRastive learning and hoMography relations (REARM). Specifically, we complement multi-modal contrastive learning by employing meta-network and orthogonal-constraint strategies, which filter out noise in modal-shared features and retain recommendation-relevant information in modal-unique features. To mine homogeneous relationships effectively, we integrate a newly constructed user interest graph and an item co-occurrence graph with the existing user co-occurrence and item semantic graphs for graph learning. Extensive experiments on three real-world datasets demonstrate the superiority of REARM over various state-of-the-art baselines. Our visualization further shows the improvement REARM makes in distinguishing between modal-shared and modal-unique features. Code is available at https://github.com/MrShouxingMa/REARM.
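The orthogonal-constraint idea from the abstract can be sketched as follows. This is a minimal numpy illustration, not the paper's exact formulation: the fixed linear map standing in for the meta-network, the residual split into shared/unique parts, and the cosine-based penalty are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def orthogonality_penalty(shared, unique):
    """Sum of squared per-item cosine similarities between modal-shared
    and modal-unique feature rows; zero when every pair is orthogonal."""
    # L2-normalize rows so the penalty measures direction, not scale.
    s = shared / (np.linalg.norm(shared, axis=1, keepdims=True) + 1e-12)
    u = unique / (np.linalg.norm(unique, axis=1, keepdims=True) + 1e-12)
    cos = np.sum(s * u, axis=1)     # per-item cosine similarity
    return float(np.sum(cos ** 2))

# Toy visual-modality features for 4 items in an 8-d space.
visual = rng.normal(size=(4, 8))

# Hypothetical stand-in for the meta-network: a fixed linear projection
# producing the modal-shared part; the residual is treated as modal-unique.
W_shared = rng.normal(size=(8, 8)) * 0.1
shared = visual @ W_shared
unique = visual - shared

penalty = orthogonality_penalty(shared, unique)
print("orthogonality penalty:", penalty)
```

Minimizing such a penalty alongside the contrastive objective pushes the shared and unique subspaces apart, which is the disentanglement effect the abstract's visualization claims.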
Problem

Research questions and friction points this paper is trying to address.

Addressing data sparsity in multi-modal recommender systems
Improving contrastive learning for effective multi-modal representations
Enhancing homography relations mining for user-item interplay
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-network and orthogonal constraints refine contrastive learning
Integrates user interest and item co-occurrence graphs for homography
Filters modal-shared noise while preserving modal-unique features
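The item co-occurrence homogeneous graph mentioned above can be sketched from a binary user-item interaction matrix. A minimal numpy sketch; the thresholding rule (`min_count`) is a hypothetical sparsification choice, not necessarily the paper's construction.

```python
import numpy as np

# Binary user-item interaction matrix R (3 users x 4 items); toy data.
R = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
], dtype=float)

# Co-occurrence counts: C[i, j] = number of users who interacted
# with both item i and item j.
C = R.T @ R
np.fill_diagonal(C, 0)  # drop self-loops

# Keep only item pairs that co-occur at least `min_count` times
# (hypothetical rule) to get the homogeneous adjacency matrix.
min_count = 2
A_item = (C >= min_count).astype(float)
print(A_item)
```

A user co-occurrence graph follows symmetrically from `R @ R.T`; the framework then fuses these homogeneous graphs with the user-interest and item semantic graphs during graph learning.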
Shouxing Ma
University of Technology Sydney
Yawen Zeng
Hunan University
Shiqing Wu
Assistant Professor, City University of Macau
Multi-agent systems · Recommender systems · Social influence · Reinforcement Learning
Guandong Xu
The Education University of Hong Kong