Large Language Model Enhanced Graph Invariant Contrastive Learning for Out-of-Distribution Recommendation

📅 2025-11-22
🤖 AI Summary
To counter the loss of generalization that graph-based recommender systems suffer under out-of-distribution (OOD) shifts, which stems from learning spurious, environment-dependent correlations, this paper proposes a framework that couples large language models (LLMs) with causal graph learning. It injects the LLMs' world knowledge and logical reasoning into causal discovery over user-item interaction graphs, enabling unstable associations to be identified and corrected dynamically. A data-driven invariant-learning model supplies causal confidence scores that guide pruning and completion of the graph structure, and a causality-guided contrastive learning objective then optimizes representation learning. Evaluated on four public benchmarks, the method significantly improves both the robustness and the accuracy of OOD recommendation while preserving interpretability and generalization stability.

📝 Abstract
Out-of-distribution (OOD) generalization has emerged as a significant challenge in graph recommender systems. Traditional graph neural network algorithms often fail because they learn spurious environmental correlations instead of stable causal relationships, leading to substantial performance degradation under distribution shifts. While recent advancements in Large Language Models (LLMs) offer a promising avenue due to their vast world knowledge and reasoning capabilities, effectively integrating this knowledge with the fine-grained topology of specific graphs to solve the OOD problem remains a significant challenge. To address these issues, we propose Invariant Graph Contrastive Learning with LLMs for Out-of-Distribution Recommendation (InvGCLLM), an innovative causal learning framework that synergistically integrates the strengths of data-driven models and knowledge-driven LLMs. Our framework first employs a data-driven invariant learning model to generate causal confidence scores for each user-item interaction. These scores then guide an LLM to perform targeted graph refinement, leveraging its world knowledge to prune spurious connections and augment missing causal links. Finally, the structurally purified graphs provide robust supervision for a causality-guided contrastive learning objective, enabling the model to learn representations that are resilient to spurious correlations. Experiments conducted on four public datasets demonstrate that InvGCLLM achieves significant improvements in out-of-distribution recommendation, consistently outperforming state-of-the-art baselines.
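The refinement loop the abstract describes, scoring each user-item interaction and then pruning or deferring edges, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the thresholds and the three-way keep/defer/drop split are assumptions, and the causal confidence scores are taken as given inputs from the upstream invariant-learning model.

```python
# Hypothetical sketch of confidence-guided graph refinement (not the paper's code).
# Each user-item edge carries a causal confidence score in [0, 1] produced by an
# upstream invariant-learning model; the thresholds below are illustrative.

def refine_edges(edges, keep_thresh=0.7, drop_thresh=0.3):
    """Partition edges into kept, deferred, and dropped sets.

    edges: list of (user, item, confidence) tuples.
    Returns (kept, deferred, dropped) as lists of (user, item) pairs;
    deferred edges are the ambiguous ones that would be sent to the LLM.
    """
    kept, deferred, dropped = [], [], []
    for user, item, conf in edges:
        if conf >= keep_thresh:          # confidently causal: keep
            kept.append((user, item))
        elif conf <= drop_thresh:        # confidently spurious: prune
            dropped.append((user, item))
        else:                            # ambiguous: defer to LLM judgment
            deferred.append((user, item))
    return kept, deferred, dropped


edges = [("u1", "i1", 0.95), ("u1", "i2", 0.10), ("u2", "i3", 0.50)]
kept, deferred, dropped = refine_edges(edges)
# kept == [("u1", "i1")], deferred == [("u2", "i3")], dropped == [("u1", "i2")]
```

In the paper's full pipeline, the deferred set is where the LLM's world knowledge would be consulted to decide whether an ambiguous link is causal or spurious, and missing causal links could likewise be proposed for augmentation.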
Problem

Research questions and friction points this paper is trying to address.

Addressing performance degradation in recommender systems under distribution shifts
Integrating LLM knowledge with graph topology to identify causal relationships
Eliminating spurious correlations while preserving genuine user-item interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-guided graph refinement for causal link enhancement
Data-driven invariant learning generates causal confidence scores
Causality-guided contrastive learning on purified graph structures
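The contrastive component presumably contrasts node representations from the purified graph against an alternative view, pulling matched pairs together and pushing others apart. A minimal InfoNCE-style loss, assuming cosine similarity, in-batch negatives, and a temperature hyperparameter (the paper's exact objective, any causal-score weighting, and the encoder are unspecified here), might look like:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.2):
    """InfoNCE loss: each anchor's positive is the same-index row of
    `positives`; all other rows act as in-batch negatives.
    Rows are L2-normalized so the dot products are cosine similarities."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # cross-entropy, diagonal targets
```

With well-aligned views the diagonal dominates and the loss is small; mismatched views drive it toward log N, which is what makes the objective push representations of the same node from different (purified) views together.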
Jiahao Liang
School of Computer Science and Engineering, South China University of Technology, China
Haoran Yang
Central South University
Graph Neural Networks · Data Mining · Recommendation Systems
Xiangyu Zhao
City University of Hong Kong, Department of Data Science, Hong Kong
Zhiwen Yu
School of Computer Science and Engineering, South China University of Technology and the Pengcheng Lab, China
Mianjie Li
School of Electronics and Information, Guangdong Polytechnic Normal University, Guangzhou, 510665, China
Chuan Shi
Beijing University of Posts and Telecommunications
data mining · machine learning · social network analysis
Kaixiang Yang
School of Computer Science and Engineering, South China University of Technology, China