Reasoning-guided Collaborative Filtering with Language Models for Explainable Recommendation

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing large language model (LLM)-based recommender systems, which often neglect collaborative filtering signals and decouple recommendation from explanation, resulting in high memory overhead and weak interpretability. To overcome these issues, the authors propose RGCF-XRec, a framework that integrates collaborative knowledge into a lightweight LLaMA-3.2-3B model via structured reasoning-guided prompts, enabling one-step interpretable sequential recommendation. Key innovations include a context-enhanced collaborative prompting mechanism, a four-dimensional (coherence, completeness, relevance, consistency) explanation quality evaluator to filter noisy reasoning paths, and a unified representation network that fuses collaborative and semantic signals. Experiments on multiple Amazon datasets demonstrate significant performance gains, with up to 7.38% improvement in HR@10 and 8.02% in ROUGE-L, along with notable enhancements of 14.5% and 23.16% in cold-start and zero-shot scenarios, respectively.
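The summary's "unified representation network that fuses collaborative and semantic signals" can be sketched as a simple learned fusion: project each signal into a shared space, then combine them into one vector used to condition the prompt. The dimensions, projection matrices, and the `tanh` fusion below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical sketch: fuse a collaborative-filtering (CF) embedding with a
# semantic (text) embedding into a single conditioning vector. All shapes
# and the fusion operator are placeholders for the paper's learned network.
rng = np.random.default_rng(0)
d_cf, d_sem, d_out = 64, 128, 96

W_cf = rng.normal(size=(d_cf, d_out)) * 0.01    # projection for the CF signal
W_sem = rng.normal(size=(d_sem, d_out)) * 0.01  # projection for the semantic signal

def fuse(cf_vec, sem_vec):
    """Project both signals into a shared space and combine them."""
    return np.tanh(cf_vec @ W_cf + sem_vec @ W_sem)

unified = fuse(rng.normal(size=d_cf), rng.normal(size=d_sem))
print(unified.shape)  # (96,)
```

In practice the projections would be trained jointly with the recommendation objective, and the fused vector would be injected into the structured prompt that conditions the LLM.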

📝 Abstract
Large Language Models (LLMs) exhibit potential for explainable recommendation systems but overlook collaborative signals, while prevailing methods treat recommendation and explanation as separate tasks, incurring a large memory footprint. We present RGCF-XRec, a hybrid framework that introduces reasoning-guided collaborative filtering (CF) knowledge into a language model to deliver explainable sequential recommendations in a single step. Theoretical grounding and empirical findings reveal that RGCF-XRec offers three key merits over leading CF-aware LLM-based methods: (1) reasoning-guided augmentation of CF knowledge through contextual prompting to discover latent preferences and interpretable reasoning paths; (2) an efficient scoring mechanism over four dimensions (coherence, completeness, relevance, and consistency) to filter noisy CF reasoning traces and retain high-quality explanations; (3) a unified representation learning network that encodes collaborative and semantic signals, producing a structured prompt that conditions the LLM for explainable sequential recommendation. RGCF-XRec demonstrates consistent improvements across three Amazon datasets (Sports, Toys, and Beauty) comprising 642,503 user-item interactions. It improves HR@10 by 7.38% on Sports and 4.59% on Toys, and ROUGE-L by 8.02% and 3.49%, respectively. It narrows the cold-start/warm-start performance gap, achieving overall gains of 14.5% in cold-start and 11.9% in warm-start scenarios, and boosts zero-shot HR@5 by 18.54% on Beauty and 23.16% on Toys, highlighting effective generalization and robustness. Moreover, RGCF-XRec trains efficiently on a lightweight LLaMA-3.2-3B backbone, ensuring scalability for real-world applications.
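The abstract's second contribution, a four-dimensional quality score used to filter noisy CF reasoning traces, can be sketched as follows. The dimension names (coherence, completeness, relevance, consistency) come from the paper; the unweighted-mean aggregation, the threshold value, and all function names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ExplanationScore:
    """Hypothetical per-explanation quality score along the paper's four dimensions."""
    coherence: float      # logical flow of the reasoning path
    completeness: float   # coverage of the user's interaction history
    relevance: float      # alignment with the recommended item
    consistency: float    # agreement with collaborative-filtering signals

    def overall(self) -> float:
        # Simple unweighted mean; the paper's actual aggregation may differ.
        return (self.coherence + self.completeness
                + self.relevance + self.consistency) / 4.0

def filter_reasoning_paths(paths, score_fn, threshold=0.7):
    """Keep only reasoning paths whose aggregate quality clears the threshold."""
    return [p for p in paths if score_fn(p).overall() >= threshold]

# Usage: score candidate explanations and discard the noisy one.
scores = {
    "path_a": ExplanationScore(0.9, 0.8, 0.85, 0.9),   # overall 0.8625 -> kept
    "path_b": ExplanationScore(0.4, 0.5, 0.3, 0.6),    # overall 0.45   -> dropped
}
kept = filter_reasoning_paths(list(scores), lambda p: scores[p])
print(kept)  # ['path_a']
```

Only the retained high-quality explanations would then feed the structured prompt used for the one-step recommendation-plus-explanation generation.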
Problem

Research questions and friction points this paper is trying to address.

explainable recommendation
collaborative filtering
large language models
sequential recommendation
cold start
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reasoning-guided Collaborative Filtering
Explainable Recommendation
Large Language Models
Unified Representation Learning
Contextual Prompting