Generative Reasoning Recommendation via LLMs

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the twin challenges of the semantic–collaborative modeling gap and sparse, stochastic user feedback when employing large language models (LLMs) as generative reasoning recommendation models (GRRMs), this paper proposes GREAM. Methodologically, it introduces: (1) collaborative-semantic alignment, which jointly encodes interaction signals and textual semantics; (2) reasoning curriculum activation, which uses synthetic Chain-of-Thought supervision to strengthen reasoning generalization; and (3) sparse-regularized group policy optimization, which combines a residual-sensitive verifiable reward with bonus-calibrated group advantage estimation to stabilize training under sparse feedback. GREAM supports end-to-end learning while enabling both direct sequence-based recommendation and interpretable, stepwise reasoning. Extensive experiments on three benchmark datasets demonstrate consistent improvements over strong baselines in both recommendation accuracy and reasoning interpretability.

📝 Abstract
Despite their remarkable reasoning capabilities across diverse domains, large language models (LLMs) face fundamental challenges in natively functioning as generative reasoning recommendation models (GRRMs), where the intrinsic modeling gap between textual semantics and collaborative filtering signals, combined with the sparsity and stochasticity of user feedback, presents significant obstacles. This work explores how to build GRRMs by adapting pre-trained LLMs, which achieves a unified understanding-reasoning-prediction manner for recommendation tasks. We propose GREAM, an end-to-end framework that integrates three components: (i) Collaborative-Semantic Alignment, which fuses heterogeneous textual evidence to construct semantically consistent, discrete item indices and auxiliary alignment tasks that ground linguistic representations in interaction semantics; (ii) Reasoning Curriculum Activation, which builds a synthetic dataset with explicit Chain-of-Thought supervision and a curriculum that progresses through behavioral evidence extraction, latent preference modeling, intent inference, recommendation formulation, and denoised sequence rewriting; and (iii) Sparse-Regularized Group Policy Optimization (SRPO), which stabilizes post-training via Residual-Sensitive Verifiable Reward and Bonus-Calibrated Group Advantage Estimation, enabling end-to-end optimization under verifiable signals despite sparse successes. GREAM natively supports two complementary inference modes: Direct Sequence Recommendation for high-throughput, low-latency deployment, and Sequential Reasoning Recommendation that first emits an interpretable reasoning chain for causal transparency. Experiments on three datasets demonstrate consistent gains over strong baselines, providing a practical path toward verifiable-RL-driven LLM recommenders.
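The abstract describes SRPO as stabilizing post-training through grouped advantage estimation over verifiable rewards. The sketch below illustrates the general GRPO-style normalization that such methods build on; the `bonus` argument is only a hypothetical stand-in for the paper's "Bonus-Calibrated" correction, whose exact form is not given on this page, and the function name is illustrative, not from the paper.

```python
import statistics

def group_advantages(rewards, bonus=0.0, eps=1e-6):
    """GRPO-style grouped advantage estimation: each sampled
    completion's reward is normalized by its group's mean and
    standard deviation. `bonus` is a placeholder for a calibration
    term added before normalization (assumed form, not the paper's).
    """
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    # eps guards against all-identical rewards (zero variance),
    # the common case under sparse success signals.
    return [(r + bonus - mean) / (std + eps) for r in rewards]

# Under sparse feedback most completions in a group earn reward 0,
# so the rare success receives a large positive advantage:
advs = group_advantages([0.0, 0.0, 1.0, 0.0])
```

With `bonus=0`, the advantages are zero-mean within each group, so rare successes are pushed up and the rest pushed down; a degenerate all-zero group yields all-zero advantages rather than a division error.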
Problem

Research questions and friction points this paper is trying to address.

Adapting LLMs for generative reasoning recommendation tasks
Bridging semantic-textual and collaborative filtering modeling gaps
Addressing user feedback sparsity with verifiable optimization methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collaborative-Semantic Alignment fuses heterogeneous textual evidence
Reasoning Curriculum Activation builds synthetic Chain-of-Thought dataset
Sparse-Regularized Group Policy Optimization stabilizes post-training rewards
Minjie Hong
Zhejiang University
Multi-modal Learning · LLM · Reinforcement Learning · Generative Retrieval · Recommendation

Zetong Zhou
Shanghai Jiao Tong University, Shanghai, China

Zirun Guo
Zhejiang University, Hangzhou, Zhejiang, China

Ziang Zhang
Zhejiang University, Hangzhou, Zhejiang, China

Ruofan Hu
Zhejiang University

Weinan Gan
Huawei Noah's Ark Lab
Large Language Model · Generative IR · Agent

Jieming Zhu
Huawei Noah's Ark Lab, Shenzhen, Guangdong, China

Zhou Zhao
Zhejiang University
Machine Learning · Data Mining · Multimedia Computing