🤖 AI Summary
To address the dual challenges of the semantic-collaborative modeling gap and sparse, stochastic user feedback when using large language models (LLMs) as generative reasoning recommendation models (GRRMs), this paper proposes GREAM. Methodologically, it introduces: (1) a collaborative-semantic alignment mechanism that jointly encodes interaction signals and textual semantics; (2) a reasoning curriculum activation strategy that uses synthetic chain-of-thought data to strengthen reasoning generalization; and (3) verifiable-reward-driven group policy optimization, which combines a residual-sensitive reward with grouped advantage estimation to improve training stability under sparse feedback. GREAM supports end-to-end learning and offers both direct sequence-based recommendation and interpretable, stepwise reasoning at inference time. Extensive experiments on three benchmark datasets show consistent improvements over strong baselines in both recommendation accuracy and reasoning interpretability.
📝 Abstract
Despite their remarkable reasoning capabilities across diverse domains, large language models (LLMs) face fundamental challenges when serving natively as generative reasoning recommendation models (GRRMs): the intrinsic modeling gap between textual semantics and collaborative-filtering signals, compounded by the sparsity and stochasticity of user feedback. This work explores how to build GRRMs by adapting pre-trained LLMs, yielding a unified understanding-reasoning-prediction pipeline for recommendation tasks. We propose GREAM, an end-to-end framework that integrates three components: (i) Collaborative-Semantic Alignment, which fuses heterogeneous textual evidence to construct semantically consistent, discrete item indices, together with auxiliary alignment tasks that ground linguistic representations in interaction semantics; (ii) Reasoning Curriculum Activation, which builds a synthetic dataset with explicit Chain-of-Thought supervision and a curriculum that progresses through behavioral evidence extraction, latent preference modeling, intent inference, recommendation formulation, and denoised sequence rewriting; and (iii) Sparse-Regularized Group Policy Optimization (SRPO), which stabilizes post-training via a Residual-Sensitive Verifiable Reward and Bonus-Calibrated Group Advantage Estimation, enabling end-to-end optimization from verifiable signals despite sparse successes. GREAM natively supports two complementary inference modes: Direct Sequence Recommendation for high-throughput, low-latency deployment, and Sequential Reasoning Recommendation, which first emits an interpretable reasoning chain for causal transparency. Experiments on three datasets demonstrate consistent gains over strong baselines, providing a practical path toward verifiable-RL-driven LLM recommenders.
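The abstract does not spell out the math behind Bonus-Calibrated Group Advantage Estimation, but the general pattern of group-relative advantage estimation under sparse verifiable rewards can be sketched. The snippet below is a minimal, hypothetical illustration (not the paper's actual formula): for each prompt, a group of sampled completions is scored by a verifiable reward, advantages are computed relative to the group baseline, and a calibration bonus (the `bonus` parameter, an assumption here) boosts the rare exact-hit completions so they are not washed out when successes are sparse.

```python
import numpy as np

def group_advantages(rewards, bonus=0.5, eps=1e-8):
    """Group-relative advantage estimation with a sparse-success bonus.

    rewards: array of shape (num_prompts, group_size), one verifiable
             reward per sampled completion in each prompt's group.
    bonus:   illustrative calibration added to completions that reach
             the maximum reward (1.0), a stand-in for bonus calibration.
    """
    r = np.asarray(rewards, dtype=float)
    # Subtract the per-group mean baseline and normalize by group std,
    # so advantages are relative to the other samples in the group.
    adv = (r - r.mean(axis=1, keepdims=True)) / (
        r.std(axis=1, keepdims=True) + eps
    )
    # Add a calibration bonus only to exact-hit completions, keeping
    # the rare successful rollouts influential under sparse rewards.
    return adv + bonus * (r >= 1.0)
```

Groups with no successful completion receive near-zero advantages (the reward is constant within the group), while the occasional exact hit gets both a positive relative advantage and the calibration bonus.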