🤖 AI Summary
Traditional sequential recommendation suffers from sparse collaborative signals, while existing LLM-augmented approaches are vulnerable to hallucination-induced noise. To address these limitations, we propose GRASP, a framework that jointly leverages generative enhancement and holistic attention. Specifically, GRASP injects world knowledge via LLMs to enrich behavioral semantics, and employs a multi-level attention mechanism to dynamically integrate contextual signals from genuinely similar users and items, explicitly suppressing hallucination bias and capturing interest evolution. The framework is plug-and-play, integrating seamlessly with mainstream sequential recommendation backbones. Extensive experiments on two public benchmarks and one industrial dataset demonstrate that GRASP achieves state-of-the-art performance across all settings, validating its generality, robustness, and effectiveness.
📝 Abstract
Sequential Recommendation Systems (SRS), which predict a user's next action from their historical behavior, have become pivotal in modern society. However, traditional collaborative filtering-based sequential recommendation models often yield suboptimal performance due to the limited information carried by collaborative signals alone. With the rapid development of LLMs, a growing number of works have incorporated LLMs' world knowledge into sequential recommendation. Although these approaches achieve considerable gains, they typically assume the correctness of LLM-generated results and remain susceptible to noise induced by LLM hallucinations. To overcome these limitations, we propose GRASP (Generation Augmented Retrieval with Holistic Attention for Sequential Prediction), a flexible framework that combines generation augmented retrieval, which performs descriptive synthesis and similarity retrieval, with holistic attention enhancement, which employs multi-level attention to exploit LLMs' world knowledge even in the presence of hallucinations and to better capture users' dynamic interests. The retrieved similar users and items serve as auxiliary contextual information for the holistic attention enhancement module, effectively mitigating the noisy guidance that afflicts supervision-based methods. Comprehensive evaluations on two public benchmarks and one industrial dataset show that GRASP consistently achieves state-of-the-art performance when integrated with diverse backbones. The code is available at: https://anonymous.4open.science/r/GRASP-SRS.
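As an illustrative sketch only (not the authors' implementation; the function names `attend`, `holistic_fuse`, and the mixing weight `alpha` are hypothetical), the core idea of treating retrieved similar users/items as attention context, so that dissimilar and possibly hallucination-tainted neighbors receive low weight rather than hard supervision, could look like:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    """Scaled dot-product attention of one query over retrieved neighbors.
    Low-similarity (potentially noisy) neighbors get small weights instead of
    being trusted outright."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)   # (n_neighbors,)
    weights = softmax(scores)
    return weights @ values              # (d,)

def holistic_fuse(user_emb, sim_user_embs, sim_item_embs, alpha=0.5):
    """Two-level fusion: attend separately over similar-user and similar-item
    embeddings, then mix both contexts into the target representation."""
    user_ctx = attend(user_emb, sim_user_embs, sim_user_embs)
    item_ctx = attend(user_emb, sim_item_embs, sim_item_embs)
    return user_emb + alpha * user_ctx + (1 - alpha) * item_ctx

rng = np.random.default_rng(0)
user = rng.normal(size=8)
enhanced = holistic_fuse(user, rng.normal(size=(5, 8)), rng.normal(size=(4, 8)))
```

Because the retrieved neighbors only contribute soft, similarity-weighted context, a hallucinated retrieval result is attenuated rather than propagated as a training signal, which is the mitigation the abstract describes.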