Lost in Sequence: Do Large Language Models Understand Sequential Recommendation?

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a fundamental limitation of existing LLM4Rec models: they inadequately capture the sequential dynamics in users' item interaction sequences. To address this, the authors propose LLM-SRec, a lightweight and efficient framework that avoids full-parameter fine-tuning of large language models (LLMs). Instead, it distills user representations from a pre-trained collaborative filtering–based sequential recommender (CF-SRec) into the LLM, training only a lightweight MLP adapter to inject sequence-aware user representations into the LLM's semantic space. LLM-SRec is among the first approaches to systematically diagnose and rectify LLMs' sequential-understanding deficit in recommendation, and it achieves state-of-the-art performance across multiple benchmarks while significantly reducing training and deployment costs compared to fine-tuning-based alternatives.

📝 Abstract
Large Language Models (LLMs) have recently emerged as promising tools for recommendation thanks to their advanced textual understanding ability and context-awareness. Despite the current practice of training and evaluating LLM-based recommendation (LLM4Rec) models under a sequential recommendation scenario, we found that whether these models understand the sequential information inherent in users' item interaction sequences has been largely overlooked. In this paper, we first demonstrate through a series of experiments that existing LLM4Rec models do not fully capture sequential information both during training and inference. Then, we propose a simple yet effective LLM-based sequential recommender, called LLM-SRec, a method that enhances the integration of sequential information into LLMs by distilling the user representations extracted from a pre-trained CF-SRec model into LLMs. Our extensive experiments show that LLM-SRec enhances LLMs' ability to understand users' item interaction sequences, ultimately leading to improved recommendation performance. Furthermore, unlike existing LLM4Rec models that require fine-tuning of LLMs, LLM-SRec achieves state-of-the-art performance by training only a few lightweight MLPs, highlighting its practicality in real-world applications. Our code is available at https://github.com/Sein-Kim/LLM-SRec.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' sequential understanding.
Improving sequential information integration.
Enhancing recommendation performance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-SRec enhances sequential information integration
Distills user representations from CF-SRec model
Trains lightweight MLPs for state-of-the-art performance
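The distillation idea above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the dimensions, the two-layer ReLU adapter, and the MSE objective are all assumptions chosen for clarity, standing in for frozen LLM and CF-SRec model outputs.

```python
import numpy as np

# Hypothetical sizes; the paper's actual dimensions are not specified here.
D_LLM, D_CF, D_HID = 64, 32, 48  # LLM hidden size, CF-SRec size, adapter width

rng = np.random.default_rng(0)

# Stand-ins for frozen model outputs: an LLM-derived user vector and a
# user vector from a pre-trained CF-SRec sequential recommender.
h_llm = rng.normal(size=D_LLM)
z_cf = rng.normal(size=D_CF)

# The only trainable part in this sketch: a lightweight MLP adapter
# projecting the LLM representation into the CF-SRec space.
W1 = rng.normal(size=(D_LLM, D_HID)) * 0.1
W2 = rng.normal(size=(D_HID, D_CF)) * 0.1

def adapter(h):
    """Two-layer MLP with ReLU, mapping LLM space -> CF-SRec space."""
    return np.maximum(h @ W1, 0.0) @ W2

def distill_loss(h, z):
    """MSE distillation loss pulling the adapted LLM user vector
    toward the sequence-aware CF-SRec user vector."""
    return float(np.mean((adapter(h) - z) ** 2))

loss = distill_loss(h_llm, z_cf)
```

Minimizing this loss with respect to the adapter weights alone (the LLM and CF-SRec stay frozen) is what makes the approach cheap relative to fine-tuning the LLM.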
Sein Kim
KAIST
Recommender Systems, Personalization, Large Language Models
Hongseok Kang
KAIST, Daejeon, Republic of Korea
Kibum Kim
KAIST, Daejeon, Republic of Korea
Jiwan Kim
KAIST, Daejeon, Republic of Korea
Donghyun Kim
NAVER Corporation, Seongnam, Republic of Korea
Minchul Yang
NAVER Corporation, Seongnam, Republic of Korea
Kwangjin Oh
NAVER Corporation, Seongnam, Republic of Korea
Julian McAuley
Professor, UC San Diego
Recommender Systems, Natural Language Processing, Personalization, Computer Music
Chanyoung Park
Associate Professor, KAIST
Artificial intelligence, Graph data mining, Recommender systems, AI for Science