LLM4DSR: Leveraging Large Language Model for Denoising Sequential Recommendation

📅 2024-08-15
🏛️ arXiv.org
📈 Citations: 9
Influential: 2
🤖 AI Summary
To address performance degradation in sequential recommendation caused by noisy user interactions in behavioral sequences, this paper proposes an unsupervised, large language model (LLM)-driven sequence denoising framework. Methodologically, it introduces (1) a novel self-supervised fine-tuning task to activate the LLM’s semantic capability for implicit noise identification; (2) an uncertainty estimation module to effectively mitigate LLM hallucination; and (3) a model-agnostic denoising result reuse mechanism compatible with arbitrary downstream sequential recommenders. Extensive experiments across multiple benchmark datasets demonstrate consistent improvements: mainstream sequential recommendation models achieve average gains of 3.2%–7.8% in Recall@10 and NDCG@10. The framework establishes a new paradigm for noise-robust sequential recommendation, advancing both interpretability and reliability in real-world deployment scenarios.
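The self-supervised fine-tuning task described above can be sketched as follows: inject random items into a clean interaction sequence so the injected items act as pseudo-noise, and pair the corrupted sequence with the injected items as the supervision target. This is a minimal illustration under assumed conventions (function name, prompt wording, and single-noise default are hypothetical, not the paper's actual prompt format):

```python
import random

def make_denoising_example(sequence, item_pool, num_noise=1, rng=None):
    """Build one self-supervised training example (hypothetical sketch):
    inject `num_noise` random items into a clean sequence; the injected
    items are the pseudo-noise the LLM must learn to identify."""
    rng = rng or random.Random()
    corrupted = list(sequence)
    injected = []
    for _ in range(num_noise):
        # Pick a noise item the user has not interacted with.
        noise = rng.choice([i for i in item_pool if i not in corrupted])
        corrupted.insert(rng.randrange(len(corrupted) + 1), noise)
        injected.append(noise)
    prompt = (
        "The user interacted with: " + ", ".join(corrupted)
        + ". Identify the items that do not fit the sequence"
        + " and suggest replacements."
    )
    target = ", ".join(injected)  # supervision: the injected noise items
    return prompt, target
```

Because the noise is injected synthetically, no explicit noise labels are needed, which matches the unsupervised setting the summary describes.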

📝 Abstract
Sequential Recommenders generate recommendations based on users' historical interaction sequences. However, in practice, these collected sequences are often contaminated by noisy interactions, which significantly impairs recommendation performance. Accurately identifying such noisy interactions without additional information is particularly challenging due to the absence of explicit supervisory signals indicating noise. Large Language Models (LLMs), equipped with extensive open knowledge and semantic reasoning abilities, offer a promising avenue to bridge this information gap. However, employing LLMs for denoising in sequential recommendation presents notable challenges: 1) Direct application of pretrained LLMs may not be competent for the denoising task, frequently generating nonsensical responses; 2) Even after fine-tuning, the reliability of LLM outputs remains questionable, especially given the complexity of the denoising task and the inherent hallucinatory issue of LLMs. To tackle these challenges, we propose LLM4DSR, a tailored approach for denoising sequential recommendation using LLMs. We constructed a self-supervised fine-tuning task to activate LLMs' capabilities to identify noisy items and suggest replacements. Furthermore, we developed an uncertainty estimation module that ensures only high-confidence responses are utilized for sequence corrections. Remarkably, LLM4DSR is model-agnostic, allowing corrected sequences to be flexibly applied across various recommendation models. Extensive experiments validate the superiority of LLM4DSR over existing methods.
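The uncertainty estimation module in the abstract amounts to gating the LLM's suggested corrections by confidence, so that low-confidence (potentially hallucinated) replacements are discarded. A minimal sketch, assuming a hypothetical mapping from flagged noisy items to `(replacement, confidence)` pairs and an assumed threshold value:

```python
def correct_sequence(sequence, flagged, threshold=0.8):
    """Apply LLM-suggested replacements only above a confidence
    threshold; low-confidence corrections are ignored and the
    original item is kept. `flagged` maps a noisy item to
    (replacement, confidence). Hypothetical sketch."""
    corrected = []
    for item in sequence:
        if item in flagged:
            replacement, confidence = flagged[item]
            corrected.append(replacement if confidence >= threshold else item)
        else:
            corrected.append(item)
    return corrected
```

Because the output is just a corrected item sequence, it can be fed to any downstream sequential recommender, which is the model-agnostic property the abstract claims.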
Problem

Research questions and friction points this paper is trying to address.

Identifying noisy interactions in sequential recommendations without explicit signals
Overcoming LLMs' limitations in generating reliable denoising outputs
Enhancing recommendation performance by correcting contaminated user interaction sequences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised fine-tuning for LLM denoising
Uncertainty estimation for reliable corrections
Model-agnostic sequence correction approach
🔎 Similar Papers
2024-02-02 · ACM Transactions on Recommender Systems · Citations: 1
👥 Authors
Bohao Wang
College of Information Science & Electronic Engineering, Zhejiang University
Wireless AI · Communication · 6G · Digital Twin · Ray Tracing
Feng Liu
OPPO Co Ltd, Shenzhen, China
Jiawei Chen
Zhejiang University, Hangzhou, China
Yudi Wu
Zhejiang University, Hangzhou, China
Xingyu Lou
OPPO Co Ltd, Shenzhen, China
Jun Wang
OPPO Co Ltd, Shenzhen, China
Yan Feng
Hangzhou Institute of Advanced Study, UCAS
Raman lasers · fiber lasers · nonlinear photonics · laser guide star · optical magnetometry
Chun Chen
Zhejiang University, Hangzhou, China
Can Wang
Zhejiang University, Hangzhou, China