DLCRec: A Novel Approach for Managing Diversity in LLM-Based Recommender Systems

📅 2024-08-22
🏛️ arXiv.org
🤖 AI Summary
To address declining recommendation diversity, and the resulting user dissatisfaction, in large language model (LLM)-based recommender systems, this paper proposes DLCRec, a framework for fine-grained, controllable diversity in recommendation. Methodologically, it decomposes the recommendation task into three sequential sub-tasks, genre prediction → genre filling → item prediction, so that a user-specified control number precisely governs the diversity of the recommendation list. It further designs noise-robust and distribution-balanced data augmentation strategies to mitigate the scarcity and long-tailed distribution of diversity-related user behavior. The framework combines controllable prompting with an independent-training, sequential-inference architecture over the sub-tasks. Experiments across multiple benchmarks show that DLCRec significantly outperforms state-of-the-art methods: diversity control error is bounded within ±0.8%, NDCG@10 improves by 3.2%, and, crucially, it is the first to optimize diversity control accuracy and recommendation quality simultaneously.
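The three-stage pipeline in the summary above can be sketched as follows. This is a minimal illustrative mock-up, not the paper's code: the toy catalog, function names, and the frequency-based genre heuristic are assumptions standing in for the fine-tuned LLM sub-task models, and `control_k` plays the role of the user-specified control number.

```python
# Hypothetical sketch of DLCRec-style sequential inference:
# genre prediction -> genre filling -> item prediction.
# The catalog and heuristics below are illustrative stand-ins
# for the paper's fine-tuned LLM sub-task models.

CATALOG = {
    "m1": "action", "m2": "action", "m3": "comedy",
    "m4": "drama", "m5": "comedy", "m6": "sci-fi",
}

def predict_genres(history, k):
    """Stage 1: choose k target genres, here by frequency in the history."""
    counts = {}
    for item in history:
        g = CATALOG[item]
        counts[g] = counts.get(g, 0) + 1
    ranked = sorted(set(CATALOG.values()))          # stable alphabetical base
    ranked.sort(key=lambda g: -counts.get(g, 0))    # most-watched genres first
    return ranked[:k]

def fill_genres(genres, n_slots):
    """Stage 2: assign one target genre to each recommendation slot."""
    return [genres[i % len(genres)] for i in range(n_slots)]

def predict_items(slot_genres, history):
    """Stage 3: pick an unseen catalog item matching each slot's genre."""
    recs, used = [], set(history)
    for g in slot_genres:
        for item, genre in CATALOG.items():
            if genre == g and item not in used:
                recs.append(item)
                used.add(item)
                break
    return recs

def dlcrec_infer(history, control_k, n_slots):
    genres = predict_genres(history, control_k)   # stage 1
    slots = fill_genres(genres, n_slots)          # stage 2
    return predict_items(slots, history)          # stage 3
```

Because the control number fixes the genre set before any item is chosen, the final list's diversity is determined upstream, which is what makes the control fine-grained rather than a soft prompt-level hint.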

📝 Abstract
The integration of Large Language Models (LLMs) into recommender systems has led to substantial performance improvements. However, this often comes at the cost of diminished recommendation diversity, which can negatively impact user satisfaction. To address this issue, controllable recommendation has emerged as a promising approach, allowing users to specify their preferences and receive recommendations that meet their diverse needs. Despite its potential, existing controllable recommender systems frequently rely on simplistic mechanisms, such as a single prompt, to regulate diversity, an approach that falls short of capturing the full complexity of user preferences. In response to these limitations, we propose DLCRec, a novel framework designed to enable fine-grained control over diversity in LLM-based recommendations. Unlike traditional methods, DLCRec adopts a fine-grained task decomposition strategy, breaking down the recommendation process into three sequential sub-tasks: genre prediction, genre filling, and item prediction. These sub-tasks are trained independently and inferred sequentially according to user-defined control numbers, ensuring more precise control over diversity. Furthermore, the scarcity and uneven distribution of diversity-related user behavior data pose significant challenges for fine-tuning. To overcome these obstacles, we introduce two data augmentation techniques that enhance the model's robustness to noisy and out-of-distribution data. These techniques expose the model to a broader range of patterns, improving its adaptability in generating recommendations with varying levels of diversity. Our extensive empirical evaluation demonstrates that DLCRec not only provides precise control over diversity but also outperforms state-of-the-art baselines across multiple recommendation scenarios.
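The two augmentation ideas in the abstract can be sketched in miniature. This is an illustrative interpretation, not the paper's implementation: the `augment` function, the ±1 perturbation, and the mode-matching oversampling are assumptions about how noise-robust and distribution-balanced augmentation of (history, control-number) training pairs might look.

```python
# Illustrative sketch (not the paper's code) of the two augmentation ideas:
# (1) noise-robust: perturb the genre-count control so the model tolerates
#     imperfect control signals at inference time;
# (2) distribution-balanced: oversample rare control values so long-tailed
#     diversity levels are seen during fine-tuning.

import random
from collections import Counter

def augment(samples, max_genres, seed=0):
    """samples: list of (history, control_k) training pairs."""
    rng = random.Random(seed)

    # (1) add a noisy copy of each sample with the control off by +/-1,
    #     clamped to the valid range [1, max_genres]
    noisy = []
    for hist, k in samples:
        k_noisy = min(max(k + rng.choice([-1, 1]), 1), max_genres)
        noisy.append((hist, k_noisy))

    # (2) oversample under-represented control values up to the mode's count
    counts = Counter(k for _, k in samples)
    target = max(counts.values())
    balanced = []
    for k_val, c in counts.items():
        pool = [s for s in samples if s[1] == k_val]
        balanced += [pool[i % len(pool)] for i in range(target - c)]

    return samples + noisy + balanced
```

The design intent is that the fine-tuned model sees both slightly wrong controls and the full range of diversity levels, rather than only the few control values that dominate organic user behavior.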
Problem

Research questions and friction points this paper is trying to address.

Recommendation Diversity
User Demand Complexity
Imbalanced Data Challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

DLCRec
Data Augmentation
Controllable Diversity
Jiaju Chen
Computer Science, Rice University
Human-Computer Interaction · Natural Language Processing
Chongming Gao
University of Science and Technology of China, Hefei, China
Shuai Yuan
Hong Kong University of Science and Technology, Hong Kong, China
Shuchang Liu
Independent, Beijing, China
Qingpeng Cai
Kuaishou Technology
Reinforcement Learning · LLM · Recommender System · Computational Advertising
Peng Jiang
Independent, Beijing, China