Principled Synthetic Data Enables the First Scaling Laws for LLMs in Recommendation

📅 2026-02-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the absence of predictable scaling laws for large language models (LLMs) in recommender systems, a limitation primarily caused by noise, bias, and incompleteness inherent in real user interaction data. To overcome this, the authors propose a hierarchical synthetic data generation framework integrated with curriculum-based instructional design for continual pretraining of LLMs. This approach is the first to reveal and realize power-law scaling behavior in recommendation tasks, substantially enhancing model generalization. Empirical results demonstrate a 130% improvement in Recall@100 over models trained on real-world data, alongside consistent and predictable reductions in perplexity across multiple synthetic data modalities, thereby establishing a reliable pathway for scalable LLM-based recommendation systems.
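The "consistent and predictable reductions in perplexity" claimed above are what a power-law scaling fit captures. Below is a minimal sketch of how such a fit is typically performed, assuming the common offset-power-law form from the general scaling-law literature; the token counts, perplexity values, and fitted constants are illustrative and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative (made-up) continual pre-training budgets and measured perplexities.
tokens = np.array([1e8, 3e8, 1e9, 3e9, 1e10])
perplexity = np.array([18.4, 15.1, 12.9, 11.4, 10.3])

def power_law(d, ppl_inf, a, alpha):
    """Perplexity as an irreducible offset plus a power-law term in data size d."""
    return ppl_inf + a * d ** (-alpha)

# Fit the three constants; p0 is just a rough starting guess for the optimizer.
params, _ = curve_fit(power_law, tokens, perplexity, p0=(8.0, 4000.0, 0.3), maxfev=10000)
ppl_inf, a, alpha = params
print(f"fitted exponent alpha = {alpha:.3f}, asymptotic perplexity = {ppl_inf:.2f}")

# Predictability is the point of a scaling law: extrapolate to a larger data budget.
print(f"predicted perplexity at 3e10 tokens: {power_law(3e10, *params):.2f}")
```

Once the exponent and offset are fitted on small runs, the same curve can be used to budget how much additional synthetic data a target perplexity would require.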

📝 Abstract
Large Language Models (LLMs) represent a promising frontier for recommender systems, yet their development has been impeded by the absence of predictable scaling laws, which are crucial for guiding research and optimizing resource allocation. We hypothesize that this may be attributed to the inherent noise, bias, and incompleteness of raw user interaction data in prior continual pre-training (CPT) efforts. This paper introduces a novel, layered framework for generating high-quality synthetic data that circumvents such issues by creating a curated, pedagogical curriculum for the LLM. We provide powerful, direct evidence for the utility of our curriculum by showing that standard sequential models trained on our principled synthetic data significantly outperform ($+130\%$ on Recall@100 for SASRec) models trained on real data in downstream ranking tasks, demonstrating its superiority for learning generalizable user preference patterns. Building on this, we empirically demonstrate, for the first time, robust power-law scaling for an LLM that is continually pre-trained on our high-quality, recommendation-specific data. Our experiments reveal consistent and predictable perplexity reduction across multiple synthetic data modalities. These findings establish a foundational methodology for reliably scaling LLM capabilities in the recommendation domain, thereby shifting the research focus from mitigating data deficiencies to leveraging high-quality, structured information.
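For context on the headline metric, Recall@100 measures how often a user's held-out items appear in a model's top-100 ranked list. The sketch below is a generic implementation of that metric, assuming a score matrix from any ranker (e.g. a SASRec-style sequential model) and one set of held-out items per user; the function name and toy data are illustrative, not from the paper.

```python
import numpy as np

def recall_at_k(scores: np.ndarray, held_out_items: list[set], k: int = 100) -> float:
    """Fraction of each user's held-out items recovered in their top-k ranked list.

    scores: (num_users, num_items) relevance scores from a ranking model.
    held_out_items: one set of ground-truth next items per user.
    """
    # Take the k highest-scoring item indices per user (argsort of negated scores).
    top_k = np.argsort(-scores, axis=1)[:, :k]
    recalls = []
    for user_top_k, truth in zip(top_k, held_out_items):
        if not truth:
            continue
        hits = len(truth.intersection(user_top_k.tolist()))
        recalls.append(hits / len(truth))
    return float(np.mean(recalls))

# Toy usage: 3 users, 500 items, random scores, one held-out item per user.
rng = np.random.default_rng(0)
scores = rng.random((3, 500))
held_out = [{10}, {42}, {7}]
print(f"Recall@100 = {recall_at_k(scores, held_out):.3f}")
```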
Problem

Research questions and friction points this paper is trying to address.

scaling laws
large language models
recommender systems
synthetic data
continual pre-training
Innovation

Methods, ideas, or system contributions that make the work stand out.

synthetic data
scaling laws
large language models
recommendation systems
curriculum learning
🔎 Similar Papers
2024-02-02 · ACM Transactions on Recommender Systems · Citations: 1