🤖 AI Summary
Data augmentation in generative recommendation has long been underappreciated, with existing methods lacking systematic modeling and failing to balance generalizability and efficiency. To address this, we propose GenPAS, a novel framework that formally characterizes sequential data augmentation as a bias-controlled, three-stage stochastic sampling process: sequence sampling, target sampling, and input sampling. GenPAS unifies mainstream augmentation strategies and enables explicit control over the training distribution. By decoupling structural fidelity from semantic bias in augmentation, it enhances model robustness to long-tail patterns and sparse user-item interactions. Extensive experiments on multiple public and industrial benchmarks demonstrate that GenPAS consistently improves Recall@K and NDCG while reducing required training data by over 30% and model parameters by 20%, validating its synergistic gains in accuracy, data efficiency, and parameter efficiency.
📄 Abstract
Generative recommendation plays a crucial role in personalized systems, predicting users' future interactions from their historical behavior sequences. A critical yet underexplored factor in training these models is data augmentation, the process of constructing training data from user interaction histories. By shaping the training distribution, data augmentation directly and often substantially affects model generalization and performance. Nevertheless, in much of the existing work, this process is simplified, applied inconsistently, or treated as a minor design choice, without a systematic and principled understanding of its effects.
Motivated by our empirical finding that different augmentation strategies can yield large performance disparities, we conduct an in-depth analysis of how they reshape training distributions and influence alignment with future targets and generalization to unseen inputs. To systematize this design space, we propose GenPAS, a generalized and principled framework that models augmentation as a stochastic sampling process over input-target pairs with three bias-controlled steps: sequence sampling, target sampling, and input sampling. This formulation unifies widely used strategies as special cases and enables flexible control of the resulting training distribution. Our extensive experiments on benchmark and industrial datasets demonstrate that GenPAS yields superior accuracy, data efficiency, and parameter efficiency compared to existing strategies, providing practical guidance for principled training data construction in generative recommendation.
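The three-stage sampling process described above can be sketched in a few lines of code. The function below is a minimal illustration, not the paper's implementation: all parameter names (`seq_weights`, `target_bias`, `input_keep_prob`) are hypothetical stand-ins for the bias controls of each stage. It draws one training pair by (1) sampling a user sequence, (2) sampling a target position within it, and (3) sampling an input subsequence from the prefix preceding the target.

```python
import random


def genpas_style_sample(sequences, seq_weights=None, target_bias=1.0,
                        input_keep_prob=1.0, rng=None):
    """Draw one (input, target) training pair via three bias-controlled steps.

    Illustrative sketch only; parameter names are assumptions, not from GenPAS.
    """
    rng = rng or random.Random()

    # Stage 1: sequence sampling -- pick a user interaction sequence,
    # optionally weighted (e.g., to over- or under-sample long histories).
    seq = rng.choices(sequences, weights=seq_weights, k=1)[0]

    # Stage 2: target sampling -- pick a target position in the sequence.
    # target_bias > 1 skews the draw toward more recent items; 1.0 is
    # close to uniform over valid positions.
    positions = list(range(1, len(seq)))
    pos_weights = [(p + 1) ** target_bias for p in positions]
    t = rng.choices(positions, weights=pos_weights, k=1)[0]
    target = seq[t]

    # Stage 3: input sampling -- form the input from the prefix before the
    # target; input_keep_prob < 1 randomly drops items to simulate sparser
    # histories. Always keep at least the item just before the target.
    prefix = seq[:t]
    inputs = [x for x in prefix if rng.random() < input_keep_prob] or prefix[-1:]
    return inputs, target
```

Under this framing, common strategies fall out as parameter settings: keeping every prefix of every sequence corresponds to exhaustive (unbiased) sequence and target sampling, while last-item-only training corresponds to a target distribution concentrated on the final position.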