From Past To Path: Masked History Learning for Next-Item Prediction in Generative Recommendation

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing generative recommender systems rely solely on autoregressive prediction, neglecting the intrinsic structural patterns within user interaction histories and thus struggling to model latent user intents. Method: We propose Masked History Learning (MHL), a novel paradigm that explicitly models the causal structure of behavioral trajectories by reconstructing masked historical interactions. MHL introduces an entropy-guided dynamic masking strategy and a curriculum learning scheduler to jointly optimize historical reconstruction and future prediction. Integrated seamlessly into standard autoregressive frameworks, it requires no architectural modifications. Contribution/Results: Evaluated on three public benchmark datasets, MHL significantly outperforms state-of-the-art generative recommendation models. Empirical results demonstrate that deep understanding of historical behavior is critical for improving predictive accuracy. This work establishes a new paradigm for generative recommendation—shifting from mere sequential modeling toward intent-aware recommendation.

📝 Abstract
Generative recommendation, which directly generates item identifiers, has emerged as a promising paradigm for recommendation systems. However, its potential is fundamentally constrained by the reliance on purely autoregressive training. This approach focuses solely on predicting the next item while ignoring the rich internal structure of a user's interaction history, thus failing to grasp the underlying intent. To address this limitation, we propose Masked History Learning (MHL), a novel training framework that shifts the objective from simple next-step prediction to deep comprehension of history. MHL augments the standard autoregressive objective with an auxiliary task of reconstructing masked historical items, compelling the model to understand "why" an item path is formed from the user's past behaviors, rather than just "what" item comes next. We introduce two key contributions to enhance this framework: (1) an entropy-guided masking policy that intelligently targets the most informative historical items for reconstruction, and (2) a curriculum learning scheduler that progressively transitions from history reconstruction to future prediction. Experiments on three public datasets show that our method significantly outperforms state-of-the-art generative models, highlighting that a comprehensive understanding of the past is crucial for accurately predicting a user's future path. The code will be released to the public.
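The joint objective described in the abstract can be sketched as a weighted mix of the reconstruction and next-item losses, with the curriculum scheduler shifting weight from the former to the latter over training. This is a minimal illustration under assumed details: the paper does not specify the schedule shape, so a linear anneal is used here, and `curriculum_weight` / `mhl_loss` are hypothetical names.

```python
# Hedged sketch of the MHL training objective: the curriculum scheduler
# anneals weight from masked-history reconstruction toward next-item
# prediction. The linear schedule is an assumption, not the paper's spec.

def curriculum_weight(step: int, total_steps: int) -> float:
    """Weight on the reconstruction loss, annealed linearly from 1.0
    (pure history reconstruction) to 0.0 (pure next-item prediction)."""
    progress = min(max(step / total_steps, 0.0), 1.0)
    return 1.0 - progress

def mhl_loss(recon_loss: float, next_item_loss: float,
             step: int, total_steps: int) -> float:
    """Combined objective: lam(t) * L_recon + (1 - lam(t)) * L_next."""
    lam = curriculum_weight(step, total_steps)
    return lam * recon_loss + (1.0 - lam) * next_item_loss
```

Because the mix is a convex combination, training starts as pure reconstruction and ends as the standard autoregressive objective, matching the "reconstruction to prediction" transition the abstract describes.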
Problem

Research questions and friction points this paper is trying to address.

Autoregressive training ignores rich user history structure in recommendations
Generative models fail to understand underlying user intent from past behaviors
Current approaches focus on next-item prediction without history comprehension
Innovation

Methods, ideas, or system contributions that make the work stand out.

Masked History Learning reconstructs masked historical items
Entropy-guided masking targets most informative historical items
Curriculum learning transitions from reconstruction to prediction
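The entropy-guided masking bullet above can be illustrated with a small sketch: given a model's predictive distribution at each historical position, mask the positions where that distribution has the highest Shannon entropy, i.e. where the model is most uncertain and reconstruction is most informative. The function names, the top-k selection rule, and the mask ratio are assumptions for illustration; the paper only states that the policy targets the most informative historical items.

```python
import math

def entropy(probs):
    """Shannon entropy of a predictive distribution over items."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def entropy_guided_mask(history, per_item_probs, mask_ratio=0.3):
    """Hypothetical entropy-guided masking policy: return indices of
    historical items to mask, preferring positions where the model's
    predictive distribution has the highest entropy."""
    k = max(1, int(len(history) * mask_ratio))
    scores = [(entropy(p), i) for i, p in enumerate(per_item_probs)]
    scores.sort(reverse=True)  # highest-entropy positions first
    return sorted(i for _, i in scores[:k])
```

Under this rule, a near-uniform distribution (maximal uncertainty) is masked before a sharply peaked one, so the reconstruction task concentrates on the history items the model understands least.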
Authors
KaiWen Wei (College of Computer Science, Chongqing University)
Kejun He (College of Computer Science, Chongqing University)
Xiaomian Kang (MAIS, Institute of Automation, Chinese Academy of Sciences)
Jie Zhang (Independent Researcher)
Yuming Yang (Fudan University)
Jiang Zhong (College of Computer Science, Chongqing University)
He Bai (Oklahoma State University)
Junnan Zhu (Institute of Automation, Chinese Academy of Sciences)