🤖 AI Summary
This work addresses a key limitation of large language models (LLMs) in recommender systems: the mismatch between the training objective (supervised fine-tuning) and the inference procedure (beam search), which can cause ground-truth positive items to be pruned prematurely when their prefix probabilities are low. To bridge this gap, the paper proposes the first beam-search-aware regularization method that explicitly models the pruning mechanism of beam search. During training, it enforces a lightweight constraint ensuring that every token of a positive item remains within the top-$B$ candidates at each decoding step, thereby aligning training with inference behavior. Notably, the approach avoids costly beam search simulation and incurs negligible computational overhead relative to standard fine-tuning. Experiments on four real-world datasets demonstrate substantial performance gains over strong baselines.
📝 Abstract
Recent years have witnessed a rapid surge in research leveraging Large Language Models (LLMs) for recommendation. These methods typically employ supervised fine-tuning (SFT) to adapt LLMs to recommendation scenarios, and use beam search at inference to efficiently retrieve the $B$ top-ranked items. However, we identify a critical training-inference inconsistency: although SFT optimizes the overall probability of positive items, it does not guarantee that such items will be retrieved by beam search, even when their overall probabilities are high. Owing to its greedy pruning mechanism, beam search can prematurely discard a positive item once its prefix probability falls outside the top-$B$ at some decoding step. To address this inconsistency, we propose BEAR (Beam-SEarch-Aware Regularization), a novel fine-tuning objective that explicitly accounts for beam search behavior during training. Rather than directly simulating beam search for each training instance, which is computationally prohibitive, BEAR enforces a relaxed necessary condition: each token of a positive item must rank within the top-$B$ candidate tokens at its decoding step. This objective effectively mitigates the risk of incorrect pruning while incurring negligible computational overhead compared to standard SFT. Extensive experiments on four real-world datasets demonstrate that BEAR significantly outperforms strong baselines. Code will be released upon acceptance.
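The abstract does not spell out the loss itself, but the relaxed necessary condition it states (each positive token must rank within the top-$B$ candidates at its decoding step) suggests a per-step hinge-style penalty. A minimal NumPy sketch of that condition follows; the function name, the `margin` slack, and the hinge form are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def bear_penalty(logits, target_ids, beam_size, margin=0.0):
    """Hinge penalty that is positive whenever a ground-truth token fails
    to rank within the top-B candidates at its decoding step.

    logits:     (T, V) array of per-step vocabulary logits
    target_ids: length-T sequence of ground-truth token ids
    beam_size:  the beam width B used at inference
    margin:     optional slack (hypothetical knob, not from the paper)
    """
    penalty = 0.0
    for t, tok in enumerate(target_ids):
        step = logits[t]
        # Logit of the B-th strongest competitor (all tokens except the target).
        rivals = np.delete(step, tok)
        threshold = np.sort(rivals)[-beam_size]
        # If the target beats this threshold, at most B-1 rivals outrank it,
        # so it could survive a width-B beam at this step; otherwise penalize.
        penalty += max(0.0, threshold - step[tok] + margin)
    return penalty
```

In training, a term like this would presumably be added to the standard SFT loss with some weight; because it only needs the per-step logits that SFT already computes, it avoids simulating beam search and adds little overhead.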