🤖 AI Summary
This work addresses the performance degradation of short-sequence users in existing click-through rate (CTR) prediction models when handling mixed-length user behavior sequences, a problem exacerbated by attention polarization and imbalanced training sequence length distributions. To mitigate this, the authors propose LAIN, a novel framework that explicitly conditions sequence modeling on sequence length. LAIN introduces three lightweight, plug-and-play components—a spectral length encoder, length-conditioned prompting, and length-modulated attention—to dynamically adapt representation learning for both short and long sequences. Without compromising performance on long sequences, LAIN uniformly enhances user representations across varying sequence lengths. Experiments on three real-world datasets demonstrate consistent improvements, achieving up to a 1.15% AUC gain and a 2.25% log loss reduction, while remaining compatible with various mainstream CTR backbone architectures.
📝 Abstract
User behavior sequences in modern recommendation systems exhibit significant length heterogeneity, ranging from sparse short-term interactions to rich long-term histories. While longer sequences provide more context, we observe that increasing the maximum input sequence length in existing CTR models paradoxically degrades performance for short-sequence users due to attention polarization and length imbalance in training data. To address this, we propose LAIN (Length-Adaptive Interest Network), a plug-and-play framework that explicitly incorporates sequence length as a conditioning signal to balance long- and short-sequence modeling. LAIN consists of three lightweight components: a Spectral Length Encoder that maps length into continuous representations, Length-Conditioned Prompting that injects global contextual cues into both long- and short-term behavior branches, and Length-Modulated Attention that adaptively adjusts attention sharpness based on sequence length. Extensive experiments on three real-world benchmarks across five strong CTR backbones show that LAIN consistently improves overall performance, achieving up to 1.15% AUC gain and 2.25% log loss reduction. Notably, our method significantly improves accuracy for short-sequence users without sacrificing long-sequence effectiveness. Our work offers a general, efficient, and deployable solution to mitigate length-induced bias in sequential recommendation.
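To make the two core ideas concrete, here is a minimal NumPy sketch of what a spectral length encoder and length-modulated attention *could* look like. This is an illustration based only on the abstract's description, not the paper's actual implementation: the sinusoidal (Fourier-feature) encoding, the reference length `ref_len`, the exponent `alpha`, and the temperature schedule are all assumptions introduced here for clarity.

```python
import numpy as np

def spectral_length_encoding(length, dim=8, max_len=1000.0):
    """Hypothetical spectral length encoder: map a scalar sequence length
    to a continuous vector via sinusoidal features at log-spaced
    frequencies, analogous to Transformer positional encodings.
    LAIN's actual encoder may differ."""
    freqs = np.exp(np.linspace(0.0, np.log(max_len), dim // 2))
    x = length / freqs
    return np.concatenate([np.sin(x), np.cos(x)])

def length_modulated_attention(query, keys, length, alpha=0.5, ref_len=50.0):
    """Hypothetical length-modulated attention: scale the softmax by a
    length-dependent temperature so that short sequences get a flatter,
    less polarized attention distribution (the abstract only says
    sharpness is 'adaptively adjusted'; this schedule is an assumption)."""
    # temperature > 1 for lengths below ref_len flattens the weights
    temperature = (ref_len / max(length, 1)) ** alpha
    scores = keys @ query / np.sqrt(query.shape[0])
    scores = scores / max(temperature, 1e-6)
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ keys                     # attention-pooled behavior vector
```

In a full model, the length encoding would be concatenated with (or used to generate prompts for) the behavior representations, while the modulated attention would replace the standard softmax inside each backbone's target-attention layer.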