CTR-Sink: Attention Sink for Language Models in Click-Through Rate Prediction

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
In click-through rate (CTR) prediction, treating user behavior sequences as textual inputs for language models leads to semantic fragmentation and attention dispersion, as behaviors are concatenated solely via semantically void delimiters. To address this, we propose a behavior-level attention anchoring mechanism: (1) introducing learnable sink tokens as explicit behavioral boundary anchors; (2) constructing dynamic attention aggregation guided by recommendation-aware priors (e.g., temporal distances); and (3) adopting a two-stage training strategy to strengthen inter-behavior dependency modeling. This approach significantly enhances the semantic coherence of user behavior sequence representation in language models. Extensive experiments on MovieLens, Kuairec, and an industrial dataset demonstrate consistent and substantial improvements in AUC and other key metrics. Attention visualization further confirms that the model effectively concentrates on behavioral boundaries and critical interaction patterns.
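The core idea of step (1) and (2) above can be sketched in a few lines: instead of joining behaviors with a semantically void delimiter, insert a sink-token id chosen from the temporal distance between consecutive behaviors. This is an illustrative sketch only, not the paper's implementation; `SINK_BASE_ID` and the gap-bucket boundaries are hypothetical choices.

```python
# Illustrative sketch (not the authors' code): interleave "sink" token
# placeholders between consecutive user behaviors, picking the sink-token
# id from the temporal gap between behaviors. SINK_BASE_ID and the bucket
# boundaries below are hypothetical.

SINK_BASE_ID = 50000                  # hypothetical id range reserved for sink tokens
GAP_BUCKETS = [3600, 86400, 604800]   # 1 hour, 1 day, 1 week, in seconds

def gap_bucket(gap_seconds):
    """Map a temporal distance to a discrete bucket index."""
    for i, bound in enumerate(GAP_BUCKETS):
        if gap_seconds < bound:
            return i
    return len(GAP_BUCKETS)

def insert_sink_tokens(behavior_token_ids, timestamps):
    """Concatenate per-behavior token lists, placing a gap-aware sink
    token between consecutive behaviors instead of a plain separator."""
    assert len(behavior_token_ids) == len(timestamps)
    out = []
    for i, tokens in enumerate(behavior_token_ids):
        if i > 0:
            gap = timestamps[i] - timestamps[i - 1]
            out.append(SINK_BASE_ID + gap_bucket(gap))
        out.extend(tokens)
    return out
```

In an actual model the sink tokens would map to learnable embeddings, so the boundary markers carry trainable, recommendation-aware signal rather than being semantically empty.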

📝 Abstract
Click-Through Rate (CTR) prediction, a core task in recommendation systems, estimates user click likelihood using historical behavioral data. Modeling user behavior sequences as text to leverage Language Models (LMs) for this task has gained traction, owing to LMs' strong semantic understanding and contextual modeling capabilities. However, a critical structural gap exists: user behavior sequences consist of discrete actions connected by semantically empty separators, differing fundamentally from the coherent natural language in LM pre-training. This mismatch causes semantic fragmentation, where LM attention scatters across irrelevant tokens instead of focusing on meaningful behavior boundaries and inter-behavior relationships, degrading prediction performance. To address this, we propose CTR-Sink, a novel framework introducing behavior-level attention sinks tailored for recommendation scenarios. Inspired by attention sink theory, it constructs attention focus sinks and dynamically regulates attention aggregation via external information. Specifically, we insert sink tokens between consecutive behaviors, incorporating recommendation-specific signals such as temporal distance to serve as stable attention sinks. To enhance generality, we design a two-stage training strategy that explicitly guides LM attention toward sink tokens, and an attention sink mechanism that amplifies inter-sink dependencies to better capture behavioral correlations. Experiments on one industrial dataset and two open-source datasets (MovieLens, KuaiRec), alongside visualization results, validate the method's effectiveness across scenarios.
Problem

Research questions and friction points this paper is trying to address.

Bridges gap between LM pre-training and user behavior sequences
Reduces semantic fragmentation in LM attention for CTR prediction
Enhances behavior boundary focus and inter-behavior relationship modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Behavior-level attention sinks for recommendation
Two-stage training strategy for attention guidance
Dynamic attention aggregation with external signals
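The "dynamic attention aggregation" idea above can be illustrated as a bias on attention logits: positions holding sink tokens receive a positive additive term before the softmax, pulling probability mass toward behavior boundaries. This is a minimal, hypothetical sketch; `sink_bias` as a fixed scalar is an assumption (the paper derives the aggregation from external signals and training, not a constant).

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sink_biased_attention(logits_row, sink_positions, sink_bias=2.0):
    """Add a positive bias to the attention logits at sink-token positions
    before normalizing, nudging attention toward behavior boundaries.
    `sink_bias` is a hypothetical hyperparameter for illustration."""
    biased = [x + (sink_bias if i in sink_positions else 0.0)
              for i, x in enumerate(logits_row)]
    return softmax(biased)
```

With uniform logits, the sink positions end up with markedly higher attention weight than non-sink positions while the row still sums to one.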
👥 Authors
Zixuan Li, Assistant Professor at ICT, UCAS (Knowledge Graph, Large Language Model)
Binzong Geng, Ant Group
Yong He, Ant Group
Jing Xiong, University of Hong Kong
Yuxuan Hu, Sun Yat-sen University
Dingwei Chen, Sun Yat-sen University (NLP, Large Language Model)
Liang Zhang, Ant Group
Xiyu Chang, Ant Group
Chengming Li, SMBU
Chuan Yuan, Ant Group
Linjian Mo, Ant Group
Zhenan Sun, Institute of Automation, Chinese Academy of Sciences (Biometrics, Pattern Recognition, Computer Vision)
Jian Chen, University of Hong Kong