How Do Decoder-Only LLMs Perceive Users? Rethinking Attention Masking for User Representation Learning

📅 2026-02-11
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the limited user-representation capacity of decoder-only large language models by systematically investigating causal, hybrid, and bidirectional attention masks within a unified contrastive learning framework. To overcome the constraints of conventional hard masking, the authors propose Gradient-Guided Soft Masking, a mechanism that combines a gradient-based warm-up with a linear schedule to smoothly transition training from causal to bidirectional attention. This transition stabilizes optimization and yields high-quality bidirectional user representations. Evaluated on large-scale real-world user behavior data from Alipay, the method consistently outperforms causal, hybrid, and scheduler-only baselines across nine industrial user cognition benchmarks while remaining fully compatible with standard decoder pretraining.

πŸ“ Abstract
Decoder-only large language models are increasingly used as behavioral encoders for user representation learning, yet the impact of attention masking on the quality of user embeddings remains underexplored. In this work, we conduct a systematic study of causal, hybrid, and bidirectional attention masks within a unified contrastive learning framework trained on large-scale real-world Alipay data that integrates long-horizon heterogeneous user behaviors. To improve training dynamics when transitioning from causal to bidirectional attention, we propose Gradient-Guided Soft Masking, a gradient-based pre-warmup applied before a linear scheduler that gradually opens future attention during optimization. Evaluated on 9 industrial user cognition benchmarks covering prediction, preference, and marketing sensitivity tasks, our approach consistently yields more stable training and higher-quality bidirectional representations compared with causal, hybrid, and scheduler-only baselines, while remaining compatible with decoder pretraining. Overall, our findings highlight the importance of masking design and training transition in adapting decoder-only LLMs for effective user representation learning. Our code is available at https://github.com/JhCircle/Deepfind-GGSM.
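The abstract describes a linear scheduler that gradually opens future attention after a gradient-based warm-up. A minimal NumPy sketch of such a soft-mask schedule is shown below; the function name, the linear interpolation between causal and bidirectional masks, and the schedule parameters are illustrative assumptions, and the paper's actual gradient-guided warm-up criterion is not reproduced here.

```python
import numpy as np

def soft_attention_mask(seq_len, step, warmup_steps, total_steps):
    """Illustrative soft mask interpolating from causal to bidirectional.

    During the warm-up phase the mask stays fully causal; afterwards a
    linear schedule raises alpha from 0 to 1, gradually admitting future
    positions. Entries are attention weights in [0, 1] (1 = fully attend).
    This is a hypothetical sketch, not the authors' implementation.
    """
    causal = np.tril(np.ones((seq_len, seq_len)))   # lower triangle: past only
    bidirectional = np.ones((seq_len, seq_len))     # all positions visible
    if step < warmup_steps:
        alpha = 0.0
    else:
        alpha = min(1.0, (step - warmup_steps) / max(1, total_steps - warmup_steps))
    return (1.0 - alpha) * causal + alpha * bidirectional

# Early in training: purely causal (future positions weighted 0).
m0 = soft_attention_mask(4, step=0, warmup_steps=100, total_steps=1000)
# End of schedule: fully bidirectional (all positions weighted 1).
m1 = soft_attention_mask(4, step=1000, warmup_steps=100, total_steps=1000)
```

In a training loop, such a soft mask would be added (in log space) or multiplied into the attention scores each step, so the transition is continuous rather than an abrupt switch between hard masks.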
Problem

Research questions and friction points this paper is trying to address.

decoder-only LLMs
user representation learning
attention masking
user embeddings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-Guided Soft Masking
attention masking
user representation learning
decoder-only LLMs
contrastive learning
Authors
Jiahao Yuan (DeepFind Team, Ant Group; East China Normal University)
Yike Xu (DeepFind Team, Ant Group)
Jinyong Wen (DeepFind Team, Ant Group)
Baokun Wang (DeepFind Team, Ant Group)
Yang Chen (DeepFind Team, Ant Group)
Xiaotong Lin (Sun Yat-sen University)
Wuliang Huang (DeepFind Team, Ant Group)
Ziyi Gao (DeepFind Team, Ant Group)
Xing Fu (Ant Group)
Yu Cheng (DeepFind Team, Ant Group)
Weiqiang Wang (Ant Group)
Machine Learning
Simulation