🤖 AI Summary
In recommender systems, user behavior sequence modeling and feature interaction modeling have traditionally been decoupled, leading to information fragmentation, optimization difficulties, and computational redundancy. This paper proposes OneTrans, the first framework to jointly encode behavioral sequences alongside sparse and dense features as a unified token sequence, processed end-to-end by a single stack of Transformer blocks that share parameters across sequential tokens while assigning token-specific parameters to non-sequential tokens. Its key innovations are: (1) a unified tokenizer that aligns heterogeneous features into a coherent embedding space; (2) causal self-attention that preserves temporal validity in sequence modeling; and (3) cross-request KV caching that enables precomputation and reuse of intermediate representations. Evaluated on industrial-scale datasets, OneTrans significantly outperforms strong baselines, including Wukong and LONGER, achieving a 5.68% lift in per-user GMV in online A/B tests while improving both training and inference efficiency.
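The summary's first two components can be illustrated with a minimal numpy sketch: per-modality projections map behavior-sequence, sparse, and dense features into one shared token space, and a causal mask restricts each token to attend only to itself and earlier tokens. All shapes, feature names, and the single-head attention are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # shared token width (illustrative choice)

# Hypothetical inputs for one request: a user-behavior sequence plus
# non-sequential sparse/dense features (all sizes are assumptions).
behavior_seq = rng.normal(size=(5, 16))   # 5 past items, raw width 16
sparse_emb   = rng.normal(size=(3, 12))   # 3 categorical-feature embeddings
dense_feats  = rng.normal(size=(4,))      # 4 numeric features

# Unified tokenizer: per-modality linear projections into a common d-dim space.
W_seq    = rng.normal(size=(16, d))
W_sparse = rng.normal(size=(12, d))
W_dense  = rng.normal(size=(4, d))
tokens = np.concatenate([
    behavior_seq @ W_seq,                 # sequential tokens
    sparse_emb @ W_sparse,                # one token per sparse feature
    (dense_feats @ W_dense)[None, :],     # dense features as a single token
])                                        # -> (9, d) unified token sequence

# Single-head causal self-attention over the unified sequence: each token
# attends only to itself and earlier tokens, preserving temporal validity.
scores = tokens @ tokens.T / np.sqrt(d)
mask = np.tril(np.ones((len(tokens), len(tokens)), dtype=bool))
scores = np.where(mask, scores, -np.inf)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
out = attn @ tokens                       # contextualized tokens, same shape
print(out.shape)  # (9, 8)
```

Because the mask is lower-triangular, the very first token's output equals its input (it can only attend to itself), which is the property that later makes prefix caching valid.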
📝 Abstract
In recommendation systems, scaling up feature-interaction modules (e.g., Wukong, RankMixer) or user-behavior sequence modules (e.g., LONGER) has achieved notable success. However, these efforts typically proceed on separate tracks, which not only hinders bidirectional information exchange but also prevents unified optimization and scaling. In this paper, we propose OneTrans, a unified Transformer backbone that simultaneously performs user-behavior sequence modeling and feature interaction. OneTrans employs a unified tokenizer to convert both sequential and non-sequential attributes into a single token sequence. The stacked OneTrans blocks share parameters across similar sequential tokens while assigning token-specific parameters to non-sequential tokens. Through causal attention and cross-request KV caching, OneTrans enables precomputation and caching of intermediate representations, significantly reducing computational costs during both training and inference. Experimental results on industrial-scale datasets demonstrate that OneTrans scales efficiently with increasing parameters, consistently outperforms strong baselines, and yields a 5.68% lift in per-user GMV in online A/B tests.
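The cross-request KV caching described above rests on a property of causal attention: the keys and values of an unchanged prefix (e.g. the user's behavior history) do not depend on tokens appended later, so they can be computed once and reused across requests. A minimal numpy sketch of that equivalence, with hypothetical single-head projections and shapes not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # token width (illustrative)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def causal_attend(Q, K, V):
    # Queries are the last len(Q) positions of a sequence of len(K) tokens;
    # the shifted triangular mask keeps attention causal in both cases.
    n, m = len(Q), len(K)
    scores = Q @ K.T / np.sqrt(d)
    mask = np.tril(np.ones((n, m), dtype=bool), k=m - n)
    scores = np.where(mask, scores, -np.inf)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ V

# Request 1: encode the user's behavior prefix once, cache its K/V.
prefix = rng.normal(size=(6, d))
K_cache, V_cache = prefix @ Wk, prefix @ Wv

# Request 2: only the new (candidate/feature) tokens are projected; their
# queries attend over cached + fresh K/V, with no prefix recomputation.
new_tokens = rng.normal(size=(2, d))
K = np.vstack([K_cache, new_tokens @ Wk])
V = np.vstack([V_cache, new_tokens @ Wv])
out_cached = causal_attend(new_tokens @ Wq, K, V)

# Sanity check: identical to recomputing attention over the full sequence.
full = np.vstack([prefix, new_tokens])
out_full = causal_attend(full @ Wq, full @ Wk, full @ Wv)[-2:]
print(np.allclose(out_cached, out_full))  # True
```

The savings come from the asymmetry: per request, only the few new tokens pay projection and query cost, while the (typically long) behavior prefix is served from cache.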