OneTrans: Unified Feature Interaction and Sequence Modeling with One Transformer in Industrial Recommender

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
In recommender systems, user behavior sequence modeling and feature interaction modeling have traditionally been decoupled, leading to information fragmentation, optimization difficulties, and computational redundancy. This paper proposes OneTrans, the first framework to jointly encode behavioral sequences alongside sparse and dense features into a unified token sequence, processed end-to-end via a shared-parameter Transformer architecture. Its key innovations include: (1) a unified tokenizer that aligns heterogeneous features into a coherent embedding space; (2) causal self-attention to preserve temporal validity in sequence modeling; and (3) cross-request KV caching to enable precomputation and reuse of intermediate representations. Evaluated on industrial-scale datasets, OneTrans significantly outperforms strong baselines—including Wukong and LONGER—achieving a 5.68% lift in per-user GMV in online A/B tests, while simultaneously improving both training and inference efficiency.

📝 Abstract
In recommendation systems, scaling up feature-interaction modules (e.g., Wukong, RankMixer) or user-behavior sequence modules (e.g., LONGER) has achieved notable success. However, these efforts typically proceed on separate tracks, which not only hinders bidirectional information exchange but also prevents unified optimization and scaling. In this paper, we propose OneTrans, a unified Transformer backbone that simultaneously performs user-behavior sequence modeling and feature interaction. OneTrans employs a unified tokenizer to convert both sequential and non-sequential attributes into a single token sequence. The stacked OneTrans blocks share parameters across similar sequential tokens while assigning token-specific parameters to non-sequential tokens. Through causal attention and cross-request KV caching, OneTrans enables precomputation and caching of intermediate representations, significantly reducing computational costs during both training and inference. Experimental results on industrial-scale datasets demonstrate that OneTrans scales efficiently with increasing parameters, consistently outperforms strong baselines, and yields a 5.68% lift in per-user GMV in online A/B tests.
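The shared-versus-token-specific parameterization described in the abstract can be sketched as follows. This is a guess at the mechanism (one projection reused across all sequential tokens, a distinct projection per non-sequential token); all names and shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 4            # toy hidden width
L_SEQ = 3        # number of sequential (behavior) tokens
N_NONSEQ = 2     # number of non-sequential (feature) tokens

# One shared projection for every sequential token vs. a distinct
# projection per non-sequential token.
W_shared = rng.normal(size=(D, D))
W_specific = rng.normal(size=(N_NONSEQ, D, D))

seq_tokens = rng.normal(size=(L_SEQ, D))
nonseq_tokens = rng.normal(size=(N_NONSEQ, D))

seq_out = seq_tokens @ W_shared                                  # same weights at every position
nonseq_out = np.einsum('nd,nde->ne', nonseq_tokens, W_specific)  # weights vary per token

out = np.concatenate([seq_out, nonseq_out], axis=0)
print(out.shape)  # (5, 4)
```

Sharing weights across sequential tokens keeps the parameter count independent of behavior-sequence length, while token-specific weights let each non-sequential feature learn its own transformation.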
Problem

Research questions and friction points this paper is trying to address.

Feature interaction and sequence modeling are built as separate modules, fragmenting information
Decoupled tracks block bidirectional information exchange and prevent unified optimization and scaling
Redundant computation across the two modules inflates training and inference costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Transformer for sequence modeling and feature interaction
Shared parameters for sequential tokens, token-specific parameters for non-sequential tokens
Causal attention and KV caching reduce computational costs
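A minimal sketch of why causal attention enables cross-request KV caching: because each position attends only to earlier positions, the keys and values of previously processed tokens can be cached and reused, and only the newly arrived tokens need fresh query computation. This toy single-head, unprojected-attention version is an assumption for illustration, not the paper's implementation.

```python
import numpy as np

def causal_attention(q, k, v):
    """Single-head scaled dot-product attention with a causal (lower-triangular) mask."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (Tq, Tk)
    # Query i (global position offset + i) may only attend to key positions j <= offset + i.
    offset = k.shape[0] - q.shape[0]
    mask = np.tril(np.ones((q.shape[0], k.shape[0])), k=offset).astype(bool)
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(1)
D = 4
cached = rng.normal(size=(3, D))   # tokens already processed in an earlier request
new = rng.normal(size=(2, D))      # tokens arriving with the current request
allt = np.vstack([cached, new])

# Full recomputation over all 5 tokens...
full = causal_attention(allt, allt, allt)

# ...versus reusing the cached K/V and running queries only for the new tokens.
incremental = causal_attention(new, allt, allt)

assert np.allclose(full[-2:], incremental)  # identical outputs for the new positions
```

The equivalence holds precisely because of the causal mask: the outputs at old positions never depend on new tokens, so cached intermediate representations stay valid across requests.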
Zhaoqi Zhang
Nanyang Technological University; ByteDance, Singapore
Haolei Pei
ByteDance, Singapore
Jun Guo
ByteDance, Singapore
Tianyu Wang
ByteDance, Singapore
Yufei Feng
ByteDance
Information Retrieval · Recommender System · Click-Through Rate Prediction
Hui Sun
ByteDance, Hangzhou, China
Shaowei Liu
University of Illinois Urbana-Champaign
Computer Vision · Robotics
Aixin Sun
Nanyang Technological University, Singapore