Equip Pre-ranking with Target Attention by Residual Quantization

📅 2025-09-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
In industrial recommendation systems, the pre-ranking stage—constrained by stringent latency requirements—struggles to adopt expressive Target Attention (TA) models, resulting in a substantial capability gap between pre-ranking and ranking models. To address this, we propose TARQ, the first framework to introduce TA into pre-ranking. TARQ approximates TA’s complex feature interactions via residual quantization and employs a generative architecture design, achieving vector-dot-product-level inference efficiency while significantly enhancing feature modeling capacity. This bridges the accuracy gap between pre-ranking and ranking, establishing a new efficiency–effectiveness trade-off. Evaluated in large-scale A/B tests on Taobao, TARQ delivers statistically significant improvements across key ranking metrics. It has been deployed in production, serving tens of millions of daily active users and yielding substantial business gains.

📝 Abstract
The pre-ranking stage in industrial recommendation systems faces a fundamental conflict between efficiency and effectiveness. While powerful models like Target Attention (TA) excel at capturing complex feature interactions in the ranking stage, their high computational cost makes them infeasible for pre-ranking, which often relies on simplistic vector-product models. This disparity creates a significant performance bottleneck for the entire system. To bridge this gap, we propose TARQ, a novel pre-ranking framework. Inspired by generative models, TARQ's key innovation is to equip pre-ranking with an architecture that approximates TA via Residual Quantization. This allows us to bring the modeling power of TA into the latency-critical pre-ranking stage for the first time, establishing a new state-of-the-art trade-off between accuracy and efficiency. Extensive offline experiments and large-scale online A/B tests at Taobao demonstrate TARQ's significant improvements in ranking performance. Consequently, our model has been fully deployed in production, serving tens of millions of daily active users and yielding substantial business improvements.
Problem

Research questions and friction points this paper is trying to address.

Resolving efficiency-effectiveness conflict in recommendation pre-ranking systems
Bridging performance gap between simple pre-ranking and complex ranking models
Enabling Target Attention modeling under strict latency constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Residual Quantization approximates Target Attention architecture
TARQ framework bridges efficiency-effectiveness gap in pre-ranking
Generative model inspiration enables latency-critical TA modeling
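The paper does not include code, but the residual-quantization building block it relies on can be illustrated with a minimal NumPy sketch: each codebook level encodes the residual left over by the previous level, so the reconstruction is a sum of one centroid per level. The codebook sizes, depth, and data here are arbitrary assumptions, not TARQ's actual configuration.

```python
import numpy as np

def residual_quantize(x, codebooks):
    """Quantize x with a stack of codebooks (residual quantization).

    Each level picks the centroid nearest to the current residual,
    then passes the remaining residual to the next level.
    """
    residual = x.copy()
    codes = []
    reconstruction = np.zeros_like(x)
    for codebook in codebooks:  # codebook: (K, d) array of centroids
        # index of the centroid closest to the current residual
        idx = int(np.argmin(np.linalg.norm(codebook - residual, axis=1)))
        codes.append(idx)
        reconstruction += codebook[idx]
        residual = residual - codebook[idx]
    return codes, reconstruction

# toy usage with random codebooks (purely illustrative)
rng = np.random.default_rng(0)
d = 8
codebooks = [rng.normal(size=(16, d)) for _ in range(3)]
x = rng.normal(size=d)
codes, x_hat = residual_quantize(x, codebooks)
```

The discrete codes are what make the approach pre-ranking friendly: the expensive TA computation can be distilled into code-conditioned representations offline, leaving only cheap lookups and dot products at serving time.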
Yutong Li
Taobao & Tmall Group of Alibaba, Hangzhou, China
Yu Zhu
Taobao & Tmall Group of Alibaba, Hangzhou, China
Yichen Qiao
Shanghai Jiao Tong University, Shanghai, China
Ziyu Guan
Xidian University
Data mining, machine learning, social media
Lv Shao
Taobao & Tmall Group of Alibaba, Hangzhou, China
Tong Liu
Taobao & Tmall Group of Alibaba, Hangzhou, China
Bo Zheng
Taobao & Tmall Group of Alibaba, Hangzhou, China