🤖 AI Summary
This work addresses the performance bottlenecks of Transformer-based models in industrial-scale recommendation systems, which stem from high feature sparsity and low label density. To overcome these challenges, the authors propose a systematic optimization framework incorporating request-centric sampling, localized attention mechanisms, query pruning, and generative pretraining, alongside module-level enhancements to tokenization, multi-head attention (MHA), and feed-forward networks (FFN). The proposed approach substantially improves training stability and model capacity while enabling efficient hardware utilization. Online A/B experiments demonstrate significant gains in key business metrics—orders increased by 6.35%, buyers by 5.97%, and GMV by 5.47%—alongside a 44.67% reduction in inference latency and a 121.33% improvement in throughput.
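The summary names localized attention as one of the efficiency optimizations. The paper's exact mechanism is not spelled out here, but a common form of local attention restricts each query position to a fixed window of nearby positions via a band-shaped mask. A minimal sketch under that assumption (the window size and NumPy formulation are illustrative, not taken from the paper):

```python
import numpy as np

def local_attention_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean band mask: position i may attend only to positions j
    with |i - j| <= window. True means "attention allowed"."""
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) <= window

# With window=1, each query attends to itself and one neighbor on each side,
# so attention cost grows linearly in sequence length instead of quadratically.
mask = local_attention_mask(6, 1)
```

Such a mask would typically be applied to the attention logits (setting disallowed entries to -inf) before the softmax inside MHA.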
📝 Abstract
While Transformers have achieved remarkable success in LLMs through superior scalability, their application in industrial-scale ranking models remains nascent, hindered by the challenges of high feature sparsity and low label density. In this paper, we propose SORT (Systematically Optimized Ranking Transformer), a scalable model designed to bridge the gap between Transformers and industrial-scale ranking models. We address the high-feature-sparsity and low-label-density challenges through a series of optimizations, including request-centric sample organization, local attention, query pruning, and generative pre-training. Furthermore, we introduce a suite of refinements to the tokenization, multi-head attention (MHA), and feed-forward network (FFN) modules, which collectively stabilize the training process and enlarge the model capacity. To maximize hardware efficiency, we optimize our training system to raise the model FLOPs utilization (MFU) to 22%. Extensive experiments demonstrate that SORT outperforms strong baselines and scales well with data size, model size, and sequence length, while remaining flexible in integrating diverse features. Finally, online A/B testing in large-scale e-commerce scenarios confirms that SORT achieves significant gains in key business metrics, including orders (+6.35%), buyers (+5.97%), and GMV (+5.47%), while nearly halving latency (-44.67%) and more than doubling throughput (+121.33%).
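The abstract reports raising model FLOPs utilization (MFU) to 22%. MFU is the standard measure of how much of an accelerator's peak throughput the model's computation actually sustains; as a sanity check on the figure (the 312 TFLOP/s peak below is a hypothetical accelerator spec, not a number from the paper):

```python
def model_flops_utilization(achieved_flops_per_s: float,
                            peak_flops_per_s: float) -> float:
    """MFU = sustained model FLOP/s divided by the hardware's peak FLOP/s."""
    return achieved_flops_per_s / peak_flops_per_s

# Hypothetical example: on an accelerator with a 312 TFLOP/s peak,
# sustaining ~68.6 TFLOP/s of model compute corresponds to the reported 22% MFU.
mfu = model_flops_utilization(68.64e12, 312e12)
```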