🤖 AI Summary
Modeling ultra-long user behavior sequences (up to millions of events) hits severe scalability bottlenecks in industrial systems: high inference latency, low queries per second (QPS), and prohibitive GPU cost, driven by the quadratic complexity of standard attention mechanisms.
Method: We propose a two-stage decoupled modeling paradigm: (1) a lightweight summarization network compresses the user history into a small, fixed number of cacheable summary tokens; (2) each candidate item then attends to those summary tokens for scoring, decoupling training and inference complexity from sequence length (a hedged sketch follows this summary).
Contribution/Results: Our method enables lifelong user memory modeling without introducing sequence-length-dependent computational overhead. Deployed on an industrial recommendation platform serving billions of users, it achieves significant offline AUC gains, concurrent online improvements in CTR (+X%) and QPS (+Y%), and a reduction of over 40% in GPU resource consumption.
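To make the two-stage decoupling concrete, here is a minimal PyTorch sketch of the idea. The class names, token count, and head count (`HistorySummarizer`, `num_summary_tokens=256`, etc.) are illustrative assumptions, not the paper's actual implementation: learned query tokens cross-attend to the raw history once (stage 1), and the candidate attends only to the fixed-length summary (stage 2).

```python
import torch
import torch.nn as nn

class HistorySummarizer(nn.Module):
    """Stage 1 (sketch): compress an arbitrary-length history into a fixed
    number of summary tokens via cross-attention from learned queries.
    All names and shapes are illustrative assumptions, not the paper's code."""
    def __init__(self, dim: int, num_summary_tokens: int = 256, num_heads: int = 4):
        super().__init__()
        # Learned query tokens; their count fixes the summary length.
        self.queries = nn.Parameter(torch.randn(num_summary_tokens, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, seq_len, dim); seq_len may be very large.
        batch = history.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        summary, _ = self.attn(q, history, history)  # (batch, num_summary_tokens, dim)
        return summary

class SummaryTargetAttention(nn.Module):
    """Stage 2 (sketch): the candidate item attends to cached summary tokens,
    so cost depends on the summary length, not the raw history length."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, candidate: torch.Tensor, summary: torch.Tensor) -> torch.Tensor:
        # candidate: (batch, 1, dim); summary: (batch, num_summary_tokens, dim)
        out, _ = self.attn(candidate, summary, summary)
        return out.squeeze(1)  # (batch, dim) user-interest vector for scoring
```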
📝 Abstract
Modern large-scale recommendation systems rely heavily on user interaction history sequences to enhance model performance. The advent of large language models and sequential modeling techniques, particularly transformer-like architectures, has recently led to significant advances (e.g., the HSTU, SIM, and TWIN models). While scaling to ultra-long user histories (10k to 100k items) generally improves model performance, it also poses significant challenges for latency, queries per second (QPS), and GPU cost in industry-scale recommendation systems. Existing models do not adequately address these industrial scalability issues. In this paper, we propose a novel two-stage modeling framework, namely VIrtual Sequential Target Attention (VISTA), which decomposes traditional target attention from a candidate item to user history items into two distinct stages: (1) summarization of the user history into a few hundred tokens, followed by (2) candidate-item attention to those tokens. These summary token embeddings are cached in a storage system and utilized as sequence features for downstream model training and inference. This design enables VISTA to scale to lifelong user histories (up to one million items) while keeping downstream training and inference costs fixed, which is essential in industry. Our approach achieves significant improvements in offline and online metrics and has been successfully deployed on an industry-leading recommendation platform serving billions of users.
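The caching workflow the abstract describes can be sketched as follows, reusing the classes from the earlier sketch. The cache interface, refresh cadence, and scoring function below are assumptions for illustration, not the paper's storage system: summaries are recomputed off the serving path when the history grows, and online scoring touches only the fixed-length cached tokens.

```python
import torch

# Hypothetical in-memory stand-in for the storage system; an assumption
# for illustration only.
summary_cache: dict[str, torch.Tensor] = {}

@torch.no_grad()
def refresh_summary(user_id: str, history: torch.Tensor,
                    summarizer: HistorySummarizer) -> None:
    """Offline/near-line path (sketch): recompute summary tokens when the
    history grows. The cost is paid once per refresh, not per candidate."""
    summary_cache[user_id] = summarizer(history.unsqueeze(0)).squeeze(0)

def score_candidates(user_id: str, candidates: torch.Tensor,
                     target_attn: SummaryTargetAttention) -> torch.Tensor:
    """Online path (sketch): serving cost scales with the fixed summary
    length and the number of candidates, independent of history length."""
    summary = summary_cache[user_id]                    # (num_tokens, dim)
    n = candidates.size(0)                              # candidates: (n, dim)
    summary_b = summary.unsqueeze(0).expand(n, -1, -1)  # share summary per candidate
    user_vec = target_attn(candidates.unsqueeze(1), summary_b)  # (n, dim)
    return (user_vec * candidates).sum(-1)              # dot-product scores
```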