Vectorizing the Trie: Efficient Constrained Decoding for LLM-based Generative Retrieval on Accelerators

📅 2026-02-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of enforcing hard constraints—such as restricting retrieval to specific subsets of items defined by category or recency—in generative retrieval with large language models, where conventional autoregressive decoding struggles to do so efficiently. To overcome this limitation, the authors propose STATIC, a method that flattens the prefix tree (Trie) into a static compressed sparse row (CSR) matrix, thereby transforming irregular tree traversals into vectorizable sparse matrix operations amenable to parallel execution on TPU/GPU hardware. Deployed in a billion-scale user video recommendation system, STATIC enables production-grade constrained generative retrieval with only 0.033 ms of additional latency per decoding step (0.25% of total inference time), achieving a 948× speedup over a CPU-based Trie implementation and outperforming a hardware-accelerated binary search baseline by 47–1033×, while also substantially improving cold-start performance.

📝 Abstract
Generative retrieval has emerged as a powerful paradigm for LLM-based recommendation. However, industrial recommender systems often benefit from restricting the output space to a constrained subset of items based on business logic (e.g. enforcing content freshness or product category), which standard autoregressive decoding cannot natively support. Moreover, existing constrained decoding methods that make use of prefix trees (Tries) incur severe latency penalties on hardware accelerators (TPUs/GPUs). In this work, we introduce STATIC (Sparse Transition Matrix-Accelerated Trie Index for Constrained Decoding), an efficient and scalable constrained decoding technique designed specifically for high-throughput LLM-based generative retrieval on TPUs/GPUs. By flattening the prefix tree into a static Compressed Sparse Row (CSR) matrix, we transform irregular tree traversals into fully vectorized sparse matrix operations, unlocking massive efficiency gains on hardware accelerators. We deploy STATIC on a large-scale industrial video recommendation platform serving billions of users. STATIC produces significant product metric impact with minimal latency overhead (0.033 ms per step and 0.25% of inference time), achieving a 948x speedup over a CPU trie implementation and a 47-1033x speedup over a hardware-accelerated binary-search baseline. Furthermore, the runtime overhead of STATIC remains extremely low across a wide range of practical configurations. To the best of our knowledge, STATIC enables the first production-scale deployment of strictly constrained generative retrieval. In addition, evaluation on academic benchmarks demonstrates that STATIC can considerably improve cold-start performance for generative retrieval. Our code is available at https://github.com/youtube/static-constraint-decoding.
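The core idea in the abstract — flattening a prefix tree into CSR arrays so that per-step constrained decoding becomes a batched gather rather than a pointer-chasing traversal — can be illustrated with a small NumPy sketch. This is a hedged toy, not the paper's implementation: `VOCAB`, `build_csr_trie`, `allowed_token_mask`, and `advance` are hypothetical names, and the real system would run the equivalent ragged gathers as fused sparse ops on TPUs/GPUs.

```python
import numpy as np

VOCAB = 5  # toy vocabulary size, for illustration only

def build_csr_trie(sequences):
    """Build a trie over token sequences, then flatten it into CSR-style arrays.
    Children of node n occupy slots indptr[n]:indptr[n+1]; tokens[k] is the
    edge label and child[k] the destination node of slot k."""
    children = [{}]                       # node -> {token: child node}
    for seq in sequences:
        node = 0
        for tok in seq:
            if tok not in children[node]:
                children.append({})
                children[node][tok] = len(children) - 1
            node = children[node][tok]
    indptr, tokens, child = [0], [], []
    for node_children in children:
        for tok, nxt in sorted(node_children.items()):
            tokens.append(tok)
            child.append(nxt)
        indptr.append(len(tokens))
    return np.array(indptr), np.array(tokens), np.array(child)

def allowed_token_mask(indptr, tokens, nodes):
    """Vectorized boolean mask of valid next tokens for a batch of trie states.
    The ragged gather below is a standard CSR idiom: no Python-level tree walk,
    just array arithmetic over the flattened child slots."""
    starts, ends = indptr[nodes], indptr[nodes + 1]
    lens = ends - starts
    rows = np.repeat(np.arange(len(nodes)), lens)
    flat = (np.repeat(starts, lens)
            + np.arange(lens.sum())
            - np.repeat(np.cumsum(lens) - lens, lens))
    mask = np.zeros((len(nodes), VOCAB), dtype=bool)
    mask[rows, tokens[flat]] = True
    return mask

def advance(indptr, tokens, child, nodes, picked):
    """Move each batch element to the child reached by its sampled token
    (children are token-sorted, so binary search locates the slot)."""
    out = np.empty_like(nodes)
    for row, (n, tok) in enumerate(zip(nodes, picked)):
        s, e = indptr[n], indptr[n + 1]
        out[row] = child[s + np.searchsorted(tokens[s:e], tok)]
    return out
```

In a decoding loop, the mask would be applied by setting disallowed logits to `-inf` before sampling, guaranteeing that every generated sequence is a prefix of a valid item ID in the constrained subset.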
Problem

Research questions and friction points this paper is trying to address.

constrained decoding
generative retrieval
prefix tree
hardware accelerators
LLM-based recommendation
Innovation

Methods, ideas, or system contributions that make the work stand out.

constrained decoding
generative retrieval
vectorized sparse matrix
Trie optimization
hardware acceleration
Zhengyang Su
YouTube
Isay Katsman
PhD Student, Yale University
Machine Learning, Geometric Deep Learning
Yueqi Wang
YouTube
Ruining He
Google DeepMind
Lukasz Heldt
YouTube
Raghunandan Keshavan
YouTube
Shao-Chuan Wang
Google DeepMind
Xinyang Yi
Google DeepMind
Machine Learning, LLMs, Recommendations
Mingyan Gao
YouTube
Onkar Dalal
Stanford University
graphical models, optimization algorithms, machine learning, data mining
Lichan Hong
Google DeepMind
Recommendation System, LLM, Deep Learning, Social Computing, Visualization
Ed Chi
Google DeepMind
Ningren Han
YouTube