🤖 AI Summary
Large language model (LLM) agents increasingly require structured, parseable outputs such as code, function calls, and embodied-agent commands, yet conventional constrained decoding based on context-free grammars (CFGs) suffers high computational overhead: at every decoding step it must traverse the full vocabulary and manage multiple stack states.
Method: This paper introduces XGrammar, a flexible and computationally efficient engine for CFG-based constrained decoding. Its core is a lexical "divide-and-conquer" strategy: context-independent tokens, whose validity does not depend on the current grammar state, are prechecked ahead of time, while context-dependent tokens are interpreted dynamically at runtime. Grammar transformations expand the grammar context to shrink the context-dependent set, a persistent stack mechanism speeds up the remaining runtime checks, and the grammar engine is co-designed with GPU-based inference so the two run in a pipelined, overlapped fashion.
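To make the divide concrete, here is a minimal Python sketch of the precheck/runtime split. It is an illustration under simplifying assumptions, not XGrammar's API: all names are hypothetical, and tokens are classified against a sampled set of grammar states, whereas the paper performs this classification per grammar position.

```python
from typing import Callable, Iterable, Set

def precompute_token_classes(
    vocab: Iterable[str],
    sample_states: Iterable[object],
    accepts: Callable[[object, str], bool],  # hypothetical grammar check
):
    """Split the vocabulary by checking each token against sampled states."""
    always_valid: Set[str] = set()
    never_valid: Set[str] = set()
    context_dependent: Set[str] = set()
    states = list(sample_states)
    for tok in vocab:
        results = {accepts(s, tok) for s in states}
        if results == {True}:
            always_valid.add(tok)       # prechecked once, cached
        elif results == {False}:
            never_valid.add(tok)        # prechecked: always masked out
        else:
            context_dependent.add(tok)  # must be interpreted at decode time
    return always_valid, never_valid, context_dependent

def runtime_mask(state, accepts, always_valid, context_dependent) -> Set[str]:
    """Per decoding step, only the context-dependent tokens are re-checked."""
    mask = set(always_valid)
    mask.update(t for t in context_dependent if accepts(state, t))
    return mask
```

The speedup comes from the cached split: when most of the vocabulary is context-independent, the per-step loop shrinks from the full vocabulary to the small context-dependent remainder.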
Contribution/Results: Experiments demonstrate up to 100× speedup over state-of-the-art approaches. The engine achieves near-zero-overhead structured generation in end-to-end low-latency serving, significantly improving the reliability and real-time responsiveness of LLM agents on complex, structured tasks.
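The near-zero overhead comes from the pipelined co-design: grammar masking runs on the CPU while the GPU executes the forward pass. Below is a minimal sketch of the idea, assuming PyTorch, a model that returns per-position logits, and a hypothetical `grammar.valid_token_mask()`; it is not XGrammar's actual interface.

```python
import torch

def decode_step(model, input_ids, grammar):
    # CUDA kernel launches are asynchronous: this queues the forward pass
    # and returns control to the CPU almost immediately.
    logits = model(input_ids)[:, -1, :]           # [batch, vocab]
    # While the GPU is busy, the grammar engine runs on the CPU and
    # produces a boolean mask of currently valid tokens.
    allowed = grammar.valid_token_mask()          # hypothetical; len == vocab
    mask = torch.as_tensor(allowed, dtype=torch.bool, device=logits.device)
    # Invalid tokens are forced to -inf; synchronization only happens when
    # the sampled token is finally read back on the CPU.
    logits = logits.masked_fill(~mask, float("-inf"))
    return torch.argmax(logits, dim=-1)
```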
📝 Abstract
The applications of LLM agents are becoming increasingly complex and diverse, creating a strong demand for structured outputs that can be parsed into code, structured function calls, and embodied agent commands, and hence for structured generation in LLM inference. Context-free grammars (CFGs) are a flexible way to enable structured generation via constrained decoding. However, executing a context-free grammar requires traversing multiple stack states over all tokens in the vocabulary at runtime, bringing non-negligible overhead to structured generation. In this paper, we propose XGrammar, a flexible and efficient structured generation engine for large language models. XGrammar accelerates context-free grammar execution by dividing the vocabulary into context-independent tokens, which can be prechecked, and context-dependent tokens, which need to be interpreted at runtime. We further build transformations that expand the grammar context and reduce the number of context-dependent tokens. Additionally, we build an efficient persistent stack to accelerate context-dependent token checks. Finally, we co-design the grammar engine with the LLM inference engine to overlap grammar computation with GPU execution. Evaluation results show that XGrammar can achieve up to a 100x speedup over existing solutions. Combined with an LLM inference engine, it enables near-zero-overhead structured generation in end-to-end low-latency LLM serving.
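The abstract's "efficient persistent stack" points at a structure-sharing design: stack versions are immutable, so pushes and pops create new versions in O(1) and many candidate token continuations can branch from one shared snapshot. A minimal sketch of that idea, with hypothetical names and no claim to match XGrammar's internals:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Node:
    """One stack cell; `below` points to the shared rest of the stack."""
    symbol: str
    below: Optional["Node"]

def push(top: Optional[Node], symbol: str) -> Node:
    return Node(symbol, top)   # old version stays valid (tail is shared)

def pop(top: Node) -> Optional[Node]:
    return top.below           # O(1): no mutation, no copying

# Two candidate tokens can be checked by branching from the same snapshot:
base = push(push(None, "S"), "expr")
branch_a = push(base, "(")     # speculative push for an opening token
branch_b = pop(base)           # speculative pop for a reducing token
# `base` is untouched, so discarding a failed check costs nothing.
```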