🤖 AI Summary
This paper identifies a novel security blind spot in large language models (LLMs) regarding structured output generation—specifically, under JSON Schema constraints—where syntactic constraints themselves can be weaponized as an attack surface. Method: The authors introduce the “control-plane attack” paradigm and systematically formulate the Constrained Decoding Attack (CDA), embedding malicious intent within schema-level grammatical rules to evade mainstream safety mechanisms. Their approach integrates grammar-guided constrained decoding, a Chain Enum technique for adversarial schema construction, and a multi-model, cross-architecture evaluation framework. Contribution/Results: Experiments demonstrate that CDA achieves an average single-query success rate of 96.2% across five major safety benchmarks on models including GPT-4o and Gemini-2.0-flash. Crucially, this attack operates orthogonally to conventional prompt-level jailbreaking, exposing critical vulnerabilities in LLMs’ structured interface security and establishing a new research direction for constraint-aware adversarial robustness.
📝 Abstract
Content Warning: This paper may contain unsafe or harmful content generated by LLMs that may be offensive to readers. Large Language Models (LLMs) are extensively used as tooling platforms through structured output APIs, which enforce syntactic compliance so that robust integration with existing software, such as agent systems, can be achieved. However, the very feature that enables grammar-guided structured output also presents significant security vulnerabilities. In this work, we reveal a critical control-plane attack surface orthogonal to traditional data-plane vulnerabilities. We introduce the Constrained Decoding Attack (CDA), a novel jailbreak class that weaponizes structured output constraints to bypass safety mechanisms. Unlike prior attacks focused on input prompts, CDA operates by embedding malicious intent in schema-level grammar rules (control-plane) while maintaining benign surface prompts (data-plane). We instantiate this with a proof-of-concept Chain Enum Attack, which achieves a 96.2% attack success rate with a single query across proprietary and open-weight LLMs, including GPT-4o and Gemini-2.0-flash, on five safety benchmarks. Our findings identify a critical security blind spot in current LLM architectures and urge a paradigm shift in LLM safety to address control-plane vulnerabilities, as current mechanisms focused solely on data-plane threats leave critical systems exposed.
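To illustrate the control-plane mechanism the abstract describes, the following is a minimal toy sketch (not the authors' actual Chain Enum Attack) of how single-element `enum` fields in a JSON Schema can fully determine a grammar-guided decoder's output, regardless of the benign surface prompt. The schema fields, placeholder strings, and the trivial `constrained_decode` helper are all hypothetical; any harmful payload is replaced with benign placeholders.

```python
import json

# Hypothetical adversarial schema: the attacker's intent lives entirely in the
# schema-level grammar rules (control plane). Each property's single-element
# "enum" leaves the decoder no choice about what string to emit, so the
# prompt (data plane) can remain completely benign.
adversarial_schema = {
    "type": "object",
    "properties": {
        "step_1": {"enum": ["<attacker-chosen string 1>"]},
        "step_2": {"enum": ["<attacker-chosen string 2>"]},
    },
    "required": ["step_1", "step_2"],
}

def constrained_decode(schema):
    """Toy stand-in for a grammar-guided decoder: with single-element enums,
    the only schema-valid output is dictated by the schema itself, not by the
    model's (possibly safety-filtered) token preferences."""
    return {name: spec["enum"][0] for name, spec in schema["properties"].items()}

output = constrained_decode(adversarial_schema)
print(json.dumps(output, indent=2))
```

The point of the sketch is that the "attack" requires no jailbreak phrasing in the prompt at all: the constrained-decoding machinery, which exists to guarantee syntax compliance, is what carries the payload.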