CRANE: Reasoning with constrained LLM generation

📅 2025-02-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the fundamental trade-off between syntactic/semantic correctness and complex reasoning capability in constrained large language model (LLM) generation. We propose a theoretically grounded, reasoning-augmented constrained decoding paradigm. First, we formally characterize why conventional constraints impair reasoning, showing from a formal-grammar perspective that imposing restrictive constraints shrinks the model's latent reasoning space. We then prove that augmenting the output grammar with additional rules preserves strict syntactic and semantic correctness while fully retaining the model's inherent reasoning capacity. Methodologically, we design a grammar-augmented constrained decoding algorithm that integrates grammar-guided generation with chain-of-thought (CoT) reasoning. On symbolic reasoning benchmarks, including GSM-symbolic and FOLIO, our approach achieves up to a 10-percentage-point accuracy improvement over state-of-the-art constrained decoding methods and unconstrained baselines, jointly optimizing correctness and reasoning performance.

📝 Abstract
Code generation, symbolic math reasoning, and other tasks require LLMs to produce outputs that are both syntactically and semantically correct. Constrained LLM generation is a promising direction to enforce adherence to a formal grammar, but prior works have empirically observed that strict enforcement of formal constraints often diminishes the reasoning capabilities of LLMs. In this work, we first provide a theoretical explanation for why constraining LLM outputs to very restrictive grammars that only allow syntactically valid final answers reduces the reasoning capabilities of the model. Second, we demonstrate that by augmenting the output grammar with carefully designed additional rules, it is always possible to preserve the reasoning capabilities of the LLM while ensuring syntactic and semantic correctness in its outputs. Building on these theoretical insights, we propose a reasoning-augmented constrained decoding algorithm, CRANE, which effectively balances the correctness of constrained generation with the flexibility of unconstrained generation. Experiments on multiple open-source LLMs and benchmarks show that CRANE significantly outperforms both state-of-the-art constrained decoding strategies and standard unconstrained decoding, achieving up to a 10-percentage-point accuracy improvement over baselines on the challenging symbolic reasoning benchmarks GSM-symbolic and FOLIO.
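The alternation the abstract describes, free-form reasoning with constraints enforced only inside the final-answer span, can be sketched minimally as follows. This is an illustrative assumption, not the paper's actual implementation: the `<<`/`>>` delimiters, the digits-only answer grammar, and the `crane_decode` helper are all hypothetical names chosen for the example.

```python
# Hypothetical sketch of CRANE-style alternating decoding (not the authors' code).
# The model reasons freely (unconstrained) between delimiters; once the opening
# answer delimiter "<<" appears, token masking enforces a tiny "digits only"
# grammar until the closing ">>".

def crane_decode(next_tokens, allowed_in_answer, start="<<", end=">>"):
    """Alternate between unconstrained and grammar-constrained decoding.

    next_tokens: iterable of (token, fallback) pairs; `token` is the model's
    top choice, `fallback` stands in for its best grammar-valid alternative
    (i.e., the argmax over the masked logits).
    """
    out, constrained = [], False
    for token, fallback in next_tokens:
        if token == start:
            constrained = True   # enter the constrained answer span
        elif token == end:
            constrained = False  # answer finished; reason freely again
        elif constrained and token not in allowed_in_answer:
            token = fallback     # invalid token masked out; take a valid one
        out.append(token)
    return "".join(out)

# Toy run: the "model" tries to emit a word inside the answer span, and
# constrained mode substitutes the grammar-valid fallback.
stream = [("2+2=", "2+2="), ("<<", "<<"), ("four", "4"), (">>", ">>")]
print(crane_decode(stream, allowed_in_answer=set("0123456789")))
# -> 2+2=<<4>>
```

The key design point, per the paper's theory, is that the reasoning prefix stays unconstrained, so the grammar restricts only the answer span rather than the model's entire output.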
Problem

Research questions and friction points this paper is trying to address.

Enhance LLM output correctness with constraints
Balance reasoning capabilities and formal grammar adherence
Improve accuracy in symbolic and code generation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constrained LLM generation
Reasoning-augmented decoding algorithm
Ensuring syntactic and semantic correctness