Decoupling Task-Solving and Output Formatting in LLM Generation

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) often exhibit task deviation and format violations when executing complex instructions that simultaneously demand logical reasoning and strict syntactic adherence, owing to inherent conflicts between task objectives and formatting constraints. To address this, we propose Deco-G, a framework that decouples task solving from format compliance: the LLM focuses exclusively on reasoning, while a tractable probabilistic model, built on an enhanced trie structure and a pruned hidden Markov model, explicitly encodes and enforces output syntax. Instruction-aware knowledge distillation and decoding-time probability fusion enable efficient coordination between the two components. Evaluated on mathematical reasoning, LLM-as-a-judge, and event argument extraction, Deco-G achieves 1.0–6.0% relative performance gains with guaranteed (100%) format compliance, substantially mitigating the decline in instruction-following fidelity caused by interference between the two kinds of instructions.

📝 Abstract
Large language models (LLMs) are increasingly adept at following instructions containing task descriptions to solve complex problems, such as mathematical reasoning and automatic evaluation (LLM-as-a-Judge). However, as prompts grow more complex, models often struggle to adhere to all instructions. This difficulty is especially common when instructive prompts intertwine reasoning directives -- specifying what the model should solve -- with rigid formatting requirements that dictate how the solution must be presented. This entanglement creates competing goals for the model, suggesting that more explicit separation of these two aspects could lead to improved performance. To this end, we introduce Deco-G, a decoding framework that explicitly decouples format adherence from task solving. Deco-G handles format compliance with a separate tractable probabilistic model (TPM), while prompting the LLM with only task instructions. At each decoding step, Deco-G combines next-token probabilities from the LLM with the TPM-calculated format-compliance likelihood to form the output probability. To make this approach both practical and scalable for modern instruction-tuned LLMs, we introduce three key innovations: instruction-aware distillation, a flexible trie-building algorithm, and HMM state pruning for computational efficiency. We demonstrate the effectiveness of Deco-G across a wide range of tasks with diverse format requirements, including mathematical reasoning, LLM-as-a-judge, and event argument extraction. Overall, our approach yields a 1.0% to 6.0% relative gain over regular prompting practice, with guaranteed format compliance.
Problem

Research questions and friction points this paper is trying to address.

Decoupling task-solving from formatting requirements in LLM generation
Addressing performance issues from entangled reasoning and formatting instructions
Ensuring guaranteed format compliance while maintaining task-solving accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupling task-solving and formatting with separate models
Using tractable probabilistic model for format compliance
Combining LLM probabilities with format likelihood per step
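The per-step combination described above can be illustrated with a minimal sketch. This is not the paper's implementation: a toy hand-built trie of allowed outputs stands in for the TPM (yielding a hard 0/1 compliance mask rather than soft likelihoods), and a fixed distribution stands in for the LLM. All names (`VOCAB`, `TRIE`, `toy_lm`, etc.) are illustrative assumptions.

```python
# Sketch of decoding-time probability fusion (illustrative only):
# the "LLM" next-token distribution is multiplied by a 0/1
# format-compliance mask derived from a trie of allowed outputs,
# then renormalized before greedy selection. A real TPM would
# assign graded likelihoods over all format-compliant continuations.

# Toy vocabulary and a trie encoding the template "Answer: <digit> stop".
VOCAB = ["Answer:", "Reason:", "1", "2", "stop"]
TRIE = {"Answer:": {"1": {"stop": {}}, "2": {"stop": {}}}}

def compliance_mask(trie_node):
    """1.0 for tokens the format allows next, 0.0 otherwise."""
    return [1.0 if tok in trie_node else 0.0 for tok in VOCAB]

def fuse(lm_probs, mask):
    """Multiply LM probabilities by format likelihood and renormalize."""
    joint = [p * m for p, m in zip(lm_probs, mask)]
    z = sum(joint)
    return [p / z for p in joint]

def decode(lm_step):
    """Greedy decoding under the fused distribution."""
    node, out = TRIE, []
    while node:  # stop when the trie path is exhausted
        probs = fuse(lm_step(out), compliance_mask(node))
        tok = VOCAB[max(range(len(VOCAB)), key=probs.__getitem__)]
        out.append(tok)
        node = node[tok]
    return out

# Stand-in "LLM" that prefers a free-form token ("Reason:"); the
# fusion forces its probability mass onto compliant continuations.
def toy_lm(prefix):
    return [0.2, 0.4, 0.25, 0.1, 0.05]

print(decode(toy_lm))  # ['Answer:', '1', 'stop']
```

The key design point the sketch preserves is that the LLM never sees the formatting instructions; the format model alone zeroes out non-compliant tokens, which is what makes 100% format compliance a structural guarantee rather than a behavior the LLM must learn.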
Haikang Deng
University of California, Los Angeles
Po-Nien Kung
UCLA CS Ph.D. Student
Nanyun Peng
University of California, Los Angeles