🤖 AI Summary
This work addresses the challenge that large language models often produce content that is topically relevant yet violates formal constraints, resulting in procedural errors. To overcome this, the authors propose a multi-agent workflow that, for the first time, decouples the primary task description from its fine-grained constraints and iteratively refines prompts through an evaluation-driven collaborative mechanism. By combining automated scoring feedback, prompt rewriting, and multi-agent coordination, the approach substantially improves adherence to formal constraints in model outputs. Experiments on Llama 3.1 8B and Mixtral-8x7B demonstrate significant gains, validating the effectiveness of constraint decoupling and evaluation-guided refinement for instruction following.
📝 Abstract
Large Language Models (LLMs) often generate substantively relevant content yet fail to adhere to formal constraints, producing outputs that are conceptually correct but procedurally flawed. Traditional prompt refinement approaches focus on rephrasing the description of the primary task an LLM must perform, neglecting the granular constraints that serve as acceptance criteria for its response. We propose a novel multi-agent workflow that decouples optimization of the primary task description from that of its constraints, using quantitative scores as feedback to iteratively rewrite and improve them. Our evaluation demonstrates that this method produces revised prompts that yield significantly higher compliance scores from models such as Llama 3.1 8B and Mixtral-8x7B.
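The refinement loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `score_constraints`, `rewrite_constraints`, and the `generate` callback are hypothetical stand-ins for the evaluator agent, the rewriter agent, and the LLM, and the toy scoring simply checks for substring presence.

```python
def score_constraints(response: str, constraints: list[str]) -> float:
    """Toy evaluator: fraction of constraints the response satisfies.
    (Stand-in for the paper's quantitative scoring agent.)"""
    met = sum(1 for c in constraints if c in response)
    return met / len(constraints)


def rewrite_constraints(constraints: list[str]) -> list[str]:
    """Toy rewriter: restate each constraint more forcefully.
    (Stand-in for the paper's prompt-rewriting agent.)"""
    return [f"You MUST satisfy: {c}" for c in constraints]


def refine_prompt(task: str, constraints: list[str], generate,
                  threshold: float = 0.9, max_iters: int = 3):
    """Iteratively refine only the constraint block; the primary
    task description stays fixed (the decoupling idea)."""
    prompt, score = task, 0.0
    for _ in range(max_iters):
        prompt = task + "\n" + "\n".join(constraints)
        response = generate(prompt)
        score = score_constraints(response, constraints)
        if score >= threshold:
            break
        constraints = rewrite_constraints(constraints)
    return prompt, score
```

In the actual workflow the scoring and rewriting would each be performed by an LLM agent; the loop structure, with the task description held constant while constraints are rewritten against score feedback, is the point being illustrated.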