🤖 AI Summary
To address the reliability deficits of large language models (LLMs) on complex tasks—stemming from heuristic or manual task decomposition—this paper proposes ACONIC, the first systematic task decomposition framework grounded in constraint modeling and formal complexity analysis. Methodologically, ACONIC introduces constraint-induced complexity analysis: it formalizes tasks as constraint satisfaction problems and leverages computational complexity metrics to automatically determine optimal decomposition granularity and branching paths. Crucially, it eliminates reliance on agent orchestration or human priors, thereby enhancing decomposition stability and interpretability. Evaluated on SATBench (combinatorial optimization) and Spider (text-to-SQL), ACONIC improves LLM accuracy by 10–40 percentage points, demonstrating both effectiveness in complex reasoning and strong generalization across diverse domains.
📝 Abstract
Large Language Models (LLMs) suffer from reliability issues on complex tasks, as existing decomposition methods are heuristic and rely on agent or manual decomposition. This work introduces a novel, systematic decomposition framework that we call Analysis of CONstraint-Induced Complexity (ACONIC), which models the task as a constraint problem and leveraging formal complexity measures to guide decomposition. On combinatorial (SATBench) and LLM database querying tasks (Spider), we find that by decomposing the tasks following the measure of complexity, agent can perform considerably better (10-40 percentage point).