🤖 AI Summary
Existing agent frameworks and inference-time algorithms face two key bottlenecks in complex planning tasks: unverified generated plans and high variability in instance complexity within a single task. This paper proposes PlanGEN, a model-agnostic and easily scalable multi-agent framework built on three agents: constraint, verification, and selection. Constraint-guided iterative verification strengthens inference-time algorithms (Best-of-N, Tree-of-Thought, and REBASE), while the selection agent adapts the choice of algorithm to the complexity of each instance. PlanGEN achieves state-of-the-art performance across four challenging benchmarks: NATURAL PLAN (~8%↑), OlympiadBench (~4%↑), DocFinQA (~7%↑), and GPQA (~1%↑). Empirical results demonstrate substantial improvements in plan verifiability, adaptability to instance complexity, and generalization across diverse complex reasoning tasks.
📝 Abstract
Recent agent frameworks and inference-time algorithms often struggle with complex planning problems due to two limitations: difficulty verifying generated plans or reasoning, and the varying complexity of instances within a single task. Many existing methods either perform task-level verification without considering constraints or apply inference-time algorithms without adapting to instance-level complexity. To address these limitations, we propose PlanGEN, a model-agnostic and easily scalable agent framework with three key components: constraint, verification, and selection agents. Specifically, our approach proposes constraint-guided iterative verification to enhance the performance of inference-time algorithms (Best-of-N, Tree-of-Thought, and REBASE). In the PlanGEN framework, the selection agent optimizes the choice of algorithm based on instance complexity, ensuring better adaptability to complex planning problems. Experimental results demonstrate significant improvements over the strongest baseline across multiple benchmarks, achieving state-of-the-art results on NATURAL PLAN (~8%↑), OlympiadBench (~4%↑), DocFinQA (~7%↑), and GPQA (~1%↑). Our key finding is that constraint-guided iterative verification improves inference-time algorithms, and that adaptive selection further boosts performance on complex planning and reasoning problems.
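To make the tri-agent flow concrete, here is a minimal sketch of how constraint extraction, verification scoring, and complexity-based algorithm selection could fit together. All function bodies, thresholds, and the toy constraint are illustrative assumptions, not the paper's actual prompts, models, or scoring rules.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Plan:
    steps: List[str]
    score: float = 0.0

def constraint_agent(task: str) -> List[Callable[[Plan], bool]]:
    # Extract instance-specific constraints from the task description.
    # Toy example: every valid plan must contain at least two steps.
    return [lambda plan: len(plan.steps) >= 2]

def verification_agent(plan: Plan, constraints: List[Callable[[Plan], bool]]) -> float:
    # Score a candidate plan by the fraction of constraints it satisfies;
    # this score drives the iterative refine-and-reverify loop.
    satisfied = sum(c(plan) for c in constraints)
    return satisfied / max(len(constraints), 1)

def selection_agent(complexity: float) -> str:
    # Map estimated instance complexity to an inference-time algorithm
    # (thresholds here are hypothetical, not from the paper).
    if complexity < 0.3:
        return "best_of_n"
    if complexity < 0.7:
        return "rebase"
    return "tree_of_thought"

def best_of_n(candidates: List[Plan],
              constraints: List[Callable[[Plan], bool]],
              n: int = 3) -> Plan:
    # Best-of-N: verify up to n candidate plans, keep the top scorer.
    pool = candidates[:n]
    for plan in pool:
        plan.score = verification_agent(plan, constraints)
    return max(pool, key=lambda p: p.score)

task = "schedule three meetings without overlap"
constraints = constraint_agent(task)
candidates = [Plan(["meet A"]), Plan(["meet A", "meet B", "meet C"])]
algorithm = selection_agent(complexity=0.2)   # low complexity -> "best_of_n"
best = best_of_n(candidates, constraints)
print(algorithm, best.score)                  # prints: best_of_n 1.0
```

In this toy run, the single-step plan fails the constraint while the three-step plan satisfies it, so verification selects the latter; in the real framework the verification signal would instead come from an LLM judging plans against extracted natural-language constraints.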