DebFlow: Automating Agent Creation via Agent Debate

📅 2025-03-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address weak reasoning capabilities and high computational/resource overhead in LLM-driven workflow generation, this paper proposes a collaborative optimization framework integrating multi-agent debate and experience-based reflection. It innovatively introduces a structured debate mechanism into workflow generation, establishing a dual-path optimization paradigm where debate serves as the primary driver and reflection as the auxiliary component. The framework unifies reflexive learning (Reflexion), workflow graph optimization, and end-to-end benchmark training and evaluation. Evaluated on six benchmarks—including HotpotQA and MATH—it achieves an average accuracy improvement of 3% and reduces training resource consumption by 37%. Ablation studies demonstrate that the debate module contributes twice the performance gain of the reflection module, confirming its central role in the framework.
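The dual-path loop described above can be sketched in a few lines: debate rounds drive the workflow revisions, and a reflection pass folds past failures back in afterward. This is a minimal illustrative sketch, not the paper's actual implementation; the functions `propose`, `critique`, and `reflect` are hypothetical stand-ins for LLM calls.

```python
def propose(agent, workflow, transcript):
    # Stand-in for an LLM call: each agent suggests a revised workflow.
    return workflow + [f"{agent}-step{len(transcript)}"]

def critique(agent, proposal):
    # Stand-in for an LLM judge scoring a proposal (higher is better).
    return len(proposal)

def reflect(workflow, failures):
    # Auxiliary path: fold lessons from recorded failures into the workflow.
    return workflow + [f"fix:{f}" for f in failures]

def debate_round(agents, workflow, transcript):
    # Primary path: every agent proposes a revision, peers score each
    # proposal, and the highest-scoring one wins the round.
    proposals = [propose(a, workflow, transcript) for a in agents]
    scored = [(sum(critique(a, p) for a in agents), p) for p in proposals]
    best_score, best = max(scored, key=lambda sp: sp[0])
    transcript.append(best_score)
    return best

def optimize(agents, workflow, failures, rounds=3):
    # Debate is the main driver; reflection runs once as the auxiliary step.
    transcript = []
    for _ in range(rounds):
        workflow = debate_round(agents, workflow, transcript)
    return reflect(workflow, failures)

final = optimize(["A", "B"], ["retrieve"], ["missing-citation"])
```

In this toy run, three debate rounds each append one winning step and the reflection pass appends one fix, mirroring the paper's claim that debate contributes the larger share of the optimization.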

📝 Abstract
Large language models (LLMs) have demonstrated strong potential and impressive performance in automating the generation and optimization of workflows. However, existing approaches are marked by limited reasoning capabilities, high computational demands, and significant resource requirements. To address these issues, we propose DebFlow, a framework that employs a debate mechanism to optimize workflows and integrates Reflexion to improve them based on previous experiences. We evaluated our method across six benchmark datasets, including HotpotQA, MATH, and ALFWorld. Our approach achieved a 3% average performance improvement over the latest baselines, demonstrating its effectiveness across diverse problem domains. In particular, during training, our framework reduces resource consumption by 37% compared to state-of-the-art baselines. We also performed ablation studies: removing the Debate component resulted in a 4% performance drop across two benchmark datasets, significantly greater than the 2% drop observed when the Reflection component was removed. These findings demonstrate the central role of Debate in the framework's performance, while highlighting the auxiliary contribution of Reflexion to overall optimization.
Problem

Research questions and friction points this paper is trying to address.

Limited reasoning capabilities in automated workflow generation
High computational and resource demands of LLM-based workflows
Lack of structured debate and reflexion mechanisms for optimizing agent creation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured multi-agent debate mechanism drives workflow optimization
Reflexion integrates past experiences as an auxiliary optimization path
Reduces training resource consumption by 37% versus state-of-the-art baselines