AgentGroupChat-V2: Divide-and-Conquer Is What LLM-Based Multi-Agent System Need

📅 2025-06-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM-based multi-agent systems suffer from limited architectural scalability, poor cross-domain generalization, and unstable performance. To address these challenges, this paper proposes an efficient collaborative framework for complex task solving. Our method introduces three core innovations: (1) a fully parallel hierarchical task forest architecture enabling dynamic task decomposition and dependency-aware concurrent execution; (2) an adaptive heterogeneous LLM collaboration engine that dynamically selects and composes models based on task semantics; and (3) agent organization optimization strategies to enhance collaboration efficiency and robustness. Extensive experiments demonstrate substantial improvements over strong baselines: +5.6 points absolute gain on GSM8K (91.50%), nearly doubling performance on AIME (30.4%), 79.20% pass@1 on HumanEval, and over 11 percentage-point improvement on MATH Level 5. These results validate the framework’s superior generalization capability, scalability, and stability across diverse reasoning and coding benchmarks.
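The hierarchical task forest in innovation (1) amounts to dependency-aware concurrent execution over a task DAG. A minimal sketch of that scheduling idea follows; the function name `run_task_forest`, the wave-based thread-pool scheduling, and the `worker` signature are illustrative assumptions, not the paper's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def run_task_forest(tasks, deps, worker):
    """Execute a forest/DAG of tasks, running independent tasks concurrently.

    tasks:  {name: payload}
    deps:   {name: set of prerequisite task names}
    worker: callable(name, payload, finished_results) -> result
    """
    results = {}
    remaining = dict(deps)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # Tasks whose prerequisites have all finished form the next wave.
            ready = [t for t, d in remaining.items() if d.issubset(results)]
            if not ready:
                raise ValueError("cyclic or unsatisfiable dependencies")
            # Sibling tasks in a wave run concurrently; each sees the
            # results of everything finished so far.
            futures = {t: pool.submit(worker, t, tasks[t], dict(results))
                       for t in ready}
            for t, fut in futures.items():
                results[t] = fut.result()
            for t in ready:
                del remaining[t]
    return results
```

Each wave executes independent subtasks in parallel while dependent subtasks wait for their prerequisites, which is the concurrency pattern the summary describes.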

📝 Abstract
Large language model based multi-agent systems have demonstrated significant potential in social simulation and complex task resolution domains. However, current frameworks face critical challenges in system architecture design, cross-domain generalizability, and performance guarantees, particularly as task complexity and the number of agents increase. We introduce AgentGroupChat-V2, a novel framework addressing these challenges through three core innovations: (1) a divide-and-conquer fully parallel architecture that decomposes user queries into hierarchical task forest structures, enabling dependency management and distributed concurrent processing; (2) an adaptive collaboration engine that dynamically selects heterogeneous LLM combinations and interaction modes based on task characteristics; (3) agent organization optimization strategies combining divide-and-conquer approaches for efficient problem decomposition. Extensive experiments demonstrate AgentGroupChat-V2's superior performance across diverse domains, achieving 91.50% accuracy on GSM8K (exceeding the best baseline by 5.6 percentage points), 30.4% accuracy on competition-level AIME (nearly doubling other methods), and 79.20% pass@1 on HumanEval. Performance advantages become increasingly pronounced with higher task difficulty, particularly on Level 5 MATH problems, where improvements exceed 11 percentage points compared to state-of-the-art baselines. These results confirm that AgentGroupChat-V2 provides a comprehensive solution for building efficient, general-purpose LLM multi-agent systems with significant advantages in complex reasoning scenarios. Code is available at https://github.com/MikeGu721/AgentGroupChat-V2.
Problem

Research questions and friction points this paper is trying to address.

Enhance multi-agent system architecture for complex tasks
Improve cross-domain generalizability in LLM-based systems
Ensure performance guarantees with increasing task complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Divide-and-conquer parallel architecture for task decomposition
Adaptive collaboration engine for dynamic LLM selection
Agent organization optimization for efficient problem solving
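The adaptive collaboration engine selects models by task characteristics; a minimal routing sketch of that idea is below. The registry contents, task categories, and the `select_model` helper are hypothetical, not the paper's actual engine:

```python
# Hypothetical model registry: map task categories to specialized models,
# with a general-purpose fallback for unrecognized categories.
MODEL_REGISTRY = {
    "math":   "strong-reasoning-model",
    "coding": "code-specialized-model",
}
DEFAULT_MODEL = "general-purpose-model"

def select_model(task_category: str) -> str:
    """Return the model registered for this task category, or the fallback."""
    return MODEL_REGISTRY.get(task_category, DEFAULT_MODEL)
```

In the paper's framing, this selection would be driven by task semantics rather than a fixed category string, but the routing structure is the same.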
Zhouhong Gu
Fudan University
Language Modeling · Automated Society · Model Editing
Xiaoxuan Zhu
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University
Yin Cai
Fudan University
Hao Shen
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University
Xingzhou Chen
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University
Qingyi Wang
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University
Jialin Li
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University
Xiaoran Shi
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University
Haoran Guo
East China Normal University
Wenxuan Huang
CUHK & ECNU
Artificial General Intelligence · MLLM · LLM · AIGC · Model Acceleration
Hongwei Feng
Fudan University
Knowledge Management · AI · Big Data
Yanghua Xiao
Fudan University
Zheyu Ye
Imperial College London
Language Models · AI Agents
Yao Hu
Zhejiang University
Machine Learning
Shaosheng Cao
Xiaohongshu, DiDi Chuxing, Ant Financial, Microsoft Research
LLMs · Multimodal LLMs · Reinforcement Learning · NLP · Graph Neural Networks