On the Importance of Task Complexity in Evaluating LLM-Based Multi-Agent Systems

📅 2025-10-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Existing studies lack a systematic characterization of task complexity, limiting the generalizability of conclusions about the relative effectiveness of large language model–based multi-agent systems (LLM-MAS) versus single-agent systems (LLM-SAS).
Method: We propose the first theoretical framework for task complexity grounded in two orthogonal dimensions—reasoning depth and capability breadth—and conduct rigorous theoretical modeling alongside multidimensional empirical evaluation across both discriminative and generative tasks.
Contribution/Results: We uncover an asymmetric impact of these dimensions on LLM-MAS performance: system advantages increase markedly with greater reasoning depth, while capability breadth exerts a comparatively modest effect. Using a multi-agent debate system as a canonical case study, we demonstrate the framework’s practical utility for LLM-MAS design, evaluation, and task alignment. This work establishes an interpretable, scalable, and principled foundation for analyzing MAS efficacy.

📝 Abstract
Large language model multi-agent systems (LLM-MAS) offer a promising paradigm for harnessing collective intelligence to achieve more advanced forms of AI behaviour. While recent studies suggest that LLM-MAS can outperform LLM single-agent systems (LLM-SAS) on certain tasks, the lack of systematic experimental designs limits the strength and generality of these conclusions. We argue that a principled understanding of task complexity, such as the degree of sequential reasoning required and the breadth of capabilities involved, is essential for assessing the effectiveness of LLM-MAS in task solving. To this end, we propose a theoretical framework characterising tasks along two dimensions: depth, representing reasoning length, and width, representing capability diversity. We theoretically examine a representative class of LLM-MAS, namely the multi-agent debate system, and empirically evaluate its performance in both discriminative and generative tasks with varying depth and width. Theoretical and empirical results show that the benefit of LLM-MAS over LLM-SAS increases with both task depth and width, and the effect is more pronounced with respect to depth. This clarifies when LLM-MAS are beneficial and provides a principled foundation for designing future LLM-MAS methods and benchmarks.
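The multi-agent debate system examined in the abstract can be sketched in a few lines: agents first answer independently, then revise after seeing peer answers, and a majority vote decides. This is a minimal illustration assuming a hypothetical `agent` callable standing in for an LLM call; it is not the paper's exact protocol.

```python
from collections import Counter
from typing import Callable, List

# An "agent" here is any callable taking (question, peer_answers) -> answer.
Agent = Callable[[str, List[str]], str]

def debate(agents: List[Agent], question: str, rounds: int = 2) -> str:
    """Run a simple multi-agent debate and return the majority answer."""
    # Round 1: each agent answers independently (no peer context).
    answers = [a(question, []) for a in agents]
    # Later rounds: each agent revises after seeing the others' answers.
    for _ in range(rounds - 1):
        answers = [
            a(question, [ans for j, ans in enumerate(answers) if j != i])
            for i, a in enumerate(agents)
        ]
    # Aggregate by majority vote over the final answers.
    return Counter(answers).most_common(1)[0][0]
```

In this toy setup, increasing task depth would correspond to questions needing longer reasoning chains, and width to questions drawing on more distinct agent capabilities.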
Problem

Research questions and friction points this paper is trying to address.

Evaluations of LLM multi-agent systems lack a systematic analysis of task complexity
Characterizing tasks by reasoning depth and capability width requires a principled framework
Determining when multi-agent systems outperform single-agent systems as tasks grow more complex
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes a theoretical framework for task complexity along two dimensions: depth and width
Evaluates a multi-agent debate system both theoretically and empirically, on discriminative and generative tasks
Shows that multi-agent benefits grow with both task depth and width, with depth having the stronger effect