🤖 AI Summary
Background: Existing studies lack a systematic characterization of task complexity, limiting the generalizability of conclusions regarding the relative effectiveness of large language model–based multi-agent systems (LLM-MAS) versus single-agent systems (LLM-SAS).
Method: We propose the first theoretical framework for task complexity grounded in two orthogonal dimensions, reasoning depth and capability breadth, and conduct rigorous theoretical modeling alongside multidimensional empirical evaluation across both discriminative and generative tasks.
Contribution/Results: We uncover an asymmetric impact of these dimensions on LLM-MAS performance: system advantages increase markedly with greater reasoning depth, while capability breadth exerts a comparatively modest effect. Using a multi-agent debate system as a canonical case study, we demonstrate the framework's practical utility for LLM-MAS design, evaluation, and task alignment. This work establishes an interpretable, scalable, and principled foundation for analyzing LLM-MAS efficacy.
📝 Abstract
Large language model multi-agent systems (LLM-MAS) offer a promising paradigm for harnessing collective intelligence to achieve more advanced forms of AI behaviour. While recent studies suggest that LLM-MAS can outperform LLM single-agent systems (LLM-SAS) on certain tasks, the lack of systematic experimental designs limits the strength and generality of these conclusions. We argue that a principled understanding of task complexity, such as the degree of sequential reasoning required and the breadth of capabilities involved, is essential for assessing the effectiveness of LLM-MAS in task solving. To this end, we propose a theoretical framework characterising tasks along two dimensions: depth, representing reasoning length, and width, representing capability diversity. We theoretically examine a representative class of LLM-MAS, namely the multi-agent debate system, and empirically evaluate its performance in both discriminative and generative tasks with varying depth and width. Theoretical and empirical results show that the benefit of LLM-MAS over LLM-SAS increases with both task depth and width, and the effect is more pronounced with respect to depth. This clarifies when LLM-MAS are beneficial and provides a principled foundation for designing future LLM-MAS methods and benchmarks.
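As a rough illustration of the multi-agent debate setting the abstract refers to, the sketch below shows one common debate pattern: agents answer independently, revise after seeing peers' answers for a fixed number of rounds, then a majority vote aggregates. The agent behaviours, round count, and voting rule here are our assumptions for illustration, not the paper's specification; a real system would replace the stand-in callables with LLM calls.

```python
# Minimal sketch of a multi-agent debate loop. The agent interface,
# the fixed round count, and majority-vote aggregation are assumptions
# made for this illustration; they are not taken from the paper.
from collections import Counter
from typing import Callable, List

# An "agent" maps (task, peer answers from the previous round) -> new answer.
Agent = Callable[[str, List[str]], str]

def debate(task: str, agents: List[Agent], rounds: int = 3) -> str:
    """Run independent first answers, then peer-conditioned revision
    rounds, and aggregate the final answers by majority vote."""
    answers = [agent(task, []) for agent in agents]  # round 1: no peers seen
    for _ in range(rounds - 1):
        # Each agent revises after seeing every peer's latest answer.
        answers = [agent(task, answers) for agent in agents]
    return Counter(answers).most_common(1)[0][0]     # majority aggregation

# Toy stand-in agents: one never changes its answer, one defers to the
# majority of its peers once it has seen them.
def stubborn(ans: str) -> Agent:
    return lambda task, peers: ans

def conformist(initial: str) -> Agent:
    def step(task: str, peers: List[str]) -> str:
        return Counter(peers).most_common(1)[0][0] if peers else initial
    return step

if __name__ == "__main__":
    agents = [stubborn("A"), stubborn("A"), conformist("B")]
    print(debate("toy task", agents))  # the conformist converges to "A"
```

In this toy run, the dissenting agent adopts the majority answer after one revision round, so the debate converges; the interesting regimes studied in the paper are the ones where task depth and width make such convergence harder for a single agent to reach alone.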