🤖 AI Summary
This work addresses the critical yet underexplored impact of architectural design on the performance of multi-agent large language model (LLM) frameworks, where a lack of standardized evaluation methodologies has hindered systematic comparison. To bridge this gap, we propose the first comprehensive architectural taxonomy for multi-agent LLM systems and introduce MAFBench, a unified benchmark that enables controlled cross-framework evaluation through standardized execution protocols. Our experiments reveal that architectural choices can lead to over 100-fold increases in latency, up to 30% degradation in planning accuracy, and a dramatic drop in collaboration success rates, from above 90% to below 30%. Based on these findings, we derive practical architectural design principles and framework selection guidelines to inform real-world deployment.
📄 Abstract
Multi-agent frameworks are widely used to accelerate the development of agent systems powered by large language models (LLMs). These frameworks impose distinct architectural structures that govern how agents interact, store information, and coordinate tasks. However, their impact on system performance remains poorly understood. This gap is critical, as architectural choices alone can induce order-of-magnitude differences in latency and throughput, as well as substantial variation in accuracy and scalability. Addressing this challenge requires (i) jointly evaluating multiple capabilities, such as orchestration overhead, memory behavior, planning, specialization, and coordination, and (ii) conducting these evaluations under controlled, framework-level conditions to isolate architectural effects. Existing benchmarks focus on individual capabilities and lack standardized framework-level evaluation. We address these limitations by (i) introducing an architectural taxonomy for systematically comparing multi-agent LLM frameworks along fundamental dimensions, and (ii) developing MAFBench, a unified evaluation suite that integrates existing benchmarks under a standardized execution pipeline. Using MAFBench, we conduct a controlled empirical study across several widely used frameworks. Our results show that framework-level design choices alone can increase latency by over 100x, reduce planning accuracy by up to 30%, and lower coordination success from above 90% to below 30%. Finally, we translate our findings into concrete architectural design principles and framework selection guidance, and outline promising directions for future research.