MASEval: Extending Multi-Agent Evaluation from Models to Systems

📅 2026-03-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical gap in current evaluation benchmarks for multi-agent systems, which predominantly focus on individual models while neglecting the substantial impact of implementation-level factors—such as system frameworks, topological structures, and orchestration logic—on overall performance. To bridge this gap, the authors propose a framework-agnostic, system-level evaluation methodology that treats the entire agent system, rather than just the underlying model, as the unit of assessment. Through controlled experiments across three benchmarks, three language models, and three widely used frameworks (AutoGen, LangGraph, and CAMEL), the study demonstrates that system implementation details can influence performance to an extent comparable to model selection itself. These findings provide empirical grounding for informed architectural design and framework selection in practical multi-agent system development.

📝 Abstract
The rapid adoption of LLM-based agentic systems has produced a rich ecosystem of frameworks (smolagents, LangGraph, AutoGen, CAMEL, LlamaIndex, among others). Yet existing benchmarks are model-centric: they fix the agentic setup and do not compare other system components. We argue that implementation decisions substantially impact performance, including choices such as topology, orchestration logic, and error handling. MASEval addresses this evaluation gap with a framework-agnostic library that treats the entire system as the unit of analysis. Through a systematic system-level comparison across 3 benchmarks, 3 models, and 3 frameworks, we find that framework choice matters as much as model choice. MASEval lets researchers explore all components of agentic systems, opening new avenues for principled system design, and lets practitioners identify the best implementation for their use case. MASEval is available under the MIT licence at https://github.com/parameterlab/MASEval.
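The 3×3×3 comparison described in the abstract can be pictured as a grid over frameworks, models, and benchmarks, where the whole (framework, model) system, not just the model, is the unit being scored. The sketch below is hypothetical: the placeholder names and the `run_system` stub are illustrative only and are not MASEval's actual API.

```python
from itertools import product

# Hypothetical sketch (not MASEval's API): a system-level evaluation
# varies the framework alongside the model, instead of fixing the
# agentic setup and swapping only the model.
frameworks = ["AutoGen", "LangGraph", "CAMEL"]   # system implementations
models = ["model-a", "model-b", "model-c"]       # placeholder model names
benchmarks = ["bench-1", "bench-2", "bench-3"]   # placeholder benchmarks

def run_system(framework: str, model: str, benchmark: str) -> float:
    """Placeholder for building an agent system in `framework`,
    backing it with `model`, and scoring it on `benchmark`."""
    return 0.0  # a real runner would return e.g. a task success rate

# Treat the full (framework, model) pairing as the unit of analysis:
results = {
    (f, m, b): run_system(f, m, b)
    for f, m, b in product(frameworks, models, benchmarks)
}
assert len(results) == 27  # 3 frameworks x 3 models x 3 benchmarks
```

Comparing scores along the framework axis (holding the model fixed) versus the model axis (holding the framework fixed) is what lets the paper claim that framework choice matters as much as model choice.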
Problem

Research questions and friction points this paper is trying to address.

multi-agent evaluation
agentic systems
system-level comparison
framework-agnostic
LLM-based systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-agent evaluation
system-level benchmarking
framework-agnostic
agentic systems
LLM-based systems