GEMMAS: Graph-based Evaluation Metrics for Multi Agent Systems

📅 2025-07-17
🤖 AI Summary
Existing evaluation methods for multi-agent systems (MAS) focus solely on output correctness, neglecting inference redundancy and computational overhead caused by inefficient communication and poor collaboration. This paper proposes GEMMAS—the first DAG-based evaluation framework that models agent interaction processes to enable fine-grained diagnostic analysis. It introduces two novel process-level metrics: Information Diversity Score (measuring semantic coverage breadth) and Unnecessary Path Ratio (quantifying redundant reasoning), thereby shifting evaluation from outcome-oriented to process-aware paradigms. By integrating semantic analysis with path redundancy detection, GEMMAS is validated across five benchmarks—including GSM8K—revealing substantial disparities in information diversity and redundancy among systems with comparable accuracy. These findings underscore the critical role of process-level assessment in enhancing MAS efficiency and interpretability.

📝 Abstract
Multi-agent systems built on language models have shown strong performance on collaborative reasoning tasks. However, existing evaluations focus only on the correctness of the final output, overlooking how inefficient communication and poor coordination contribute to redundant reasoning and higher computational costs. We introduce GEMMAS, a graph-based evaluation framework that analyzes the internal collaboration process by modeling agent interactions as a directed acyclic graph. To capture collaboration quality, we propose two process-level metrics: Information Diversity Score (IDS) to measure semantic variation in inter-agent messages, and Unnecessary Path Ratio (UPR) to quantify redundant reasoning paths. We evaluate GEMMAS across five benchmarks and highlight results on GSM8K, where systems with only a 2.1% difference in accuracy differ by 12.8% in IDS and 80% in UPR, revealing substantial variation in internal collaboration. These findings demonstrate that outcome-only metrics are insufficient for evaluating multi-agent performance and highlight the importance of process-level diagnostics in designing more interpretable and resource-efficient collaborative AI systems.
Problem

Research questions and friction points this paper is trying to address.

Evaluating multi-agent collaboration beyond final output correctness
Measuring inefficiency in communication and coordination costs
Assessing redundant reasoning paths and semantic diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph-based framework models agent interactions
Measures semantic diversity in agent messages
Quantifies the share of redundant reasoning paths
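The two process-level metrics can be sketched on a toy agent-interaction DAG. This is an illustrative assumption, not the paper's actual formulas: here IDS is approximated as the mean pairwise token-level Jaccard distance between messages (the paper uses semantic analysis, presumably embedding-based), and UPR as the fraction of edges that lie on no path to the final-answer node.

```python
from itertools import combinations

# Toy agent-interaction DAG: edge (u, v) means agent u's message feeds agent v.
# "dead_end" produces a message that never reaches the final judge.
edges = [("planner", "solver_a"), ("planner", "solver_b"),
         ("solver_a", "judge"), ("solver_b", "judge"),
         ("planner", "dead_end")]

messages = {
    "planner": "split the problem into two subgoals",
    "solver_a": "compute the first subgoal with arithmetic",
    "solver_b": "verify the second subgoal by substitution",
    "judge": "combine both results into the final answer",
    "dead_end": "split the problem into two subgoals",  # redundant echo
}

def jaccard_distance(a, b):
    """Token-level stand-in for semantic distance between two messages."""
    sa, sb = set(a.split()), set(b.split())
    return 1 - len(sa & sb) / len(sa | sb)

def information_diversity(msgs):
    """Assumed IDS: mean pairwise distance over all inter-agent messages."""
    pairs = list(combinations(msgs.values(), 2))
    return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)

def unnecessary_path_ratio(edges, sink):
    """Assumed UPR: fraction of edges on no path to the final-answer node."""
    # Walk backwards from the sink to find every node that can reach it.
    reaches = {sink}
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            if v in reaches and u not in reaches:
                reaches.add(u)
                changed = True
    useless = [(u, v) for u, v in edges if v not in reaches]
    return len(useless) / len(edges)

print(f"IDS ≈ {information_diversity(messages):.2f}")
print(f"UPR = {unnecessary_path_ratio(edges, 'judge'):.2f}")  # 1 of 5 edges is wasted -> 0.20
```

Two systems with identical final answers can still differ sharply on these numbers: dropping the `dead_end` agent leaves accuracy untouched but sends UPR to zero, which is exactly the kind of collaboration-quality gap outcome-only metrics miss.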
👥 Authors

Jisoo Lee
Indiana University
Human-AI collaboration · Cybersecurity

Raeyoung Chang
Sogang University

Dongwook Kwon
MS in Computer Engineering from Kwangwoon University
Deep Learning · Anomaly Detection

Harmanpreet Singh
LG Electronics, Toronto AI Lab

Nikhil Verma
LG Electronics, Toronto AI Lab