🤖 AI Summary
This work addresses the inherent ambiguity and multifactorial nature of fault attribution in multi-agent systems, a complexity often overlooked by existing methods that assume a single deterministic root cause. To capture this reality, the paper proposes a multi-perspective fault attribution paradigm, introduces MP-Bench, the first benchmark designed to support attribution diversity, and establishes a corresponding evaluation protocol. Experiments that combine multi-agent trajectory analysis, LLM-based assessment, and multi-view modeling reveal that current benchmarks significantly underestimate LLMs' attribution capabilities. The findings underscore the critical influence of benchmark design on evaluation outcomes and demonstrate both the necessity and effectiveness of the proposed multi-perspective approach.
📝 Abstract
Failure attribution is essential for diagnosing and improving multi-agent systems (MAS), yet existing benchmarks and methods largely assume a single deterministic root cause for each failure. In practice, MAS failures often admit multiple plausible attributions due to complex inter-agent dependencies and ambiguous execution trajectories. We revisit MAS failure attribution from this standpoint and propose multi-perspective failure attribution, a practical paradigm that explicitly accounts for attribution ambiguity. To support this setting, we introduce MP-Bench, the first benchmark designed for multi-perspective failure attribution in MAS, along with a new evaluation protocol tailored to this paradigm. Through extensive experiments, we find that prior conclusions suggesting LLMs struggle with failure attribution are largely driven by limitations in existing benchmark designs. Our results highlight the necessity of multi-perspective benchmarks and evaluation protocols for realistic and reliable MAS debugging.
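The abstract does not spell out how the multi-perspective evaluation protocol scores predictions, but a minimal sketch helps fix intuition: instead of comparing a predicted culprit against one canonical root cause, a prediction counts as correct if it falls within an annotated *set* of plausible attributions. The `Attribution` type, the `multi_perspective_accuracy` function, and the scoring rule below are illustrative assumptions, not the paper's actual protocol.

```python
# Hypothetical sketch of set-based scoring for multi-perspective failure
# attribution. All names and the exact scoring rule are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen=True makes instances hashable, so they can live in sets
class Attribution:
    """A candidate root cause: the agent blamed and the step where it erred."""
    agent: str
    step: int

def multi_perspective_accuracy(
    predictions: list[Attribution],
    plausible_sets: list[set[Attribution]],
) -> float:
    """Count a prediction as correct if it matches ANY plausible attribution,
    rather than requiring an exact match with a single annotated root cause."""
    assert len(predictions) == len(plausible_sets)
    hits = sum(pred in plausible for pred, plausible in zip(predictions, plausible_sets))
    return hits / len(predictions)

# Example: a failure with two equally plausible culprits. A single-label
# benchmark annotated only with ("planner", 3) would mark this prediction
# wrong; a multi-perspective protocol accepts it.
preds = [Attribution("coder", 5)]
gold = [{Attribution("planner", 3), Attribution("coder", 5)}]
print(multi_perspective_accuracy(preds, gold))  # 1.0
```

Under this reading, the abstract's claim follows naturally: an LLM whose attribution is plausible but differs from the single annotated label is penalized by existing benchmarks, which would make its attribution ability look worse than it is.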