🤖 AI Summary
Debugging LLM-based multi-agent systems remains challenging due to ambiguous fault attribution arising from long-horizon, branching interaction trajectories; existing log-driven approaches lack hypothesis validation and suffer from low single-step or single-agent attribution accuracy. This paper proposes DoVer, an intervention-driven automated debugging framework that actively validates fault hypotheses via targeted interventions, such as message editing or plan rewriting, and judges correctness solely by task success rather than attribution accuracy. Implemented within the Magnetic-One agent framework, the method is evaluated on benchmarks derived from GAIA and AssistantBench, and additionally on GSMPlus with the AG2 framework. Experiments show it flips 18%–28% of failed trials into successes (49% on the AG2/GSMPlus setting), improves milestone completion by up to 16%, and validates or refutes 30%–60% of failure hypotheses, marking a significant departure from conventional log-only attribution paradigms.
📝 Abstract
Large language model (LLM)-based multi-agent systems are challenging to debug because failures often arise from long, branching interaction traces. The prevailing practice is to leverage LLMs for log-based failure localization, attributing errors to a specific agent and step. However, this paradigm has two key limitations: (i) log-only debugging lacks validation, producing untested hypotheses, and (ii) single-step or single-agent attribution is often ill-posed, as we find that multiple distinct interventions can independently repair the failed task. To address the first limitation, we introduce DoVer, an intervention-driven debugging framework that augments hypothesis generation with active verification through targeted interventions (e.g., editing messages, altering plans). For the second limitation, rather than evaluating attribution accuracy, we measure whether the system resolves the failure or makes quantifiable progress toward task success, reflecting a more outcome-oriented view of debugging. Within the Magnetic-One agent framework, on datasets derived from GAIA and AssistantBench, DoVer flips 18–28% of failed trials into successes, achieves up to 16% milestone progress, and validates or refutes 30–60% of failure hypotheses. DoVer also performs effectively on a different dataset (GSMPlus) and agent framework (AG2), where it recovers 49% of failed trials. These results highlight intervention as a practical mechanism for improving reliability in agentic systems and open opportunities for more robust, scalable debugging methods for LLM-based multi-agent systems. Project website and code will be available at https://aka.ms/DoVer.
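The intervene-and-verify loop described above can be sketched in a few lines. This is a minimal illustration only: the `Hypothesis` fields, the message-editing intervention, and the replay interface are assumptions for exposition, not DoVer's actual API, and real systems would re-execute live agents rather than a pure function over a message list.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical types for illustration; DoVer's real interfaces are not
# specified in the abstract.

@dataclass
class Hypothesis:
    step: int       # index of the suspected faulty message in the trace
    fix: str        # proposed replacement content (the intervention)
    rationale: str  # why the debugger suspects this step

def rerun_with_edit(trace: List[str], h: Hypothesis,
                    run: Callable[[List[str]], bool]) -> bool:
    """Apply a message-editing intervention and replay from that step.
    Returns True iff the edited trajectory now succeeds at the task."""
    edited = trace[:h.step] + [h.fix]
    return run(edited)

def debug_by_intervention(
    trace: List[str],
    hypotheses: List[Hypothesis],
    run: Callable[[List[str]], bool],
) -> Tuple[List[Hypothesis], List[Hypothesis]]:
    """Validate each fault hypothesis by intervention: a hypothesis is
    confirmed only if its edit flips the failed run into a success."""
    validated, refuted = [], []
    for h in hypotheses:
        (validated if rerun_with_edit(trace, h, run) else refuted).append(h)
    return validated, refuted

# Toy example: the "task" succeeds only if the final message ends in "4".
failed_trace = ["plan: sum 2+2", "agent answer: 5"]

def toy_run(trace: List[str]) -> bool:
    return trace[-1].endswith("4")

hyps = [
    Hypothesis(step=1, fix="agent answer: 4", rationale="arithmetic slip"),
    Hypothesis(step=0, fix="plan: sum 2+3", rationale="wrong plan"),
]
validated, refuted = debug_by_intervention(failed_trace, hyps, toy_run)
# The first hypothesis repairs the task; the second is refuted.
```

Evaluating by task outcome, as in the loop above, sidesteps the ill-posedness of single-step attribution: any hypothesis whose intervention repairs the run counts as validated, even if several distinct edits would each work.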