DoVer: Intervention-Driven Auto Debugging for LLM Multi-Agent Systems

📅 2025-12-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Debugging LLM-based multi-agent systems remains challenging due to ambiguous fault attribution arising from long-horizon, branching interaction trajectories; existing log-driven approaches lack hypothesis validation and suffer from low single-step or single-agent attribution accuracy. This paper proposes an intervention-driven automated debugging framework that actively validates fault hypotheses via targeted interventions—such as message editing or plan rewriting—and evaluates correctness solely based on task success, enabling collaborative multi-agent repair. Implemented within frameworks like Magnetic-One, the method is evaluated on benchmarks including GAIA and AssistantBench. Experiments show it recovers 18%–28% of originally failed tasks on those benchmarks (and 49% on GSMPlus with the AG2 framework), improves milestone completion rates by up to 16%, and reveals that 30%–60% of initial fault hypotheses require correction—demonstrating a significant departure from conventional attribution paradigms.

📝 Abstract
Large language model (LLM)-based multi-agent systems are challenging to debug because failures often arise from long, branching interaction traces. The prevailing practice is to leverage LLMs for log-based failure localization, attributing errors to a specific agent and step. However, this paradigm has two key limitations: (i) log-only debugging lacks validation, producing untested hypotheses, and (ii) single-step or single-agent attribution is often ill-posed, as we find that multiple distinct interventions can independently repair the failed task. To address the first limitation, we introduce DoVer, an intervention-driven debugging framework, which augments hypothesis generation with active verification through targeted interventions (e.g., editing messages, altering plans). For the second limitation, rather than evaluating on attribution accuracy, we focus on measuring whether the system resolves the failure or makes quantifiable progress toward task success, reflecting a more outcome-oriented view of debugging. Within the Magnetic-One agent framework, on the datasets derived from GAIA and AssistantBench, DoVer flips 18-28% of failed trials into successes, achieves up to 16% milestone progress, and validates or refutes 30-60% of failure hypotheses. DoVer also performs effectively on a different dataset (GSMPlus) and agent framework (AG2), where it recovers 49% of failed trials. These results highlight intervention as a practical mechanism for improving reliability in agentic systems and open opportunities for more robust, scalable debugging methods for LLM-based multi-agent systems. Project website and code will be available at https://aka.ms/DoVer.
Problem

Research questions and friction points this paper is trying to address.

Debugging failures in LLM multi-agent systems with long interaction traces
Addressing limitations of log-only debugging by adding active verification
Measuring debugging success by task resolution rather than attribution accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces intervention-driven debugging with active verification
Focuses on outcome-oriented success rather than attribution accuracy
Uses targeted interventions like editing messages and altering plans
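The intervention-validate loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `Hypothesis` type, the trace representation, and the `rerun` callback are all hypothetical names chosen for the sketch; the core idea it shows is that a fault hypothesis is confirmed or refuted by applying its intervention and judging the re-run solely on task success.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Hypothesis:
    """A candidate fault: the suspected agent/step plus a proposed intervention
    (e.g. editing a message or rewriting a plan) that rewrites the trace."""
    agent: str
    step: int
    intervention: Callable[[List[str]], List[str]]

def debug_by_intervention(trace: List[str],
                          hypotheses: List[Hypothesis],
                          rerun: Callable[[List[str]], bool]) -> List[Hypothesis]:
    """Validate each fault hypothesis by applying its intervention to a copy
    of the interaction trace and re-running the task. A hypothesis is kept
    only if the re-run succeeds -- correctness is judged by task outcome,
    not by attribution accuracy."""
    confirmed = []
    for h in hypotheses:
        patched = h.intervention(list(trace))  # targeted edit on a trace copy
        if rerun(patched):                     # outcome-oriented check
            confirmed.append(h)
    return confirmed
```

In the paper's setting `rerun` would resume the multi-agent system from the edited point; here it is abstracted to a boolean task-success check so the validation logic stands alone.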
Ming Ma
Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
Jue Zhang
Microsoft, Peking University, IHEP, Univ. of Florida, USTC
particle physics · LLMs · AIOps
Fangkai Yang
Microsoft
Yu Kang
Microsoft
Qingwei Lin
Microsoft
Saravan Rajmohan
Microsoft
Dongmei Zhang
Microsoft Research
Software Engineering · Machine Learning · Information Visualization