DiLLS: Interactive Diagnosis of LLM-based Multi-agent Systems via Layered Summary of Agent Behaviors

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of diagnosing failures in multi-agent systems powered by large language models (LLMs), where complex agent behaviors render traditional log-based analysis inefficient for root cause identification. To this end, the authors propose DiLLS, a natural language–driven diagnostic framework that constructs structured behavioral summaries across three hierarchical levels—activities, actions, and operations—enabling multi-granular modeling and interpretable visualization of system behavior. Through user studies, DiLLS demonstrates significant improvements in developers’ efficiency and accuracy when identifying, diagnosing, and understanding faults in multi-agent LLM systems, offering a more intuitive and effective approach to system debugging compared to conventional methods.

📝 Abstract
Large language model (LLM)-based multi-agent systems have demonstrated impressive capabilities in handling complex tasks. However, the complexity of agentic behaviors makes these systems difficult to understand. When failures occur, developers often struggle to identify root causes and to determine actionable paths for improvement. Traditional methods that rely on inspecting raw log records are inefficient, given both the large volume and complexity of data. To address this challenge, we propose a framework and an interactive system, DiLLS, designed to reveal and structure the behaviors of multi-agent systems. The key idea is to organize information across three levels of query completion: activities, actions, and operations. By probing the multi-agent system through natural language, DiLLS derives and organizes information about planning and execution into a structured, multi-layered summary. Through a user study, we show that DiLLS significantly improves developers' effectiveness and efficiency in identifying, diagnosing, and understanding failures in LLM-based multi-agent systems.
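The three-level organization the abstract describes could be modeled, purely for illustration, as a nested data structure: an activity groups the actions agents take, and each action groups the low-level operations (e.g. LLM calls, tool calls) behind it. All class and field names below are hypothetical assumptions, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Operation:  # lowest level: one concrete step, e.g. an LLM or tool call
    kind: str
    detail: str

@dataclass
class Action:  # mid level: one agent step toward a goal
    agent: str
    description: str
    operations: list[Operation] = field(default_factory=list)

@dataclass
class Activity:  # top level: a phase of query completion
    name: str
    actions: list[Action] = field(default_factory=list)

def summarize(activity: Activity) -> str:
    """Render a layered, indented text summary of one activity."""
    lines = [f"Activity: {activity.name}"]
    for act in activity.actions:
        lines.append(f"  Action [{act.agent}]: {act.description}")
        for op in act.operations:
            lines.append(f"    Operation ({op.kind}): {op.detail}")
    return "\n".join(lines)

# Example trace of a two-agent run
plan = Activity("Answer user query", [
    Action("Planner", "decompose query",
           [Operation("llm_call", "draft plan")]),
    Action("Executor", "run step 1",
           [Operation("tool_call", "web search")]),
])
print(summarize(plan))
```

A summary like this supports the multi-granular inspection the paper aims for: a developer can scan activities first, then drill into the actions and operations of the suspicious one.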
Problem

Research questions and friction points this paper is trying to address.

LLM-based multi-agent systems
failure diagnosis
agent behavior complexity
debugging
system interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-agent systems
LLM-based diagnosis
layered behavior summary
interactive debugging
structured execution tracing