AgentStepper: Interactive Debugging of Software Development Agents

📅 2026-02-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of debugging large language model (LLM)-driven software development agents, whose complex behaviors and opaque intermediate processes hinder effective troubleshooting. The paper introduces the first interactive debugging approach tailored for LLM-based software engineering agents, modeling agent execution traces as structured dialogues. The method lets developers set breakpoints on and step through high-level agent actions, while dynamically editing prompts and tool invocations during execution. Requiring only 39–42 edited lines of code, the technique integrates into existing agent frameworks such as ExecutionAgent, SWE-Agent, and RepairAgent. A user study shows that the approach increases the bug identification success rate from 17% to 60% and substantially reduces user frustration.

📝 Abstract
Software development agents powered by large language models (LLMs) have shown great promise in automating tasks like environment setup, issue solving, and program repair. Unfortunately, understanding and debugging such agents remain challenging due to their complex and dynamic nature. Developers must reason about trajectories of LLM queries, tool calls, and code modifications, but current techniques reveal little of this intermediate process in a comprehensible format. The key insight of this paper is that debugging software development agents shares many similarities with conventional debugging of software programs, yet requires a higher level of abstraction, raising the focus from low-level implementation details to high-level agent actions. Drawing on this insight, we introduce AgentStepper, the first interactive debugger for LLM-based software engineering agents. AgentStepper enables developers to inspect, control, and interactively manipulate agent trajectories. It represents trajectories as structured conversations among an LLM, the agent program, and tools. It supports breakpoints, stepwise execution, and live editing of prompts and tool invocations, while capturing and displaying intermediate repository-level code changes. Our evaluation applies AgentStepper to three state-of-the-art software development agents, ExecutionAgent, SWE-Agent, and RepairAgent, showing that integrating the approach into existing agents requires only minor code changes (39–42 edited lines). Moreover, we report on a user study with twelve participants, indicating that AgentStepper improves the ability of participants to interpret trajectories (64% vs. 67% mean performance) and to identify bugs in the agent's implementation (17% vs. 60% success rate), while reducing perceived workload (e.g., frustration reduced from 5.4/7.0 to 2.4/7.0) compared to conventional tools.
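To make the abstract's core idea concrete, here is a minimal sketch of how an interactive debugger might hook into an agent loop: the trajectory is modeled as a structured conversation of steps (LLM queries, tool calls), breakpoints fire on high-level action types rather than source lines, and a callback lets the developer edit a step (e.g., a tool invocation) before it is committed. All names (`AgentDebugger`, `Step`, `on_break`) are hypothetical illustrations, not AgentStepper's actual API.

```python
from dataclasses import dataclass


@dataclass
class Step:
    """One entry in an agent trajectory, modeled as a conversation turn."""
    role: str      # "llm", "agent", or "tool" (hypothetical roles)
    content: str   # prompt text or tool invocation


class AgentDebugger:
    """Sketch of an agent-level debugger: breakpoints on action types,
    plus a live-editing hook. Not the paper's implementation."""

    def __init__(self, on_break):
        self.trajectory = []        # recorded steps, in order
        self.breakpoints = set()    # roles that pause execution
        self.on_break = on_break    # callback; may return an edited step

    def add_breakpoint(self, role):
        self.breakpoints.add(role)

    def record(self, step):
        # Pause on matching action types and allow the developer
        # to rewrite the step before the agent proceeds.
        if step.role in self.breakpoints:
            step = self.on_break(step)
        self.trajectory.append(step)
        return step


def run_agent(dbg):
    """Toy agent loop: one LLM query followed by one tool call."""
    dbg.record(Step("llm", "query: fix failing test"))
    dbg.record(Step("tool", "run_tests()"))
    return [s.content for s in dbg.trajectory]


# Usage: break on tool calls and rewrite an invocation before it runs.
edits = {"run_tests()": "run_tests(verbose=True)"}
dbg = AgentDebugger(on_break=lambda s: Step(s.role, edits.get(s.content, s.content)))
dbg.add_breakpoint("tool")
trace = run_agent(dbg)
# trace now reflects the live-edited tool call
```

The point of the sketch is the abstraction level: breakpoints attach to agent actions ("pause on every tool call"), not to lines in the agent's source code, which is the shift the paper argues debugging LLM agents requires.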
Problem

Research questions and friction points this paper is trying to address.

debugging
software development agents
large language models
agent trajectories
interactive debugging
Innovation

Methods, ideas, or system contributions that make the work stand out.

interactive debugging
software development agents
LLM-based agents
agent trajectory inspection
structured conversation