How Developers Use AI Agents: When They Work, When They Don't, and Why

📅 2025-06-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how developers collaborate with an IDE-embedded SWE agent, and the barriers that arise, when solving complex tasks in real-world open-source projects. Through field observations of 19 developers addressing 33 authentic GitHub issues (supported by screen recordings, session logs, semi-structured interviews, and qualitative coding), the work finds that participants resolved roughly half of the issues, and that incremental, iterative collaboration outperformed single-shot generation. It identifies six core human–agent collaboration challenges (e.g., lack of trust, difficulty in joint debugging) and four effective collaborative practices. Contributions include: (1) an empirical framework grounded in authentic development contexts; (2) a fine-grained characterization of dynamic collaboration processes; and (3) systematically derived, transferable design principles for human-centered SWE agents, offering both theoretical foundations and actionable guidance for future agent development.

📝 Abstract
Software Engineering Agents (SWE agents) can autonomously perform development tasks on benchmarks like SWE-bench, but still face challenges when tackling complex and ambiguous real-world tasks. Consequently, SWE agents are often designed to allow interactivity with developers, enabling collaborative problem-solving. To understand how developers collaborate with SWE agents and the communication challenges that arise in such interactions, we observed 19 developers using an in-IDE agent to resolve 33 open issues in repositories to which they had previously contributed. Participants successfully resolved about half of these issues, with participants solving issues incrementally having greater success than those using a one-shot approach. Participants who actively collaborated with the agent and iterated on its outputs were also more successful, though they faced challenges in trusting the agent's responses and collaborating on debugging and testing. These results have implications for successful developer-agent collaborations, and for the design of more effective SWE agents.
Problem

Research questions and friction points this paper is trying to address.

Understanding developer-AI agent collaboration challenges
Assessing SWE agents' effectiveness in real-world tasks
Improving trust and debugging in developer-agent interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive SWE agents for collaborative problem-solving
Incremental issue resolution boosts success rates
Effective collaborative practices for debugging and testing with agents