🤖 AI Summary
This survey systematically examines the current state and potential of large language model (LLM)-based agents in software engineering (SE). Addressing the lack of a unified analytical framework and the unclear human-agent collaboration mechanisms in prior work, it analyzes 124 studies and proposes the first two-dimensional taxonomy, "SE Tasks × Agent Capabilities", spanning the full SE lifecycle: requirements analysis, coding, testing, and maintenance. Methodologically, the survey synthesizes key techniques, including tool use, multi-agent systems, reflection mechanisms, and external knowledge retrieval, and releases an open-source literature repository, Agent4SE-Paper-List. Its contributions include: (1) clarifying the evolutionary trajectories and bottlenecks of multi-agent coordination and human-agent interaction; (2) identifying six open challenges; and (3) proposing three future research directions: scalable evaluation, domain alignment, and trustworthy collaboration.
📝 Abstract
Recent advances in Large Language Models (LLMs) have shaped a new paradigm of AI agents, i.e., LLM-based agents. Compared to standalone LLMs, LLM-based agents substantially extend the versatility and expertise of LLMs by equipping them with the capabilities to perceive and utilize external resources and tools. To date, LLM-based agents have been applied to Software Engineering (SE) with remarkable effectiveness. The synergy between multiple agents and human interaction brings further promise for tackling complex real-world SE problems. In this work, we present a comprehensive and systematic survey of LLM-based agents for SE. We collect 124 papers and categorize them from two perspectives, i.e., the SE perspective and the agent perspective. In addition, we discuss open challenges and future directions in this critical domain. The repository for this survey is available at https://github.com/FudanSELab/Agent4SE-Paper-List.