🤖 AI Summary
Current LLM-based agents are typically confined to isolated software engineering tasks—e.g., debugging or testing—and lack cross-task coordination capabilities. To address this, we propose and implement USEagent, a unified end-to-end software engineering agent. It integrates tool invocation, multi-step reasoning, repository-aware context modeling, and dynamic task scheduling to autonomously orchestrate diverse tasks—including coding, testing, debugging, patch generation, and feature enhancement—while preserving contextual consistency across task boundaries. We introduce USEbench, a hybrid benchmark unifying SWE-bench, SWT-bench, and REPOCOD, comprising 1,271 real-world, repository-level tasks. On USEbench, USEagent outperforms general-purpose agents such as OpenHands CodeActAgent. Empirical evaluation validates the efficacy of our unified architecture and reveals critical limitations of LLMs in long-horizon reasoning and context fidelity under complex development scenarios.
📝 Abstract
The growth of Large Language Model (LLM) technology has raised expectations for automated coding. However, software engineering is more than coding; it also encompasses activities such as the maintenance and evolution of a project. In this context, the concept of LLM agents—which use LLMs as reasoning engines to autonomously invoke external tools—has gained traction. But is an LLM agent the same as an AI software engineer? In this paper, we seek to answer this question by developing a Unified Software Engineering agent, or USEagent. Unlike existing work which builds specialized agents for specific software tasks such as testing, debugging, and repair, our goal is to build a unified agent which can orchestrate and handle multiple capabilities. This gives the agent the promise of handling complex scenarios in software development such as fixing an incomplete patch, adding new features, or taking over code written by others. We envision USEagent as a first draft of a future AI Software Engineer which can be a team member in software development teams involving both AI and humans. To evaluate the efficacy of USEagent, we build a Unified Software Engineering bench (USEbench) comprising myriad tasks such as coding, testing, and patching. USEbench is a judicious mixture of tasks from existing benchmarks such as SWE-bench, SWT-bench, and REPOCOD. In an evaluation on USEbench consisting of 1,271 repository-level software engineering tasks, USEagent shows improved efficacy compared to existing general agents such as OpenHands CodeActAgent. There remain gaps in the capabilities of USEagent for certain coding tasks, which provide hints on further developing the AI Software Engineer of the future.