🤖 AI Summary
This study investigates whether large language models can autonomously construct production-grade software systems from explicit specifications, focusing on their architectural reasoning in complex, long-horizon engineering tasks. To this end, we introduce SWE-AGI, the first high-complexity, specification-driven, end-to-end benchmark built in the MoonBit ecosystem, whose nascency keeps data-leakage risk low. The benchmark requires agents to adhere strictly to RFCs and authoritative standards while implementing modules such as parsers, interpreters, binary decoders, and SAT solvers under fixed API constraints. Experimental results show that GPT-5.3-Codex achieves the highest performance (86.4%), followed by Claude-Opus-4.6 (68.2%), with Kimi-2.5 leading among open-source models. Performance drops sharply as task difficulty increases, revealing that current models still struggle to reliably support production-level development, with code comprehension emerging as the primary bottleneck.
📝 Abstract
Although large language models (LLMs) have demonstrated impressive coding capabilities, their ability to autonomously build production-scale software from explicit specifications remains an open question. We introduce SWE-AGI, an open-source benchmark for evaluating end-to-end, specification-driven construction of software systems written in MoonBit. SWE-AGI tasks require LLM-based agents to implement parsers, interpreters, binary decoders, and SAT solvers strictly from authoritative standards and RFCs under a fixed API scaffold. Each task involves implementing 1,000–10,000 lines of core logic, corresponding to weeks or months of engineering effort for an experienced human developer. By leveraging the nascent MoonBit ecosystem, SWE-AGI minimizes data leakage, forcing agents to rely on long-horizon architectural reasoning rather than code retrieval. Across frontier models, gpt-5.3-codex achieves the best overall performance (solving 19/22 tasks, 86.4%), outperforming claude-opus-4.6 (15/22, 68.2%), while kimi-2.5 is the strongest open-source model. Performance degrades sharply with increasing task difficulty, particularly on hard, specification-intensive systems. Behavioral analysis further reveals that as codebases scale, code reading, rather than writing, becomes the dominant bottleneck in AI-assisted development. Overall, while specification-driven autonomous software engineering is increasingly viable, substantial challenges remain before it can reliably support production-scale development.