The Rise of AI Teammates in Software Engineering (SE) 3.0: How Autonomous Coding Agents Are Reshaping Software Engineering

📅 2025-07-20
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This study addresses the trust and utility gap arising from human-AI collaboration in Software Engineering 3.0, where autonomous AI coding agents serve as teammates. To this end, we construct and open-source AIDev—the first large-scale, real-world empirical dataset for AI-assisted development—comprising 456,000 GitHub pull requests, with fine-grained metadata including PR authors, review timelines, code changes, merge outcomes, and complexity metrics, thereby overcoming the limitations of synthetic benchmarks like SWE-bench. Using collaborative network modeling and multidimensional analysis, we find that AI-generated submissions exhibit higher velocity but lower acceptance rates, produce structurally simpler code, and trigger significant short-term productivity surges among individual developers. AIDev uniquely enables systematic evaluation of agent readiness, human-AI collaboration modeling, and AI governance research, advancing SE 3.0 from theoretical frameworks toward evidence-based practice.

📝 Abstract
The future of software engineering--SE 3.0--is unfolding with the rise of AI teammates: autonomous, goal-driven systems collaborating with human developers. Among these, autonomous coding agents are especially transformative, now actively initiating, reviewing, and evolving code at scale. This paper introduces AIDev, the first large-scale dataset capturing how such agents operate in the wild. Spanning over 456,000 pull requests by five leading agents--OpenAI Codex, Devin, GitHub Copilot, Cursor, and Claude Code--across 61,000 repositories and 47,000 developers, AIDev provides an unprecedented empirical foundation for studying autonomous teammates in software development. Unlike prior work that has largely theorized the rise of AI-native software engineering, AIDev offers structured, open data to support research in benchmarking, agent readiness, optimization, collaboration modeling, and AI governance. The dataset includes rich metadata on PRs, authorship, review timelines, code changes, and integration outcomes--enabling exploration beyond synthetic benchmarks like SWE-bench. For instance, although agents often outperform humans in speed, their PRs are accepted less frequently, revealing a trust and utility gap. Furthermore, while agents accelerate code submission--one developer submitted as many PRs in three days as they had in three years--these PRs are structurally simpler (as measured by code complexity metrics). We envision AIDev as a living resource: extensible, analyzable, and ready for the SE and AI communities. Grounding SE 3.0 in real-world evidence, AIDev enables a new generation of research into AI-native workflows and supports building the next wave of symbiotic human-AI collaboration. The dataset is publicly available at https://github.com/SAILResearch/AI_Teammates_in_SE3.

Keywords: AI Agent, Agentic AI, Coding Agent, Agentic Coding, Software Engineering Agent

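The trust and utility gap the abstract describes is, at its core, a comparison of PR acceptance rates by author type. A minimal sketch of that computation with pandas is shown below on a toy stand-in table; the column names (`author_type`, `merged`) are illustrative assumptions, not the actual AIDev schema.

```python
import pandas as pd

# Toy stand-in for a slice of a pull-request table like AIDev's.
# Real analysis would load the published dataset instead.
prs = pd.DataFrame({
    "author_type": ["agent", "agent", "agent", "human", "human", "human"],
    "merged": [True, False, False, True, True, False],
})

# Acceptance rate = fraction of PRs that were merged, per author type.
acceptance = prs.groupby("author_type")["merged"].mean()
print(acceptance)
```

With real data, the same one-liner over hundreds of thousands of PRs would surface the gap the paper reports: agent-authored PRs merging less often than human-authored ones despite faster submission.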
Problem

Research questions and friction points this paper is trying to address.

How autonomous coding agents transform software engineering workflows
Measuring trust and utility gaps in AI-human coding collaboration
Providing empirical data to benchmark AI agent performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces AIDev dataset for autonomous coding agents
Analyzes 456,000 pull requests across 61,000 repositories
Measures agent performance and human-AI collaboration gaps