🤖 AI Summary
This study presents the first systematic investigation of how repository-level configuration files, specifically AGENTS.md, affect the efficiency of AI coding agents handling GitHub pull requests. Through an empirical evaluation across 10 repositories and 124 pull requests, the research compares agent performance with and without AGENTS.md, measuring execution time and output token consumption for mainstream agents such as Codex and Claude Code. The results show that incorporating AGENTS.md reduces median execution time by 28.64% and output token usage by 16.58% while keeping task completion rates stable. These findings provide the first empirical evidence and practical guidance for designing repository-level instructions that improve the performance of AI coding agents in real-world software development workflows.
📝 Abstract
AI coding agents such as Codex and Claude Code are increasingly used to contribute autonomously to software repositories. However, little is known about how repository-level configuration artifacts affect the operational efficiency of these agents. In this paper, we study the impact of AGENTS.md files on the runtime and token consumption of AI coding agents operating on GitHub pull requests. We analyze 10 repositories and 124 pull requests, executing agents under two conditions: with and without an AGENTS.md file. We measure wall-clock execution time and token usage during agent execution. Our results show that the presence of AGENTS.md is associated with a lower median runtime (a 28.64% reduction) and lower output token consumption (a 16.58% reduction), while task completion behavior remains comparable. Based on these results, we discuss immediate implications for the configuration and deployment of AI coding agents in practice, and outline a broader research agenda on the role of repository-level instructions in shaping the behavior, efficiency, and integration of AI coding agents in software development workflows.
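The headline numbers are relative differences of per-condition medians. The comparison can be sketched as follows; note that the sample values and the `median_reduction` helper below are hypothetical illustrations, not data or code from the study:

```python
from statistics import median

# Hypothetical per-PR wall-clock runtimes in seconds for the two conditions.
# These values are made up for illustration only.
runtime_without_agents_md = [310, 455, 290, 520, 380, 610, 270, 495]
runtime_with_agents_md = [205, 330, 210, 360, 280, 430, 200, 340]

def median_reduction(baseline, treatment):
    """Percentage drop of the treatment median relative to the baseline median."""
    b, t = median(baseline), median(treatment)
    return 100.0 * (b - t) / b

print(f"median runtime reduction: "
      f"{median_reduction(runtime_without_agents_md, runtime_with_agents_md):.2f}%")
```

The same computation applies to output token counts; comparing medians rather than means keeps the estimate robust to the occasional PR where an agent runs unusually long.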