🤖 AI Summary
The practical adoption and impact of generative AI (GenAI) in open-source development remain poorly understood. Method: We introduce the concept of “self-admitted GenAI usage” and identify 1,292 instances across 156 repositories, drawn from a curated sample of more than 250,000 GitHub repositories. We construct a fine-grained taxonomy covering 32 tasks, 10 content types, and 11 purposes, and combine qualitative coding, policy analysis, a developer survey, and longitudinal code-change analysis to characterize how GenAI usage surfaces in commit messages, code comments, and project documentation. Contribution/Results: Contrary to popular narratives, GenAI adoption shows no general increase in code churn, challenging the assumption that AI undermines code stability. We also uncover project-level practice needs concerning transparency, attribution management, and quality assurance, providing empirical evidence of such requirements at scale. Our findings offer foundational insights for evidence-based governance and tooling design in AI-assisted open-source development.
📝 Abstract
The widespread adoption of generative AI (GenAI) tools such as GitHub Copilot and ChatGPT is transforming software development. Since generated source code is virtually impossible to distinguish from manually written code, the real-world usage of these tools and their impact on open-source software development remain poorly understood. In this paper, we introduce the concept of self-admitted GenAI usage, that is, developers explicitly referring to the use of GenAI tools for content creation in software artifacts. Using this concept as a lens to study how GenAI tools are integrated into open-source software projects, we analyze a curated sample of more than 250,000 GitHub repositories, identifying 1,292 such self-admissions across 156 repositories in commit messages, code comments, and project documentation. Using a mixed-methods approach, we derive a taxonomy of 32 tasks, 10 content types, and 11 purposes associated with GenAI usage, based on 284 qualitatively coded mentions. We then analyze 13 documents containing policies and usage guidelines for GenAI tools and conduct a developer survey to uncover the ethical, legal, and practical concerns behind them. Our findings reveal that developers actively manage how GenAI is used in their projects, highlighting the need for project-level transparency, attribution, and quality-control practices in the new era of AI-assisted software development. Finally, we examine the longitudinal impact of GenAI adoption on code churn in 151 repositories with self-admitted GenAI usage and find no general increase, contradicting popular narratives about GenAI's impact on software development.