🤖 AI Summary
This study addresses the unclear review mechanisms and absent usage norms governing human-AI collaboration in open-source projects. By extending the AIDev dataset and combining code ownership identification, pull request (PR) metadata analysis, and a comparison of human- versus AI-generated PRs, the work shows that over 67.5% of AI-assisted PRs are submitted by contributors without code ownership, and that approximately 80% of these are merged rapidly without explicit review, significantly faster than human-authored PRs. These findings expose a gap in current open-source ecosystems around standardized AI usage policies and rigorous review practices, and provide empirical grounding for developing trustworthy human-AI collaborative software engineering paradigms.
📝 Abstract
Large Language Models (LLMs) increasingly automate software engineering tasks. While recent studies highlight the accelerated adoption of "AI as a teammate" in Open Source Software (OSS), developer interaction patterns remain under-explored. In this work, we investigated project-level guidelines and developers' interactions with AI-assisted pull requests (PRs) by expanding the AIDev dataset to include finer-grained contributor code ownership and a comparative baseline of human-created PRs. We found that over 67.5% of AI-co-authored PRs originate from contributors without prior code ownership. Despite this, the majority of repositories lack guidelines for AI coding agent usage. Notably, we observed a distinct interaction pattern: AI-co-authored PRs are merged significantly faster with minimal feedback. Whereas non-owner developers receive the most feedback on human-created PRs, AI-co-authored PRs from non-owners receive the least, with approximately 80% merged without any explicit review. Finally, we discuss implications for developers and researchers.
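The abstract does not specify how contributor code ownership was operationalized, but the idea can be illustrated with a minimal, hypothetical file-level heuristic: a PR author "owns" a change if they previously modified at least one of the files the PR touches. The function names and toy history below are illustrative assumptions, not the paper's actual method.

```python
from collections import defaultdict

def build_ownership(commit_history):
    """Map each file path to the set of authors who previously modified it."""
    owners = defaultdict(set)
    for author, files in commit_history:
        for path in files:
            owners[path].add(author)
    return owners

def author_is_owner(pr_author, pr_files, owners):
    """Heuristic: the author has prior code ownership if they previously
    touched any file modified by the PR."""
    return any(pr_author in owners[path] for path in pr_files)

# Toy commit history: (author, files touched in that commit)
history = [
    ("alice", ["src/app.py", "src/util.py"]),
    ("bob",   ["docs/readme.md"]),
]
owners = build_ownership(history)

print(author_is_owner("alice", ["src/app.py"], owners))  # True: prior commits
print(author_is_owner("carol", ["src/app.py"], owners))  # False: no prior ownership
```

Under this heuristic, a PR from "carol" would be classified as coming from a non-owner, the population the study reports as dominating AI-co-authored contributions.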