From Developer Pairs to AI Copilots: A Comparative Study on Knowledge Transfer

📅 2025-06-05
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study investigates knowledge transfer between AI programming assistants (GitHub Copilot) and human developers, benchmarking it against traditional human–human pair programming. Method: Through a controlled experiment, an extended knowledge transfer analytical framework, and a semi-automated evaluation pipeline—integrating qualitative coding and topic modeling—we systematically quantify and compare the frequency, topical coverage, and underlying mechanisms of knowledge transfer across both collaboration paradigms. Contribution/Results: We provide the first empirical evidence that human–AI collaboration achieves knowledge transfer frequency and topical overlap comparable to human–human pairing. However, it exhibits a dual cognitive effect: (1) reduced critical scrutiny of AI suggestions—indicating trust bias—and (2) AI’s capacity to proactively surface salient yet easily overlooked code details—demonstrating cognitive augmentation. These findings constitute the first systematic, empirically grounded characterization of knowledge transfer mechanisms in AI-augmented programming.
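The paper's semi-automated pipeline combines qualitative coding with topic modeling; its actual coding scheme is not reproduced here. As a purely illustrative sketch (all category names and keywords below are hypothetical, not taken from the study), a first automated pass that assigns transcript utterances to topical categories before human review might look like:

```python
from collections import Counter

# Hypothetical topical categories with example keywords.
# Illustrative only -- the study's real coding scheme differs.
CATEGORIES = {
    "syntax": {"loop", "variable", "function", "import"},
    "design": {"refactor", "structure", "architecture", "module"},
    "testing": {"test", "assert", "coverage", "bug"},
}

def code_utterance(utterance: str) -> list:
    """Return all categories whose keywords appear in the utterance."""
    words = set(utterance.lower().split())
    return [cat for cat, kws in CATEGORIES.items() if words & kws]

def topic_frequencies(transcript: list) -> Counter:
    """Count topical-category hits across a session transcript,
    one hit per category per utterance."""
    counts = Counter()
    for utterance in transcript:
        counts.update(code_utterance(utterance))
    return counts
```

In a semi-automated setup like the one the paper describes, a human coder would then review and correct these first-pass assignments rather than trust them outright.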

📝 Abstract
Knowledge transfer is fundamental to human collaboration and is therefore common in software engineering. Pair programming is a prominent instance. With the rise of AI coding assistants, developers now not only work with human partners but also, as some claim, with AI pair programmers. Although studies confirm knowledge transfer during human pair programming, its effectiveness with AI coding assistants remains uncertain. To analyze knowledge transfer in both human-human and human-AI settings, we conducted an empirical study where developer pairs solved a programming task without AI support, while a separate group of individual developers completed the same task using the AI coding assistant GitHub Copilot. We extended an existing knowledge transfer framework and employed a semi-automated evaluation pipeline to assess differences in knowledge transfer episodes across both settings. We found a similar frequency of successful knowledge transfer episodes and overlapping topical categories across both settings. Two of our key findings are that developers tend to accept GitHub Copilot's suggestions with less scrutiny than those from human pair programming partners, but also that GitHub Copilot can subtly remind developers of important code details they might otherwise overlook.
Problem

Research questions and friction points this paper is trying to address.

Comparing knowledge transfer in human-human vs human-AI pair programming
Assessing effectiveness of GitHub Copilot in code knowledge transfer
Evaluating developer scrutiny of AI vs human programming suggestions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comparative study on human-human and human-AI knowledge transfer
Extended framework and semi-automated evaluation pipeline
Analyzed GitHub Copilot suggestion acceptance patterns
Alisa Welter
Saarland University, Saarbrücken, Germany
Niklas Schneider
Saarland University, Saarbrücken, Germany
Tobias Dick
Saarland University, Saarbrücken, Germany
Kallistos Weis
Saarland University, Saarbrücken, Germany
Christof Tinnes
Saarland University, Saarbrücken, Germany; Siemens AG, München, Germany
Marvin Wyrich
Saarland University
Software Engineering, Program Comprehension, Human Aspects, Science Communication
Sven Apel
Professor of Computer Science, Saarland University, Saarland Informatics Campus
Software Engineering, Program Comprehension, AI4SE, Software Analytics, Empirical Methods