🤖 AI Summary
This study addresses the prevailing tendency of AI programming assistants to position themselves as substitutes for human collaborators, thereby overlooking the social interaction and learning dynamics inherent in collaborative coding. The authors propose a Human-Human-AI (HHAI) collaboration paradigm that treats AI as an additional collaborator rather than a replacement. Through a within-subjects experiment comparing HHAI with traditional Human-AI (HAI) collaboration, the research examines differences in code dependency, social presence, and learning outcomes. The study introduces a novel three-party collaboration framework and integrates quantitative behavioral analysis with subjective measures; the findings reveal that HHAI significantly enhances collaborative learning quality and social presence. Participants were less likely to adopt AI-generated code directly, and under shared-responsibility conditions they demonstrated greater motivation to understand AI suggestions and a stronger sense of accountability. These results highlight that AI, when designed to augment rather than replace human collaboration, can activate shared regulatory mechanisms for learning.
📝 Abstract
As AI assistance becomes embedded in programming practice, researchers have increasingly examined how these systems help learners generate code and work more efficiently. However, such studies often position AI as a replacement for human collaboration and overlook the social and learning-oriented dynamics that emerge in collaborative programming. Our work introduces human-human-AI (HHAI) triadic programming, in which an AI agent serves as an additional collaborator rather than a substitute for a human partner. Through a within-subjects study with 20 participants, we show that triadic collaboration enhances collaborative learning and social presence compared to a dyadic human-AI (HAI) baseline. In the triadic HHAI conditions, participants relied significantly less on AI-generated code in their work. This effect was strongest in the HHAI-shared condition, where participants reported an increased sense of responsibility to understand AI suggestions before applying them. These findings demonstrate how triadic settings activate socially shared regulation of learning by making AI use visible and accountable to a human peer, suggesting that AI systems that augment rather than automate peer collaboration can better preserve the learning processes on which collaborative programming relies.