🤖 AI Summary
This study critically examines the current state of code review practice and the risks posed by its ongoing evolution. Drawing on in-depth qualitative interviews with 52 professional developers, we explore their expectations and concerns regarding AI-powered automation, process efficiency, and shifting collaboration norms. Results reveal a dual sentiment: while practitioners widely anticipate that AI-assisted review will improve efficiency, they express significant concern that over-automation may erode knowledge transfer and undermine team trust. Concurrently, the reviewer role is shifting from quality gatekeeper to collaborative learning facilitator, introducing systemic risks such as accountability ambiguity and skill atrophy. To address this, we propose the "Review Role Evolution Risk Framework," the first systematic model of long-term tensions across technical, process, and cultural dimensions. Grounded in empirical evidence, the framework informs design principles for sustainable, human-centered code review practices.
📝 Abstract
Code review has long been a core practice in collaborative software engineering. In this research, we explore how practitioners reflect on code review today and what changes they anticipate in the near future. We then discuss the potential long-term risks these changes pose for the evolution of code review and its role in collaborative software engineering.