Engagement in Code Review: Emotional, Behavioral, and Cognitive Dimensions in Peer vs. LLM Interactions

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how software engineers' affective, behavioral, and cognitive engagement differs between LLM-assisted code review and traditional human peer review. Using a two-phase qualitative approach—semi-structured interviews followed by contextualized prompt-based experiments—the authors develop an integrative engagement model that reveals a dynamic interplay between emotional self-regulation and behavioral participation. Results show that LLM assistance reduces affective load, enabling engineers to shift attention from emotional regulation to cognitive load management. When LLM feedback aligns with engineers' cognitive expectations, issue detection efficiency increases and feedback adoption rates rise. The key contribution is a systematic deconstruction of the three-dimensional engagement shift under AI mediation: affective, behavioral, and cognitive. The study suggests that LLMs can function as "supportive partners"—alleviating combined cognitive and affective load while preserving human judgment, accountability, and collaborative meaning.

📝 Abstract
Code review is a socio-technical practice, yet how software engineers engage in Large Language Model (LLM)-assisted code reviews compared to human peer-led reviews remains poorly understood. We report a two-phase qualitative study with 20 software engineers that addresses this gap. In Phase I, participants exchanged peer reviews and were interviewed about their affective responses and engagement decisions. In Phase II, we introduced a new prompt matched to engineers' preferences and probed how its characteristics shaped their reactions. We develop an integrative account linking emotional self-regulation to behavioral engagement and resolution. We identify four self-regulation strategies that engineers use to manage their emotions in response to negative feedback: reframing, dialogic regulation, avoidance, and defensiveness. Engagement proceeds through social calibration: engineers align their responses and behaviors to the relational climate and team norms. In peer-led review, trajectories to resolution vary by locus (solo/dyad/team) and an internal sense-making process. In LLM-assisted review, emotional costs and the need for self-regulation appear lower. When LLM feedback aligned with engineers' cognitive expectations, participants reported reduced processing effort and a higher tendency to adopt it. We show that LLM-assisted review redirects engagement from emotion management to cognitive load management. We contribute an integrative model of engagement that links emotional self-regulation to behavioral engagement and resolution, showing how affective and cognitive processes influence feedback adoption in peer-led and LLM-assisted code reviews. We conclude that AI is best positioned as a supportive partner that reduces cognitive and emotional load while preserving human accountability and the social meaning of peer review and similar socio-technical activities.
Problem

Research questions and friction points this paper is trying to address.

Compares emotional and cognitive engagement in peer-led vs. LLM-assisted code reviews.
Examines how engineers self-regulate emotions and process feedback during code review.
Models engagement by linking emotional self-regulation to behavioral outcomes and resolution.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-phase qualitative study with 20 software engineers.
Integrative model linking emotional self-regulation to behavioral engagement.
Evidence that LLM assistance redirects engagement from emotion management to cognitive load management.