I'm Not Reading All of That: Understanding Software Engineers' Level of Cognitive Engagement with Agentic Coding Assistants

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current agentic coding assistants often reduce software engineers' cognitive engagement, fostering over-reliance and cognitive passivity. This study addresses the issue through a qualitative user investigation combining task observations and interviews, documenting a persistent decline in cognitive investment during human-AI collaboration. It also identifies a critical gap in existing tools: the absence of interaction mechanisms that support reflection, verification, and sensemaking. Grounded in cognitive science and human-computer interaction theory, the work proposes reconceptualizing AI assistants from mere "task executors" into "thinking tools." By integrating cognitive forcing functions to sustain deep reasoning, it articulates actionable design opportunities for enhancing engineers' cognitive engagement in AI-assisted programming.

📝 Abstract
Over-reliance on AI systems can undermine users' critical thinking and promote complacency, a risk intensified by the emergence of agentic AI systems that operate with minimal human involvement. In software engineering, agentic coding assistants are rapidly becoming embedded in everyday development workflows. Since software engineers create systems deployed across diverse and high-stakes real-world contexts, these assistants must function not merely as autonomous task performers but as Tools for Thought that actively support human reasoning and sensemaking. We conducted a formative study examining software engineers' cognitive engagement and sensemaking processes when working with an agentic coding assistant. Our findings reveal that cognitive engagement consistently declines as tasks progress, and that current agentic coding assistants' designs provide limited affordances for reflection, verification, and meaning-making. Based on these findings, we identify concrete design opportunities leveraging richer interaction modalities and cognitive-forcing mechanisms to sustain engagement and promote deeper thinking in AI-assisted programming.
Problem

Research questions and friction points this paper is trying to address.

cognitive engagement
agentic AI
software engineering
over-reliance
sensemaking
Innovation

Methods, ideas, or system contributions that make the work stand out.

agentic coding assistants
cognitive engagement
Tools for Thought
cognitive-forcing mechanisms
sensemaking
Carlos Rafael Catalan
Samsung R&D Institute Philippines
Lheane Marie Dizon
Samsung R&D Institute Philippines
Patricia Nicole Monderin
Samsung R&D Institute Philippines
Emily Kuang
York University
Human-Computer Interaction