Bug Detective and Quality Coach: Developers' Mental Models of AI-Assisted IDE Tools

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how developers’ mental models of AI-augmented IDE tools—such as those for defect detection and readability assessment—influence their trust, sense of control, and adoption intention. Addressing the problem of mental model mismatch, we conducted a qualitative study with 58 developers across six co-design workshops. We identified two archetypal role-based mental models: “Defect Detective” and “Quality Coach.” Results reveal that explanation clarity, feedback timing, and user controllability are core dimensions shaping trust; meanwhile, key design tensions emerge between automation and human agency, and between supportive utility and cognitive interference. Based on these findings, we propose human-centered AI collaboration principles for IDEs—grounded in empirical evidence—to guide the development of trustworthy, controllable, and adoptable AI tools in software engineering environments.

📝 Abstract
AI-assisted tools support developers in performing cognitively demanding tasks such as bug detection and code readability assessment. Despite the advancements in the technical characteristics of these tools, little is known about how developers mentally model them and how mismatches affect trust, control, and adoption. We conducted six co-design workshops with 58 developers to elicit their mental models about AI-assisted bug detection and readability features. It emerged that developers conceive bug detection tools as "bug detectives", which warn users only in case of critical issues, guaranteeing transparency, actionable feedback, and confidence cues. Readability assessment tools, on the other hand, are envisioned as "quality coaches", which provide contextual, personalized, and progressive guidance. Trust, in both tasks, depends on the clarity of explanations, timing, and user control. A set of design principles for Human-Centered AI in IDEs has been distilled, aiming to balance disruption with support, conciseness with depth, and automation with human agency.
Problem

Research questions and friction points this paper is trying to address.

Understanding developers' mental models of AI bug detection tools
Exploring how mental model mismatches affect trust and tool adoption
Investigating developer perceptions of AI readability assessment features
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI bug detection with transparency and actionable feedback
Readability assessment providing contextual personalized guidance
Human-centered AI balancing automation with user control
Paolo Buono
Associate Professor, Computer Science Department, University of Bari Aldo Moro
Information Visualization, Human-Computer Interaction, Visual Analytics, Mobile Computing
Mary Cerullo
University of Salerno, Via Giovanni Paolo II 132, Fisciano (Salerno), 84084, Italy
Stefano Cirillo
University of Salerno, Via Giovanni Paolo II 132, Fisciano (Salerno), 84084, Italy
Giuseppe Desolda
University of Bari Aldo Moro
Novel Interaction Techniques, Internet of Things, Usable Security
Francesco Greco
Department of Computer Science, University of Bari Aldo Moro, Via E. Orabona 4, Bari, 70125, Italy
Emanuela Guglielmi
University of Molise, Via Duca degli Abruzzi, 67, Termoli (Campobasso), 86039, Italy
Grazia Margarella
University of Salerno, Via Giovanni Paolo II 132, Fisciano (Salerno), 84084, Italy
Giuseppe Polese
University of Salerno, Via Giovanni Paolo II 132, Fisciano (Salerno), 84084, Italy
Simone Scalabrino
Assistant Professor, University of Molise, Italy
Software Quality, Software Testing, Software Security
Cesare Tucci
Department of Computer Science, University of Bari Aldo Moro, Via E. Orabona 4, Bari, 70125, Italy