🤖 AI Summary
This study investigates how developers' mental models of AI-augmented IDE tools, such as those for defect detection and readability assessment, shape their trust, sense of control, and adoption intention. Addressing the problem of mental model mismatch, we conducted a qualitative study with 58 developers across six co-design workshops. We identified two archetypal role-based mental models: the "bug detective" and the "quality coach." Results reveal that explanation clarity, feedback timing, and user controllability are the core dimensions shaping trust, while key design tensions emerge between automation and human agency, and between supportive utility and cognitive interference. Based on these findings, we propose empirically grounded human-centered AI collaboration principles for IDEs to guide the development of trustworthy, controllable, and adoptable AI tools in software engineering environments.
📝 Abstract
AI-assisted tools support developers in cognitively demanding tasks such as bug detection and code readability assessment. Despite advances in the technical capabilities of these tools, little is known about how developers mentally model them and how mismatches between those models and tool behavior affect trust, control, and adoption. We conducted six co-design workshops with 58 developers to elicit their mental models of AI-assisted bug detection and readability assessment features. It emerged that developers conceive of bug detection tools as "bug detectives" that warn users only about critical issues while offering transparency, actionable feedback, and confidence cues. Readability assessment tools, on the other hand, are envisioned as "quality coaches" that provide contextual, personalized, and progressive guidance. In both tasks, trust depends on the clarity of explanations, their timing, and user control. From these findings we distill a set of design principles for Human-Centered AI in IDEs, aiming to balance disruption with support, conciseness with depth, and automation with human agency.