From Use to Oversight: How Mental Models Influence User Behavior and Output in AI Writing Assistants

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how users’ functional versus structural mental models of AI writing assistants influence their oversight behaviors and writing quality. Through a controlled experiment that induced distinct mental models and employed an AI assistant seeded with predefined errors, the research examined participants’ tendencies to request, accept, or edit suggestions and to detect inaccuracies. Findings reveal that while a structural mental model enhanced users’ system understanding and perceived usability, it concurrently reduced their vigilance toward erroneous suggestions, leading to a significant increase in grammatical errors in final outputs. These results challenge the prevailing assumption that greater system understanding inherently improves performance, instead highlighting a complex tension among comprehension, trust, and effective human supervision in human-AI collaboration.
📝 Abstract
AI-based writing assistants are ubiquitous, yet little is known about how users' mental models shape their use. We examine two types of mental models (functional, concerning what the system does, and structural, concerning how the system works) and how they affect control behavior (how users request, accept, or edit AI suggestions as they write) and writing outcomes. We primed participants (N = 48) with different system descriptions to induce these mental models, then asked them to complete a cover-letter writing task using a writing assistant that occasionally offered preconfigured ungrammatical suggestions, testing whether the mental models affected participants' critical oversight. We find that although participants in the structural mental model condition demonstrated a better understanding of the system, this understanding can backfire: these participants judged the system as more usable, yet they also produced letters with more grammatical errors, highlighting a complex relationship between system understanding, trust, and control in contexts that require user oversight of error-prone AI outputs.
Problem

Research questions and friction points this paper is trying to address.

mental models
AI writing assistants
user oversight
control behavior
writing outcomes
Innovation

Methods, ideas, or system contributions that make the work stand out.

mental models
AI writing assistants
user oversight
human-AI interaction
trust and control