🤖 AI Summary
Generative AI risks inducing cognitive passivity and diminished critical thinking during knowledge work. Method: We propose and empirically evaluate “provocations”, a lightweight design-friction mechanism intended to stimulate critical reflection and metacognitive monitoring in AI-assisted knowledge work. A between-subjects study (n = 24) combines cognitively grounded task design with qualitative coding. Contribution/Results: Provocations can induce critical and metacognitive thinking. We identify five dimensions that shape their user experience: task urgency, task importance, user expertise, provocation actionability, and user responsibility. We connect provocations to prior work on design frictions, microboundaries, and distributed cognition, and derive actionable design principles for embedding critical-thinking interventions into AI tools.
📝 Abstract
Recent research suggests that the use of Generative AI tools may diminish critical thinking during knowledge work. We study the effect on knowledge work of provocations: brief textual prompts that offer critiques of, and propose alternatives to, AI suggestions. We conduct a between-subjects study (n=24) in which participants completed AI-assisted shortlisting tasks with or without provocations. We find that provocations can induce critical and metacognitive thinking. We identify five dimensions that shape the user experience of provocations: task urgency, task importance, user expertise, provocation actionability, and user responsibility. We connect our findings to related work on design frictions, microboundaries, and distributed cognition, and draw design implications for critical-thinking interventions in AI-assisted knowledge work.