🤖 AI Summary
This study addresses a critical yet underexplored foundation of AI literacy: children’s mental models of how AI reasons. Using a two-phase mixed-methods design—an 8-participant co-design workshop followed by fieldwork with 106 students across Grades 3–8, combining semi-structured interviews with thematic coding—the research identifies three distinct mental models of AI reasoning among children: deductive, inductive, and attribution to “inherent intelligence.” It also reveals a clear age-related shift: younger children favor anthropomorphic attributions, whereas older children increasingly understand AI as a pattern recognizer. Three core cognitive tensions underlying these models are articulated as well. The study contributes an empirically grounded, educationally oriented classification of children’s mental models of AI reasoning, providing both theoretical grounding and evidence to inform AI literacy curriculum development and the design of explainable AI (XAI) educational tools for K–12 contexts.
📝 Abstract
As artificial intelligence (AI) advances in reasoning capabilities, most recently with the emergence of Large Reasoning Models (LRMs), understanding how children conceptualize AI's reasoning processes becomes critical for fostering AI literacy. While one of the "Five Big Ideas" in AI education highlights reasoning algorithms as central to AI decision-making, less is known about children's mental models in this area. Through a two-phase approach, consisting of a co-design session with 8 children followed by a field study with 106 children (grades 3-8), we identified three models of AI reasoning: Deductive, Inductive, and Inherent. Our findings reveal that younger children (grades 3-5) often attribute AI's reasoning to inherent intelligence, while older children (grades 6-8) recognize AI as a pattern recognizer. We highlight three tensions that surfaced in children's understanding of AI reasoning and conclude with implications for scaffolding AI curricula and designing explainable AI tools.