🤖 AI Summary
This study addresses the common misuse of generative artificial intelligence (GenAI) by students as a mere answer-providing tool, which stems from a lack of learning-oriented prompting skills and leads to insufficient reflection and declining academic performance. In a semester-long, large-scale randomized controlled trial (N = 979) in a CS1 course, the authors designed and evaluated a scaffolded prompting intervention grounded in the ICAP (Interactive, Constructive, Active, Passive) framework of cognitive engagement, making this one of the first large-scale RCTs to empirically examine the framework's efficacy for prompting literacy. Mixed-methods analyses revealed that all intervention conditions significantly improved students' prompting skills, with gains increasing monotonically with the level of cognitive engagement. Moreover, among students with similar pre-test scores, larger prompting-skill gains significantly predicted higher final exam performance, and the intervention proved scalable and transferable across educational contexts.
📝 Abstract
Despite widespread GenAI adoption, students often fail to distinguish task performance from actual learning and lack the skills to leverage AI for learning, leading to worse exam performance when AI use remains unreflective. Yet few interventions that teach students to prompt AI as a tutor rather than a solution provider have been validated at scale through randomized controlled trials (RCTs). To bridge this gap, we conducted a semester-long RCT (N = 979) with four instructional conditions, grounded in the ICAP framework and varying in engagement intensity, assessed through a pre-test, immediate and delayed post-tests, and surveys. Mixed-methods analyses showed that: (1) all conditions significantly improved prompting skills, with gains increasing progressively from Condition 1 to Condition 4, validating ICAP's cognitive engagement hierarchy; (2) for students with similar pre-test scores, higher learning gains on the immediate post-test predicted higher final exam scores, though no direct between-group differences emerged; and (3) our interventions are suitable, scalable solutions for diverse educational contexts, resource levels, and learners. Together, this study makes theoretical and empirical contributions: (1) theoretically, we provide one of the first large-scale RCTs examining how cognitive engagement shapes learning in prompting literacy, clarifying the relationship between learning-oriented prompting skills and broader academic performance; (2) empirically, we offer timely design guidance for transforming GenAI classroom policies into scalable, actionable prompting-literacy instruction that advances learning in the era of generative AI.
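Finding (2) describes a control-for-pretest regression: final exam scores regressed on immediate post-test learning gains while holding pre-test scores constant. The sketch below illustrates that style of analysis only; the column names (`pre_test`, `gain_immediate`, `final_exam`), the toy data, and the OLS specification are assumptions for illustration, not the authors' actual model.

```python
# Hypothetical sketch of the analysis pattern behind finding (2).
# Column names, toy values, and the OLS form are assumptions,
# not the study's reported specification.
import pandas as pd
import statsmodels.formula.api as smf

# Stand-in for the study's per-student records.
df = pd.DataFrame({
    "pre_test": [55, 60, 62, 70, 48, 75],        # baseline prompting-skill score
    "gain_immediate": [10, 5, 12, 3, 15, 8],     # immediate post-test minus pre-test
    "final_exam": [72, 68, 80, 74, 78, 82],      # course final exam score
})

# OLS with pre_test as a covariate: the gain_immediate coefficient
# estimates how learning gains relate to final exam scores among
# students who started at similar baseline levels.
model = smf.ols("final_exam ~ gain_immediate + pre_test", data=df).fit()
print(model.summary())
```

Including `pre_test` as a covariate is what makes the gain coefficient interpretable "for students with similar pre-test scores," which is why a within-cohort association can appear even when no direct between-group differences emerge.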