Learning to Use AI for Learning: How Can We Effectively Teach and Measure Prompting Literacy for K-12 Students?

📅 2025-08-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study develops and assesses prompting literacy—the capacity of K–12 students to responsibly understand, design, apply, and critically evaluate prompts for AI systems. Method: the authors build an LLM-based instructional module for secondary education that uses large language models as intelligent teaching agents, combining scenario-based deliberate practice, two iterations of classroom deployment, and an LLM-based automated scoring system. Contribution/Results: item difficulty and discrimination analyses indicate that true/false and open-ended questions measured prompting competence more effectively than multiple-choice items for the target learners, and the LLM-based auto-grader scored student-written prompts with satisfactory quality. The intervention improved students' prompting skills and produced positive shifts in their perceptions of using AI for learning, pointing toward a scalable, evidence-based instructional and assessment model for AI literacy education.

📝 Abstract
As Artificial Intelligence (AI) becomes increasingly integrated into daily life, there is a growing need to equip the next generation with the ability to apply, interact with, evaluate, and collaborate with AI systems responsibly. Prior research highlights the urgent demand from K-12 educators to teach students the ethical and effective use of AI for learning. To address this need, we designed a Large Language Model (LLM)-based module to teach prompting literacy. This includes scenario-based deliberate practice activities with direct interaction with intelligent LLM agents, aiming to foster secondary school students' responsible engagement with AI chatbots. We conducted two iterations of classroom deployment in 11 authentic secondary education classrooms, and evaluated 1) the AI-based auto-grader's capability; 2) students' prompting performance and changes in confidence toward using AI for learning; and 3) the quality of learning and assessment materials. Results indicated that the AI-based auto-grader could grade student-written prompts with satisfactory quality. In addition, the instructional materials supported students in improving their prompting skills through practice and led to positive shifts in their perceptions of using AI for learning. Furthermore, data from Study 1 informed assessment revisions in Study 2. Analyses of item difficulty and discrimination in Study 2 showed that True/False and open-ended questions could measure prompting literacy more effectively than multiple-choice questions for our target learners. These promising outcomes highlight the potential for broader deployment and the need for larger-scale studies of learning effectiveness and assessment design.
Problem

Research questions and friction points this paper is trying to address.

Teaching effective AI prompting skills to K-12 students
Developing assessment methods for measuring prompting literacy
Evaluating AI-based auto-grading for student-written prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based module for teaching prompting literacy
Scenario-based practice with direct AI interaction
AI auto-grader evaluates student-written prompts effectively
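The paper does not publish its auto-grader implementation, but the general pattern of rubric-based LLM grading can be sketched as follows. The rubric dimensions, 0–2 scale, and function names here are illustrative assumptions, not the study's actual instrument; a real deployment would send the assembled instruction to an LLM rather than use a mocked reply.

```python
# Hypothetical sketch of rubric-based LLM auto-grading for student-written
# prompts. RUBRIC and the 0-2 scale are assumptions for illustration.

RUBRIC = {
    "clarity": "Is the task stated unambiguously?",
    "context": "Does the prompt give the AI enough background?",
    "constraints": "Does the prompt specify format or scope?",
}

def build_grading_prompt(student_prompt: str) -> str:
    """Assemble the instruction that would be sent to a grading LLM."""
    criteria = "\n".join(f"- {name}: {q}" for name, q in RUBRIC.items())
    return (
        "Score the following student-written prompt from 0-2 on each criterion.\n"
        f"Criteria:\n{criteria}\n\n"
        f'Student prompt:\n"{student_prompt}"\n\n'
        "Reply with one line per criterion: <criterion>: <score>"
    )

def parse_scores(llm_reply: str) -> dict[str, int]:
    """Parse the grader's line-per-criterion reply into {criterion: score}."""
    scores = {}
    for line in llm_reply.strip().splitlines():
        name, _, value = line.partition(":")
        if name.strip() in RUBRIC:
            scores[name.strip()] = int(value)
    return scores

# Example with a mocked grader reply (no live LLM call):
reply = "clarity: 2\ncontext: 1\nconstraints: 0"
print(parse_scores(reply))  # {'clarity': 2, 'context': 1, 'constraints': 0}
```

Structured, parseable replies like the line-per-criterion format above are what make automated scoring reliable enough to aggregate across hundreds of student submissions.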