Breakable Machine: A K-12 Classroom Game for Transformative AI Literacy Through Spoofing and eXplainable AI (XAI)

📅 2025-08-19
🤖 AI Summary
Current K–12 AI education (ages 10–15) overemphasizes model construction while neglecting critical AI literacy. Method: The paper introduces an adversarial, game-based, deconstructive pedagogy in which students manipulate image inputs to induce high-confidence misclassifications in a classifier and use explainable AI (XAI) techniques, particularly feature-attribution visualizations, to intuitively grasp AI fragility, bias, and sociotechnical implications. Model failures are reframed as opportunities for collaborative inquiry, supported by a shared classroom leaderboard and cooperative "AI cracking" experiments. Contribution: The approach reorients AI literacy toward critical thinking, data agency, and ethical awareness, shifting students from "building AI" to "deconstructing and interrogating AI." The game and its source code are open-sourced.

📝 Abstract
This paper, submitted to the special track on resources for teaching AI in K-12, presents an eXplainable AI (XAI)-based classroom game "Breakable Machine" for teaching critical, transformative AI literacy through adversarial play and interrogation of AI systems. Designed for learners aged 10–15, the game invites students to spoof an image classifier by manipulating their appearance or environment in order to trigger high-confidence misclassifications. Rather than focusing on building AI models, this activity centers on breaking them: exposing their brittleness, bias, and vulnerability through hands-on, embodied experimentation. The game includes an XAI view to help students visualize feature saliency, revealing how models attend to specific visual cues. A shared classroom leaderboard fosters collaborative inquiry and comparison of strategies, turning the classroom into a site for collective sensemaking. This approach reframes AI education by treating model failure and misclassification not as problems to be debugged, but as pedagogically rich opportunities to interrogate AI as a sociotechnical system. In doing so, the game supports students in developing data agency, ethical awareness, and a critical stance toward AI systems increasingly embedded in everyday life. The game and its source code are freely available.
Problem

Research questions and friction points this paper is trying to address.

Teaching AI literacy through adversarial play and spoofing
Exposing AI model brittleness and bias via hands-on experimentation
Using XAI visualization to reveal model decision-making processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

XAI-based classroom game for AI literacy
Spoofing image classifiers through adversarial play
Visualizing feature saliency for model interrogation
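The saliency view named above can be illustrated with an occlusion probe, a common model-agnostic feature-attribution technique: slide a gray patch over the image and record how much the classifier's confidence drops at each location. This is a minimal sketch under assumed details, not the paper's actual implementation; `toy_classifier` is a hypothetical stand-in for the real model.

```python
import numpy as np

def occlusion_saliency(classify, image, patch=4, fill=0.5):
    """Occlusion-based saliency map: larger confidence drops mean the
    occluded region mattered more to the classifier's decision."""
    h, w = image.shape[:2]
    base_conf = classify(image)
    saliency = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill  # gray out one patch
            saliency[y:y + patch, x:x + patch] = base_conf - classify(occluded)
    return saliency

# Toy "classifier" (hypothetical): confidence is the mean brightness of the
# top-left 8x8 quadrant, so only that quadrant should light up in the map.
def toy_classifier(img):
    return float(img[:8, :8].mean())

img = np.ones((16, 16))
sal = occlusion_saliency(toy_classifier, img)
```

Because the toy model only looks at the top-left quadrant, the map is positive there and zero elsewhere, which is exactly the kind of "what does the model attend to?" evidence the classroom XAI view surfaces.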
Olli Hilke
University of Eastern Finland, School of Computing, Joensuu, Finland
Nicolas Pope
University of Eastern Finland, School of Computing, Joensuu, Finland
Juho Kahila
University of Eastern Finland, Applied Educational Science and Teacher Education, Finland
Henriikka Vartiainen
University of Eastern Finland, Applied Educational Science and Teacher Education, Finland
Teemu Roos
University of Helsinki, Finland
Tuomo Parkki
Joensuu Lyseo School, Finland
Matti Tedre
University of Eastern Finland, School of Computing, Joensuu, Finland