AI Misuse in Education Is a Measurement Problem: Toward a Learning Visibility Framework

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the risks posed by the misuse of artificial intelligence in education, which obscures the visibility of the learning process and threatens academic integrity, equity, and cognitive development. The authors propose a “learning visibility” framework that reconceptualizes AI misuse as a measurement challenge rather than a detection problem. Integrating cognitive offloading theory, learning analytics, and multimodal timeline reconstruction techniques, the framework establishes an assessment system centered on process transparency, normative AI use, and shared evidentiary practices. Moving beyond the limitations of conventional AI-detection tools, this approach offers educators a principled pathway for AI integration that upholds educational values, fosters trust, and enhances transparency, thereby effectively mitigating the “black box” effect induced by AI-mediated learning environments.

📝 Abstract
The rapid integration of conversational AI systems into educational settings has intensified ethical concerns about academic integrity, fairness, and students' cognitive development. Institutional responses have largely centered on AI detection tools and restrictive policies, yet such approaches have proven unreliable and ethically contentious. This paper reframes AI misuse in education not primarily as a detection problem, but as a measurement problem rooted in the loss of visibility into the learning process. When AI enters the assessment loop, educators often retain access to final outputs but lose valuable insight into how those outputs were produced. Drawing on research in cognitive offloading, learning analytics, and multimodal timeline reconstruction, we propose the Learning Visibility Framework, grounded in three principles: clear specification and modeling of acceptable AI use, recognition of learning processes as assessable evidence alongside outcomes, and the establishment of transparent timelines of student activity. Rather than promoting surveillance, the framework emphasizes transparency and shared evidence as foundations for ethical AI integration in classroom settings. By shifting focus from adversarial detection toward process visibility, this work offers a principled pathway for aligning AI use with educational values while preserving trust and transparency between students and educators.
Problem

Research questions and friction points this paper is trying to address.

AI misuse
learning visibility
educational assessment
cognitive offloading
academic integrity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning Visibility Framework
AI misuse
measurement problem
process visibility
learning analytics
Eduardo Davalos
Assistant Professor, Trinity University
AIED, HCI, eye-tracking, LLM
Yike Zhang
St. Mary’s University, San Antonio, TX 78228, USA