🤖 AI Summary
Current LLM misuse detection tools in online learning suffer from low reliability, a lack of standardized evaluation criteria, and a limited understanding of their educational implications. Method: This study proposes actionable, interpretable criteria for identifying LLM-generated text and introduces a multidimensional detection approach based on fine-tuned GPT-4o, complemented by auxiliary statistical indicators, including unusually high assessment scores on related tasks, readability metrics, and response duration. Results: The method achieves 80% accuracy (F1 = 0.78) on short-answer detection, substantially outperforming GPTZero (70% accuracy, macro F1 = 0.50); robustness is supported by human coding and comparative evaluation. Empirical analysis further reveals that students suspected of misusing LLMs exhibit abnormally elevated posttest accuracy, indicating superficial engagement that bypasses deep learning processes. This work constitutes the first systematic integration of detection methodology, interpretable decision criteria, and pedagogical impact analysis, providing both a methodological framework and an empirical foundation for AI governance in education.
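The auxiliary statistical indicators mentioned above (readability and response duration) could be combined into a simple flagging rule along these lines. This is an illustrative sketch only: the thresholds, function names, and the vowel-group syllable heuristic are assumptions for demonstration, not the paper's actual implementation.

```python
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels (assumption,
    # not the dictionary-based counting a production system might use).
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    # Standard Flesch Reading Ease formula:
    # 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    # Higher scores mean easier text; polished LLM prose tends to score lower.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def flag_response(text, duration_seconds, min_duration=20.0, max_readability=50.0):
    # Flag a response as suspicious when it was submitted unusually fast
    # AND reads as unusually polished (low Flesch score = harder text).
    # Both thresholds here are hypothetical placeholders.
    score = flesch_reading_ease(text)
    return duration_seconds < min_duration and score < max_readability
```

In practice, such indicators would supplement, not replace, a detector's verdict; the study's actual thresholds would need to be calibrated against labeled data.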
📝 Abstract
The increasing availability of large language models (LLMs) has raised concerns about their potential misuse in online learning. While tools for detecting LLM-generated text exist and are widely used by researchers and educators, their reliability varies. Few studies have compared the accuracy of detection methods, defined criteria for identifying LLM-generated content, or evaluated the effect of LLM misuse on learner performance. In this study, we define LLM-generated text within open responses as responses produced by any LLM without paraphrasing or refinement, as judged by human coders. We then fine-tune GPT-4o to detect LLM-generated responses and assess the impact of LLM misuse on learning. We find that our fine-tuned model outperforms the existing AI detection tool GPTZero, achieving 80% accuracy and an F1 score of 0.78, compared with GPTZero's 70% accuracy and macro F1 score of 0.50, demonstrating superior performance in detecting LLM-generated responses. We also find that learners suspected of LLM misuse on the open-response question were more than twice as likely to correctly answer the corresponding posttest MCQ, suggesting misuse across both question types and a bypassing of the learning process. We pave the way for future work by demonstrating a structured, code-based approach to improving detection of LLM-generated responses, and we propose auxiliary statistical indicators such as unusually high assessment scores on related tasks, readability scores, and response duration. In support of open science, we contribute data and code to support fine-tuning similar models for comparable use cases.
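Fine-tuning a model such as GPT-4o for this kind of classification typically starts from labeled examples serialized into OpenAI's chat-format JSONL training file. The sketch below shows that data-preparation step under stated assumptions: the example responses, labels, and system prompt are hypothetical, not the paper's released dataset.

```python
import json

# Hypothetical (response_text, label) pairs; the real training data,
# label names, and prompt wording are assumptions for illustration.
examples = [
    ("Photosynthesis converts light energy into chemical energy.", "llm-generated"),
    ("i think its cuz the sun gives plants food maybe?", "human-written"),
]

SYSTEM_PROMPT = (
    "You are a classifier. Label the student response as "
    "'llm-generated' or 'human-written'."
)

def to_chat_record(text, label):
    # OpenAI fine-tuning expects one JSON object per line, each holding
    # a "messages" list in chat format, with the target label as the
    # assistant turn.
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
            {"role": "assistant", "content": label},
        ]
    }

with open("finetune_train.jsonl", "w", encoding="utf-8") as f:
    for text, label in examples:
        f.write(json.dumps(to_chat_record(text, label)) + "\n")
```

The resulting JSONL file would then be uploaded through the OpenAI fine-tuning API; the same record structure works for a held-out validation file.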