Enhancing Security and Strengthening Defenses in Automated Short-Answer Grading Systems

📅 2025-04-30
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study exposes the severe vulnerability of Transformer-based automated short-answer scoring systems in medical education under adversarial gaming. We systematically identify three novel classes of short-answer gaming strategies targeting such systems. To enhance robustness, we propose a defense framework integrating adversarial training, ensemble voting, and ridge regression. Furthermore, we introduce a GPT-4-driven multi-prompt mechanism for detecting gaming behavior, a first in this domain. Experimental results demonstrate that our framework significantly reduces misclassification rates; the ensemble component improves the defense success rate by over 40%. Leveraging diverse prompt engineering, GPT-4 achieves 89.2% accuracy in identifying gaming attempts. This work provides the first systematic security analysis and robustness enhancement solution tailored specifically to short-answer scoring in AI-powered educational assessment, advancing trustworthy AI for high-stakes medical education evaluation.

๐Ÿ“ Abstract
This study examines vulnerabilities in transformer-based automated short-answer grading systems used in medical education, with a focus on how these systems can be manipulated through adversarial gaming strategies. Our research identifies three main types of gaming strategies that exploit the system's weaknesses, potentially leading to false positives. To counteract these vulnerabilities, we implement several adversarial training methods designed to enhance the systems' robustness. Our results indicate that these methods significantly reduce the susceptibility of grading systems to such manipulations, especially when combined with ensemble techniques like majority voting and ridge regression, which further improve the system's defense against sophisticated adversarial inputs. Additionally, employing large language models such as GPT-4 with varied prompting techniques has shown promise in recognizing and scoring gaming strategies effectively. The findings underscore the importance of continuous improvements in AI-driven educational tools to ensure their reliability and fairness in high-stakes settings.
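The ensemble defense the abstract describes, combining several graders by majority vote and blending their scores with ridge regression, can be sketched in a few lines. Everything below (the grader outputs, the labels, the regularization strength) is illustrative and does not reproduce the paper's actual models or data.

```python
import numpy as np

# Hypothetical binary accept/reject labels from three independent graders
# (e.g. separately fine-tuned transformer scorers) for five answers.
model_scores = np.array([
    [1, 1, 0],   # each row: one answer; each column: one grader
    [0, 0, 0],
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
])

def majority_vote(scores):
    """Accept an answer only when more than half the graders accept it."""
    return (scores.sum(axis=1) > scores.shape[1] / 2).astype(int)

def ridge_combine(scores, labels, lam=1.0):
    """Learn per-grader blending weights with closed-form ridge regression:
    w = (X^T X + lam*I)^{-1} X^T y."""
    X = scores.astype(float)
    n_graders = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_graders), X.T @ labels)
    return X @ w  # blended continuous scores, one per answer

votes = majority_vote(model_scores)
blended = ridge_combine(model_scores, np.array([1, 0, 1, 0, 1]))
print(votes)  # [1 0 1 0 1]
```

Majority voting hardens the system because a gaming strategy tuned against one scorer must now fool most of them at once; the ridge layer additionally downweights graders that disagree with reference labels.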
Problem

Research questions and friction points this paper is trying to address.

Identifying vulnerabilities in automated grading systems for medical education
Developing adversarial training to enhance grading system robustness
Testing large language models to detect and score adversarial inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial training enhances grading system robustness
Ensemble techniques like majority voting improve defenses
GPT-4 with varied prompting detects gaming strategies
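The multi-prompt detection idea above can be sketched as follows. Since the paper's prompts and its GPT-4 access are not reproduced here, a simple keyword heuristic stands in for the LLM call, and the prompt texts are invented placeholders; the aggregation logic (query several prompt variants, take the majority label) is the point of the sketch.

```python
from collections import Counter

# Hypothetical stand-in for a GPT-4 call: a keyword heuristic plays the
# LLM's role so the sketch runs offline. The real system would send each
# prompt plus the answer to the model and parse its verdict.
def llm_judge(prompt: str, answer: str) -> str:
    gaming_markers = ("repeat", "keyword", "copy the question")
    hit = any(m in answer.lower() for m in gaming_markers)
    return "gaming" if hit else "genuine"

# Diverse phrasings of the same detection task (illustrative, not the
# paper's actual prompts).
PROMPTS = [
    "Does this answer try to game an automated grader?",
    "Is this a genuine clinical answer or an attempt to exploit scoring?",
    "Flag responses that stuff keywords without reasoning.",
]

def detect_gaming(answer: str) -> str:
    """Query every prompt variant and return the majority label."""
    votes = Counter(llm_judge(p, answer) for p in PROMPTS)
    return votes.most_common(1)[0][0]

print(detect_gaming("pneumonia pneumonia repeat the keywords"))        # gaming
print(detect_gaming("The history suggests community-acquired pneumonia."))  # genuine
```

Varying the prompt phrasing and voting across the responses reduces the chance that a single brittle prompt misses (or hallucinates) a gaming attempt.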
Sahar Yarmohammadtoosky
School of Data Science and Analytics, Kennesaw State University
Yiyun Zhou
Zhejiang University
Victoria Yaneva
National Board of Medical Examiners (NBME)
Peter Baldwin
National Board of Medical Examiners (NBME)
Saed Rezayi
NLP Scientist at NBME
Brian Clauser
National Board of Medical Examiners (NBME)
Polina Harikeo
National Board of Medical Examiners (NBME)