GradingAttack: Attacking Large Language Models Towards Short Answer Grading Ability

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) deployed in automated short-answer grading are vulnerable to adversarial attacks, which jeopardize the fairness and reliability of scoring. This work proposes GradingAttack, a novel framework that adapts general-purpose adversarial attacks to this specific task for the first time. It introduces two fine-grained attack strategies: token-level perturbations and prompt-level manipulations. Furthermore, the study presents a new evaluation metric that jointly considers both attack success rate and stealthiness. Experimental results demonstrate that prompt-level attacks achieve higher success rates, whereas token-level attacks exhibit greater imperceptibility. The effectiveness and practicality of the proposed approach are validated across multiple datasets.
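The summary says the proposed metric jointly considers attack success rate and stealthiness. As a rough illustration only: one common way to balance two such scores is a harmonic mean over the success rate and a similarity-based camouflage score. The harmonic-mean combination and the `difflib`-based camouflage proxy below are assumptions for illustration, not the paper's actual definition.

```python
import difflib


def camouflage_score(original: str, perturbed: str) -> float:
    """Proxy for stealthiness: 1.0 means the perturbed answer is
    identical to the original, lower values mean more visible edits.
    Uses difflib's sequence-similarity ratio (an illustrative choice)."""
    return difflib.SequenceMatcher(None, original, perturbed).ratio()


def combined_metric(attack_success_rate: float, avg_camouflage: float) -> float:
    """Hypothetical joint score: harmonic mean of success rate and
    camouflage, so it is high only when an attack both flips the
    grade AND stays hard to notice."""
    if attack_success_rate + avg_camouflage == 0:
        return 0.0
    return (2 * attack_success_rate * avg_camouflage
            / (attack_success_rate + avg_camouflage))
```

Under this sketch, a prompt-level attack with a high success rate but low camouflage and a token-level attack with the reverse profile would both be penalized relative to an attack strong on both axes, matching the trade-off the summary reports.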

📝 Abstract
Large language models (LLMs) have demonstrated remarkable potential for automatic short answer grading (ASAG), significantly boosting the efficiency and scalability of student assessment in educational scenarios. However, their vulnerability to adversarial manipulation raises critical concerns about the fairness and reliability of automatic grading. In this paper, we introduce GradingAttack, a fine-grained adversarial attack framework that systematically evaluates the vulnerability of LLM-based ASAG models. Specifically, we align general-purpose attack methods with the specific objectives of ASAG by designing token-level and prompt-level strategies that manipulate grading outcomes while maintaining high camouflage. Furthermore, to quantify attack camouflage, we propose a novel evaluation metric that balances attack success and camouflage. Experiments on multiple datasets demonstrate that both attack strategies effectively mislead grading models, with prompt-level attacks achieving higher success rates and token-level attacks exhibiting superior camouflage. Our findings underscore the need for robust defenses to ensure fairness and reliability in ASAG. Our code and datasets are available at https://anonymous.4open.science/r/GradingAttack.
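The abstract contrasts token-level and prompt-level strategies. A generic example of the token-level family (not GradingAttack's specific algorithm) is a small character swap inside a student answer, which preserves readability while changing the surface form the grader sees. The function below is a minimal sketch of that idea; the word-length threshold and random seeding are illustrative assumptions.

```python
import random


def perturb_answer(answer: str, rng: random.Random) -> str:
    """Illustrative token-level perturbation: pick one sufficiently
    long word and swap two adjacent characters inside it. The answer
    keeps the same characters overall, so the edit is subtle."""
    words = answer.split()
    candidates = [i for i, w in enumerate(words) if len(w) >= 4]
    if not candidates:
        return answer  # nothing safe to perturb
    i = rng.choice(candidates)
    w = words[i]
    j = rng.randrange(len(w) - 1)
    # Swap characters at positions j and j+1.
    words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)
```

A prompt-level attack, by contrast, would append or embed instructions aimed at the grading model itself (e.g. text telling the grader to award full marks), which the abstract reports is more effective but also more conspicuous.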
Problem

Research questions and friction points this paper is trying to address.

adversarial attack
automatic short answer grading
large language models
grading fairness
model vulnerability
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial attack
automatic short answer grading
large language models
attack camouflage
fine-grained manipulation