The Compliance Paradox: Semantic-Instruction Decoupling in Automated Academic Code Evaluation

📅 2026-01-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses a critical flaw in large language models (LLMs) when applied to automated code scoring: their tendency to over-comply with implicit instructions at the expense of code correctness, leading to functionally incorrect submissions being erroneously deemed acceptable—a phenomenon termed the “compliance paradox.” To systematically expose this vulnerability, the authors propose the SPACI framework and the AST-ASIP protocol, which employ semantics-preserving adversarial code injection by embedding adversarial instructions into Trivia nodes of abstract syntax trees. Evaluations across nine state-of-the-art LLMs and 25,000 multilingual code submissions reveal that even high-performing models, such as DeepSeek-V3, exhibit failure rates exceeding 95%. The work further introduces a three-dimensional metric to quantify the extent of “false certification” induced by such compliance-driven errors.

Technology Category

Application Category

📝 Abstract
The rapid integration of Large Language Models (LLMs) into educational assessment rests on the unverified assumption that instruction following capability translates directly to objective adjudication. We demonstrate that this assumption is fundamentally flawed. Instead of evaluating code quality, models frequently decouple from the submission's logic to satisfy hidden directives, a systemic vulnerability we term the Compliance Paradox, where models fine-tuned for extreme helpfulness are vulnerable to adversarial manipulation. To expose this, we introduce the Semantic-Preserving Adversarial Code Injection (SPACI) Framework and the Abstract Syntax Tree-Aware Semantic Injection Protocol (AST-ASIP). These methods exploit the Syntax-Semantics Gap by embedding adversarial directives into syntactically inert regions (trivia nodes) of the Abstract Syntax Tree. Through a large-scale evaluation of 9 SOTA models across 25,000 submissions in Python, C, C++, and Java, we reveal catastrophic failure rates (>95%) in high-capacity open-weights models like DeepSeek-V3, which systematically prioritize hidden formatting constraints over code correctness. We quantify this failure using our novel tripartite framework measuring Decoupling Probability, Score Divergence, and Pedagogical Severity to demonstrate the widespread"False Certification"of functionally broken code. Our findings suggest that current alignment paradigms create a"Trojan"vulnerability in automated grading, necessitating a shift from standard RLHF toward domain-specific Adjudicative Robustness, where models are conditioned to prioritize evidence over instruction compliance. We release our complete dataset and injection framework to facilitate further research on the topic.
Problem

Research questions and friction points this paper is trying to address.

Compliance Paradox
Automated Code Evaluation
Semantic-Instruction Decoupling
Adversarial Manipulation
False Certification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compliance Paradox
Semantic-Preserving Adversarial Code Injection
Abstract Syntax Tree-Aware Semantic Injection
Adjudicative Robustness
Syntax-Semantics Gap
🔎 Similar Papers
No similar papers found.
D
Devanshu Sahoo
BITS Pilani
M
Manish Prasad
BITS Pilani
V
Vasudev Majhi
BITS Pilani
A
Arjun Neekhra
BITS Pilani
Yash Sinha
Yash Sinha
National University of Singapore
Machine UnlearningSoftware Defined Networks
V
Vinay Chamola
BITS Pilani
M
Murari Mandal
KIIT University
D
Dhruv Kumar
Trustwise