How to Trick Your AI TA: A Systematic Study of Academic Jailbreaking in LLM Code Evaluation

📅 2025-12-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the academic integrity risk posed by students who exploit adversarial prompts to subvert LLM-based automated code grading systems and obtain undeserved scores—termed “academic jailbreaking.” Methodologically, we introduce the first poisoned adversarial dataset for academic code evaluation, comprising 25K real-world course submissions accompanied by human-annotated ground-truth labels and fine-grained scoring rubrics. We establish a taxonomy of jailbreaking attacks tailored to educational LLM code evaluation and propose a three-dimensional evaluation framework: Jailbreak Success Rate (JSR), Score Inflation (SI), and Harmfulness (H). Experiments span over 20 adversarial strategies and six mainstream LLMs, revealing that role-playing–based attacks achieve up to 97% jailbreaking success. All data, benchmarks, and source code are publicly released to support the development of robust AI teaching assistants.

📝 Abstract
The use of Large Language Models (LLMs) as automatic judges for code evaluation is becoming increasingly prevalent in academic environments. However, their reliability can be compromised by students who employ adversarial prompting strategies to induce misgrading and secure undeserved academic advantages. In this paper, we present the first large-scale study of jailbreaking LLM-based automated code evaluators in an academic context. Our contributions are: (i) we systematically adapt 20+ jailbreaking strategies to AI code evaluators in the academic setting, defining a new class of attacks termed academic jailbreaking; (ii) we release a poisoned dataset of 25K adversarial student submissions, designed specifically for the academic code-evaluation setting, sourced from diverse real-world coursework and paired with rubrics and human-graded references; (iii) to capture the multidimensional impact of academic jailbreaking, we systematically adapt and define three jailbreaking metrics (Jailbreak Success Rate, Score Inflation, and Harmfulness); and (iv) we comprehensively evaluate the academic jailbreaking attacks on six LLMs. We find that these models exhibit significant vulnerability, particularly to persuasive and role-play-based attacks (up to 97% JSR). Our adversarial dataset and benchmark suite lay the groundwork for next-generation robust LLM-based evaluators in academic code assessment.
Problem

Research questions and friction points this paper is trying to address.

Investigates adversarial attacks on LLM-based code evaluators in academia
Studies how students manipulate AI judges for unfair academic advantages
Systematically evaluates vulnerabilities of automated grading systems to jailbreaking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematically adapts 20+ jailbreaking strategies for academic code evaluation
Releases a poisoned dataset of 25K adversarial student submissions
Defines three jailbreaking metrics to measure multidimensional impact
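The paper does not reproduce the metric formulas here, but the first two dimensions can be sketched from their names: Jailbreak Success Rate counts how often an adversarial submission is graded meaningfully above its human reference, and Score Inflation averages the score gain. The sketch below is an assumption of how such metrics might be computed; `GradedPair`, the `margin` threshold, and the exact definitions are hypothetical, not taken from the paper (Harmfulness would additionally require a semantic judgment of the submission, so it is omitted).

```python
from dataclasses import dataclass
from typing import List


@dataclass
class GradedPair:
    """One submission scored twice: human rubric reference vs. LLM judge
    on the adversarially modified version (hypothetical data model)."""
    human_score: float  # ground-truth rubric score
    llm_score: float    # score the LLM judge assigned under attack


def jailbreak_success_rate(pairs: List[GradedPair], margin: float = 10.0) -> float:
    """Fraction of adversarial submissions whose LLM score exceeds the
    human reference by more than `margin` points (assumed threshold)."""
    hits = sum(1 for p in pairs if p.llm_score - p.human_score > margin)
    return hits / len(pairs)


def score_inflation(pairs: List[GradedPair]) -> float:
    """Mean score gain of the LLM judge over the human reference."""
    return sum(p.llm_score - p.human_score for p in pairs) / len(pairs)


# Toy example: three submissions, two successfully jailbroken.
pairs = [GradedPair(40, 90), GradedPair(80, 85), GradedPair(50, 95)]
print(jailbreak_success_rate(pairs))  # 2 of 3 exceed the 10-point margin
print(score_inflation(pairs))         # average gain across all three
```

Separating the success threshold (JSR) from the magnitude of the gain (SI) matches the paper's framing that a single number cannot capture the multidimensional impact of an attack.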
Devanshu Sahoo
BITS Pilani
Vasudev Majhi
BITS Pilani
Arjun Neekhra
BITS Pilani
Yash Sinha
National University of Singapore
Murari Mandal
KIIT University
Dhruv Kumar
BITS Pilani