Break-The-Chain: Reasoning Failures in LLMs via Adversarial Prompting in Code Generation

📅 2025-06-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether large language models (LLMs) possess genuine reasoning capabilities in code generation—or merely rely on superficial statistical patterns. To this end, we propose the first explainable adversarial prompting framework targeting chain-of-thought (CoT) fragility, introducing semantically faithful yet structurally adversarial perturbations—including narrative reformulation, irrelevant constraint injection, example reordering, and numeric perturbation—to systematically assess model robustness to prompt formulation. Experiments across 700 LeetCode-style programming problems reveal that such perturbations induce up to a 42.1% drop or a 35.3% gain in accuracy, exposing the high sensitivity and unpredictability of current CoT reasoning. We publicly release both the perturbed dataset and evaluation framework, establishing a new benchmark and analytical toolkit for developing trustworthy, robust LLMs aligned with principled reasoning.
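The perturbation types named above (narrative reformulation, irrelevant constraint injection, example reordering, numeric perturbation) can be pictured with a minimal sketch. This is an illustrative reconstruction, not the authors' released framework: the function names and the specific constraint text are assumptions, and the key property each transform must preserve is the problem's semantics.

```python
import random

def inject_irrelevant_constraint(prompt: str) -> str:
    """Append a constraint that is irrelevant to the task's solution
    and does not change what a correct program must compute."""
    return prompt + "\nNote: variable names must not exceed 30 characters."

def reorder_examples(prompt: str, examples: list[str], seed: int = 0) -> str:
    """Shuffle the worked examples while keeping each one intact,
    so only their presentation order changes."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    return prompt + "\n" + "\n".join(shuffled)

# Hypothetical usage on a LeetCode-style prompt:
base = "Write a function that returns the sum of a list of integers."
perturbed = inject_irrelevant_constraint(base)
print(perturbed)
```

Under the paper's framing, accuracy is then compared between the original and perturbed prompts; because each transform is semantically faithful, any accuracy change is attributable to surface-level prompt dynamics rather than task difficulty.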

📝 Abstract
Large Language Models (LLMs) have achieved remarkable success in tasks requiring complex reasoning, such as code generation, mathematical problem solving, and algorithmic synthesis -- especially when aided by reasoning tokens and Chain-of-Thought prompting. Yet, a core question remains: do these models truly reason, or do they merely exploit shallow statistical patterns? In this paper, we systematically investigate the robustness of reasoning LLMs by introducing a suite of semantically faithful yet adversarially structured prompt perturbations. Our evaluation -- spanning 700 perturbed code generations derived from LeetCode-style problems -- applies transformations such as storytelling reframing, irrelevant constraint injection, example reordering, and numeric perturbation. We observe that while certain modifications severely degrade performance (with accuracy drops up to -42.1%), others surprisingly improve model accuracy by up to 35.3%, suggesting sensitivity not only to semantics but also to surface-level prompt dynamics. These findings expose the fragility and unpredictability of current reasoning systems, underscoring the need for more principled approaches to reasoning alignment and prompting robustness. We release our perturbation datasets and evaluation framework to promote further research in trustworthy and resilient LLM reasoning.
Problem

Research questions and friction points this paper is trying to address.

Investigates robustness of LLMs in code generation reasoning
Examines impact of adversarial prompt perturbations on accuracy
Reveals fragility and unpredictability in current reasoning systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial prompting to test LLM reasoning
Semantically faithful prompt perturbations applied
Evaluated on 700 perturbed LeetCode-style code generation problems