🤖 AI Summary
This study investigates whether large language models (LLMs) possess genuine reasoning capabilities in code generation—or merely rely on superficial statistical patterns. To this end, we propose the first explainable adversarial prompting framework targeting chain-of-thought (CoT) fragility, introducing semantically faithful yet structurally adversarial perturbations—including narrative reformulation, irrelevant constraint injection, example reordering, and numeric perturbation—to systematically assess model robustness to prompt formulation. Experiments across 700 LeetCode-style programming problems reveal that such perturbations induce up to a 42.1% drop or a 35.3% gain in accuracy, exposing the high sensitivity and unpredictability of current CoT reasoning. We publicly release both the perturbed dataset and evaluation framework, establishing a new benchmark and analytical toolkit for developing trustworthy, robust LLMs aligned with principled reasoning.
📝 Abstract
Large Language Models (LLMs) have achieved remarkable success in tasks requiring complex reasoning, such as code generation, mathematical problem solving, and algorithmic synthesis -- especially when aided by reasoning tokens and Chain-of-Thought prompting. Yet a core question remains: do these models truly reason, or do they merely exploit shallow statistical patterns? In this paper, we systematically investigate the robustness of reasoning LLMs by introducing a suite of semantically faithful yet adversarially structured prompt perturbations. Our evaluation -- spanning 700 perturbed code-generation tasks derived from LeetCode-style problems -- applies transformations such as storytelling reframing, irrelevant constraint injection, example reordering, and numeric perturbation. We observe that while certain modifications severely degrade performance (with accuracy drops of up to 42.1%), others surprisingly improve model accuracy by up to 35.3%, suggesting sensitivity not only to semantics but also to surface-level prompt dynamics. These findings expose the fragility and unpredictability of current reasoning systems, underscoring the need for more principled approaches to reasoning alignment and prompting robustness. We release our perturbation datasets and evaluation framework to promote further research into trustworthy and resilient LLM reasoning.
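To make the perturbation categories concrete, here is a minimal sketch of how semantics-preserving transformations like these might be applied to a prompt. The helper names and the specific constraint/number choices are illustrative assumptions, not the authors' released framework; a faithful numeric perturbation must keep the example's input-output relation consistent (here, a sum).

```python
import random

def inject_irrelevant_constraint(prompt: str) -> str:
    # Append a constraint that cannot affect any correct solution
    # (hypothetical example constraint).
    return prompt + "\nNote: identifier names must not exceed 30 characters."

def reorder_examples(prompt: str, examples: list[str], seed: int = 0) -> str:
    # Shuffle the order of worked examples while keeping each one intact,
    # so the problem semantics are unchanged.
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    return prompt + "\n" + "\n".join(shuffled)

def perturb_numbers(prompt: str, mapping: dict[str, str]) -> str:
    # Swap literals in the prompt's examples; the caller is responsible
    # for choosing a mapping that keeps every example internally consistent.
    for old, new in mapping.items():
        prompt = prompt.replace(old, new)
    return prompt
```

For instance, `perturb_numbers("Input: 2 3 -> 5", {"2": "4", "5": "7"})` keeps the sum relation intact while changing the surface numerals the model sees.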