🤖 AI Summary
This study investigates whether state-of-the-art large language models (LLMs) possess genuine reasoning capabilities on elementary-level reasoning tasks or merely rely on pattern memorization from training data.
Method: We introduce RoR-Bench, a multimodal benchmark, and propose a novel “conditional perturbation + performance cliff” paradigm: systematically applying controlled, minimal perturbations—such as numeric changes, logical relation inversions, or syntactic rephrasings—to expose failures in compositional generalization to unseen condition combinations. We integrate multimodal prompting, cross-model consistency analysis, and formal task modeling.
Contribution/Results: Experiments reveal up to 60% accuracy drops under minor perturbations for models including OpenAI-o1 and DeepSeek-R1, strongly indicating recitation-dominated behavior. This work provides the first systematic empirical validation that current LLMs lack structured generalization in foundational reasoning, establishing a reproducible, quantitative benchmark and methodology for assessing authentic reasoning.
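The "perturbation + performance cliff" measurement described above can be sketched as a simple paired evaluation: score a model on the original problems and on their minimally perturbed counterparts, and report the accuracy gap. This is a minimal illustrative sketch, not the RoR-Bench implementation; the `model`, problem pairs, and scoring below are hypothetical placeholders.

```python
# Minimal sketch of a "conditional perturbation + performance cliff" check.
# The model, problems, and answers here are toy placeholders, not RoR-Bench data.

def accuracy(model, problems):
    """Fraction of (question, answer) pairs the model answers correctly."""
    correct = sum(1 for q, ans in problems if model(q) == ans)
    return correct / len(problems)

def performance_cliff(model, original, perturbed):
    """Accuracy drop when each problem's condition is subtly shifted."""
    return accuracy(model, original) - accuracy(model, perturbed)

# Toy "reciting" model: it only knows the exact original phrasing.
memorized = {"2 apples + 3 apples": "5"}
model = lambda q: memorized.get(q, "unknown")

original = [("2 apples + 3 apples", "5")]
perturbed = [("2 apples + 3 pears; how many apples?", "2")]

print(performance_cliff(model, original, perturbed))  # 1.0 on this toy pair
```

A genuinely reasoning model would keep the cliff near zero, while a reciting model, as in this toy example, collapses to a large positive gap.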
📝 Abstract
The rapid escalation of LLM benchmark difficulty in recent years, from elementary school-level to frontier problems, has fostered the impression among researchers that we are only inches away from surpassing human intelligence. However, does the LLMs' remarkable reasoning ability indeed come from true intelligence by human standards, or are they simply reciting solutions witnessed during training at an Internet scale? To study this problem, we propose RoR-Bench, a novel multi-modal benchmark for detecting LLMs' recitation behavior when they are asked simple reasoning problems whose conditions are subtly shifted, and conduct empirical analysis on our benchmark. Surprisingly, we found that existing cutting-edge LLMs unanimously exhibit extremely severe recitation behavior: by changing a single phrase in the condition, top models such as OpenAI-o1 and DeepSeek-R1 can suffer a 60% performance loss on elementary school-level arithmetic and reasoning problems. Such findings are a wake-up call that compels the LLM community to re-evaluate the true intelligence level of cutting-edge LLMs.