🤖 AI Summary
This work addresses the challenge of disentangling reasoning from memorization in large language model (LLM) evaluation on multiple-choice questions. We propose a general, non-numerical question mutation framework that systematically rewrites questions under semantic consistency constraints and enables controlled cross-lingual (English/Spanish) variation. This approach fully decouples correct answers from surface-level token or conceptual co-occurrences in training data, thereby compelling models to rely on genuine reasoning rather than memorized patterns. To our knowledge, this is the first domain-agnostic framework enabling rigorous reasoning–memorization separation and supporting contamination sensitivity analysis. Evaluated on MMLU and UNED-Access 2024, state-of-the-art models exhibit average accuracy drops of 57% and 50%, respectively, strongly indicating data contamination and reliance on memorization. Notably, high-scoring models (e.g., o3-mini) show markedly lower robustness than DeepSeek-R1-70B, challenging the validity of current benchmarking practices.
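The summary describes the mutation framework only at a high level. As a rough illustration of what a semantic-consistency-preserving rewrite of a multiple-choice item could look like (this is a hypothetical sketch, not the authors' actual procedure; `MCQ`, `paraphrase`, and `mutate` are made-up names, and `paraphrase` is an identity stub standing in for an LLM-based rewrite):

```python
from dataclasses import dataclass
import random

@dataclass
class MCQ:
    stem: str           # question text
    options: list[str]  # candidate answers
    correct: int        # index of the correct option

def paraphrase(text: str) -> str:
    """Placeholder for an LLM-based rewrite that preserves meaning while
    changing surface tokens; a real pipeline would also verify semantic
    consistency of the rewritten item. Here it is an identity stub."""
    return text

def mutate(q: MCQ, seed: int = 0) -> MCQ:
    """Paraphrase the stem and options, then shuffle the options so that
    neither the wording nor the answer position matches the item a model
    may have memorized from its training data."""
    rng = random.Random(seed)
    new_options = [paraphrase(o) for o in q.options]
    order = list(range(len(new_options)))
    rng.shuffle(order)
    return MCQ(
        stem=paraphrase(q.stem),
        options=[new_options[i] for i in order],
        correct=order.index(q.correct),
    )
```

Under this sketch, a model is scored on both the original and the mutated item; only a model that actually reasons about the rewritten content should keep its accuracy.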
📝 Abstract
In LLM evaluations, reasoning is often distinguished from recall/memorization by applying numerical variations to math-oriented questions. Here we introduce a general variation method for multiple-choice questions that completely dissociates the correct answer from previously seen tokens or concepts, requiring LLMs to understand and reason (rather than memorize) in order to answer correctly. Using this method, we evaluate state-of-the-art proprietary and open-source LLMs on two datasets available in English and Spanish: the public MMLU benchmark and the private UNED-Access 2024 dataset. Results show that all models suffer substantial accuracy drops under the proposed variation, with an average loss of 57% on MMLU and 50% on UNED-Access 2024, ranging from 10% to 93% across models. Notably, the most accurate model in our experiments (OpenAI-o3-mini) is not the most robust (DeepSeek-R1-70B), suggesting that the models that top standard evaluations are not necessarily those with the strongest reasoning capabilities. We also observe larger accuracy drops on the public dataset (vs the private one) and on questions posed in their original language (vs a manual translation), which are signs of contamination and point to a substantial role of recall/memorization in current LLMs' answers.
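The abstract does not spell out how the percentage loss is computed. Assuming it is a drop relative to each model's original accuracy (an assumption, not confirmed by the text above), the figure would be obtained as follows; the numbers in the example are illustrative, not taken from the paper:

```python
def relative_drop(acc_original: float, acc_mutated: float) -> float:
    """Accuracy loss (%) on the mutated benchmark, relative to the
    accuracy on the original, unmutated questions."""
    return 100.0 * (acc_original - acc_mutated) / acc_original

# Illustrative example: a model scoring 0.80 on original questions and
# 0.34 on their mutated counterparts loses 57.5% of its accuracy,
# in the range of the average drops reported above.
print(f"{relative_drop(0.80, 0.34):.1f}%")  # -> 57.5%
```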