RiddleBench: A New Generative Reasoning Benchmark for LLMs

📅 2025-10-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current large language models perform strongly on established structured reasoning benchmarks, but their integrated capabilities, particularly logical deduction, spatial awareness, and constraint satisfaction, lack comprehensive evaluation. Method: To address this gap, the authors introduce RiddleBench, an English-language benchmark of 1,737 challenging puzzles designed to assess flexible, multifaceted reasoning in generative settings. It employs systematic perturbations, including constraint reordering and distractor injection, to evaluate reasoning robustness, self-correction, and resistance to irrelevant information. Contribution/Results: Experiments reveal that state-of-the-art models (Gemini 2.5 Pro, o3, Claude 4 Sonnet) achieve only 60.30%–63.37% accuracy, exposing weaknesses such as hallucination cascades, self-confirmation bias, and fragile reasoning under perturbation. RiddleBench serves both as a diagnostic tool and as a resource for guiding the development of more robust and reliable language models.

📝 Abstract
Large Language Models have demonstrated strong performance on many established reasoning benchmarks. However, these benchmarks primarily evaluate structured skills like quantitative problem-solving, leaving a gap in assessing flexible, multifaceted reasoning abilities that are central to human intelligence. These abilities require integrating logical deduction with spatial awareness and constraint satisfaction, which current evaluations do not measure well. To address this, we introduce RiddleBench, a benchmark of 1,737 challenging puzzles in English designed to probe these core reasoning capabilities. Evaluation of state-of-the-art models on RiddleBench shows fundamental weaknesses. Even top proprietary models like Gemini 2.5 Pro, o3, and Claude 4 Sonnet achieve accuracy just above 60% (60.30%, 63.37%, and 63.16%). Analysis further reveals deep failures, including hallucination cascades (accepting flawed reasoning from other models) and poor self-correction due to a strong self-confirmation bias. Their reasoning is also fragile, with performance degrading significantly when constraints are reordered or irrelevant information is introduced. RiddleBench functions as a diagnostic tool for these issues and as a resource for guiding the development of more robust and reliable language models.
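The robustness probes mentioned in the abstract, constraint reordering and distractor injection, are straightforward to reproduce in spirit. The sketch below is a minimal illustration assuming a puzzle is given as a list of natural-language constraints; the puzzle content and helper names are hypothetical and are not taken from the RiddleBench release.

```python
import random

# Minimal sketch of the two perturbations described above: constraint
# reordering and distractor injection. Puzzle format and helper names
# are hypothetical, not from the RiddleBench release.

def reorder_constraints(constraints, seed=0):
    """Return the same constraints in a shuffled order (content unchanged)."""
    rng = random.Random(seed)
    shuffled = constraints[:]
    rng.shuffle(shuffled)
    return shuffled

def inject_distractors(constraints, distractors, seed=0):
    """Insert irrelevant but plausible statements among the real constraints."""
    rng = random.Random(seed)
    perturbed = constraints[:]
    for d in distractors:
        perturbed.insert(rng.randrange(len(perturbed) + 1), d)
    return perturbed

# Toy seating puzzle used only for illustration.
constraints = [
    "Alice sits immediately to the left of Bob.",
    "Carol does not sit at either end.",
    "Bob sits to the right of Carol.",
]
distractors = ["Dave owns a red bicycle.", "The table is made of oak."]

print(reorder_constraints(constraints, seed=1))
print(inject_distractors(constraints, distractors, seed=1))
```

A solver that is robust in the sense measured by the benchmark should return the same answer for the original constraint list and for both perturbed variants; the paper reports that current models degrade significantly under these changes.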
Problem

Research questions and friction points this paper is trying to address.

Assessing flexible, multifaceted reasoning abilities in LLMs
Measuring the integration of logical, spatial, and constraint-satisfaction reasoning
Diagnosing hallucination cascades and self-correction failures in generative models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the RiddleBench benchmark for multifaceted reasoning
Evaluates models on integrating logical, spatial, and constraint-satisfaction reasoning
Diagnoses reasoning failures such as hallucination cascades and self-confirmation bias
🔎 Similar Papers
No similar papers found.