🤖 AI Summary
Existing logic puzzle benchmarks (e.g., standard 9×9 Sudoku) lead large reasoning models (LRMs) to overfit to syntactic patterns and memorized solutions, obscuring deficits in genuine rule comprehension and generalization. Method: We introduce HardcoreLogic, a benchmark of over 5,000 puzzles across 10 games that provides the first systematic evaluation of "long-tail" variants of logic puzzles. It applies three orthogonal transformations: Increased Complexity (IC), Uncommon Elements (UE), and Unsolvable Puzzles (UP), combined with rule generation, constraint modeling, and multi-dimensional difficulty control to construct controllable, diverse, and highly challenging variants. Contribution/Results: Experiments reveal substantial performance degradation across mainstream LRMs on HardcoreLogic, exposing their heavy reliance on memorized shortcuts rather than robust logical reasoning. The benchmark offers a critical, empirically grounded evaluation tool for advancing research on generalizable and interpretable reasoning.
📝 Abstract
Large Reasoning Models (LRMs) have demonstrated impressive performance on complex tasks, including logic puzzle games that require deriving solutions satisfying all constraints. However, whether they can flexibly apply appropriate rules to varying conditions, particularly when faced with non-canonical game variants, remains an open question. Existing corpora focus on popular puzzles like 9×9 Sudoku, risking overfitting to canonical formats and memorization of solution patterns, which can mask deficiencies in understanding novel rules or adapting strategies to new variants. To address this, we introduce HardcoreLogic, a challenging benchmark of over 5,000 puzzles across 10 games, designed to test the robustness of LRMs on the "long-tail" of logic games. HardcoreLogic systematically transforms canonical puzzles along three dimensions: Increased Complexity (IC), Uncommon Elements (UE), and Unsolvable Puzzles (UP), reducing reliance on shortcut memorization. Evaluations on a diverse set of LRMs reveal significant performance drops, even for models achieving top scores on existing benchmarks, indicating heavy reliance on memorized stereotypes. While increased complexity is the dominant source of difficulty, models also struggle with subtle rule variations that do not necessarily make puzzles harder. Our systematic error analysis on solvable and unsolvable puzzles further highlights gaps in genuine reasoning. Overall, HardcoreLogic exposes the limitations of current LRMs and establishes a benchmark for advancing high-level logical reasoning.