HardcoreLogic: Challenging Large Reasoning Models with Long-tail Logic Puzzle Games

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing logic puzzle benchmarks (e.g., standard 9x9 Sudoku) let large reasoning models (LRMs) overfit to canonical formats and memorized solution patterns, obscuring deficits in genuine rule comprehension and generalization. Method: HardcoreLogic is a benchmark of over 5,000 puzzles across 10 games, offering the first systematic evaluation of "long-tail" variants of logic puzzles. It transforms canonical puzzles along three orthogonal dimensions, Increased Complexity (IC), Uncommon Elements (UE), and Unsolvable Puzzles (UP), and combines rule generation, constraint modeling, and multi-dimensional difficulty control to construct controllable, diverse, and highly challenging variants. Contribution/Results: Experiments reveal substantial performance degradation across mainstream LRMs on HardcoreLogic, exposing heavy reliance on memorized shortcuts rather than robust logical reasoning. The benchmark provides a critical, empirically grounded evaluation tool for advancing generalizable and interpretable reasoning.
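
As a concrete illustration of the constraint-modeling idea, here is a minimal sketch, assuming a Sudoku-style game: it checks a grid against the canonical rules and against one example uncommon-element rule (an anti-king constraint). All function names, the 0-for-empty encoding, and the anti-king rule itself are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch, not the authors' code: validate a Sudoku-style
# grid against canonical rules plus one example "uncommon element" rule.
# Empty cells are 0; box_h x box_w sub-boxes tile the n x n grid.

def units(grid, box_h, box_w):
    """All row, column, and box units of the grid."""
    n = len(grid)
    rows = [list(r) for r in grid]
    cols = [list(c) for c in zip(*grid)]
    boxes = [[grid[r][c]
              for r in range(br, br + box_h)
              for c in range(bc, bc + box_w)]
             for br in range(0, n, box_h)
             for bc in range(0, n, box_w)]
    return rows + cols + boxes

def all_distinct(unit):
    vals = [v for v in unit if v != 0]   # ignore empty cells
    return len(vals) == len(set(vals))

def satisfies_base_rules(grid, box_h, box_w):
    return all(all_distinct(u) for u in units(grid, box_h, box_w))

def satisfies_anti_king(grid):
    # Example variant rule: equal digits may not touch diagonally.
    # Checking the two downward diagonals covers all pairs by symmetry.
    n = len(grid)
    return all(grid[r][c] == 0 or grid[r + dr][c + dc] == 0
               or grid[r][c] != grid[r + dr][c + dc]
               for r in range(n) for c in range(n)
               for dr, dc in ((1, 1), (1, -1))
               if 0 <= r + dr < n and 0 <= c + dc < n)
```

A UE variant in this style would keep the board size fixed but require satisfies_base_rules(g, 3, 3) and satisfies_anti_king(g) to hold jointly, so a model that only memorized canonical Sudoku patterns would fail the extra check.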

📝 Abstract
Large Reasoning Models (LRMs) have demonstrated impressive performance on complex tasks, including logical puzzle games that require deriving solutions satisfying all constraints. However, whether they can flexibly apply appropriate rules to varying conditions, particularly when faced with non-canonical game variants, remains an open question. Existing corpora focus on popular puzzles like 9x9 Sudoku, risking overfitting to canonical formats and memorization of solution patterns, which can mask deficiencies in understanding novel rules or adapting strategies to new variants. To address this, we introduce HardcoreLogic, a challenging benchmark of over 5,000 puzzles across 10 games, designed to test the robustness of LRMs on the "long-tail" of logical games. HardcoreLogic systematically transforms canonical puzzles through three dimensions: Increased Complexity (IC), Uncommon Elements (UE), and Unsolvable Puzzles (UP), reducing reliance on shortcut memorization. Evaluations on a diverse set of LRMs reveal significant performance drops, even for models achieving top scores on existing benchmarks, indicating heavy reliance on memorized stereotypes. While increased complexity is the dominant source of difficulty, models also struggle with subtle rule variations that do not necessarily increase puzzle difficulty. Our systematic error analysis on solvable and unsolvable puzzles further highlights gaps in genuine reasoning. Overall, HardcoreLogic exposes the limitations of current LRMs and establishes a benchmark for advancing high-level logical reasoning.
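
The Unsolvable Puzzles (UP) dimension presupposes a way to certify that an instance admits no valid completion. Below is a minimal sketch of such a check, assuming plain exhaustive backtracking over a partial-grid validity predicate; the names and interface are illustrative, not the paper's pipeline.

```python
# Illustrative backtracking solvability check (assumed, not from the paper).
# `ok` must be a predicate that accepts PARTIAL grids: it ignores empty
# cells (marked 0) and rejects only definite rule violations.

def solvable(grid, n, ok):
    """Return True iff some completion of `grid` satisfies `ok`."""
    for r in range(n):
        for c in range(n):
            if grid[r][c] == 0:
                for v in range(1, n + 1):
                    grid[r][c] = v
                    if ok(grid) and solvable(grid, n, ok):
                        return True
                grid[r][c] = 0           # undo: no value fits this cell
                return False
    return ok(grid)                      # grid is complete; final check
```

Paired with a partial-grid checker such as satisfies_base_rules above, an instance for which solvable returns False is a UP candidate; on small boards the exhaustive search is cheap enough to serve as ground truth.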
Problem

Research questions and friction points this paper is trying to address.

Testing LRMs' flexibility with non-canonical logic puzzle variants
Reducing shortcut memorization through systematic puzzle transformations
Evaluating genuine reasoning gaps in solvable and unsolvable puzzles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces HardcoreLogic, a benchmark of over 5,000 puzzles across 10 games
Systematically transforms canonical puzzles along three dimensions: Increased Complexity, Uncommon Elements, and Unsolvable Puzzles (see the sketch after this list)
Evaluates model robustness on the long-tail of logic games
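
One way to picture the resulting variant space is as a small specification over the three axes plus difficulty knobs. The schema below is a hypothetical sketch; the benchmark's actual data format is not specified here, and every field name is an assumption.

```python
# Assumed schema for illustration only, not HardcoreLogic's real format.
from dataclasses import dataclass
from enum import Enum

class Axis(Enum):
    IC = "increased_complexity"   # e.g., larger grids, denser constraints
    UE = "uncommon_elements"      # e.g., non-standard symbols or extra rules
    UP = "unsolvable_puzzle"      # the instance admits no valid solution

@dataclass(frozen=True)
class VariantSpec:
    game: str                     # e.g., "sudoku"
    axes: frozenset               # subset of Axis applied to this instance
    size: int = 9                 # one coarse difficulty knob
    extra_rules: tuple = ()       # variant rules, e.g., ("anti-king",)

spec = VariantSpec(game="sudoku",
                   axes=frozenset({Axis.IC, Axis.UE}),
                   size=16,
                   extra_rules=("anti-king",))
```

Keeping axes as a set makes it explicit that the dimensions can be combined, e.g., an oversized grid (IC) carrying a variant rule (UE).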
Authors

Jingcong Liang, Fudan University (Computational Argumentation, Large Language Model)
Shijun Wan, Fudan University
Xuehai Wu, Fudan University
Siyuan Wang, University of Southern California
Yitong Li, Huawei Technologies Ltd.
Qianglong Chen, Huawei Technologies Ltd.
Duyu Tang, Huawei (Natural Language Processing)
Zhongyu Wei, Fudan University