Exposing Weaknesses of Large Reasoning Models through Graph Algorithm Problems

📅 2026-02-06
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Systematic benchmarks that jointly evaluate large reasoning models (LRMs) on long-context processing, verifiability, and reasoning complexity have been lacking. This work proposes GrAlgoBench, the first multidimensional reasoning evaluation framework grounded in graph algorithms, encompassing nine task categories and supporting programmatic automatic verification, fine-grained difficulty control, and reasoning-trace analysis. Experimental results reveal that LRM accuracy drops sharply below 50% once the number of graph nodes exceeds 120, primarily due to execution errors, weak memory retention, and redundant reasoning steps. The study also uncovers an "over-thinking" phenomenon caused by ineffective self-verification, highlighting a fundamental limitation of current models on complex, structured reasoning tasks.

πŸ“ Abstract
Large Reasoning Models (LRMs) have advanced rapidly; however, existing benchmarks in mathematics, code, and common-sense reasoning remain limited. They lack long-context evaluation, offer insufficient challenge, and provide answers that are difficult to verify programmatically. We introduce GrAlgoBench, a benchmark designed to evaluate LRMs through graph algorithm problems. Such problems are particularly well suited for probing reasoning abilities: they demand long-context reasoning, allow fine-grained control of difficulty levels, and enable standardized, programmatic evaluation. Across nine tasks, our systematic experiments reveal two major weaknesses of current LRMs. First, accuracy deteriorates sharply as context length increases, falling below 50% once graphs exceed 120 nodes. This degradation is driven by frequent execution errors, weak memory, and redundant reasoning. Second, LRMs suffer from an over-thinking phenomenon, primarily caused by extensive yet largely ineffective self-verification, which inflates reasoning traces without improving correctness. By exposing these limitations, GrAlgoBench establishes graph algorithm problems as a rigorous, multidimensional, and practically relevant testbed for advancing the study of reasoning in LRMs. Code is available at https://github.com/Bklight999/GrAlgoBench.
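The abstract's central claim is that graph algorithm problems support fine-grained difficulty control (scale the node count) and standardized programmatic evaluation (check answers against a reference solver). The sketch below illustrates that pipeline for one such task, shortest paths; the function names and generation scheme are illustrative assumptions, not taken from the GrAlgoBench codebase.

```python
import heapq
import random

def random_graph(n, p=0.3, max_w=10, seed=0):
    """Generate a random weighted undirected graph on n nodes.

    Growing n lengthens the serialized graph context -- the difficulty
    axis along which the paper reports accuracy collapsing past ~120 nodes.
    """
    rng = random.Random(seed)
    adj = {u: [] for u in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                w = rng.randint(1, max_w)
                adj[u].append((v, w))
                adj[v].append((u, w))
    return adj

def dijkstra(adj, src):
    """Reference solver: shortest-path distances from src."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def verify_shortest_path(adj, src, dst, claimed):
    """Programmatic check of a model's claimed shortest-path length."""
    return dijkstra(adj, src).get(dst, float("inf")) == claimed

# Toy instance: the path 0 -> 1 -> 2 costs 2 + 1 = 3, beating the direct edge.
g = {0: [(1, 2), (2, 5)], 1: [(0, 2), (2, 1)], 2: [(0, 5), (1, 1)]}
```

Because the verifier recomputes the answer exactly, no human grading or LLM judge is needed, which is what makes this family of tasks attractive for scalable evaluation.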
Problem

Research questions and friction points this paper is trying to address.

Large Reasoning Models
graph algorithm problems
long-context reasoning
reasoning evaluation
benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

graph algorithm benchmark
long-context reasoning
Large Reasoning Models
programmatic evaluation
over-thinking phenomenon