Modeling Hierarchical Thinking in Large Reasoning Models

📅 2025-10-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large reasoning models (LRMs) exhibit emergent hierarchical reasoning capabilities via chain-of-thought (CoT) generation, yet their internal reasoning dynamics remain poorly understood and lack interpretable, formal modeling. Method: We propose a memoryless finite-state machine (FSM) framework for modeling CoT reasoning dynamics. It abstracts CoT trajectories into high-level semantic state sequences, automatically identifying critical states (initialization, deduction, strategy augmentation, uncertainty estimation, and backtracking) and integrating human annotation for structured parsing and visualization. Contribution/Results: Unlike black-box analyses, our framework systematically characterizes cross-model differences in reasoning pathways, pinpoints typical failure patterns (e.g., strategy rigidity, insufficient backtracking), and exposes underlying mechanistic deficiencies. It establishes a novel, interpretable, and computationally tractable paradigm for evaluating reasoning capability, guiding training optimization, and enhancing model robustness.

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable reasoning abilities when they generate step-by-step solutions, known as chain-of-thought (CoT) reasoning. When trained on chain-of-thought reasoning examples, the resulting models (called Large Reasoning Models, or LRMs) appear to learn hierarchical thinking strategies similar to those used by humans. However, understanding LRMs' emergent reasoning capabilities remains a difficult open problem, with important potential applications including improving training and understanding robustness. In this paper, we adopt a memoryless Finite State Machine (FSM) formulation to approximate an LRM's emergent hierarchical reasoning dynamics as a structured, interpretable abstraction. We identify a small set of discrete reasoning states (initialization, deduction, augmentation-strategy, uncertainty-estimation, backtracking, and final-conclusion) that capture the high-level phases present in the model's reasoning process. By annotating each step of a model's CoT with these states, we can represent the reasoning trajectory as a transition sequence through the state graph. This FSM formulation provides a systematic way to analyze, interpret, and visualize how different models approach problems. We describe the FSM model, provide examples of CoT annotations under this scheme, and discuss how it can shed light on differences between available models in their approach to reasoning. Our results demonstrate that this FSM-based analysis reveals distinct reasoning patterns and potential shortcomings, offering a new lens to evaluate and improve LLM reasoning.
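The transition-sequence representation described in the abstract can be sketched in a few lines. The six state labels come from the paper; the annotated trajectory below is a hypothetical example, not one taken from the paper's data, and the annotation itself (mapping CoT steps to states) is assumed to have been done upstream.

```python
from collections import Counter

# The six reasoning states named in the paper.
STATES = [
    "initialization", "deduction", "augmentation-strategy",
    "uncertainty-estimation", "backtracking", "final-conclusion",
]

# A hypothetical annotated CoT trajectory: one state label per reasoning step.
trajectory = [
    "initialization", "deduction", "deduction",
    "uncertainty-estimation", "backtracking",
    "deduction", "final-conclusion",
]

# Under the memoryless (first-order) FSM view, the trajectory is fully
# summarized by its pairwise state transitions.
transitions = Counter(zip(trajectory, trajectory[1:]))

for (src, dst), n in transitions.items():
    print(f"{src} -> {dst}: {n}")
```

The `Counter` of (source, target) pairs is exactly the edge-weight data needed to draw the trajectory as a walk on the state graph.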
Problem

Research questions and friction points this paper is trying to address.

Modeling hierarchical reasoning dynamics in Large Reasoning Models
Developing interpretable finite state machine for reasoning analysis
Identifying distinct reasoning patterns and potential model shortcomings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Finite State Machine models hierarchical reasoning dynamics
Discrete states annotate chain-of-thought reasoning steps
State transitions visualize and analyze reasoning patterns
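One way the bullets above could be operationalized: because the FSM is memoryless, a model's reasoning style can be summarized as an empirical transition-probability matrix estimated from many annotated trajectories, and matrices from different models can then be compared. This is a minimal sketch under that assumption; the trajectories are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical annotated trajectories (state labels per CoT step) from one model.
trajectories = [
    ["initialization", "deduction", "deduction", "final-conclusion"],
    ["initialization", "deduction", "uncertainty-estimation",
     "backtracking", "deduction", "final-conclusion"],
]

# Count transitions across all trajectories.
counts = defaultdict(Counter)
for traj in trajectories:
    for src, dst in zip(traj, traj[1:]):
        counts[src][dst] += 1

# Normalize row-wise into P(next state | current state); under the
# memoryless FSM assumption this matrix characterizes reasoning behavior,
# e.g. a low backtracking probability would flag "insufficient backtracking".
probs = {
    src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
    for src, dsts in counts.items()
}

print(probs["deduction"])
```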