What Makes a Good Reasoning Chain? Uncovering Structural Patterns in Long Chain-of-Thought Reasoning

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prior work has not clarified how structural characteristics of long chain-of-thought (LCoT) reasoning affect the correctness of large language model (LLM) inference. Method: We propose LCoT2Tree, the first framework to automatically model LCoT as a hierarchical tree structure, enabling systematic analysis of structural patterns—including exploration breadth, backtracking depth, and verification density—as predictors of answer correctness. We further identify interpretable failure modes (e.g., “over-branching”) and design a graph neural network (GNN)-based diagnostic model grounded in these structural features. Contribution/Results: Experiments across multiple tasks and LLMs demonstrate substantial improvements in diagnostic accuracy for reasoning processes. Moreover, using these structural insights to guide Best-of-N sampling significantly boosts final answer correctness. This work establishes structural modeling as a novel paradigm for interpreting and controllably optimizing LLM reasoning.

📝 Abstract
Recent advances in reasoning with large language models (LLMs) have popularized Long Chain-of-Thought (LCoT), a strategy that encourages deliberate and step-by-step reasoning before producing a final answer. While LCoTs have enabled expert-level performance in complex tasks, how the internal structures of their reasoning chains drive, or even predict, the correctness of final answers remains a critical yet underexplored question. In this work, we present LCoT2Tree, an automated framework that converts sequential LCoTs into hierarchical tree structures and thus enables deeper structural analysis of LLM reasoning. Using graph neural networks (GNNs), we reveal that structural patterns extracted by LCoT2Tree, including exploration, backtracking, and verification, serve as stronger predictors of final performance across a wide range of tasks and models. Leveraging an explainability technique, we further identify critical thought patterns such as over-branching that account for failures. Beyond diagnostic insights, the structural patterns extracted by LCoT2Tree support practical applications, including improving Best-of-N decoding effectiveness. Overall, our results underscore the critical role of internal structures of reasoning chains, positioning LCoT2Tree as a powerful tool for diagnosing, interpreting, and improving reasoning in LLMs.
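The structure-guided Best-of-N idea from the abstract can be sketched as follows. The scorer below is a stand-in for the paper's GNN-based diagnostic model, and the over-branching heuristic (counting "alternatively" as a branch cue) is an illustrative assumption, not the paper's actual feature set:

```python
# Sketch of structure-guided Best-of-N: instead of majority voting,
# rank N sampled chains by a structural quality score and keep the best.
# structural_score is a hypothetical stand-in for a learned GNN scorer.
def structural_score(chain: str) -> int:
    """Penalize chains that branch excessively (an over-branching proxy)."""
    branches = chain.lower().count("alternatively")
    return -branches  # fewer sibling branches -> higher score

def best_of_n(chains: list[str]) -> str:
    """Return the candidate chain the structural scorer ranks highest."""
    return max(chains, key=structural_score)

candidates = [
    "Alternatively try A. Alternatively try B. Alternatively try C. Answer: 7",
    "Compute directly. Verify the result. Answer: 5",
]
best = best_of_n(candidates)  # selects the second, less over-branched chain
```

In the paper's setting the scorer would be the trained GNN over the LCoT2Tree representation; this sketch only shows where such a scorer plugs into Best-of-N selection.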
Problem

Research questions and friction points this paper is trying to address.

Identifying structural patterns in long chain-of-thought reasoning
Predicting correctness of final answers using reasoning chain structures
Improving LLM reasoning through diagnostic and practical applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Converts sequential LCoTs into hierarchical trees
Uses GNNs to analyze structural reasoning patterns
Identifies critical thought patterns for failure analysis
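The tree conversion described above can be sketched minimally. The discourse markers used as branch/backtrack cues and the breadth feature below are illustrative assumptions, not the paper's exact segmentation procedure:

```python
# Minimal sketch: turn a sequential reasoning chain into a tree by
# treating certain discourse markers as structural cues. Marker sets
# and the feature definition are illustrative assumptions.
BRANCH_MARKERS = ("alternatively", "another approach")  # start a sibling branch
BACKTRACK_MARKERS = ("wait", "let me reconsider")       # step back toward the root

def lcot_to_tree(steps: list[str]) -> list[int]:
    """Return a parent-index list for the steps; -1 marks the root."""
    parents = [-1]
    stack = [0]  # path from root to the current node
    for i, step in enumerate(steps[1:], start=1):
        text = step.lower()
        if any(m in text for m in BACKTRACK_MARKERS) and len(stack) > 1:
            stack.pop()  # backtracking step attaches higher up the path
        elif any(m in text for m in BRANCH_MARKERS) and len(stack) > 1:
            stack.pop()  # branching step becomes a sibling of the current node
        parents.append(stack[-1])
        stack.append(i)
    return parents

def exploration_breadth(parents: list[int]) -> int:
    """Maximum number of children of any node (an over-branching signal)."""
    counts: dict[int, int] = {}
    for p in parents:
        counts[p] = counts.get(p, 0) + 1
    return max(counts.values())

steps = [
    "Try factoring the quadratic.",
    "The roots should be 2 and 3.",
    "Wait, the sign is wrong; let me reconsider.",
    "Alternatively, use the quadratic formula.",
    "So the roots are -2 and -3.",
]
parents = lcot_to_tree(steps)  # [-1, 0, 0, 0, 3]
```

A tree in this form could then be fed to a GNN (e.g. as an edge list) for the diagnostic classification the paper describes.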
Gangwei Jiang
University of Science and Technology of China
machine learning
Yahui Liu
Kuaishou Technology
Zhaoyi Li
University of Science and Technology of China
Qi Wang
Kuaishou Technology
Fuzheng Zhang
Kuaishou Technology
Linqi Song
Associate Professor, Department of Computer Science, City University of Hong Kong
Information Theory, Federated Learning, Natural Language Processing
Ying Wei
Zhejiang University
Machine Learning, Transfer Learning, Continual Learning, AI for Science
Defu Lian
University of Science and Technology of China