Graph-Based Chain-of-Thought Pruning for Reducing Redundant Reflections in Reasoning LLMs

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses inefficiency in reinforcement learning–enhanced chain-of-thought reasoning in large language models (LLMs), where sparse reward signals often induce redundant reflections, such as indiscriminate checking and repetitive verification, that degrade reasoning efficiency. The authors propose a directed acyclic graph (DAG)-based chain-of-thought pruning framework that explicitly models dependencies among reasoning steps and applies dual pruning strategies at the branch and depth levels to systematically identify and eliminate these two redundancy patterns. Combined with a three-stage training pipeline of supervised fine-tuning, direct preference optimization, and GRPO with a length penalty, the method reduces reasoning tokens by 42% on average while maintaining or improving task accuracy, substantially enhancing both conciseness and computational efficiency.
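To make the dual pruning concrete, here is a minimal Python sketch over a CoT trace already segmented into dependency-annotated steps. The `Step` fields, the `reflect` label, and the two pruning rules are illustrative assumptions for exposition, not the authors' exact formulation.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    idx: int                        # position in the original linear CoT
    kind: str                       # "reason" or "reflect" (illustrative labels)
    deps: list[int] = field(default_factory=list)  # DAG dependency edges
    verifies: int | None = None     # step this reflection re-checks, if any

def dual_prune(steps: list[Step], answer_idx: int) -> list[Step]:
    """Sketch of branch- and depth-level pruning over a CoT DAG."""
    by_idx = {s.idx: s for s in steps}

    # Transitive closure of everything the final answer depends on.
    needed: set[int] = set()
    stack = [answer_idx]
    while stack:
        i = stack.pop()
        if i not in needed:
            needed.add(i)
            stack.extend(by_idx[i].deps)

    kept, verified = [], set()
    for s in sorted(steps, key=lambda s: s.idx):
        if s.kind == "reflect":
            # Branch-level: drop reflections checking steps the answer never uses.
            if s.verifies not in needed:
                continue
            # Depth-level: drop repeat verifications of an established conclusion.
            if s.verifies in verified:
                continue
            verified.add(s.verifies)
        elif s.idx not in needed:
            continue                # reasoning step off the answer's dependency path
        kept.append(s)
    return kept

# e.g., a 5-step trace: step 4 (answer) uses 0 and 2; steps 1 and 3 both re-verify step 0
trace = [
    Step(0, "reason"),
    Step(1, "reflect", deps=[0], verifies=0),
    Step(2, "reason", deps=[0]),
    Step(3, "reflect", deps=[0], verifies=0),   # redundant re-verification
    Step(4, "reason", deps=[0, 2]),             # final answer
]
print([s.idx for s in dual_prune(trace, answer_idx=4)])   # -> [0, 1, 2, 4]
```

The one-verification-per-conclusion rule is the simplest reading of "eliminating late-stage re-verification"; the paper may use a learned or scored criterion instead of this hard cutoff.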
📝 Abstract
Extending chain-of-thought (CoT) reasoning through reinforcement learning (RL) has been widely used to enhance the reasoning capabilities of large language models (LLMs). However, due to the sparsity of reward signals, it can also induce undesirable thinking patterns such as overthinking, i.e., generating redundant intermediate reasoning content. In this work, we argue that a major source of such redundancy is inefficient reflection, which often manifests in two problematic patterns: Indiscriminate Reflection, where the model performs broad, low-impact checks throughout reasoning, and Repetitive Reflection, where it repeatedly re-verifies an already established conclusion. To address this, we introduce a graph-based CoT optimization framework. Specifically, we convert each linear CoT into a directed acyclic graph (DAG) with explicit dependency edges, and design a dual pruning strategy: branch-level pruning removes weakly contributing reflection branches, while depth-level pruning eliminates late-stage re-verification. We distill this behavior via a three-stage pipeline: (1) SFT to initialize the policy on pruned concise traces, (2) DPO to prefer correct but less redundant trajectories, and (3) GRPO with a length penalty to jointly optimize answer correctness and efficiency. Experiments show that our approach reduces average reasoning tokens by 42% while maintaining or improving accuracy.
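Stage (3) couples correctness with a length penalty inside GRPO's group-relative advantage. Below is a minimal sketch of such a shaped reward; the penalty form and the `budget` and `alpha` knobs are hypothetical, not values from the paper.

```python
import statistics

def reward(correct: bool, n_tokens: int, budget: int = 2048, alpha: float = 0.5) -> float:
    """Correctness reward minus a clipped linear length penalty (illustrative)."""
    base = 1.0 if correct else 0.0
    penalty = alpha * min(n_tokens / budget, 1.0)
    return base - penalty

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO-style advantage: standardize rewards within one sampled group."""
    mu = statistics.fmean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0      # guard against a zero-variance group
    return [(r - mu) / sigma for r in rewards]

# e.g., four rollouts for one prompt: correct-short, correct-long, two incorrect
rs = [reward(True, 400), reward(True, 1800), reward(False, 300), reward(False, 2000)]
print(group_relative_advantages(rs))   # shortest correct rollout gets the largest advantage
```

Under this shaping, two equally correct rollouts are separated purely by length, which is what lets the policy learn conciseness without sacrificing the correctness signal.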
Problem

Research questions and friction points this paper is trying to address.

Chain-of-Thought
Redundant Reflection
Reasoning Efficiency
Large Language Models
Overthinking
Innovation

Methods, ideas, or system contributions that make the work stand out.

graph-based CoT
reasoning redundancy
reflection pruning
DAG reasoning
efficient LLM reasoning
Hongyuan Yuan
School of Geosciences and Info-Physics, Central South University, Changsha, China
Xinran He
Baidu Inc., Beijing, China
Run Shao
School of Geosciences and Info-Physics, Central South University, Changsha, China
Bolei He
Baidu Inc., Beijing, China
Xianwei Xue
Baidu Inc., Beijing, China
Mengke Chen
Baidu Inc., Beijing, China
Qiutong Pan
Baidu Inc., Beijing, China
Haiwei Wang
Baidu Inc., Beijing, China
Haifeng Li
Central South University
GIS · Remote sensing · Machine learning · Sparse representation · Brain Theory