DeepPrune: Parallel Scaling without Inter-trace Redundancy

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Parallel chain-of-thought (CoT) inference in large language models (LLMs) suffers from severe computational inefficiency due to high inter-path redundancy (>80% of reasoning trajectories yield identical answers). To address this, we propose DeepPrune, the first dynamic pruning framework that jointly integrates discriminative equivalence prediction with online greedy clustering. A discriminative judge model, trained with focal loss and oversampling, identifies redundant reasoning paths in real time; equivalent branches are then pruned dynamically under an answer-diversity constraint. Evaluated across multiple benchmarks and mainstream LLMs, DeepPrune achieves over 80% token savings while accuracy degrades by at most 3 percentage points. This substantially improves parallel CoT inference efficiency and establishes a novel paradigm for resource-efficient reasoning.
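The judge model described above is trained with focal loss to handle the class imbalance between equivalent and non-equivalent trace pairs. A minimal sketch of the standard binary focal loss (Lin et al.'s formulation; the hyperparameter values `alpha=0.25`, `gamma=2.0` are conventional defaults, not values confirmed by the paper):

```python
import math

def focal_loss(p: float, y: int, alpha: float = 0.25, gamma: float = 2.0) -> float:
    """Binary focal loss for a single trace pair.

    p: predicted probability that the pair is answer-equivalent.
    y: ground-truth label (1 = equivalent, 0 = not equivalent).
    The (1 - p_t)^gamma factor down-weights easy, well-classified
    pairs so training focuses on hard (often minority-class) ones.
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))
```

Compared with plain cross-entropy, a confidently correct prediction (e.g. `p = 0.9, y = 1`) contributes far less loss than a hard one (`p = 0.6, y = 1`), which is the property that makes focal loss useful alongside oversampling for imbalanced equivalence labels.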

📝 Abstract
Parallel scaling has emerged as a powerful paradigm to enhance reasoning capabilities in large language models (LLMs) by generating multiple Chain-of-Thought (CoT) traces simultaneously. However, this approach introduces significant computational inefficiency due to inter-trace redundancy -- our analysis reveals that over 80% of parallel reasoning traces yield identical final answers, representing substantial wasted computation. To address this critical efficiency bottleneck, we propose DeepPrune, a novel framework that enables efficient parallel scaling through dynamic pruning. Our method features a specialized judge model, trained with focal loss and oversampling techniques, that predicts answer equivalence from partial reasoning traces (achieving 0.87 AUROC on equivalence prediction), combined with an online greedy clustering algorithm that dynamically prunes redundant paths while preserving answer diversity. Comprehensive evaluations across three challenging benchmarks (AIME 2024, AIME 2025, and GPQA) and multiple reasoning models demonstrate that DeepPrune reduces token usage by over 80% compared to conventional consensus sampling in most cases, while maintaining accuracy within 3 percentage points. Our work establishes a new standard for efficient parallel reasoning, making high-performance reasoning more efficient. Our code and data are here: https://deepprune.github.io/
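The online greedy clustering described in the abstract can be sketched as follows. This is a simplified illustration, not the paper's implementation: `judge` stands in for the trained equivalence model, and the threshold `0.5` and the cap on surviving clusters are assumed parameters.

```python
def greedy_prune(traces, judge, threshold=0.5, max_clusters=8):
    """Online greedy clustering over streaming partial traces.

    traces: iterable of partial reasoning traces (e.g. text prefixes).
    judge(a, b): hypothetical stand-in for the trained judge model;
        returns the probability (0..1) that a and b will reach the
        same final answer.
    Each incoming trace is compared against the kept representatives;
    if it is predicted equivalent to any of them it is pruned,
    otherwise it founds a new cluster -- so answer diversity among
    survivors is preserved by construction.
    """
    representatives = []
    for trace in traces:
        if any(judge(trace, rep) > threshold for rep in representatives):
            continue                      # predicted redundant: prune
        representatives.append(trace)     # new answer cluster survives
        if len(representatives) >= max_clusters:
            break
    return representatives
```

With a toy judge that calls two traces equivalent when they start with the same letter, `greedy_prune(["apple", "avocado", "banana", "berry", "cherry"], judge)` keeps one representative per first letter, mirroring how redundant reasoning paths collapse into a single surviving trace per predicted answer.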
Problem

Research questions and friction points this paper is trying to address.

Reduces computational redundancy in parallel reasoning traces
Prunes redundant paths while preserving answer diversity
Maintains accuracy with over 80% token reduction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic pruning framework for parallel scaling
Judge model predicts answer equivalence early
Greedy clustering algorithm preserves answer diversity