DISC: Dynamic Decomposition Improves LLM Inference Scaling

📅 2025-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing reasoning scaling methods rely on predefined, fixed-granularity step decomposition, limiting adaptability to varying problem complexity. This paper proposes a dynamic decomposition mechanism that adaptively partitions both the solution process and reasoning trace into variable-granularity steps during inference. Our core contributions are threefold: (1) the first confidence-driven real-time step decomposition; (2) priority-based sampling scheduling; and (3) multi-granularity reasoning path management—enabling difficulty-aware, dynamic allocation of computational resources and overcoming the limitations of static token-level or sentence-level step paradigms. Evaluated on APPS, MATH, and LiveCodeBench, our method achieves significant improvements in both reasoning efficiency and task accuracy, empirically validating the effectiveness and generalizability of adaptive step decomposition for scaling large language model reasoning.

📝 Abstract
Many inference scaling methods work by breaking a problem into smaller steps (or groups of tokens), then sampling and choosing the best next step. However, these steps and their sizes are usually predetermined based on human intuition or domain knowledge. This paper introduces dynamic decomposition, a method that automatically and adaptively splits solution and reasoning traces into steps during inference. This approach improves computational efficiency by focusing more resources on difficult steps, breaking them down further and prioritizing their sampling. Experiments on coding and math benchmarks (APPS, MATH, and LiveCodeBench) show that dynamic decomposition performs better than static methods, which rely on fixed steps like token-level, sentence-level, or single-step decompositions. These results suggest that dynamic decomposition can enhance many inference scaling techniques.
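The core loop described in the abstract—splitting hard steps into finer sub-steps and prioritizing them for further sampling—can be sketched with a min-heap keyed by a confidence score. This is a hypothetical illustration only, not the paper's implementation: `step_confidence` is a stub (a real system would derive confidence from model likelihoods), and the bisection rule and `budget` parameter are assumptions made for the sketch.

```python
import heapq

def step_confidence(step):
    # Stub confidence score for illustration: shorter steps score higher.
    # The paper derives confidence from the model at inference time.
    return 1.0 / (1.0 + len(step.split()))

def dynamic_decompose(trace, budget=8, conf_threshold=0.2):
    """Recursively split low-confidence steps into finer-grained ones.

    `trace` is a list of coarse step strings. Steps whose confidence
    falls below `conf_threshold` are bisected and re-queued, so the
    remaining sampling budget flows to the hardest parts of the trace.
    """
    # Min-heap ordered by confidence: the least-confident step pops first.
    heap = [(step_confidence(s), i, s) for i, s in enumerate(trace)]
    heapq.heapify(heap)
    final_steps = []
    while heap and budget > 0:
        conf, idx, step = heapq.heappop(heap)
        words = step.split()
        if conf < conf_threshold and len(words) > 1:
            # Hard step: split it in half and re-queue both parts.
            mid = len(words) // 2
            for part in (" ".join(words[:mid]), " ".join(words[mid:])):
                heapq.heappush(heap, (step_confidence(part), idx, part))
            budget -= 1
        else:
            # Confident enough (or atomic): accept as a final step.
            final_steps.append((idx, step))
    # Any steps still queued when the budget runs out are kept as-is.
    final_steps.extend((idx, step) for _, idx, step in heap)
    return [s for _, s in sorted(final_steps)]
```

The key design point mirrors the abstract: granularity is not fixed in advance; it emerges during inference, with more decomposition (and hence more compute) spent where confidence is lowest.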
Problem

Research questions and friction points this paper is trying to address.

Existing inference scaling methods fix step sizes in advance, based on human intuition or domain knowledge
Static token-level, sentence-level, or single-step granularity cannot adapt to varying problem difficulty
Compute is spread uniformly, so difficult steps are under-sampled while easy ones waste resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic decomposition adaptively splits solution and reasoning traces into variable-granularity steps during inference
Improves efficiency by further decomposing difficult steps and prioritizing them for sampling
Outperforms static token-level, sentence-level, and single-step decompositions on APPS, MATH, and LiveCodeBench
Jonathan Light
RPI PhD
Decision making under uncertainty, foundation models, reinforcement learning
Wei Cheng
NEC Laboratories America, Princeton, NJ, USA
Wu Yue
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ, USA
Masafumi Oyamada
Chief Scientist, NEC Corporation
Self-Improving AIs, Large Language Models, Knowledge Management
Mengdi Wang
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ, USA
Santiago Paternain
Rensselaer Polytechnic Institute
Reinforcement Learning, Optimization, Control Theory
Haifeng Chen
NEC Laboratories America, Princeton, NJ, USA