Divide-and-Conquer CoT: RL for Reducing Latency via Parallel Reasoning

📅 2026-01-30
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the high latency of chain-of-thought (CoT) reasoning in large language models, which stems from its inherently sequential nature and impedes real-time applications. To overcome this limitation, the authors propose a parallelized CoT framework that employs a "director" model to decompose complex reasoning tasks into subtasks amenable to concurrent execution by multiple "worker" models, thereby substantially shortening the critical path. The approach combines a divide-and-conquer strategy with multi-stage reinforcement learning and a dynamic data filtering mechanism. Evaluated on challenging mathematical benchmarks such as AIME 2024 and HMMT 2025, the method maintains competitive accuracy while reducing the longest reasoning path by 35–40%, leading to significantly lower inference latency.

πŸ“ Abstract
Long chain-of-thought reasoning (Long CoT) is now fundamental to state-of-the-art LLMs, especially in mathematical reasoning. However, LLM generation is highly sequential, and long CoTs lead to high latency. We propose to train Divide-and-Conquer CoT (DC-CoT) to reduce this latency. With DC-CoT, the model can act as a director that identifies distinct subtasks in its reasoning process that can be performed in parallel, and then spawns workers to execute those subtasks. Our goal is to achieve high accuracy with a low longest path length, a theoretical measure of the latency needed for the response. We start with a long CoT base model (DeepScaleR-1.5B-Preview) and first use SFT with a small curated demonstration set to initialize its ability to spawn workers in a fixed format. Because SFT degrades accuracy significantly, we design a multi-stage RL algorithm, with various data filtering strategies, to recover the accuracy while decreasing the longest path length. Across several benchmarks, including AIME 2024 and HMMT 2025, DC-CoT achieves accuracy similar to DeepScaleR-1.5B-Preview while decreasing the longest path length by 35–40%. Our code, SFT dataset, and models are publicly available at https://github.com/amahankali10/DC_CoT_RL_for_Low_Latency_CoT_with_Parallel_Reasoning.
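To make the "longest path length" metric concrete, here is a minimal sketch of how such a latency measure could be computed for a director/worker trace. The function name and trace representation are illustrative assumptions, not the paper's actual implementation: director segments generate sequentially, while workers spawned at the same point run in parallel, so only the longest worker in each group contributes to the critical path.

```python
def longest_path_length(director_segments, worker_groups):
    """Hypothetical critical-path token count for a director/worker trace.

    director_segments: token counts of the director's sequential segments.
    worker_groups: for each spawn point, token counts of the workers
    launched there (each group runs concurrently).
    """
    # Director text is generated sequentially, so all of it is on the path.
    total = sum(director_segments)
    for group in worker_groups:
        if group:
            # Parallel workers: only the slowest (longest) one adds latency.
            total += max(group)
    return total

# Example: 100 director tokens, with 3 workers spawned after segment 1.
director = [50, 30, 20]
workers = [[200, 180, 150], []]
print(longest_path_length(director, workers))  # 100 + 200 = 300
# A fully sequential CoT would pay 100 + 200 + 180 + 150 = 630 tokens.
```

Under this model, a 35–40% reduction in longest path length translates directly into lower wall-clock latency whenever workers can be decoded concurrently.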
Problem

Research questions and friction points this paper is trying to address.

Chain-of-Thought
Latency
Parallel Reasoning
Large Language Models
Mathematical Reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Divide-and-Conquer CoT
parallel reasoning
reinforcement learning
latency reduction
long chain-of-thought
Arvind V. Mahankali
Stanford University
Kaiyue Wen
PhD Student, Stanford University
Machine Learning, Natural Language Processing
Tengyu Ma
Stanford University