🤖 AI Summary
This work addresses the high latency of chain-of-thought (CoT) reasoning in large language models, which stems from its inherently sequential nature and impedes real-time applications. To overcome this limitation, the authors propose a parallelized CoT framework that employs a "director" model to decompose complex reasoning tasks into subtasks amenable to concurrent execution by multiple "worker" models, thereby substantially shortening the critical path. The approach innovatively integrates a divide-and-conquer strategy with multi-stage reinforcement learning and incorporates a dynamic data filtering mechanism. Evaluated on challenging mathematical benchmarks such as AIME 2024 and HMMT 2025, the method maintains competitive accuracy while reducing the longest reasoning path by 35%–40%, leading to significantly lower inference latency.
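The director/worker decomposition above can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: `director`, `worker`, and the thread-based concurrency are hypothetical placeholders for what would be LLM calls in DC-CoT.

```python
from concurrent.futures import ThreadPoolExecutor

def director(problem: str) -> list[str]:
    # Placeholder decomposition: in DC-CoT the director identifies
    # independent subtasks within its own reasoning trace.
    return [f"{problem}:part{i}" for i in range(3)]

def worker(subtask: str) -> str:
    # Placeholder for a worker model executing one subtask's reasoning.
    return f"solved({subtask})"

def solve(problem: str) -> list[str]:
    subtasks = director(problem)
    # Spawn one worker per subtask; they run concurrently, so wall-clock
    # latency tracks the slowest subtask rather than their sum.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(worker, subtasks))

results = solve("integral")
```

Because `pool.map` preserves input order, the director can aggregate worker outputs deterministically after the parallel phase.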
📝 Abstract
Long chain-of-thought reasoning (Long CoT) is now fundamental to state-of-the-art LLMs, especially in mathematical reasoning. However, LLM generation is highly sequential, and long CoTs lead to high latency. We propose to train Divide-and-Conquer CoT (DC-CoT) to reduce this latency. With DC-CoT, the model acts as a director that identifies distinct subtasks in its reasoning process that can be performed in parallel, and then spawns workers to execute those subtasks. Our goal is to achieve high accuracy while keeping the longest path length low, a theoretical measure of the latency needed for the response. We start with a long CoT base model (DeepScaleR-1.5B-Preview) and first use SFT with a small curated demonstration set to initialize its ability to spawn workers in a specific format. Because SFT degrades accuracy significantly, we design a multi-stage RL algorithm with various data filtering strategies to recover accuracy while decreasing the longest path length. Across several benchmarks, including AIME 2024 and HMMT 2025, DC-CoT achieves accuracy similar to DeepScaleR-1.5B-Preview while decreasing the longest path length by 35–40%. Our code, SFT dataset, and models are publicly available at https://github.com/amahankali10/DC_CoT_RL_for_Low_Latency_CoT_with_Parallel_Reasoning.
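The longest path length counts parallel workers only once, via the slowest worker, rather than summing them as a sequential CoT would. A minimal sketch of the metric, assuming a single spawn/join round with per-segment token counts (the function names and example trace are illustrative, not from the paper):

```python
def sequential_length(prefix: int, workers: list[int], suffix: int) -> int:
    # A plain long CoT emits every token in sequence,
    # so total latency is the sum of all segments.
    return prefix + sum(workers) + suffix

def longest_path_length(prefix: int, workers: list[int], suffix: int) -> int:
    # With parallel workers, only the longest worker lies on the
    # critical path between the director's spawn and join points.
    return prefix + (max(workers) if workers else 0) + suffix

# Hypothetical trace: 200 director tokens before spawning three workers,
# then 100 director tokens to aggregate their results.
seq = sequential_length(200, [500, 400, 450], 100)    # 1650 tokens
par = longest_path_length(200, [500, 400, 450], 100)  # 800 tokens
```

Under this toy trace the critical path shrinks by roughly half, which is the kind of reduction the 35–40% figure measures on real reasoning traces.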