🤖 AI Summary
This work addresses the limitation of existing large language models, which aggregate multiple chains of thought only at the trajectory level and thereby overlook high-quality intermediate steps embedded in partially correct attempts. To overcome this, the authors propose the Stitching Noisy Diffusion Thoughts (SNDT) framework—the first method enabling training-free, step-level reasoning composition. SNDT leverages a masked diffusion language model to generate diverse, low-cost reasoning trajectories, scores individual intermediate steps using a process reward model (PRM), and stitches together the highest-scoring steps across trajectories to construct a composite reasoning path, which is then used by an autoregressive solver to produce the final answer. By decoupling exploration, evaluation, and generation, SNDT achieves up to a 23.8% absolute improvement in average accuracy across six mathematical and programming benchmarks while reducing reasoning latency by up to 1.8× compared to methods such as Dream, LLaDA, and TiDAR.
📝 Abstract
Reasoning with large language models often benefits from generating multiple chains of thought, but existing aggregation strategies are typically trajectory-level (e.g., selecting the best trace or voting on the final answer), discarding useful intermediate work from partial or "nearly correct" attempts. We propose Stitching Noisy Diffusion Thoughts, a self-consistency framework that turns cheap diffusion-sampled reasoning into a reusable pool of step-level candidates. Given a problem, we (i) sample many diverse, low-cost reasoning trajectories using a masked diffusion language model, (ii) score every intermediate step with an off-the-shelf process reward model (PRM), and (iii) stitch the highest-scoring steps across trajectories into a composite rationale. This rationale then conditions an autoregressive (AR) solver to recompute only the final answer. This modular pipeline separates exploration (diffusion) from evaluation and solution synthesis, avoiding monolithic unified hybrids while preserving broad search. Across math reasoning benchmarks, we find that step-level recombination is most beneficial on harder problems, and ablations highlight the importance of the final AR solver in converting stitched but imperfect rationales into accurate answers. Using low-confidence diffusion sampling with parallel, independent rollouts, our training-free framework improves average accuracy by up to 23.8% across six math and coding tasks. At the same time, it achieves up to a 1.8× latency reduction relative to both traditional diffusion models (e.g., Dream, LLaDA) and unified architectures (e.g., TiDAR). Code is available at https://github.com/roymiles/diffusion-stitching.
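The step-level stitching described in (ii)–(iii) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the trajectories, the `stitch_steps` helper, and the keyword-based `toy_prm` scorer are all stand-ins for masked-diffusion rollouts and a learned process reward model, and the final AR solver call is omitted.

```python
from typing import Callable, List

def stitch_steps(
    trajectories: List[List[str]],      # each trajectory is a list of reasoning steps
    prm_score: Callable[[str], float],  # process reward model: step -> quality score
) -> List[str]:
    """Build a composite rationale by picking, at each step index,
    the highest-PRM-scoring candidate step across all trajectories."""
    n_steps = max(len(t) for t in trajectories)
    composite = []
    for i in range(n_steps):
        candidates = [t[i] for t in trajectories if i < len(t)]
        composite.append(max(candidates, key=prm_score))
    return composite

# Two toy trajectories: one goes wrong mid-way, one stays correct.
trajectories = [
    ["let x = 2", "x^2 = 5 (wrong)", "answer: 5 (wrong)"],
    ["let x = 2", "x^2 = 4", "answer: 4"],
]
# Stand-in PRM: penalizes steps flagged as wrong.
toy_prm = lambda step: 0.0 if "wrong" in step else 1.0

rationale = stitch_steps(trajectories, toy_prm)
print(rationale)  # -> ['let x = 2', 'x^2 = 4', 'answer: 4']
```

In the full pipeline, the stitched rationale would then be passed as conditioning context to the AR solver, which recomputes only the final answer.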