Fast Monte Carlo Tree Diffusion: 100x Speedup via Parallel Sparse Planning

📅 2025-06-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Diffusion models suffer from low inference efficiency in long-horizon trajectory planning, and Monte Carlo Tree Diffusion (MCTD) compounds this with substantial computational overhead from sequential tree search and iterative denoising. To address these challenges, this paper proposes Fast-MCTD, a framework with two key innovations: (1) Parallel MCTD, which enables efficient concurrent rollouts via delayed tree updates and redundancy-aware node selection; and (2) Sparse MCTD, which coarsens the temporal dimension of trajectories to reduce modeling complexity and drastically shrink the search space. Experiments on multi-task planning benchmarks demonstrate that Fast-MCTD attains up to 100× acceleration over standard MCTD while matching or exceeding its planning performance, and that it even surpasses search-free diffusion planners in inference speed on some tasks.
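The Parallel MCTD idea above can be sketched in miniature: select a whole batch of leaves before applying any statistics updates (delayed tree updates), using a virtual-loss penalty so concurrent selections diverge rather than all claiming the same node (one common way to realize redundancy-aware selection; the paper's actual mechanism may differ). All names below are illustrative, not the authors' code, and the diffusion rollout itself is abstracted away.

```python
import math

class Node:
    """Minimal search-tree node; `pending` counts in-flight rollouts (virtual loss)."""
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0
        self.pending = 0

def ucb(node, c=1.4):
    # Pending rollouts count as visits with zero reward, lowering the score
    # of nodes already claimed in this batch.
    n = node.visits + node.pending
    if n == 0:
        return float("inf")
    parent_n = node.parent.visits + node.parent.pending
    return node.value / n + c * math.sqrt(math.log(parent_n + 1) / n)

def select_batch(root, batch_size):
    """Pick `batch_size` leaves WITHOUT updating tree statistics (delayed updates)."""
    leaves = []
    for _ in range(batch_size):
        node = root
        node.pending += 1
        while node.children:
            node = max(node.children, key=ucb)
            node.pending += 1  # virtual loss along the path steers later picks away
        leaves.append(node)
    return leaves

def backup(leaf, reward):
    """Apply one rollout's result and release its virtual losses."""
    node = leaf
    while node is not None:
        node.visits += 1
        node.value += reward
        node.pending = max(0, node.pending - 1)
        node = node.parent
```

In use, `select_batch` would be followed by evaluating all selected leaves concurrently (e.g. denoising their partial trajectories as one batched model call), then calling `backup` for each result, so the expensive diffusion steps amortize across the batch instead of running one sequential rollout per tree update.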

📝 Abstract
Diffusion models have recently emerged as a powerful approach for trajectory planning. However, their inherently non-sequential nature limits their effectiveness in long-horizon reasoning tasks at test time. The recently proposed Monte Carlo Tree Diffusion (MCTD) offers a promising solution by combining diffusion with tree-based search, achieving state-of-the-art performance on complex planning problems. Despite its strengths, our analysis shows that MCTD incurs substantial computational overhead due to the sequential nature of tree search and the cost of iterative denoising. To address this, we propose Fast-MCTD, a more efficient variant that preserves the strengths of MCTD while significantly improving its speed and scalability. Fast-MCTD integrates two techniques: Parallel MCTD, which enables parallel rollouts via delayed tree updates and redundancy-aware selection; and Sparse MCTD, which reduces rollout length through trajectory coarsening. Experiments show that Fast-MCTD achieves up to 100x speedup over standard MCTD while maintaining or improving planning performance. Remarkably, it even outperforms Diffuser in inference speed on some tasks, despite Diffuser requiring no search and yielding weaker solutions. These results position Fast-MCTD as a practical and scalable solution for diffusion-based inference-time reasoning.
Problem

Research questions and friction points this paper is trying to address.

Improve the inference speed of Monte Carlo Tree Diffusion (MCTD)
Reduce computational overhead in diffusion-based planning
Scale tree-search planning to long-horizon reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel MCTD runs concurrent rollouts via delayed tree updates and redundancy-aware selection
Sparse MCTD shortens rollouts through trajectory coarsening
Preserves MCTD's planning quality while achieving up to 100× speedup
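The Sparse MCTD coarsening idea can be illustrated on a toy 1D trajectory: plan over every k-th waypoint, then interpolate back to the dense resolution. This is a simplified sketch under the assumption of scalar states and linear interpolation; the paper operates on diffusion-modeled trajectories, and the function names are hypothetical.

```python
def coarsen(trajectory, factor):
    """Keep every `factor`-th state, always retaining the final state,
    so the search operates over a much shorter sequence."""
    coarse = trajectory[::factor]
    if coarse[-1] != trajectory[-1]:
        coarse.append(trajectory[-1])
    return coarse

def refine(coarse, factor):
    """Linearly interpolate between coarse waypoints back to dense resolution."""
    dense = []
    for a, b in zip(coarse, coarse[1:]):
        for i in range(factor):
            t = i / factor
            dense.append(a + t * (b - a))
    dense.append(coarse[-1])
    return dense
```

A horizon-H trajectory coarsened by factor k leaves only about H/k steps to denoise and search over, which is where the reduction in both modeling complexity and search-space size comes from.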