🤖 AI Summary
Diffusion models for world modeling often suffer from inconsistent trajectory generation due to entanglement between conditional understanding and target denoising within a shared architecture. To address this, we propose Foresight Diffusion, a novel dual-stream decoupled framework: one deterministic prediction stream models temporal conditions, while the other incorporates distilled guidance representations from a pre-trained predictor to specialize in target denoising. This explicit separation of semantic conditioning from noise elimination overcomes an inherent consistency bottleneck in diffusion-based trajectory modeling. Evaluated on robotic video prediction and scientific spatiotemporal forecasting tasks, our method achieves significant improvements in both prediction accuracy and sample trajectory consistency, outperforming state-of-the-art diffusion models and streaming world model baselines.
📄 Abstract
Diffusion and flow-based models have enabled significant progress in generation tasks across various modalities and have recently found applications in world modeling. However, unlike typical generation tasks that encourage sample diversity, world models entail different sources of uncertainty and require consistent samples aligned with the ground-truth trajectory, a requirement that, as we empirically observe, diffusion models often fail to satisfy. We argue that a key bottleneck in learning consistent diffusion-based world models lies in their suboptimal predictive ability, which we attribute to the entanglement of condition understanding and target denoising within shared architectures and co-training schemes. To address this, we propose Foresight Diffusion (ForeDiff), a diffusion-based world modeling framework that enhances consistency by decoupling condition understanding from target denoising. ForeDiff incorporates a separate deterministic predictive stream to process conditioning inputs independently of the denoising stream, and further leverages a pretrained predictor to extract informative representations that guide generation. Extensive experiments on robot video prediction and scientific spatiotemporal forecasting show that ForeDiff improves both predictive accuracy and sample consistency over strong baselines, offering a promising direction for diffusion-based world models.
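To make the decoupling idea concrete, here is a minimal toy sketch of the two-stream structure described above: a deterministic predictive stream maps the conditioning input to a guidance representation, and a separate denoising stream consumes only the noisy target plus that representation. All dimensions, weights, and function names are hypothetical illustrations, not the paper's actual architecture (which the abstract does not specify at this level of detail).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, chosen only for illustration).
D_COND, D_REPR, D_TARGET = 8, 4, 8

# Deterministic predictive stream: stands in for the pretrained
# predictor that turns conditioning inputs into guidance features.
W_pred = rng.normal(size=(D_COND, D_REPR)) * 0.1

def predict_representation(condition):
    """Deterministic stream: condition -> guidance representation."""
    return np.tanh(condition @ W_pred)

# Denoising stream: sees the noisy target and the guidance
# representation, but never the raw conditioning input itself.
W_x = rng.normal(size=(D_TARGET, D_TARGET)) * 0.1
W_g = rng.normal(size=(D_REPR, D_TARGET)) * 0.1

def denoise_step(x_noisy, guidance):
    """One denoising update, modulated by the guidance features."""
    return x_noisy + x_noisy @ W_x + guidance @ W_g

condition = rng.normal(size=D_COND)     # conditioning input
x = rng.normal(size=D_TARGET)           # noisy target sample
g = predict_representation(condition)   # computed once, decoupled
for _ in range(3):                      # a few guided denoising steps
    x = denoise_step(x, g)
```

The key point the sketch captures is architectural: the condition is digested once by its own stream, so the denoiser specializes in noise removal rather than splitting capacity between understanding the condition and cleaning the target.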