🤖 AI Summary
To address the limitation of deterministic regression in visual foundation model (VFM) feature spaces—namely, its inability to capture future uncertainty in world modeling—this paper proposes the first autoregressive flow-matching-based generative temporal forecasting framework in VFM feature space. Methodologically: (i) it constructs a compact, information-preserving latent space, replacing PCA for improved representation fidelity; (ii) it integrates VFM feature distillation with a multimodal decoder supporting RGB, depth, surface normals, and semantic segmentation, enabling both deterministic decoding and probabilistic uncertainty modeling. Experiments demonstrate that, under comparable computational cost, the approach yields sharper and more accurate predictions across all modalities; moreover, the learned latent representations significantly outperform PCA baselines in both forecasting and image generation tasks. The core contribution lies in introducing flow matching into VFM feature-space generative modeling, thereby achieving efficient, semantically rich, and uncertainty-aware multimodal world prediction.
📝 Abstract
Forecasting from partial observations is central to world modeling. Many recent methods represent the world through images, and reduce forecasting to stochastic video generation. Although such methods excel at realism and visual fidelity, predicting pixels is computationally intensive and of limited direct use in many applications, as RGB must first be translated into signals useful for decision making. An alternative approach uses features from vision foundation models (VFMs) as world representations, performing deterministic regression to predict future world states. These features can be directly translated into actionable signals such as semantic segmentation and depth, while remaining computationally efficient. However, deterministic regression averages over multiple plausible futures, undermining forecast accuracy by failing to capture uncertainty. To address this crucial limitation, we introduce a generative forecaster that performs autoregressive flow matching in VFM feature space. Our key insight is that generative modeling in this space requires encoding VFM features into a compact latent space suitable for diffusion. We show that this latent space preserves information more effectively than previously used PCA-based alternatives, both for forecasting and for other applications, such as image generation. Our latent predictions can be easily decoded into multiple useful and interpretable output modalities: semantic segmentation, depth, surface normals, and even RGB. With matched architecture and compute, our method produces sharper and more accurate predictions than regression across all modalities. Our results suggest that stochastic conditional generation of VFM features offers a promising and scalable foundation for future world models.
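To make the core idea concrete, here is a minimal toy sketch of autoregressive flow matching in a latent feature space. All names here (`velocity`, `sample_next_latent`, `autoregressive_forecast`) are hypothetical and not from the paper; the hand-written "velocity field" stands in for a learned network conditioned on past latents, and the Euler loop stands in for a proper ODE solver.

```python
import numpy as np

def velocity(x, t, context):
    # Toy stand-in for a learned velocity field v_theta(x, t | context)
    # trained with the flow-matching objective; here it just points
    # straight from the current sample toward the context mean.
    target = context.mean(axis=0)
    return target - x

def sample_next_latent(context, dim, n_steps=20, rng=None):
    """Integrate dx/dt = v(x, t | context) from noise (t=0) to a sample (t=1)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = rng.standard_normal(dim)          # start from Gaussian noise
    dt = 1.0 / n_steps
    for k in range(n_steps):              # simple Euler integration
        x = x + dt * velocity(x, k * dt, context)
    return x

def autoregressive_forecast(past_latents, horizon):
    """Roll out `horizon` future latents, feeding each sample back as context."""
    latents = list(past_latents)
    dim = past_latents[0].shape[0]
    for _ in range(horizon):
        context = np.stack(latents[-4:])  # condition on a short history window
        latents.append(sample_next_latent(context, dim))
    return latents[len(past_latents):]
```

Because each future latent is *sampled* (integration starts from fresh noise) rather than regressed, different rollouts yield different plausible futures instead of an average of them; in the paper's setting, each sampled latent would then be decoded into segmentation, depth, normals, or RGB.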