🤖 AI Summary
This work addresses the limitations of existing latent world models, which struggle to capture long-term semantics from short observation windows, and of vision-language models (VLMs), whose sparse sampling and textual outputs make them unsuitable as dense predictors. To bridge this gap, the authors propose a dual-temporal-path framework: a dense JEPA branch models fine-grained dynamics, while a uniformly sampled VLM “thinker” branch—augmented with a hierarchical pyramid representation module—aggregates multi-level VLM semantic features into guidance signals compatible with the latent space. This enables effective knowledge transfer from the VLM to the dense prediction model. Evaluated on hand-manipulation trajectory forecasting, the approach significantly outperforms both pure VLM and JEPA baselines, demonstrating superior robustness in long-horizon rollouts.
📝 Abstract
Recent progress in latent world models (e.g., V-JEPA2) has shown promising capability in forecasting future world states from video observations. Nevertheless, dense prediction from a short observation window limits temporal context and can bias predictors toward local, low-level extrapolation, making it difficult to capture long-horizon semantics and reducing downstream utility. Vision-language models (VLMs), in contrast, provide strong semantic grounding and general knowledge by reasoning over uniformly sampled frames, but they are not ideal as standalone dense predictors due to compute-driven sparse sampling, a language-output bottleneck that compresses fine-grained interaction states into text-oriented representations, and a data-regime mismatch when adapting to small action-conditioned datasets. We propose a VLM-guided JEPA-style latent world modeling framework that combines dense-frame dynamics modeling with long-horizon semantic guidance via a dual-temporal pathway: a dense JEPA branch for fine-grained motion and interaction cues, and a uniformly sampled VLM “thinker” branch with a larger temporal stride for knowledge-rich guidance. To transfer the VLM's progressive reasoning signals effectively, we introduce a hierarchical pyramid representation extraction module that aggregates multi-layer VLM representations into guidance features compatible with latent prediction. Experiments on hand-manipulation trajectory prediction show that our method outperforms both a strong VLM-only baseline and a JEPA-predictor baseline, and yields more robust long-horizon rollout behavior.
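To make the dual-temporal pathway concrete, the sketch below illustrates the sampling and aggregation pattern described in the abstract: a dense branch over a contiguous short window, a sparse "thinker" branch with a larger temporal stride, and a pyramid-style aggregation of multi-layer features into a single guidance vector. All shapes, window sizes, stride values, and the fixed level weights are illustrative assumptions, not the paper's actual configuration; the real module would use learned projections rather than mean pooling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 64-frame clip with 128-dim per-frame latent features.
T, D = 64, 128
video_feats = rng.normal(size=(T, D))

def dense_window(feats, start, length=8):
    """Dense JEPA branch: a contiguous stride-1 window for fine-grained dynamics."""
    return feats[start:start + length]

def uniform_sample(feats, stride=8):
    """Sparse 'thinker' branch: uniform sampling with a larger temporal stride
    to cover long-horizon context at fixed compute."""
    return feats[::stride]

def pyramid_guidance(layer_feats, weights):
    """Illustrative pyramid aggregation: pool each VLM layer's features over
    time, then combine levels with normalized weights into one guidance vector."""
    pooled = [f.mean(axis=0) for f in layer_feats]   # temporal pooling per level
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize level weights
    return np.sum([wi * p for wi, p in zip(w, pooled)], axis=0)

dense = dense_window(video_feats, start=0)            # shape (8, 128)
sparse = uniform_sample(video_feats)                  # shape (8, 128)

# Stand-in for three VLM layers (shallow -> deep) over the sparse frames.
layers = [sparse + rng.normal(scale=0.1, size=sparse.shape) for _ in range(3)]
guidance = pyramid_guidance(layers, weights=[1.0, 2.0, 3.0])  # shape (128,)
```

Note how the two branches see the same clip at different temporal resolutions: the dense window covers 8 consecutive frames, while the strided sample spans the full 64 frames with the same token budget, which is the compute trade-off the abstract attributes to the VLM branch.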