🤖 AI Summary
This work addresses the challenges of long-horizon planning and poor action initialization in high-dimensional action spaces for vision-based navigation driven by natural language instructions. The authors propose PiJEPA, a two-stage framework that first leverages a fine-tuned generalist policy, built on the Octo architecture with a frozen DINOv2 or V-JEPA-2 visual encoder, to generate language- and vision-conditioned action priors. These priors then guide Model Predictive Path Integral (MPPI) planning within a JEPA world model. Notably, this is the first approach to use policy-derived action priors to initialize the sampling distribution of a world-model planner, substantially improving both planning efficiency and instruction-following fidelity. Experiments demonstrate that PiJEPA outperforms both pure policy execution and unguided planning on real-world navigation tasks, achieving significant gains in goal-reaching accuracy and adherence to natural language instructions.
📝 Abstract
Navigating to a visually specified goal given natural language instructions remains a fundamental challenge in embodied AI. Existing approaches either rely on reactive policies that struggle with long-horizon planning or employ world models that suffer from poor action initialization in high-dimensional spaces. We present PiJEPA, a two-stage framework that combines the strengths of learned navigation policies with latent world model planning for instruction-conditioned visual navigation. In the first stage, we fine-tune an Octo-based generalist policy, augmented with a frozen pretrained vision encoder (DINOv2 or V-JEPA-2), on the CAST navigation dataset to produce an informed action distribution conditioned on the current observation and language instruction. In the second stage, we use this policy-derived distribution to warm-start Model Predictive Path Integral (MPPI) planning over a separately trained JEPA world model, which predicts future latent states in the embedding space of the same frozen encoder. By initializing the MPPI sampling distribution from the policy prior rather than from an uninformed Gaussian, our planner converges faster to high-quality action sequences that reach the goal. We systematically study the effect of the vision encoder backbone, comparing DINOv2 and V-JEPA-2, across both the policy and world model components. Experiments on real-world navigation tasks demonstrate that PiJEPA significantly outperforms both standalone policy execution and uninformed world model planning, achieving improved goal-reaching accuracy and instruction-following fidelity.
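The core mechanism, warm-starting MPPI from a policy prior rather than a zero-mean Gaussian, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the 2-D additive latent dynamics, the distance-to-goal cost, and the hand-written `prior_mean` are stand-ins for the JEPA world model, its learned latent cost, and the Octo policy's predicted action sequence.

```python
import numpy as np

def rollout_cost(step, cost, z0, actions):
    """Roll one candidate action sequence through the latent world model,
    accumulating the cost of each visited latent state."""
    z, total = z0, 0.0
    for a in actions:
        z = step(z, a)
        total += cost(z)
    return total

def mppi_plan(step, cost, z0, prior_mean, n_samples=256, n_iters=3,
              sigma=0.3, temperature=0.1, seed=0):
    """MPPI warm-started from a policy-derived action prior.

    `prior_mean` has shape (horizon, action_dim) and centers the initial
    sampling distribution; an uninformed planner would start from zeros.
    """
    rng = np.random.default_rng(seed)
    mean = prior_mean.copy()
    for _ in range(n_iters):
        # Sample action sequences around the current mean.
        noise = rng.normal(0.0, sigma, size=(n_samples,) + mean.shape)
        candidates = mean[None] + noise
        costs = np.array([rollout_cost(step, cost, z0, a) for a in candidates])
        # Softmax-style weighting: low-cost rollouts dominate the update.
        w = np.exp(-(costs - costs.min()) / temperature)
        w /= w.sum()
        mean = np.einsum('n,nha->ha', w, candidates)
    return mean

# Toy stand-ins: latent state is a 2-D embedding, dynamics are additive,
# cost is distance to a goal embedding.
goal = np.array([1.0, 1.0])
step = lambda z, a: z + a
cost = lambda z: float(np.linalg.norm(z - goal))

# Hypothetical policy prior: four steps heading toward the goal.
prior = np.tile([0.25, 0.25], (4, 1))
plan = mppi_plan(step, cost, np.zeros(2), prior)
```

Because the prior already points toward the goal, the sampler spends its budget refining a good trajectory instead of discovering one from scratch, which is the efficiency gain the abstract attributes to warm-starting.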