🤖 AI Summary
Vision-language models (VLMs) struggle to transfer to robot navigation because their pretraining objectives are misaligned with the requirements of embodied action, particularly the mismatch in action spaces and the lack of navigation-specific supervision.
Method: We propose VENTURA, the first approach to leverage internet-scale pretrained image diffusion models for natural-language-guided open-world navigation. It fine-tunes a diffusion model to generate semantic-aware visual path masks, without requiring pixel-level annotations, and decodes them into safe, executable trajectories via a lightweight behavior-cloning policy. Supervision is derived from path masks produced by self-supervised visual tracking paired with VLM-augmented captions.
Contribution/Results: By introducing visual path masks as a unified intermediate representation, VENTURA generalizes to unseen combinations of distinct tasks, revealing emergent compositional behaviors. Real-world experiments show a 33% improvement in navigation success rate and a 54% reduction in collisions, outperforming state-of-the-art methods on object reaching, obstacle avoidance, and terrain preference tasks in both seen and unseen environments.
📝 Abstract
Robots must adapt to diverse human instructions and operate safely in unstructured, open-world environments. Recent Vision-Language Models (VLMs) offer strong priors for grounding language and perception, but remain difficult to steer for navigation due to differences in action spaces and pretraining objectives that hamper transferability to robotics tasks. To address this, we introduce VENTURA, a vision-language navigation system that fine-tunes internet-pretrained image diffusion models for path planning. Instead of directly predicting low-level actions, VENTURA generates a path mask (i.e., a visual plan) in image space that captures fine-grained, context-aware navigation behaviors. A lightweight behavior-cloning policy grounds these visual plans into executable trajectories, yielding an interface that follows natural language instructions to generate diverse robot behaviors. To scale training, we supervise on path masks derived from self-supervised tracking models paired with VLM-augmented captions, avoiding manual pixel-level annotation or highly engineered data collection setups. In extensive real-world evaluations, VENTURA outperforms state-of-the-art foundation model baselines on object reaching, obstacle avoidance, and terrain preference tasks, improving success rates by 33% and reducing collisions by 54% across both seen and unseen scenarios. Notably, we find that VENTURA generalizes to unseen combinations of distinct tasks, revealing emergent compositional capabilities. Videos, code, and additional materials: https://venturapath.github.io
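To make the path-mask interface concrete, here is a minimal sketch of decoding a predicted mask into image-space waypoints. Note that VENTURA grounds masks with a *learned* behavior-cloning policy; the purely geometric decoder below (the `mask_to_waypoints` helper and its row-sampling scheme are our own illustrative assumptions, not the paper's method) simply samples the mean mask column at evenly spaced image rows, ordered from the bottom of the image (near the robot) to the top (far).

```python
import numpy as np

def mask_to_waypoints(path_mask: np.ndarray, num_points: int = 5) -> np.ndarray:
    """Decode a binary path mask (H x W) into (x, y) image-space waypoints.

    A simple geometric stand-in for a learned grounding policy: sample
    `num_points` evenly spaced rows from the bottom (near) to the top
    (far) of the image, and take the mean column of mask pixels in each.
    Rows the predicted path does not cover are skipped.
    """
    H, _ = path_mask.shape
    rows = np.linspace(H - 1, 0, num_points).astype(int)
    waypoints = []
    for r in rows:
        cols = np.nonzero(path_mask[r])[0]
        if cols.size:  # only keep rows where the mask is present
            waypoints.append((float(cols.mean()), float(r)))
    return np.array(waypoints)

# Example: a straight path painted down column 4 of a 10x10 mask
mask = np.zeros((10, 10))
mask[:, 4] = 1.0
print(mask_to_waypoints(mask))  # waypoints hug x = 4, bottom to top
```

In practice a learned policy can exploit context a geometric decoder cannot (obstacle clearance, robot kinematics), which is why the paper trains a behavior-cloning head rather than hand-coding this step.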