OmniVLA: An Omni-Modal Vision-Language-Action Model for Robot Navigation

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world robotic navigation is often constrained by unimodal goal specification, limiting adaptability to natural, complementary instruction modalities such as language, vision, or spatial coordinates. To address this, we propose OmniVLA—the first vision-language-action model supporting unified multimodal modeling of language, vision, and pose. Its core is a stochastic multimodal fusion training framework that enables end-to-end joint learning from 2D pose representations, egocentric images, natural language, and their arbitrary combinations. This design significantly enhances robustness in unseen environments, generalization under sparse or partial modality inputs, and rapid adaptation to novel modalities and tasks. Experiments demonstrate that OmniVLA outperforms specialized baselines on cross-modal navigation benchmarks, achieves strong zero-shot generalization—accurately executing unseen linguistic instructions—and seamlessly transfers to new environments without fine-tuning.

📝 Abstract
Humans can flexibly interpret and compose different goal specifications, such as language instructions, spatial coordinates, or visual references, when navigating to a destination. In contrast, most existing robotic navigation policies are trained on a single modality, limiting their adaptability to real-world scenarios where different forms of goal specification are natural and complementary. In this work, we present a training framework for robotic foundation models that enables omni-modal goal conditioning for vision-based navigation. Our approach leverages a high-capacity vision-language-action (VLA) backbone and trains with three primary goal modalities: 2D poses, egocentric images, and natural language, as well as their combinations, through a randomized modality fusion strategy. This design not only expands the pool of usable datasets but also encourages the policy to develop richer geometric, semantic, and visual representations. The resulting model, OmniVLA, achieves strong generalization to unseen environments, robustness to scarce modalities, and the ability to follow novel natural language instructions. We demonstrate that OmniVLA outperforms specialist baselines across modalities and offers a flexible foundation for fine-tuning to new modalities and tasks. We believe OmniVLA provides a step toward broadly generalizable and flexible navigation policies, and a scalable path for building omni-modal robotic foundation models. We present videos showcasing OmniVLA performance and will release its checkpoints and training code on our project page.
Problem

Research questions and friction points this paper is trying to address.

Existing robotic navigation policies are limited to single-modality goal specification
Real-world navigation requires flexible interpretation of complementary goal modalities
Specialist models lack adaptability to diverse forms of goal specification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Omni-modal goal conditioning for vision-based navigation
Randomized modality fusion strategy for training
High-capacity vision-language-action backbone architecture
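The randomized modality fusion strategy above can be pictured as stochastic modality dropout during training: for each example, a random non-empty subset of the goal modalities (2D pose, egocentric image, language) conditions the policy, while the rest are replaced by a null embedding. The sketch below is an illustrative assumption of how such sampling and fusion might look; the function names, subset probability, and embedding shapes are hypothetical, not the authors' implementation.

```python
import random

# Hypothetical goal modalities from the paper; embedding sizes are illustrative.
MODALITIES = ("pose", "image", "language")

def sample_goal_modalities(rng: random.Random, p: float = 0.5) -> set:
    """Sample a non-empty subset of goal modalities for one training example."""
    subset = {m for m in MODALITIES if rng.random() < p}
    if not subset:
        # Guarantee at least one modality conditions the policy.
        subset = {rng.choice(MODALITIES)}
    return subset

def fuse_goal(embeddings: dict, active: set, null_value: float = 0.0) -> list:
    """Concatenate per-modality embeddings, zeroing out inactive modalities."""
    fused = []
    for m in MODALITIES:
        vec = embeddings[m] if m in active else [null_value] * len(embeddings[m])
        fused.extend(vec)
    return fused

rng = random.Random(0)
goal = {"pose": [1.0, 2.0], "image": [0.3, 0.7, 0.1], "language": [0.9]}
active = sample_goal_modalities(rng)
fused = fuse_goal(goal, active)
```

Training over many such samples forces the policy to handle any combination of modalities, which is the property the paper credits for robustness to sparse or partial goal inputs.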