RoboOmni: Proactive Robot Manipulation in Omni-modal Context

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the problem of enabling robots to infer user intent proactively, without explicit instructions, by leveraging multimodal contextual signals including spoken dialogue, ambient sounds, and visual cues. To this end, we propose the paradigm of cross-modal contextual instructions and introduce OmniAction, the first large-scale dataset for this setting, featuring multi-speaker interactions, diverse environmental audio, and complex visual backgrounds. We further design RoboOmni, an end-to-end Perceiver-Thinker-Talker-Executor framework that jointly models spatiotemporal audio-visual-language signals to unify intent recognition, interaction confirmation, and action execution. Evaluated in both simulation and real-world settings, our approach significantly outperforms text-based and ASR-based baselines in task success rate, reduces inference latency, and strengthens proactive collaboration. This work provides the first empirical validation of the critical role of full-modality fusion in enabling robots to actively perceive and respond to dynamic human environments.
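
To make the pipeline concrete, here is a minimal Python sketch of how a Perceiver-Thinker-Talker-Executor loop could be wired together. All class names and the confidence-threshold confirmation logic are illustrative assumptions, not RoboOmni's actual implementation.

```python
# Minimal sketch of a Perceiver-Thinker-Talker-Executor loop.
# All class and method names here are hypothetical illustrations,
# not the paper's actual API.
from dataclasses import dataclass


@dataclass
class Intent:
    task: str            # e.g., "hand over the mug"
    confidence: float    # thinker's belief that this is what the user wants


class ProactiveManipulationAgent:
    def __init__(self, perceiver, thinker, talker, executor, threshold=0.8):
        self.perceiver = perceiver    # fuses audio + vision into tokens
        self.thinker = thinker        # omni-modal LLM inferring latent intent
        self.talker = talker          # speech output for interaction confirmation
        self.executor = executor      # low-level action policy
        self.threshold = threshold

    def step(self, audio_stream, video_stream):
        # 1. Perceive: fuse raw audio and video spatiotemporally.
        context = self.perceiver.encode(audio_stream, video_stream)

        # 2. Think: infer what the user likely wants, without an explicit command.
        intent: Intent = self.thinker.infer_intent(context)

        # 3. Talk: if unsure, confirm verbally before acting.
        if intent.confidence < self.threshold:
            reply = self.talker.ask(f"Should I {intent.task}?")
            if not reply.is_affirmative:
                return None

        # 4. Execute: emit robot actions for the confirmed task.
        return self.executor.act(intent.task, context)
```
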

📝 Abstract
Recent advances in Multimodal Large Language Models (MLLMs) have driven rapid progress in Vision-Language-Action (VLA) models for robotic manipulation. Although effective in many scenarios, current approaches largely rely on explicit instructions, whereas in real-world interactions, humans rarely issue instructions directly. Effective collaboration requires robots to infer user intentions proactively. In this work, we introduce cross-modal contextual instructions, a new setting where intent is derived from spoken dialogue, environmental sounds, and visual cues rather than explicit commands. To address this new setting, we present RoboOmni, a Perceiver-Thinker-Talker-Executor framework based on end-to-end omni-modal LLMs that unifies intention recognition, interaction confirmation, and action execution. RoboOmni fuses auditory and visual signals spatiotemporally for robust intention recognition, while supporting direct speech interaction. To address the absence of training data for proactive intention recognition in robotic manipulation, we build OmniAction, comprising 140k episodes, 5k+ speakers, 2.4k event sounds, 640 backgrounds, and six contextual instruction types. Experiments in simulation and real-world settings show that RoboOmni surpasses text- and ASR-based baselines in success rate, inference speed, intention recognition, and proactive assistance.
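
The abstract's spatiotemporal fusion of auditory and visual signals is commonly realized in omni-modal models as cross-attention over time-aligned token sequences. The PyTorch sketch below shows that generic pattern; the dimensions, positional-encoding scheme, and module layout are assumptions, not the paper's published architecture.

```python
# Illustrative spatiotemporal audio-visual fusion via cross-attention.
# This is a generic pattern, not RoboOmni's published architecture;
# dimensions and module choices are assumptions.
import torch
import torch.nn as nn


class AudioVisualFusion(nn.Module):
    def __init__(self, dim=512, heads=8, max_len=1024):
        super().__init__()
        # Shared temporal positions so both modalities index the same timeline.
        self.time_pos = nn.Parameter(torch.zeros(1, max_len, dim))
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, audio_tokens, visual_tokens):
        # audio_tokens: (B, Ta, dim), visual_tokens: (B, Tv, dim),
        # both ordered by timestamp so attention can align events across modalities.
        a = audio_tokens + self.time_pos[:, : audio_tokens.size(1)]
        v = visual_tokens + self.time_pos[:, : visual_tokens.size(1)]
        # Audio queries attend to visual keys/values: "what was visible
        # when this sound occurred?"
        fused, _ = self.cross_attn(query=a, key=v, value=v)
        return self.norm(a + fused)  # residual + norm, ready for the LLM


if __name__ == "__main__":
    fusion = AudioVisualFusion()
    audio = torch.randn(2, 50, 512)    # e.g., 50 audio frames
    video = torch.randn(2, 30, 512)    # e.g., 30 visual tokens
    print(fusion(audio, video).shape)  # torch.Size([2, 50, 512])
```
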
Problem

Research questions and friction points this paper is trying to address.

Enabling robots to proactively infer human intentions from multimodal cues
Unifying intention recognition and action execution in robot manipulation
Addressing data scarcity for proactive robotic assistance through OmniAction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Omni-modal LLMs unify intention recognition and execution
Spatiotemporal fusion of auditory and visual signals
Large-scale OmniAction dataset enables proactive robot training (see the episode sketch below)
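
A hypothetical schema for a single OmniAction-style episode is sketched below. Only the dataset statistics (140k episodes, 5k+ speakers, 2.4k event sounds, 640 backgrounds, six contextual instruction types) come from the abstract; every field name is an illustrative guess.

```python
# Hypothetical schema for one OmniAction-style episode. Field names are
# illustrative guesses; only the dataset statistics (140k episodes,
# 5k+ speakers, 2.4k event sounds, 640 backgrounds, six contextual
# instruction types) come from the abstract.
from dataclasses import dataclass, field
from typing import List


@dataclass
class OmniActionEpisode:
    episode_id: str
    instruction_type: str          # one of six contextual instruction types
    speaker_ids: List[str]         # multi-speaker dialogue participants
    dialogue_audio: str            # path to spoken-dialogue waveform
    event_sounds: List[str]        # ambient/event audio clips (e.g., doorbell)
    background_id: str             # one of ~640 visual backgrounds
    video_frames: str              # path to the episode's visual observations
    latent_intent: str             # ground-truth intent the robot must infer
    action_trajectory: List[List[float]] = field(default_factory=list)
```
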