AI Summary
This work addresses the limitations of existing vision-language-action models, which struggle to perceive and respond in real time to critical acoustic events due to their reliance on action chunking and open-loop control. To overcome this, the authors propose HEAR, the first framework to formalize a continuous vision-sound-language-action (VSLA) control paradigm. HEAR introduces a causal audio context preservation mechanism and explicit temporal modeling, supported by a novel sound-centric manipulation benchmark, HEAR-Bench, and a large-scale pretraining dataset, OpenX-Sound. The framework integrates a streaming Historizer, a foundation model-driven Envisioner, an audio world model Advancer, and a flow-matching Realizer to fuse multimodal inputs for dynamic, real-time responses in acoustic environments. Experiments demonstrate that HEAR significantly enhances robotic perception and reaction to transient sound events, underscoring the importance of continuous causal modeling and temporal learning in sound-driven tasks.
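The four-component pipeline summarized above can be sketched as a control loop. The following is a minimal, illustrative sketch only: the class names mirror the paper's component names, but every method signature, buffer size, and placeholder implementation below is a hypothetical stand-in, not the authors' API.

```python
from collections import deque

class Historizer:
    """Streaming module that keeps a compact, causal audio context
    (a bounded buffer: old frames drop first, so only the recent
    causal history spans the execution gaps)."""
    def __init__(self, max_frames=3):
        self.buffer = deque(maxlen=max_frames)

    def update(self, audio_frame):
        self.buffer.append(audio_frame)
        return list(self.buffer)

def envision(vision, audio_context, language):
    """Stand-in for the omni-foundation-model Envisioner:
    fuses vision, streamed audio context, and language."""
    return {"vision": vision, "audio": audio_context, "language": language}

def advance(audio_context):
    """Stand-in for the Advancer (audio world model): in the paper it
    predicts near-future audio codes; here it just echoes the last frame."""
    return audio_context[-1] if audio_context else None

def realize(fused_state, predicted_audio, chunk_len=4):
    """Stand-in for the flow-matching Realizer: emits a smooth action chunk."""
    return [f"action_{t}" for t in range(chunk_len)]

# One pass of the loop on a toy audio stream.
historizer = Historizer(max_frames=3)
for frame in ["clink", "whir", "beep", "thud"]:
    ctx = historizer.update(frame)          # causal context, bounded length
fused = envision("rgb_frame", ctx, "pour until it clinks")
chunk = realize(fused, advance(ctx))        # action chunk for open-loop window
```

The bounded `deque` makes the causal-persistence idea concrete: audio arriving during an action chunk's execution is retained in the buffer rather than lost in the blind execution interval.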
Abstract
While recent Vision-Language-Action (VLA) models have begun to incorporate audio, they typically treat sound as static pre-execution prompts or focus exclusively on human speech. This leaves a significant gap in real-time, sound-centric manipulation where fleeting environmental acoustics provide critical state verification during task execution. Consequently, key sounds are easily missed due to low-frequency updates or system latency. This problem is exacerbated by action chunking with open-loop execution, which creates a Blind Execution Interval where acoustic events are lost between discrete audio observation windows. Recognizing the necessity of continuous auditory awareness, we formalize Vision-Sound-Language-Action (VSLA) as a continuous control paradigm conditioned on vision, streaming audio, language, and proprioception under delayed decision loops. As an instantiation, we introduce HEAR, a VSLA framework integrating four components: (i) a streaming Historizer to maintain a compact, causal audio context across execution gaps; (ii) an Envisioner adapted from omni foundation models to reason over multi-sensory inputs; (iii) an Advancer, formulated as an audio world model, to learn temporal dynamics by predicting near-future audio codes; and (iv) a flow-matching Realizer policy to generate smooth action chunks. To address the scarcity of pretraining data and evaluations for VSLA, we construct OpenX-Sound for pretraining, alongside HEAR-Bench, the first sound-centric manipulation benchmark with strict causal timing rules. Our results suggest that robust sound-centric manipulation necessitates causal persistence and explicit temporal learning. This framework provides a practical step toward multi-sensory foundation models for embodied agents, enabling robots to perceive and interact with dynamic environments. Code and videos are available at https://hear.irmv.top.
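To make the flow-matching action generation concrete: a flow-matching policy integrates a learned velocity field from Gaussian noise to an action chunk. The toy sketch below uses an idealized rectified-flow field toward a known target in place of the paper's learned network, so it is an assumption-laden illustration of the sampling mechanics, not the actual Realizer.

```python
import numpy as np

def velocity_field(x, t, target):
    # Idealized rectified-flow field: moves x along the straight path
    # toward the target; a trained policy would predict this from
    # the fused multimodal state instead.
    return (target - x) / max(1.0 - t, 1e-6)

def sample_action_chunk(target, steps=50, seed=0):
    """Euler-integrate the velocity field from noise (t=0) to t=1."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(target.shape)  # start from Gaussian noise
    dt = 1.0 / steps
    for i in range(steps):
        x = x + dt * velocity_field(x, i * dt, target)  # Euler step
    return x

# Hypothetical 3-step action chunk over 2 degrees of freedom.
target_chunk = np.array([[0.10, 0.20], [0.15, 0.25], [0.20, 0.30]])
chunk = sample_action_chunk(target_chunk)
```

Because the whole chunk is generated by one continuous flow, consecutive actions come out smooth by construction, which is the property the Realizer relies on.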