🤖 AI Summary
This work addresses the lack of systematic evaluation of auditory understanding in existing vision-language models on egocentric videos. To this end, we introduce EgoSound, the first benchmark specifically designed for auditory comprehension in this setting, with a comprehensive evaluation framework spanning seven task categories, including spatial localization and causal reasoning. Using a multi-stage pipeline of automatic generation followed by human validation, we integrate data from Ego4D and EgoBlind to construct a high-quality evaluation set of 900 videos and 7,315 question-answer pairs. Experiments on nine state-of-the-art multimodal large language models show that while current models exhibit preliminary auditory reasoning abilities, they remain significantly limited in fine-grained spatial and causal understanding.
📝 Abstract
Multimodal Large Language Models (MLLMs) have recently achieved remarkable progress in vision-language understanding. Yet human perception is inherently multisensory, integrating sight, sound, and motion to reason about the world. Among these modalities, sound provides indispensable cues about spatial layout, off-screen events, and causal interactions, particularly in egocentric settings where auditory and visual signals are tightly coupled. Motivated by this, we introduce EgoSound, the first benchmark designed to systematically evaluate egocentric sound understanding in MLLMs. EgoSound unifies data from Ego4D and EgoBlind, encompassing both sighted and sound-dependent experiences. It defines a seven-task taxonomy spanning intrinsic sound perception, spatial localization, causal inference, and cross-modal reasoning. Constructed through a multi-stage pipeline of automatic generation and human validation, EgoSound contains 7,315 validated QA pairs across 900 videos. Comprehensive experiments on nine state-of-the-art MLLMs reveal that current models exhibit emerging auditory reasoning abilities but remain limited in fine-grained spatial and causal understanding. EgoSound establishes a challenging foundation for advancing multisensory egocentric intelligence, bridging the gap between seeing and truly hearing the world.