🤖 AI Summary
The DAVE task requires precise temporal localization and classification of events that are both audible and visible in untrimmed videos. Existing approaches rely on independent unimodal encoders and dense global cross-modal attention, making them vulnerable to modality-specific noise and neglecting the local temporal continuity inherent in such events. To address this, we propose LoCo, a locality-aware cross-modal correspondence learning framework. LoCo introduces a Locality-aware Correspondence Correction (LCC) module that refines unimodal features in a self-supervised manner, using the intrinsic local correlations between audio and visual signals as free supervision, and a Cross-modal Dynamic Perception (CDP) layer that models local temporal patterns of audio-visual events in a data-driven way. The architecture integrates modality-specific encoders, a cross-modal feature pyramid, and these local-consistency constraints; together, LCC and CDP suppress irrelevant signals and promote high-fidelity cross-modal alignment. On the DAVE task, this approach improves boundary localization accuracy and category discrimination, outperforming existing methods and validating the effectiveness of local-consistency modeling.
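The LCC idea of using local cross-modal agreement as a free supervisory signal can be illustrated with a minimal sketch. This is a hypothetical toy version, not the authors' implementation: the function name, the window size, and the similarity-averaging scheme are all illustrative assumptions. Given per-frame audio and visual features of shape `(T, d)`, it scores each frame by how well its audio feature agrees with visual features in a small temporal neighborhood (and vice versa), yielding weights that could gate unimodal features.

```python
import numpy as np

def locality_aware_correction(audio, visual, window=2):
    """Toy locality-aware correspondence signal (illustrative only).

    audio, visual: (T, d) unimodal feature sequences.
    Returns per-frame weights in [0, 1] that emphasize frames whose
    audio and visual features agree within a local temporal window.
    """
    # L2-normalize each frame so dot products become cosine similarities.
    a = audio / (np.linalg.norm(audio, axis=1, keepdims=True) + 1e-8)
    v = visual / (np.linalg.norm(visual, axis=1, keepdims=True) + 1e-8)
    T = a.shape[0]
    weights = np.empty(T)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        # Average cross-modal similarity of frame t against the other
        # modality's features in its local neighborhood, both directions.
        sim_av = a[t] @ v[lo:hi].T
        sim_va = v[t] @ a[lo:hi].T
        weights[t] = 0.5 * (sim_av.mean() + sim_va.mean())
    # Map cosine range [-1, 1] to [0, 1] so the weights can gate features.
    return 0.5 * (weights + 1.0)
```

Frames where the two modalities locally agree receive weights near 1, while locally contradictory frames are down-weighted, which is the kind of annotation-free signal the summary describes.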
📝 Abstract
Dense-localization Audio-Visual Events (DAVE) aims to identify time boundaries and corresponding categories for events that can be heard and seen concurrently in an untrimmed video. Existing DAVE solutions extract audio and visual features through modality-specific encoders and fuse them via dense cross-attention. The independent processing of each modality neglects their complementarity, resulting in modality-specific noise, while dense attention fails to account for the local temporal continuity of events, causing distraction by irrelevant signals. In this paper, we present LoCo, a Locality-aware cross-modal Correspondence learning framework for DAVE. The core idea is to exploit the local temporal continuity of audio-visual events, which serves as an informative yet free supervision signal to guide the filtering of irrelevant information and to encourage the extraction of complementary multimodal information during both unimodal and cross-modal learning stages. i) Specifically, LoCo applies Locality-aware Correspondence Correction (LCC) to unimodal features by leveraging local cross-modal correlations, without any extra annotations. This enforces unimodal encoders to highlight semantics shared by audio and visual features. ii) To better aggregate such audio and visual features, we further customize a Cross-modal Dynamic Perception (CDP) layer in the cross-modal feature pyramid to capture local temporal patterns of audio-visual events by imposing local consistency within multimodal features in a data-driven manner. By incorporating LCC and CDP, LoCo provides solid performance gains and outperforms existing DAVE methods.
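The CDP notion of imposing local consistency on fused features "in a data-driven manner" can likewise be sketched, again as an assumption-laden illustration rather than the paper's actual layer: each fused frame is replaced by a softmax-weighted average of its temporal neighbors, with the weights computed from the features themselves, so that locally consistent frames reinforce each other.

```python
import numpy as np

def cdp_local_smoothing(fused, window=1):
    """Illustrative stand-in for a data-driven local-consistency step.

    fused: (T, d) fused audio-visual features. Each frame becomes a
    softmax-weighted average of its local temporal neighborhood; the
    weights come from feature similarity, not fixed kernels.
    """
    # Normalize frames so similarities are cosine-based.
    f = fused / (np.linalg.norm(fused, axis=1, keepdims=True) + 1e-8)
    T = fused.shape[0]
    out = np.empty_like(fused, dtype=float)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        # Similarity of frame t to each neighbor drives the dynamic kernel.
        logits = f[t] @ f[lo:hi].T
        w = np.exp(logits - logits.max())
        w /= w.sum()
        out[t] = w @ fused[lo:hi]
    return out
```

Because the mixing weights depend on the features, temporally coherent event segments are smoothed while dissimilar neighbors contribute little, a simple proxy for the local temporal patterns CDP is designed to capture.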