Hearing and Seeing Through CLIP: A Framework for Self-Supervised Sound Source Localization

πŸ“… 2025-05-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This paper addresses sound source localization under zero-text supervision. Methodologically, it proposes a CLIP-driven self-supervised framework featuring: (1) a novel zero-text audio representation transfer mechanism that maps raw audio into token sequences compatible with CLIP’s text encoder, yielding audio-driven cross-modal embeddings; (2) a contrastive audio-visual correspondence learning objective jointly optimized with sounding region mask generation; and (3) an LLM-guided object-aware audio-visual distillation strategy that leverages large language models’ semantic priors to enhance mask semantic completeness and spatial compactness. Evaluated on five benchmark tasks, the method achieves state-of-the-art performance across all metrics, demonstrates strong zero-shot generalization, and produces more complete localization results with sharper spatial boundaries.
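To make the token-mapping idea concrete, here is a minimal PyTorch sketch of the general pattern the summary describes: a learned projection turns a pooled audio feature into a short sequence of pseudo-tokens that a frozen text encoder consumes. All module names, dimensions, and the stand-in encoder are illustrative assumptions, not the paper's implementation (the real model would use CLIP's own frozen text transformer).

```python
import torch
import torch.nn as nn

class AudioToTokenMapper(nn.Module):
    """Hypothetical mapper from a pooled audio feature to a short
    sequence of pseudo-tokens in the text encoder's embedding space."""
    def __init__(self, audio_dim=512, token_dim=512, n_tokens=8):
        super().__init__()
        self.n_tokens, self.token_dim = n_tokens, token_dim
        self.proj = nn.Linear(audio_dim, n_tokens * token_dim)

    def forward(self, audio_feat):          # (B, audio_dim)
        tokens = self.proj(audio_feat)      # (B, n_tokens * token_dim)
        return tokens.view(-1, self.n_tokens, self.token_dim)

# Frozen stand-in for CLIP's text transformer; in practice this would be
# loaded from a CLIP checkpoint and kept frozen during training.
text_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
    num_layers=2,
)
for p in text_encoder.parameters():
    p.requires_grad = False

mapper = AudioToTokenMapper()
audio_feat = torch.randn(4, 512)            # pooled audio-backbone output
pseudo_tokens = mapper(audio_feat)          # (4, 8, 512)
audio_embedding = text_encoder(pseudo_tokens).mean(dim=1)  # audio-driven embedding
```

Because the text encoder stays frozen, only the mapper (and whatever produces the audio features) is trained, which is what lets the audio embeddings inherit CLIP's existing vision-language alignment.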

πŸ“ Abstract
Large-scale vision-language models demonstrate strong multimodal alignment and generalization across diverse tasks. Among them, CLIP stands out as one of the most successful approaches. In this work, we extend the application of CLIP to sound source localization, proposing a self-supervised method that operates without explicit text input. We introduce a framework that maps audio into tokens compatible with CLIP's text encoder, producing audio-driven embeddings. These embeddings are used to generate sounding region masks, from which visual features are extracted and aligned with the audio embeddings through a contrastive audio-visual correspondence objective. Our findings show that the alignment knowledge of a pre-trained multimodal foundation model enables our method to generate more complete and compact localizations of sounding objects. We further propose an LLM-guided extension that distills object-aware audio-visual scene understanding into the model during training to enhance alignment. Extensive experiments across five diverse tasks demonstrate that our method, in all its variants, outperforms state-of-the-art approaches and achieves strong generalization in zero-shot settings.
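As the abstract describes, visual features are extracted from the predicted sounding-region masks and aligned with the audio embeddings contrastively. Below is a minimal sketch of that step, assuming mask-weighted average pooling and a symmetric InfoNCE loss; this is a standard formulation of such objectives, and the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def mask_pool(feat_map, mask, eps=1e-6):
    """Mask-weighted average pooling: extract a visual embedding from the
    predicted sounding region. feat_map: (B, C, H, W), mask: (B, 1, H, W)."""
    weighted = (feat_map * mask).sum(dim=(2, 3))      # (B, C)
    return weighted / (mask.sum(dim=(2, 3)) + eps)    # normalize by mask area

def audio_visual_nce(audio_emb, visual_emb, temperature=0.07):
    """Symmetric InfoNCE: matched audio/visual pairs in the batch are
    positives, all other pairings are negatives."""
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(visual_emb, dim=-1)
    logits = a @ v.t() / temperature                   # (B, B) similarities
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Usage with dummy tensors standing in for real model outputs.
feats = torch.randn(4, 512, 14, 14)   # CLIP-like visual feature map
masks = torch.rand(4, 1, 14, 14)      # predicted sounding-region masks
audio = torch.randn(4, 512)           # audio-driven embeddings
loss = audio_visual_nce(audio, mask_pool(feats, masks))
```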
Problem

Research questions and friction points this paper is trying to address.

Extend CLIP to self-supervised sound source localization
Align audio embeddings with visual features for object localization
Enhance audio-visual alignment using LLM-guided scene understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised audio token mapping for CLIP
Contrastive audio-visual alignment for localization
LLM-guided object-aware scene understanding (see the sketch after this list)
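A heavily hedged sketch of what the LLM-guided distillation could look like: an LLM proposes object labels for the audio-visual scene, CLIP's text encoder turns them into teacher embeddings, and a cosine-distance loss pulls the mask-pooled visual embedding toward them. This is one plausible reading of the summary, not the paper's actual procedure; every name below is hypothetical.

```python
import torch
import torch.nn.functional as F

def llm_distillation_loss(masked_visual_emb, object_text_emb):
    """Hypothetical distillation: pull the mask-pooled visual embedding
    toward the CLIP text embedding of an LLM-proposed object label
    (e.g. "barking dog", encoded with CLIP's text encoder)."""
    v = F.normalize(masked_visual_emb, dim=-1)
    t = F.normalize(object_text_emb, dim=-1)
    return (1.0 - (v * t).sum(dim=-1)).mean()   # mean cosine distance

# Usage with dummy tensors in place of real embeddings.
loss = llm_distillation_loss(torch.randn(4, 512), torch.randn(4, 512))
```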
Sooyoung Park
School of Electrical Engineering, KAIST, South Korea; ETRI, South Korea
Arda Senocak
KAIST
Computer Vision, Machine Learning
Joon Son Chung
KAIST
Machine Learning, Speech Processing, Computer Vision