🤖 AI Summary
Zero-shot audio classification suffers from ambiguous text descriptions, semantic misalignment between text and audio, and difficulty discriminating classes that mix multiple sound sources. To address these challenges, this paper proposes a few-shot transfer framework that replaces unstable text prompt embeddings with aggregated audio embeddings (such as cluster centroids or class prototypes) derived from support-set examples of the same class, thereby sidestepping text-audio alignment errors. The method uses a contrastively pre-trained dual-modality encoder to build audio prototypes from a small support set and classifies via nearest-neighbor or similarity-based matching. Experiments across multiple benchmark datasets show substantial improvements over zero-shot baselines, with an average accuracy gain of 8.2% and greater robustness to acoustic noise and source diversity. The core contribution is the first systematic validation of audio prototypes for few-shot audio classification, establishing a text-free paradigm for cross-modal representation learning.
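The summary above does not give implementation details, but the prototype step it describes can be sketched minimally: average the L2-normalized support-set embeddings of each class into a prototype, then assign a query the class whose prototype has the highest cosine similarity. The sketch below assumes the embeddings have already been produced by some contrastively pre-trained audio encoder (e.g. a CLAP-style model); the function names `build_prototypes` and `classify` are illustrative, not from the paper.

```python
import numpy as np


def build_prototypes(support_embeddings, support_labels):
    """Average L2-normalized support embeddings per class into one prototype each.

    Normalizing before and after averaging keeps prototypes on the unit sphere,
    so the dot product below is a cosine similarity.
    """
    prototypes = {}
    for label in set(support_labels):
        embs = np.stack(
            [e for e, l in zip(support_embeddings, support_labels) if l == label]
        )
        embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
        proto = embs.mean(axis=0)
        prototypes[label] = proto / np.linalg.norm(proto)
    return prototypes


def classify(query_embedding, prototypes):
    """Return the class whose prototype is most cosine-similar to the query."""
    q = query_embedding / np.linalg.norm(query_embedding)
    return max(prototypes, key=lambda label: float(q @ prototypes[label]))
```

With toy 2-D "embeddings" (real encoders produce much higher-dimensional vectors), support examples clustered near `[1, 0]` and `[0, 1]` yield two prototypes, and a query is matched to the nearer one; replacing the per-class text embedding of the zero-shot setup with these audio prototypes is the substitution the paper proposes.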
📝 Abstract
State-of-the-art audio classification often employs a zero-shot approach, in which audio embeddings are compared with embeddings of text describing the respective audio class. These embeddings are usually generated by neural networks trained through contrastive learning to align audio and text representations. Identifying the optimal text description for an audio class is challenging, particularly when the class comprises a wide variety of sounds. This paper examines few-shot methods designed to improve classification accuracy beyond the zero-shot approach. Specifically, audio embeddings are grouped by class and aggregated to replace the inherently noisy text embeddings. Our results demonstrate that few-shot classification typically outperforms the zero-shot baseline.