🤖 AI Summary
This work addresses the inefficiency of adapting large audio-language models (ALMs) in few-shot scenarios by proposing MUKA, a training-free adaptation framework that is the first to bring multi-kernel learning to ALMs. MUKA integrates fine-grained contextual representations from instruction-tuned models (e.g., Pengi) with global semantic embeddings learned by contrastively pre-trained models (e.g., CLAP), aligning local similarity and global semantics through a product kernel, without requiring any additional training. Evaluated across 11 diverse audio datasets, MUKA achieves state-of-the-art performance among training-free methods and even surpasses several trainable adapters on multiple tasks, demonstrating both strong theoretical grounding and computational efficiency.
📝 Abstract
Multimodal foundation models have demonstrated impressive generalization capabilities, yet efficiently adapting them to new tasks in a few-shot setting remains a critical challenge. In this work, we investigate the few-shot adaptation of Large Audio-Language Models (ALMs) through both training-based and training-free approaches. We introduce MUKA, a multi-kernel adaptation framework that combines the fine-grained, context-dependent representations of instruction-tuned models like Pengi with the global semantic representations of contrastively pre-trained models like CLAP. By constructing a product kernel that aligns local similarity with global semantics, MUKA enhances representational power while preserving the theoretical guarantees of kernel methods and avoiding additional training. Extensive experiments across 11 diverse audio datasets demonstrate that MUKA achieves state-of-the-art performance among training-free methods and even surpasses training-based adapters in several scenarios, offering a compelling balance between adaptability and efficiency.
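The abstract does not spell out MUKA's exact formulation, but the core idea of a product kernel for training-free few-shot classification can be sketched as below. Everything here is an illustrative assumption: the helper names (`cosine_kernel`, `product_kernel`, `predict`), the kernel-weighted voting rule, and the synthetic vectors standing in for Pengi-style and CLAP-style embeddings are not taken from the paper. The key property the sketch relies on is standard kernel theory: the element-wise product of two positive semi-definite kernels is itself positive semi-definite.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_kernel(A, B):
    """Cosine similarity between each row of A and each row of B."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def product_kernel(q_local, s_local, q_global, s_global):
    """Combine a 'local' and a 'global' kernel by element-wise product.

    Each kernel is shifted from [-1, 1] into [0, 1]; the product of two
    PSD kernels remains PSD, so kernel-method guarantees are preserved.
    """
    k_local = (cosine_kernel(q_local, s_local) + 1.0) / 2.0
    k_global = (cosine_kernel(q_global, s_global) + 1.0) / 2.0
    return k_local * k_global

def predict(K, support_labels, n_classes):
    """Training-free prediction: sum kernel weights per class, take argmax."""
    scores = np.stack([K[:, support_labels == c].sum(axis=1)
                       for c in range(n_classes)], axis=1)
    return scores.argmax(axis=1)

# Toy 4-way, 2-shot task: class prototypes plus noise stand in for the
# two embedding spaces (local ~ Pengi-style, global ~ CLAP-style).
n_classes, shots = 4, 2
protos_local = rng.normal(size=(n_classes, 16))
protos_global = rng.normal(size=(n_classes, 32))
labels = np.repeat(np.arange(n_classes), shots)
support_local = protos_local[labels] + 0.1 * rng.normal(size=(8, 16))
support_global = protos_global[labels] + 0.1 * rng.normal(size=(8, 32))
query_local = protos_local[[0, 1]] + 0.1 * rng.normal(size=(2, 16))
query_global = protos_global[[0, 1]] + 0.1 * rng.normal(size=(2, 32))

K = product_kernel(query_local, support_local, query_global, support_global)
preds = predict(K, labels, n_classes)
print(preds)  # expected to recover the query classes [0, 1]
```

Because the product is small unless a query is similar to a support example in *both* spaces, this combination acts like a soft logical AND of local and global similarity, which is one plausible reading of "aligning local similarity with global semantics" without any learned parameters.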