MUKA: Multi Kernel Audio Adaptation Of Audio-Language Models

📅 2026-02-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of adapting large audio-language models (ALMs) in few-shot scenarios by proposing MUKA, a training-free adaptation framework that introduces multi-kernel learning to ALMs for the first time. MUKA integrates fine-grained contextual representations from instruction-tuned models (e.g., Pengi) with global semantic embeddings learned by contrastive pre-trained models (e.g., CLAP), aligning local similarity and global semantics through a product kernel—without requiring any additional training. Evaluated across 11 diverse audio datasets, MUKA achieves state-of-the-art performance among training-free methods and even surpasses several trainable adapters on multiple tasks, demonstrating both strong theoretical grounding and computational efficiency.

📝 Abstract
Multimodal foundation models have demonstrated impressive generalization capabilities, yet efficiently adapting them to new tasks in a few-shot setting remains a critical challenge. In this work, we investigate the few-shot adaptation of Large Audio-Language Models (ALMs) through both training-based and training-free approaches. We introduce MUKA, a multi-kernel adaptation framework that combines the fine-grained, context-dependent representations of instruction-tuning based models like Pengi with the global semantic representations of contrastive pretraining methods like CLAP. By constructing a product kernel that aligns local similarity with global semantics, MUKA enhances representational power while preserving the theoretical guarantees of kernel methods and avoiding additional training. Extensive experiments across 11 diverse audio datasets demonstrate that MUKA achieves state-of-the-art performance among training-free methods and even surpasses training-based adapters in several scenarios, offering a compelling balance between adaptability and efficiency.
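The product-kernel idea in the abstract can be sketched in a few lines: build one similarity kernel from fine-grained "local" embeddings (Pengi-style) and one from "global" semantic embeddings (CLAP-style), multiply them elementwise, and score queries against the few-shot support set. This is a minimal illustration, not the paper's actual implementation; the function names, the cosine-similarity choice, the +1 shift to keep kernel values non-negative, and the class-mean scoring rule are all assumptions for the sake of a runnable example.

```python
import numpy as np

def cosine_kernel(X, Y):
    """Cosine similarity between rows of X (queries) and rows of Y (supports)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    return Xn @ Yn.T

def product_kernel_predict(q_local, q_global, s_local, s_global, labels):
    """Training-free few-shot classification with a product kernel (illustrative).

    q_local, q_global: query embeddings from two encoders, shapes (n_query, d1/d2)
    s_local, s_global: support embeddings, shapes (n_support, d1/d2)
    labels: support labels, shape (n_support,)
    """
    # Shift cosine similarities from [-1, 1] to [0, 2] so the elementwise
    # product remains non-negative and order-preserving (an assumption here).
    K_local = cosine_kernel(q_local, s_local) + 1.0
    K_global = cosine_kernel(q_global, s_global) + 1.0
    K = K_local * K_global  # product kernel: local similarity x global semantics
    # Average the combined similarity per class, then predict the argmax class.
    classes = np.unique(labels)
    scores = np.stack([K[:, labels == c].mean(axis=1) for c in classes], axis=1)
    return classes[scores.argmax(axis=1)]
```

The elementwise product of two valid kernels is itself a valid kernel (the Schur product theorem), which is the theoretical guarantee the abstract alludes to: combining the two views multiplicatively stays within the kernel-method framework without any training.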
Problem

Research questions and friction points this paper is trying to address.

few-shot adaptation
audio-language models
multimodal foundation models
training-free adaptation
model adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-kernel adaptation
Audio-language models
Few-shot learning
Training-free adaptation
Product kernel
Reda Bensaid
IMT Atlantique, Brest, France; Polytechnique Montréal, Canada
Amine Ouasfi
Inria, University Rennes, IRISA, CNRS
Yassir Bendou
IMT Atlantique, Brest, France
Ilyass Moummad
Postdoctoral Researcher, Inria IROKO, Montpellier
Deep Learning, Computer Vision, Machine Listening
Vincent Gripon
IMT Atlantique and Lab-STICC
Deep Learning, Few-Shot Learning, Artificial Intelligence
François Leduc-Primeau
Polytechnique Montréal, Canada
Adnane Boukhayma
Inria, University Rennes, IRISA, CNRS