CALM: Class-Conditional Sparse Attention Vectors for Large Audio-Language Models

📅 2026-02-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the performance gap between large audio-language models and specialized architectures on discriminative tasks, as well as a limitation of conventional sparse attention methods that apply uniform head weights and ignore category-specific distinctions. The authors propose a novel few-shot audio classification method that introduces, for the first time, a class-conditional attention head importance weighting mechanism. This approach enables individual attention heads to specialize toward specific semantic categories and integrates their predictions through reliability-aware weighting. Evaluated across multiple few-shot benchmarks, the method substantially outperforms existing state-of-the-art techniques, achieving absolute accuracy improvements of up to 14.52% on audio classification, 1.53% on audio-visual classification, and 8.35% on spoofing detection.

📝 Abstract
Large audio-language models (LALMs) exhibit strong zero-shot capabilities in multiple downstream tasks, such as audio question answering (AQA) and abstract reasoning; however, these models still lag behind specialized models on certain discriminative tasks (e.g., audio classification). Recent studies show that sparse subsets of attention heads within an LALM can serve as strong discriminative feature extractors for downstream tasks such as classification via simple voting schemes. However, these methods assign uniform weights to all selected heads, implicitly assuming that each head contributes equally across all semantic categories. In this work, we propose Class-Conditional Sparse Attention Vectors for Large Audio-Language Models, a few-shot classification method that learns class-dependent importance weights over attention heads. This formulation allows individual heads to specialize in distinct semantic categories and to contribute to ensemble predictions in proportion to their estimated reliability. Experiments on multiple few-shot audio and audio-visual classification benchmarks demonstrate that our method consistently outperforms state-of-the-art uniform voting-based approaches, with absolute gains of up to 14.52%, 1.53%, and 8.35% for audio classification, audio-visual classification, and spoofing detection respectively.
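The core idea, class-dependent head weights combined with reliability-weighted voting, can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's actual implementation: it estimates each head's per-class reliability via leave-one-out nearest-centroid accuracy on the few-shot support set, then lets each head cast a vote scaled by its reliability for the class it predicts. All function names and the reliability estimator are hypothetical.

```python
import numpy as np

def fit_head_class_weights(feats, labels):
    """Estimate per-head, per-class reliability from few-shot support data.

    feats: (n_shots, n_heads, d) attention-head features per support example.
    labels: (n_shots,) integer class labels.
    Returns (weights, classes) where weights[h, c] is head h's leave-one-out
    nearest-centroid accuracy on class c (a hypothetical reliability proxy).
    """
    n, n_heads, _ = feats.shape
    classes = np.unique(labels)
    weights = np.zeros((n_heads, len(classes)))
    for h in range(n_heads):
        X = feats[:, h, :]
        for i in range(n):
            mask = np.arange(n) != i  # hold out sample i
            cents = np.stack([X[mask & (labels == c)].mean(0) for c in classes])
            pred = classes[np.argmin(np.linalg.norm(cents - X[i], axis=1))]
            if pred == labels[i]:
                weights[h, np.searchsorted(classes, labels[i])] += 1
    counts = np.array([(labels == c).sum() for c in classes])
    return weights / counts, classes

def predict(query_feats, feats, labels, weights, classes):
    """Reliability-weighted vote over heads for one query.

    query_feats: (n_heads, d). Each head votes for its nearest class
    centroid; the vote is scaled by that head's reliability for that class.
    """
    scores = np.zeros(len(classes))
    for h in range(query_feats.shape[0]):
        X = feats[:, h, :]
        cents = np.stack([X[labels == c].mean(0) for c in classes])
        ci = np.argmin(np.linalg.norm(cents - query_feats[h], axis=1))
        scores[ci] += weights[h, ci]  # class-conditional weight, not uniform
    return classes[np.argmax(scores)]
```

Under uniform voting, `weights[h, ci]` would be a constant 1 for every selected head; the class-conditional variant lets a head that reliably separates, say, "dog bark" from other sounds dominate votes for that class while contributing little elsewhere.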
Problem

Research questions and friction points this paper is trying to address.

large audio-language models
few-shot classification
sparse attention
class-conditional weighting
discriminative tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

class-conditional attention
sparse attention
large audio-language models
few-shot classification
attention head weighting