AI Summary
This work tackles language interference in multilingual speech large language models (LLMs) distilled solely from ASR-labeled data, where a shared projection layer hinders effective instruction following across languages. The authors propose a language-aware distillation framework that introduces a learnable query bank and a gating network to dynamically select or blend language-specific query tokens; these tokens are then processed by a Q-Former projector, enabling instruction-following capability under pure ASR supervision. The method improves over a matched multilingual distillation baseline by 14% on instruction-following tasks and outperforms existing speech LLMs by 32% on Audio-MLQA, a newly constructed multilingual spoken question-answering benchmark.
Abstract
Speech Large Language Models (LLMs) that understand and follow instructions in many languages are useful for real-world interaction, but training them with supervised fine-tuning requires large, task-specific speech corpora. Recent distillation-based approaches train performant English-only Speech LLMs from annotated ASR data alone, aligning text and speech with only a lightweight projector; however, these models under-perform when scaled to multilingual settings due to language interference in the shared projector. We address this by introducing language-aware distillation with a query bank and a gating network that selects or mixes language-specific query tokens, which are then processed by a Q-Former projector. Our approach gains 14% over matched multilingual distillation baselines on instruction following. We further synthesize Audio-MLQA, a multilingual spoken QA benchmark built on MLQA with high-quality TTS questions; our best model improves over existing Speech LLM baselines by 32% on Audio-MLQA.
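The gating-over-a-query-bank idea can be sketched numerically. The following is a minimal NumPy illustration, not the paper's implementation: it assumes a bank of per-language query tokens and a gate that pools speech-encoder features to produce language weights, then soft-mixes the bank into one query set. All names, shapes, and the mean-pooling gate are hypothetical; in the actual model these components would be learned and the blended queries fed into a Q-Former that cross-attends over the speech features.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_LANGS, NUM_QUERIES, DIM = 4, 8, 16

# Hypothetical language-specific query bank: one set of (learnable)
# query tokens per language. Shapes are illustrative assumptions.
query_bank = rng.normal(size=(NUM_LANGS, NUM_QUERIES, DIM))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gate(speech_feats, W_gate):
    """Gating network (sketch): mean-pool the speech feature
    sequence and map it to a distribution over languages."""
    pooled = speech_feats.mean(axis=0)      # (DIM,)
    return softmax(pooled @ W_gate)         # (NUM_LANGS,)

def mix_queries(speech_feats, W_gate):
    """Soft selection: blend the per-language query tokens
    with the gate weights -> one (NUM_QUERIES, DIM) query set."""
    w = gate(speech_feats, W_gate)
    return np.einsum("l,lqd->qd", w, query_bank), w

# Toy speech feature sequence (50 frames x DIM) and gate weights.
speech_feats = rng.normal(size=(50, DIM))
W_gate = rng.normal(size=(DIM, NUM_LANGS))

queries, weights = mix_queries(speech_feats, W_gate)
print(queries.shape)   # (8, 16): queries for the Q-Former projector
print(weights.sum())   # gate weights sum to 1
```

A hard selection variant would instead pick `query_bank[weights.argmax()]`; the soft mixture keeps the gate differentiable end-to-end.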