🤖 AI Summary
This work addresses the challenge of automatic speech recognition (ASR) for low-resource languages, where limited labeled data hinders generalization to unseen languages. It presents the first empirical validation of multimodal in-context learning (MICL) in speech-capable large language models, such as Phi-4 and Qwen3-Omni, for zero-shot ASR on unseen low-resource languages. The authors propose a hypothesis selection mechanism that combines a strong acoustic model with a speech-enabled LLM, leveraging joint audio-text prompts to enable cross-lingual transfer. Experimental results demonstrate that MICL substantially improves ASR performance, matching or even surpassing conventional supervised methods that rely on target-language annotations, while itself requiring no target-language supervision. Attention analysis further reveals the model's intrinsic preference for the textual modality during inference.
📝 Abstract
Automatic speech recognition (ASR) still covers only a small fraction of the world's languages, mainly due to supervised data scarcity. In-context learning (ICL) with large language models (LLMs) addresses this problem, but prior work largely focuses on high-resource languages covered during training and text-only settings. This paper investigates whether speech LLMs can learn unseen languages with multimodal ICL (MICL), and how this learning can be used to improve ASR. We conduct experiments with two speech LLMs, Phi-4 and Qwen3-Omni, on three diverse endangered languages. Firstly, we find that MICL is effective for unseen languages, leveraging both speech and text modalities. We further show that cross-lingual transfer learning improves MICL efficiency on target languages without training on them. Moreover, we analyze attention patterns to interpret MICL mechanisms, and we observe layer-dependent preferences between audio and text context, with an overall bias towards text. Finally, we show that prompt-based ASR with speech LLMs performs poorly on unseen languages, motivating a simple ASR system that combines a stronger acoustic model with a speech LLM via MICL-based selection of acoustic hypotheses. Results show that MICL consistently improves ASR performance, and that cross-lingual transfer learning matches or outperforms corpus-trained language models without using target-language data. Our code is publicly available.
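The proposed system combines a stronger acoustic model with a speech LLM by letting the LLM select among the acoustic model's hypotheses. As a rough illustration of that idea (not the authors' implementation), the sketch below picks from an N-best list by interpolating each hypothesis's acoustic score with a language-model score; `llm_loglik` is a hypothetical stand-in for a speech LLM scoring text conditioned on a MICL prompt of in-context audio-text pairs, and the interpolation weight is an assumption.

```python
# Minimal sketch of N-best hypothesis selection, assuming:
#   - `nbest` holds (text, acoustic log-probability) pairs from an acoustic model,
#   - `llm_loglik` is a hypothetical callable standing in for a speech LLM that
#     scores a hypothesis given multimodal in-context examples,
#   - a simple log-linear interpolation of the two scores (weight is assumed).
from typing import Callable, Sequence


def select_hypothesis(
    nbest: Sequence[tuple[str, float]],
    llm_loglik: Callable[[str], float],
    weight: float = 0.5,
) -> str:
    """Return the hypothesis maximizing a weighted sum of acoustic and LLM scores."""
    def score(hyp: tuple[str, float]) -> float:
        text, acoustic_logp = hyp
        return (1.0 - weight) * acoustic_logp + weight * llm_loglik(text)

    return max(nbest, key=score)[0]


# Toy usage: a dummy scorer that strongly prefers one candidate.
nbest = [("ka ru", -3.0), ("karu", -3.2)]
toy_llm = lambda text: 0.0 if text == "karu" else -5.0
best = select_hypothesis(nbest, toy_llm, weight=0.5)
```

With `weight=0.0` the selection reduces to the acoustic 1-best; increasing the weight lets the in-context language evidence override acoustically preferred but linguistically implausible candidates.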