🤖 AI Summary
This work addresses the modality mismatch between speech and text that arises when adapting large language models (LLMs) for automatic speech recognition using only textual data. To bridge the gap between the speech encoder and the LLM, the authors propose a mixed batching strategy that jointly leverages a small amount of target-domain speech data (less than four hours) and abundant text data. Remarkably, with only 10% of the target-domain speech data, the proposed approach significantly outperforms text-only adaptation on both in-domain and out-of-domain evaluations, achieving word error rates comparable to, or even better than, those obtained by conventional fine-tuning on full speech datasets. These results demonstrate the method's efficiency and practical value in low-resource domain adaptation scenarios.
📝 Abstract
Conventional end-to-end automatic speech recognition (ASR) systems rely on paired speech-text data for domain adaptation. Recent LLM-based ASR architectures connect a speech encoder to a large language model via a projection module, enabling adaptation with text-only data. However, text-only adaptation introduces a modality gap: the LLM is never exposed to the noisy representations produced by the speech projector. We investigate whether small amounts of speech can mitigate this mismatch, comparing three strategies: text-only adaptation, paired speech-text adaptation, and mixed batching (MB), which combines both. Experiments in in-domain and out-of-domain settings show that even limited speech data consistently improves performance. Notably, MB using only 10% of the target-domain speech (less than 4 hours) achieves word error rates comparable to, or better than, conventional ASR fine-tuning with the full dataset, indicating that small amounts of speech provide a strong modality-alignment signal.
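The mixed-batching idea described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the `speech_frac` ratio, and the toy data representation are all assumptions chosen for illustration. The sketch only shows the batch-composition step: each training batch mixes a small number of paired speech-text examples with text-only examples.

```python
import random

def mixed_batches(paired, text_only, batch_size=8, speech_frac=0.25, seed=0):
    """Hypothetical sketch of mixed batching (MB).

    Each batch combines a small fraction of paired speech-text examples
    (which expose the LLM to speech-projector outputs) with abundant
    text-only examples (which carry the target-domain adaptation signal).
    """
    rng = random.Random(seed)
    n_speech = max(1, int(batch_size * speech_frac))  # paired examples per batch
    n_text = batch_size - n_speech                    # text-only examples per batch
    batches = []
    for _ in range(len(text_only) // n_text):
        batch = rng.sample(paired, n_speech) + rng.sample(text_only, n_text)
        rng.shuffle(batch)  # interleave modalities within the batch
        batches.append(batch)
    return batches

# Toy usage: 10 paired speech-text items vs. 60 text-only items.
paired = [("speech", i) for i in range(10)]
text_only = [("text", i) for i in range(60)]
batches = mixed_batches(paired, text_only)
```

With `batch_size=8` and `speech_frac=0.25`, every batch carries 2 paired and 6 text-only examples, mirroring the paper's setting where a small speech fraction suffices for modality alignment.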