In-Context Learning Boosts Speech Recognition via Human-like Adaptation to Speakers and Language Varieties

πŸ“… 2025-05-20
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study investigates whether large language models (LLMs) can achieve human-like speaker and dialect adaptation for automatic speech recognition (ASR) from minimal exposure. To this end, we propose an in-context learning (ICL) framework for ASR: during inference, 12 audio-text exemplars (β‰ˆ50 seconds total) are interleaved with task instructions and fed into the Phi-4 multimodal modelβ€”without gradient updates or fine-tuning. We present the first empirical evidence that LLM-based ICL exhibits human-like phonetic adaptation behavior: performance improves with increasing exemplars but saturates asymptotically, with particularly pronounced gains in low-resource dialects and speaker-matched settings. Evaluated across multiple English corpora, our method achieves an average relative 19.7% reduction in word error rate (WER), corresponding to an absolute WER decrease of 1.2 percentage points. This demonstrates significantly enhanced robustness of ASR systems to speaker and linguistic variation through zero-shot, gradient-free adaptation.

Technology Category

Application Category

πŸ“ Abstract
Human listeners readily adjust to unfamiliar speakers and language varieties through exposure, but do these adaptation benefits extend to state-of-the-art spoken language models? We introduce a scalable framework that allows for in-context learning (ICL) in Phi-4 Multimodal using interleaved task prompts and audio-text pairs, and find that as few as 12 example utterances (~50 seconds) at inference time reduce word error rates by a relative 19.7% (1.2 pp.) on average across diverse English corpora. These improvements are most pronounced in low-resource varieties, when the context and target speaker match, and when more examples are provided--though scaling our procedure yields diminishing marginal returns to context length. Overall, we find that our novel ICL adaptation scheme (1) reveals a similar performance profile to human listeners, and (2) demonstrates consistent improvements to automatic speech recognition (ASR) robustness across diverse speakers and language backgrounds. While adaptation succeeds broadly, significant gaps remain for certain varieties, revealing where current models still fall short of human flexibility. We release our prompts and code on GitHub.
Problem

Research questions and friction points this paper is trying to address.

Enhancing speech recognition via human-like adaptation to speakers
Improving ASR robustness across diverse language varieties
Reducing word error rates with minimal in-context examples
Innovation

Methods, ideas, or system contributions that make the work stand out.

In-context learning adapts speech recognition dynamically
Few-shot examples reduce word error rates significantly
Framework improves ASR robustness across diverse speakers
πŸ”Ž Similar Papers
No similar papers found.