From Oracle to Noisy Context: Mitigating Contextual Exposure Bias in Speech-LLMs

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses contextual exposure bias in Speech-LLMs: models are trained on idealized (oracle) dialogue histories but must rely on error-prone predicted histories at inference. The study presents the first systematic formulation of, and solution to, this problem through a unified training framework that combines noisy dialogue histories generated by a teacher model (Whisper large-v3), a context dropout mechanism, supervised fine-tuning (SFT), and direct preference optimization (DPO) on curated failure cases. The approach substantially improves robustness to erroneous or irrelevant contextual inputs, yielding consistent gains on both TED-LIUM 3 and zero-shot LibriSpeech benchmarks. Specifically, with a two-turn predicted history, the word error rate (WER) decreases from 5.59% to 5.17%, and under irrelevant-context attacks degradation is limited to a small increase, from 5.17% to 5.63%.
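The first two ingredients, teacher-generated noisy histories and context dropout, amount to a change in how the training-time context prompt is assembled. A minimal sketch, assuming a simple string-concatenated history prompt (the function name, arguments, and dropout rate are illustrative, not the paper's exact pipeline):

```python
import random


def build_training_context(teacher_hyps, oracle_refs,
                           use_teacher=True, dropout_p=0.3, seed=None):
    """Assemble the dialogue-history prompt for one training sample.

    teacher_hyps: ASR hypotheses for preceding utterances
                  (e.g. decoded by Whisper large-v3, the teacher model)
    oracle_refs:  ground-truth transcripts of the same utterances
    use_teacher:  expose the model to teacher errors instead of oracle history
    dropout_p:    probability of dropping the history entirely (context dropout)
    """
    rng = random.Random(seed)
    # Context dropout: with probability dropout_p, present no history at all,
    # regularizing the model against over-reliance on the context channel.
    if rng.random() < dropout_p:
        return ""
    history = teacher_hyps if use_teacher else oracle_refs
    return " ".join(history)
```

At inference time the model then sees predicted histories of the same flavor it was trained on, which is the mismatch the paper aims to close.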

📝 Abstract
Contextual automatic speech recognition (ASR) with Speech-LLMs is typically trained with oracle conversation history, but relies on error-prone history at inference, causing a train-test mismatch in the context channel that we term contextual exposure bias. We propose a unified training framework to improve robustness under realistic histories: (i) Teacher Error Knowledge, using Whisper large-v3 hypotheses as training-time history, (ii) Context Dropout, to regularize over-reliance on history, and (iii) Direct Preference Optimization (DPO) on curated failure cases. Experiments on TED-LIUM 3 (in-domain) and zero-shot LibriSpeech (out-of-domain) show consistent gains under predicted-history decoding. With a two-utterance history as context, SFT with Whisper hypotheses reduces WER from 5.59% (oracle-history training) to 5.47%, and DPO further improves it to 5.17%. Under irrelevant-context attacks, DPO yields the smallest degradation (5.17% -> 5.63%), indicating improved robustness to misleading context. Our code and models are available at https://github.com/XYGuo1996/Contextual_Speech_LLMs.
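The DPO stage is driven by curated failure cases: utterances the model transcribes incorrectly under predicted-history decoding become preference pairs, with the reference transcript as the preferred response and the erroneous hypothesis as the rejected one. A minimal sketch of this pair construction, assuming a dict-based sample format (field names are illustrative, not the paper's):

```python
def build_dpo_pairs(samples):
    """Curate DPO preference pairs from failure cases.

    samples: iterable of dicts with keys
      'prompt'     - the audio plus (noisy) history prompt for this utterance
      'reference'  - ground-truth transcript (the preferred response)
      'hypothesis' - model output under predicted-history decoding
    Only utterances the model gets wrong become preference pairs.
    """
    pairs = []
    for s in samples:
        if s["hypothesis"] != s["reference"]:  # a failure case
            pairs.append({
                "prompt": s["prompt"],
                "chosen": s["reference"],
                "rejected": s["hypothesis"],
            })
    return pairs
```

Restricting the preference data to failure cases focuses the optimization on exactly the behaviors, such as copying errors from a misleading history, that SFT alone did not correct.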
Problem

Research questions and friction points this paper is trying to address.

contextual exposure bias
speech-LLMs
train-test mismatch
noisy context
automatic speech recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contextual Exposure Bias
Speech-LLMs
Teacher Error Knowledge
Context Dropout
Direct Preference Optimization