🤖 AI Summary
This work addresses the challenge of error accumulation in long-form speech recognition, where large models often suffer from hallucinations, repetitions, or omissions due to their reliance on previously generated transcriptions as decoding context. The authors propose a training-free contrastive decoding framework that refines token-by-token generation at inference time by contrasting the logits from clean audio against those from three types of acoustically perturbed negatives: Gaussian noise injection, silence substitution, and time-shifted inputs. By combining a multi-negative contrastive mechanism with a log-sum-exp aggregation strategy, the method stays plug-and-play compatible with existing Whisper systems. Evaluated on five English long-form speech benchmarks, it achieves up to a 24.3 percentage point reduction in word error rate (WER) on CORAAL and accelerates token generation by 48% compared to beam search.
📝 Abstract
Long-form speech recognition with large encoder-decoder models such as Whisper often exhibits hallucinations, repetition loops, and content omissions. These errors can accumulate and be further amplified when the previous segment's transcription is used as decoding context. We propose Whisper-CD, a training-free contrastive decoding framework that contrasts clean-audio logits against negative logits computed from three acoustically motivated perturbations: Gaussian noise injection, silence substitution, and temporal shifting of the audio. We aggregate these negatives via the log-sum-exp operator, yielding a unified multi-negative objective for token-by-token decoding. Across five English long-form benchmarks, Whisper-CD reduces WER by up to 24.3pp on CORAAL and generates tokens 48% faster than beam search. Because Whisper-CD operates purely at inference time, it can be applied as a drop-in replacement in already-deployed Whisper systems without retraining.
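The per-token adjustment described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the contrast weight `weight` and the plain subtraction of the aggregated negative from the clean logits are assumptions, since the abstract only specifies that the negatives are combined via log-sum-exp before contrasting.

```python
import numpy as np

def contrastive_logits(clean_logits, negative_logits_list, weight=0.5):
    """Contrast clean-audio logits against multiple perturbed negatives.

    clean_logits: (V,) logits for the next token from the unperturbed audio.
    negative_logits_list: list of (V,) logits, one per perturbation
        (e.g. Gaussian noise, silence, time shift).
    weight: hypothetical contrast strength; the paper's value is not given here.
    """
    negs = np.stack(negative_logits_list)  # (K, V)
    # Numerically stable log-sum-exp over the K negatives, per vocab entry.
    m = negs.max(axis=0)
    agg_neg = m + np.log(np.exp(negs - m).sum(axis=0))
    # Penalize tokens the perturbed inputs also favor (likely hallucinations).
    return clean_logits - weight * agg_neg
```

Tokens that score highly even under noise or silence, a common signature of hallucinated or looping text, are downweighted, while tokens supported only by the clean audio survive the contrast.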