Whisper-CD: Accurate Long-Form Speech Recognition using Multi-Negative Contrastive Decoding

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of error accumulation in long-form speech recognition, where large models often suffer from hallucinations, repetitions, or omissions because each segment is decoded conditioned on previously generated transcriptions. The authors propose a training-free contrastive decoding framework that refines token-by-token generation at inference time by contrasting the logits from clean audio against those from three types of acoustically perturbed negatives: Gaussian noise, silence, and time-shifted inputs. The method introduces a multi-negative contrastive mechanism combined with a log-sum-exp aggregation strategy, and is plug-and-play compatible with existing Whisper systems. Evaluated on five English long-form speech benchmarks, it achieves up to a 24.3 percentage point reduction in word error rate (WER) on CORAAL and accelerates token generation by 48% compared to beam search.
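The three acoustic perturbations described above can be illustrated with a minimal sketch. The function names, the SNR-based noise scaling, and the circular shift are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def gaussian_noise(audio, snr_db=10.0, rng=None):
    """Add Gaussian noise at a target signal-to-noise ratio (illustrative choice)."""
    rng = np.random.default_rng(0) if rng is None else rng
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return audio + rng.normal(0.0, np.sqrt(noise_power), size=audio.shape)

def silence(audio):
    """Replace the waveform with an all-zero signal of the same length."""
    return np.zeros_like(audio)

def time_shift(audio, shift):
    """Circularly shift the waveform by `shift` samples (one way to realize a temporal shift)."""
    return np.roll(audio, shift)
```

Each perturbed waveform would then be passed through the same Whisper encoder to obtain the "negative" logits that the decoding objective contrasts against.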

📝 Abstract
Long-form speech recognition with large encoder-decoder models such as Whisper often exhibits hallucinations, repetition loops, and content omissions. These errors can accumulate and be further amplified when the previous segment's transcription is used as decoding context. We propose Whisper-CD, a training-free contrastive decoding framework that contrasts clean-audio logits against negative logits computed from three acoustically motivated perturbations: Gaussian noise injection, silence substitution, and temporal shifting of the audio. We aggregate these negatives via the log-sum-exp operator, yielding a unified multi-negative objective for token-by-token decoding. Across five English long-form benchmarks, Whisper-CD reduces WER by up to 24.3pp on CORAAL and generates tokens 48% faster than beam search. Because Whisper-CD operates purely at inference time, it can be applied as a drop-in replacement to already-deployed Whisper systems without retraining.
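The multi-negative contrastive rule with log-sum-exp aggregation can be sketched as follows. The penalty weight `alpha` and the exact form of the adjustment are assumptions for illustration; the paper's precise objective may differ:

```python
import numpy as np

def logsumexp(x, axis=0):
    """Numerically stable log-sum-exp along the given axis."""
    m = x.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def contrastive_logits(clean_logits, negative_logits_list, alpha=1.0):
    """Penalize tokens that score highly under the perturbed (negative) inputs.

    clean_logits: (vocab,) logits from the clean audio.
    negative_logits_list: list of (vocab,) logits, one per perturbation.
    alpha: hypothetical penalty weight (not from the paper).
    """
    negatives = np.stack(negative_logits_list, axis=0)   # (num_negatives, vocab)
    aggregated = logsumexp(negatives, axis=0)            # unified multi-negative score
    return clean_logits - alpha * aggregated
```

Intuitively, a token that the model would also predict from noise, silence, or shifted audio is likely a hallucination or repetition artifact rather than acoustic evidence, so its adjusted logit is pushed down before the next token is selected.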
Problem

Research questions and friction points this paper is trying to address.

long-form speech recognition
hallucinations
repetition loops
content omissions
decoding errors
Innovation

Methods, ideas, or system contributions that make the work stand out.

contrastive decoding
multi-negative perturbation
long-form speech recognition
Whisper
training-free inference
Hoseong Ahn
Department of Intelligent Software, Sungkyunkwan University
Jeongyun Chae
Department of Intelligent Software, Sungkyunkwan University
Yoonji Park
Department of Computer Science and Engineering, Sungkyunkwan University
Kyuhong Shim
Sungkyunkwan University
Deep Learning · Speech Processing · Language Processing