🤖 AI Summary
Whisper exhibits hallucination errors under noisy conditions, and existing mitigation strategies predominantly rely on preprocessing or postprocessing, lacking structural modifications to the model itself. To address this, we propose a two-stage robustification architecture: (1) an Adaptive Layer Attention (ALA) mechanism that dynamically enhances encoder-layer resilience to noise; and (2) a multi-objective knowledge distillation framework that jointly optimizes transcription accuracy and cross-condition (clean/noisy) attention-distribution alignment to suppress hallucinations. This is the first work to directly alleviate Whisper's hallucinations at the architectural level, preserving both low-level acoustic fidelity and high-level semantic modeling. Experiments on standard noise benchmarks demonstrate significant reductions in both hallucination rate and word error rate under noise, while preserving performance on clean speech, thereby improving reliability for real-world deployment.
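The multi-objective distillation described above can be sketched as a weighted sum of a transcription loss and an attention-alignment term between the noisy-input student and the clean-input teacher. This is a minimal illustration, not the paper's implementation; the function names, the KL-based alignment term, and the `alpha` weighting scheme are all assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    """Mean KL(p || q) over rows of two attention distributions
    (each row is assumed to sum to 1)."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q), axis=-1).mean())

def kd_loss(transcription_loss, teacher_attn, student_attn, alpha=0.5):
    """Hypothetical multi-objective KD loss: weight the task loss
    against alignment of the student's (noisy-input) attention with
    the teacher's (clean-input) attention."""
    align = kl_divergence(teacher_attn, student_attn)
    return alpha * transcription_loss + (1.0 - alpha) * align
```

When student and teacher attention maps coincide, the alignment term vanishes and only the weighted transcription loss remains, so the objective reduces to standard supervised training on clean-like behavior.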
📝 Abstract
The Whisper model, an open-source automatic speech recognition system, is widely adopted for its strong performance across multilingual and zero-shot settings. However, it frequently suffers from hallucination errors, especially under noisy acoustic conditions. Previous work on reducing hallucinations in Whisper-style ASR systems has primarily focused on audio preprocessing or post-processing of transcriptions to filter out erroneous content; directly mitigating hallucinations through modifications to the Whisper model itself remains largely unexplored. To address this challenge, we present a two-stage architecture that first enhances encoder robustness through Adaptive Layer Attention (ALA) and then suppresses hallucinations using a multi-objective knowledge distillation (KD) framework. In the first stage, ALA groups encoder layers into semantically coherent blocks via inter-layer correlation analysis. A learnable multi-head attention module then fuses these block representations, enabling the model to jointly exploit low- and high-level features for more robust encoding. In the second stage, our KD framework trains the student model on noisy audio to align its semantic and attention distributions with a teacher model processing clean inputs. Our experiments on noisy speech benchmarks show notable reductions in hallucinations and word error rates, while preserving performance on clean speech. Together, ALA and KD offer a principled strategy to improve Whisper's reliability under real-world noisy conditions.
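The ALA mechanism described above can be sketched as follows: stack the encoder layers' hidden states, partition them into blocks, and fuse the block representations with a learned attention query. This is a simplified single-head sketch under stated assumptions; the even layer split (standing in for the paper's inter-layer correlation analysis), the top-layer query, and the projection matrices `W_q`/`W_k` are all illustrative choices, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def group_layers(layer_outputs, n_blocks):
    """Partition consecutive encoder layers into blocks. A faithful
    implementation would place cuts where inter-layer correlation
    drops; an even split is used here as a stand-in."""
    return np.array_split(layer_outputs, n_blocks, axis=0)

def ala_fuse(layer_outputs, W_q, W_k, n_blocks=3):
    """Fuse mean-pooled block representations via attention.
    layer_outputs: (L, T, D) stack of per-layer encoder states."""
    blocks = [b.mean(axis=0) for b in group_layers(layer_outputs, n_blocks)]
    B = np.stack(blocks)                  # (n_blocks, T, D)
    q = layer_outputs[-1] @ W_q           # (T, D): query from the top layer
    k = B @ W_k                           # (n_blocks, T, D)
    scores = np.einsum('td,btd->tb', q, k) / np.sqrt(q.shape[-1])
    attn = softmax(scores, axis=-1)       # (T, n_blocks): per-frame block weights
    return np.einsum('tb,btd->td', attn, B)  # (T, D) fused representation
```

Each time frame thus receives its own mixture of low-, mid-, and high-level block features, which is the property the abstract attributes to ALA: jointly exploiting low- and high-level representations rather than relying on the top layer alone.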