🤖 AI Summary
This work addresses the susceptibility of Whisper to transcription errors and hallucinations in long-form audio scenarios such as lectures and interviews. To mitigate these issues, the authors propose a three-stage cascaded architecture: an initial transcription is generated with Wav2Vec2, false positives are then filtered by an Audio Spectrogram Transformer (AST), and Whisper produces the final refined output. The approach integrates uncertainty modeling and curriculum learning and is trained on diverse Russian-language speech corpora. Experiments show that the proposed system outperforms both Whisper and WhisperX across varying acoustic conditions, yielding substantial gains in accuracy and robustness for long-form audio transcription. The implementation is publicly available.
📝 Abstract
This work presents "Pisets", a speech-to-text system for scientists and journalists based on a three-component architecture aimed at improving speech recognition accuracy while minimizing the errors and hallucinations associated with the Whisper model. The architecture comprises primary recognition with Wav2Vec2, false-positive filtering via the Audio Spectrogram Transformer (AST), and final speech recognition through Whisper. Curriculum learning and the use of diverse Russian-language speech corpora significantly enhanced the system's effectiveness, and advanced uncertainty modeling techniques further improved transcription quality. The proposed approaches ensure robust transcription of long audio data across various acoustic conditions compared to WhisperX and the standard Whisper model. The source code of the "Pisets" system is publicly available on GitHub: https://github.com/bond005/pisets.
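The three-stage cascade described above can be sketched as a simple pipeline. The stub recognizers, threshold value, and segment representation below are illustrative assumptions, not the authors' exact configuration; in the real system the three callables would wrap Wav2Vec2, AST, and Whisper models.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Segment:
    """One candidate speech segment cut from the long-form audio."""
    audio: list   # raw samples (placeholder for a real waveform array)
    start: float  # segment start time, seconds
    end: float    # segment end time, seconds


def cascade_transcribe(
    segments: List[Segment],
    primary_asr: Callable[[Segment], str],      # stage 1: Wav2Vec2-style draft transcription
    speech_filter: Callable[[Segment], float],  # stage 2: AST speech-probability score
    refiner: Callable[[Segment], str],          # stage 3: Whisper-style final recognition
    speech_threshold: float = 0.5,              # assumed cutoff for the AST filter
) -> List[str]:
    """Hypothesize, filter false positives, then refine each segment."""
    results = []
    for seg in segments:
        draft = primary_asr(seg)
        # Drop segments with an empty draft or a low AST speech score: this
        # false-positive filter is what suppresses Whisper hallucinations on
        # silence, music, or background noise.
        if not draft or speech_filter(seg) < speech_threshold:
            continue
        results.append(refiner(seg))
    return results


# Toy stand-ins demonstrating the control flow: the second segment is
# silent, so the filter discards it before Whisper is ever invoked.
segs = [Segment([0.1] * 16000, 0.0, 1.0), Segment([0.0] * 16000, 1.0, 2.0)]
out = cascade_transcribe(
    segs,
    primary_asr=lambda s: "draft" if any(s.audio) else "",
    speech_filter=lambda s: 0.9 if any(s.audio) else 0.1,
    refiner=lambda s: "refined text",
)
print(out)  # only the first segment survives the filter
```

The key design point this sketch captures is that Whisper only ever sees segments that both produced a non-empty draft and passed the AST speech check, which is how the cascade bounds hallucination risk on long recordings.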