🤖 AI Summary
Whisper’s pretrained models exhibit poor recognition of domain-specific terminology and a high word error rate (WER) of 68.49% on multilingual pilot speech transcription in cockpit environments. To address this, we propose a lightweight, aviation-oriented adaptation framework: first, we design a domain-specific text normalization strategy tailored to aeronautical jargon, standardizing spelling variants, acronyms, and multilingual code-switching expressions; second, leveraging authentic cockpit simulator recordings and pilot interview data, we apply Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning of Whisper Large. This significantly enhances domain generalization, reducing WER to 26.26%, a 61.6% relative improvement. Our key contributions are establishing the first open-source aviation speech normalization specification and empirically validating LoRA’s efficacy for adapting ASR models on limited, specialized speech data. The methodology provides a reproducible, lightweight optimization paradigm for vertical-domain ASR systems.
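The normalization-plus-WER pipeline described above can be illustrated with a minimal, self-contained sketch. The mapping table below is a hypothetical stand-in for the paper's actual aviation normalization specification (which covers spelling variants, acronyms, and German/English code-switching), and WER is computed in the standard way as word-level Levenshtein distance divided by reference length.

```python
import re

# Hypothetical stand-ins for the paper's aviation normalization rules;
# the real specification covers spelling variants, acronyms, and
# multilingual code-switching expressions.
AVIATION_VARIANTS = {
    "flightlevel": "flight level",
    "q n h": "qnh",
}

def normalize(text: str) -> str:
    """Lower-case, strip punctuation, and unify known spelling variants."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)
    text = re.sub(r"\s+", " ", text).strip()
    for variant, canonical in AVIATION_VARIANTS.items():
        text = text.replace(variant, canonical)
    return text

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("descend flight level one two zero", "descend flightlevel one two zero")` is 2/6 on the raw strings but drops to 0 after both sides pass through `normalize`, which is exactly the effect that text normalization has on the reported WER figures.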
📝 Abstract
Developments in transformer encoder-decoder architectures have led to significant breakthroughs in machine translation, Automatic Speech Recognition (ASR), and instruction-following chat systems, among other applications. These pre-trained models are trained on vast amounts of generic data for only a few epochs (fewer than five in most cases), which gives them strong generalization capabilities. Nevertheless, their performance suffers in niche domains such as transcribing pilot speech in the cockpit, which involves highly specific vocabulary and multilingual conversation. This paper investigates and improves the transcription accuracy of cockpit conversations with Whisper models. We collected around 85 minutes of cockpit simulator recordings and 130 minutes of interview recordings with pilots and manually labeled them. The speakers are middle-aged men speaking both German and English. To improve transcription accuracy, we propose multiple normalization schemes that refine the transcripts and improve Word Error Rate (WER). We then employ fine-tuning to enhance ASR performance, using parameter-efficient fine-tuning with Low-Rank Adaptation (LoRA). As a result, WER decreased from 68.49% (pretrained Whisper Large model, no-normalization baseline) to 26.26% (fine-tuned Whisper Large model with the proposed normalization scheme).
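The parameter-efficient update at the heart of LoRA can be sketched in a few lines: the frozen weight matrix W is left untouched, and only two small matrices A (r×k) and B (d×r) are trained, giving an effective weight W_eff = W + (α/r)·B·A. The sketch below uses plain Python lists and toy illustrative dimensions; in practice the same update is applied to Whisper's attention projection matrices via a fine-tuning library, not hand-rolled like this.

```python
def matmul(X, Y):
    """Naive matrix product of two lists-of-lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha):
    """W_eff = W + (alpha / r) * B @ A, with rank r read off A's row count."""
    r = len(A)
    scaling = alpha / r
    delta = matmul(B, A)  # d x k low-rank update; only B and A are trained
    return [[w + scaling * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Toy example (illustrative numbers): frozen 2x2 weight, rank-1 adapter.
W = [[1.0, 0.0],
     [0.0, 1.0]]
A = [[1.0, 2.0]]     # r x k = 1 x 2
B = [[0.5], [0.25]]  # d x r = 2 x 1
W_eff = lora_effective_weight(W, A, B, alpha=1)
# W_eff == [[1.5, 1.0], [0.25, 1.5]]
```

The point of the factorization is the parameter count: for a d×k layer, LoRA trains only r·(d+k) values instead of d·k, which is why it suits the limited amount of labeled cockpit speech available here.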