🤖 AI Summary
This study addresses the poor cross-domain generalization of automatic speech recognition (ASR) models for the low-resource language Akan. It systematically evaluates seven Transformer-based models, derived from Whisper and Wav2Vec2, across four distinct domains: cultural narration, daily conversation, Bible reading, and financial dialogue. Results show strong in-domain performance but significant word error rate degradation under domain shift, revealing critical limitations in domain adaptability. A key finding is a trade-off between the Whisper and Wav2Vec2 architectures: Whisper errors are more fluent but potentially misleading, while Wav2Vec2 errors are more obvious but less interpretable. Based on these findings, the study argues for three directions: (1) lightweight domain adaptation mechanisms tailored to low-resource languages; (2) adaptive routing strategies leveraging phonetic and semantic speech features; and (3) multilingual training frameworks enabling multi-domain co-optimization, toward robust cross-domain ASR deployment in low-resource settings.
📝 Abstract
Most existing automatic speech recognition (ASR) research evaluates models on in-domain datasets but seldom examines how models generalize across diverse speech contexts. This study addresses this gap by benchmarking seven Akan ASR models built on transformer architectures, such as Whisper and Wav2Vec2, across four Akan speech corpora. These datasets encompass varied domains, including culturally relevant image descriptions, informal conversations, biblical scripture readings, and spontaneous financial dialogues. A comparison of word error rate and character error rate revealed strong domain dependency: models performed optimally only within their training domains and showed marked accuracy degradation in mismatched scenarios. The study also identified distinct error behaviors between the two architectures. Whereas fine-tuned Whisper Akan models produced more fluent but potentially misleading transcription errors, Wav2Vec2 produced more obvious yet less interpretable outputs when encountering unfamiliar inputs. This trade-off between readability and transparency in ASR errors should be considered when selecting architectures for low-resource language (LRL) applications. These findings highlight the need for targeted domain adaptation techniques, adaptive routing strategies, and multilingual training frameworks for Akan and other LRLs.
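For readers unfamiliar with the metrics compared above: word error rate (WER) is the word-level Levenshtein edit distance between the reference and hypothesis transcripts, divided by the reference length; character error rate (CER) is the same computation at the character level. The sketch below is a generic illustration of the metric, not code from this study.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost, # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

CER follows the same recurrence with `list(reference)` and `list(hypothesis)` in place of the word splits; in practice, libraries such as `jiwer` provide both metrics.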