🤖 AI Summary
This study addresses the lack of systematic research on automatic speech recognition (ASR) error correction for low-resource Burmese. We propose the first Transformer-based correction framework that jointly models phonemes and alignment information. Our approach innovatively incorporates International Phonetic Alphabet (IPA) representations and forced-alignment features, while fusing outputs from multiple ASR backends to enable joint word- and character-level optimization within a sequence-to-sequence architecture. Experiments show that our method significantly reduces the average word error rate (WER) from 51.56% to 39.82% on unaugmented data, and improves the chrF++ score to 0.627 (+0.0406), substantially outperforming baselines. It maintains robust performance even with data augmentation (WER = 43.59%). This work establishes a transferable, phoneme-aware modeling paradigm for ASR error correction in low-resource languages.
📝 Abstract
This paper investigates sequence-to-sequence Transformer models for automatic speech recognition (ASR) error correction in low-resource Burmese, focusing on feature integration strategies including IPA and alignment information. To our knowledge, this is the first study addressing ASR error correction specifically for Burmese. We evaluate five ASR backbones and show that our ASR Error Correction (AEC) approaches consistently improve word- and character-level accuracy over baseline outputs. The proposed AEC model, combining IPA and alignment features, reduced the average WER of the ASR models from 51.56% to 39.82% before augmentation (and to 43.59% after augmentation) and improved chrF++ scores from 0.5864 to 0.627, demonstrating consistent gains over baseline ASR outputs without AEC. Our results highlight the robustness of AEC and the importance of feature design for improving ASR outputs in low-resource settings.
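For readers unfamiliar with the headline metric: WER is the word-level Levenshtein (edit) distance between the reference and the ASR hypothesis, divided by the number of reference words. The sketch below is a minimal illustrative implementation, not the paper's evaluation code; it assumes whitespace-separated tokens, which for Burmese (an unsegmented script) presupposes a prior word segmentation step.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + sub,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("sit") and one deletion ("the") over 6 reference words:
print(wer("the cat sat on the mat", "the cat sit on mat"))  # → 0.333...
```

In practice, established libraries such as jiwer (for WER) and sacreBLEU (for chrF++) are typically used for reporting these scores.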