ASR Error Correction in Low-Resource Burmese with Alignment-Enhanced Transformers using Phonetic Features

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of systematic research on automatic speech recognition (ASR) error correction for low-resource Burmese. We propose the first Transformer-based correction framework that jointly models phonemes and alignment information. Our approach innovatively incorporates International Phonetic Alphabet (IPA) representations and forced-alignment features, while fusing outputs from multiple ASR backends to enable joint word- and character-level optimization within a sequence-to-sequence architecture. Experiments show that our method significantly reduces the average word error rate (WER) from 51.56% to 39.82% on unaugmented data, and improves the chrF++ score to 0.627 (+0.0406), substantially outperforming baselines. It maintains robust performance even with data augmentation (WER = 43.59%). This work establishes a transferable, phoneme-aware modeling paradigm for ASR error correction in low-resource languages.

📝 Abstract
This paper investigates sequence-to-sequence Transformer models for automatic speech recognition (ASR) error correction in low-resource Burmese, focusing on different feature integration strategies including IPA and alignment information. To our knowledge, this is the first study addressing ASR error correction specifically for Burmese. We evaluate five ASR backbones and show that our ASR Error Correction (AEC) approaches consistently improve word- and character-level accuracy over baseline outputs. The proposed AEC model, combining IPA and alignment features, reduced the average WER of the ASR models from 51.56% to 39.82% before augmentation (and from 51.56% to 43.59% after augmentation) and improved chrF++ scores from 0.5864 to 0.627, demonstrating consistent gains over the baseline ASR outputs without AEC. Our results highlight the robustness of AEC and the importance of feature design for improving ASR outputs in low-resource settings.
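The headline numbers above are word error rates: the reported drop from 51.56% to 39.82% is a relative WER reduction of roughly 22.8%. As a minimal sketch of how WER is computed (Levenshtein distance over word sequences divided by reference length; the paper's exact Burmese word/syllable segmentation is not specified here):

```python
# Word error rate (WER): edit distance between hypothesis and reference
# word sequences, normalized by the reference length.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution ("sit") and one deletion ("the") over 6 reference words.
print(round(wer("the cat sat on the mat", "the cat sit on mat"), 3))  # 0.333
```

In practice, metrics like WER and chrF++ are usually taken from standard toolkits (e.g. jiwer, sacrebleu) rather than reimplemented, so that scores are comparable across papers.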
Problem

Research questions and friction points this paper is trying to address.

Correcting ASR errors in low-resource Burmese language
Integrating phonetic and alignment features in Transformers
Improving word- and character-level accuracy of ASR outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformers with IPA phonetic features for Burmese ASR
Alignment-enhanced models for low-resource speech correction
Feature integration strategy reducing word error rates
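The innovations above hinge on fusing the ASR hypothesis with its phonetic form before correction. As an illustration only (the paper's actual input format, tokenizer, and Burmese grapheme-to-IPA converter are not given here), a fused source sequence for a seq2seq corrector might append the IPA rendering after a separator token:

```python
# Hypothetical sketch of building a joint grapheme+IPA source sequence for a
# seq2seq error-correction model. The IPA lookup below is a toy stand-in
# dictionary; real Burmese G2P requires a dedicated converter.

def fuse_with_ipa(tokens, ipa_lookup, sep="<ipa>"):
    """Concatenate the ASR hypothesis tokens with their IPA rendering."""
    ipa = [ipa_lookup.get(t, "<unk>") for t in tokens]
    return tokens + [sep] + ipa

hyp = ["min", "ga", "la", "ba"]  # toy romanized ASR hypothesis
lookup = {"min": "mɪ̀ɰ̃", "ga": "ɡə", "la": "là", "ba": "bà"}
print(fuse_with_ipa(hyp, lookup))
```

Alignment features (e.g. per-token timing or confidence from forced alignment) could be injected the same way, as extra token streams or embeddings; which fusion point works best is exactly the kind of feature-integration question the paper studies.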
Ye Bhone Lin
Language Understanding Laboratory, Myanmar
Thura Aung
Department of Computer Engineering, KMITL, Bangkok, Thailand
Ye Kyaw Thu
LST Lab., NECTEC (Thailand); NLP Research Lab., UTYCC (Myanmar); Language Understanding Lab. (Myanmar)
Natural Language Processing · Machine Translation · Speech Processing · AI
Thazin Myint Oo
Language Understanding Laboratory, Myanmar