🤖 AI Summary
Taiwanese Hokkien ASR faces two challenges: character-based annotations obscure fine-grained phonetic and tonal detail, while romanized (e.g., Tâi-lô) transcriptions offer limited lexical and syntactic coverage. To address both, this paper proposes a cross-lingual, two-stage fine-tuning framework. First, it fine-tunes a Mandarin HuBERT model on Tâi-lô romanization to learn phoneme- and tone-aware acoustic representations. Second, it continues training on character-level text to model lexical and syntactic structure, aligning acoustic and orthographic information. By integrating both annotation modalities, the method sidesteps the limitations of either one alone. On the TAT-MOE benchmark, the approach achieves a 24.88% relative reduction in character error rate over strong baselines. The model is parameter-efficient and scalable, offering a reusable recipe for low-resource dialectal ASR.
📝 Abstract
Automatic speech recognition (ASR) for low-resource languages such as Taiwanese Hokkien is difficult because annotated data are scarce. Moreover, direct fine-tuning on Han-character transcriptions often fails to capture detailed phonetic and tonal cues, while training only on romanization lacks lexical and syntactic coverage, and prior studies have rarely explored staged strategies that integrate both annotation types. To address this gap, we present CLiFT-ASR, a cross-lingual fine-tuning framework that builds on Mandarin HuBERT models and progressively adapts them to Taiwanese Hokkien. The framework employs a two-stage process: it first learns acoustic and tonal representations from phonetic Tâi-lô annotations and then captures vocabulary and syntax from Han-character transcriptions. This progressive adaptation aligns speech sounds with orthographic structures. Experiments on the TAT-MOE corpus demonstrate that CLiFT-ASR achieves a 24.88% relative reduction in character error rate (CER) compared with strong baselines. The results indicate that CLiFT-ASR provides an effective and parameter-efficient solution for Taiwanese Hokkien ASR, with the potential to benefit other low-resource language scenarios.
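The staged idea above can be sketched in plain PyTorch: fine-tune an encoder with a CTC head over Tâi-lô phone/tone tokens, then keep the adapted encoder and attach a fresh head over Han characters. This is a minimal illustration, not the paper's implementation; the tiny GRU stands in for a pretrained Mandarin HuBERT encoder, and all vocabulary sizes and dimensions are assumed for the example.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained Mandarin HuBERT encoder
# (39-dim input features, 64-dim hidden states; sizes are illustrative).
encoder = nn.GRU(input_size=39, hidden_size=64, batch_first=True)

TAILO_VOCAB = 30   # assumed size of the Tâi-lô phone/tone token set
HAN_VOCAB = 3000   # assumed size of the Han-character inventory

# Stage 1: CTC head over Tâi-lô tokens; encoder + head are trained on
# romanized transcripts to learn phoneme- and tone-aware representations.
head_tailo = nn.Linear(64, TAILO_VOCAB + 1)  # +1 for the CTC blank

# Stage 2: the adapted encoder is kept, and a new CTC head over Han
# characters is trained on character-level transcripts.
head_han = nn.Linear(64, HAN_VOCAB + 1)

# Forward pass with dummy features: batch of 2 utterances, 100 frames.
x = torch.randn(2, 100, 39)
feats, _ = encoder(x)          # (2, 100, 64) frame-level representations
logits = head_han(feats)       # (2, 100, HAN_VOCAB + 1) character logits
print(logits.shape)            # torch.Size([2, 100, 3001])
```

In a full pipeline, each stage would minimize `nn.CTCLoss` over its respective label sequences; only the output head is swapped between stages, so the acoustic knowledge learned from Tâi-lô carries over to the Han-character stage.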