CLiFT-ASR: A Cross-Lingual Fine-Tuning Framework for Low-Resource Taiwanese Hokkien Speech Recognition

📅 2025-11-10
🤖 AI Summary
To address the challenges in Taiwanese Hokkien ASR—namely, the difficulty of modeling fine-grained phonetic details using character-based annotations and the limited lexical-syntactic coverage of romanized (e.g., Tâi-lô) transcriptions—this paper proposes a cross-lingual, two-stage fine-tuning framework. First, it leverages Tâi-lô romanization to fine-tune HuBERT so that it learns phoneme- and tone-aware acoustic representations. Second, it incorporates character-level text to jointly model lexical and syntactic structures, enabling alignment between acoustic and orthographic information. The method integrates both annotation modalities, circumventing the limitations inherent to single-modality training. Evaluated on the TAT-MOE benchmark, the approach achieves a 24.88% relative reduction in character error rate over strong baselines. The model is parameter-efficient and scalable, offering a reusable technical pathway for low-resource dialectal ASR.

📝 Abstract
Automatic speech recognition (ASR) for low-resource languages such as Taiwanese Hokkien is difficult due to the scarcity of annotated data. Moreover, direct fine-tuning on Han-character transcriptions often fails to capture detailed phonetic and tonal cues, while training only on romanization lacks lexical and syntactic coverage. In addition, prior studies have rarely explored staged strategies that integrate both annotation types. To address this gap, we present CLiFT-ASR, a cross-lingual fine-tuning framework that builds on Mandarin HuBERT models and progressively adapts them to Taiwanese Hokkien. The framework employs a two-stage process in which it first learns acoustic and tonal representations from phonetic Tâi-lô annotations and then captures vocabulary and syntax from Han-character transcriptions. This progressive adaptation enables effective alignment between speech sounds and orthographic structures. Experiments on the TAT-MOE corpus demonstrate that CLiFT-ASR achieves a 24.88% relative reduction in character error rate (CER) compared with strong baselines. The results indicate that CLiFT-ASR provides an effective and parameter-efficient solution for Taiwanese Hokkien ASR and that it has potential to benefit other low-resource language scenarios.
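The two-stage adaptation described above can be sketched as a training schedule: stage one initializes from a Mandarin HuBERT checkpoint and targets phoneme/tone-aware Tâi-lô units, and stage two continues from the stage-one encoder while swapping the output targets to Han characters. The config fields, stage names, and layer-freezing policy below are illustrative assumptions for exposition, not the paper's exact recipe.

```python
# Hypothetical sketch of a two-stage cross-lingual fine-tuning schedule
# in the style of CLiFT-ASR. All names and the freezing policy are
# assumptions made for illustration.

from dataclasses import dataclass, field
from typing import List


@dataclass
class StageConfig:
    name: str
    target_units: str                 # what the recognition head predicts
    init_from: str                    # checkpoint the stage starts from
    frozen_modules: List[str] = field(default_factory=list)


def build_clift_schedule() -> List[StageConfig]:
    """Return the two-stage schedule: phonetic targets first, orthographic second."""
    stage1 = StageConfig(
        name="stage1_tailo",
        target_units="tailo_syllables_with_tones",  # phoneme- and tone-aware targets
        init_from="mandarin_hubert_pretrained",     # cross-lingual starting point
        frozen_modules=["feature_extractor"],       # keep CNN front-end fixed (assumption)
    )
    stage2 = StageConfig(
        name="stage2_han",
        target_units="han_characters",              # lexical/syntactic targets
        init_from=stage1.name,                      # continue from the stage-1 encoder
        frozen_modules=["feature_extractor"],
    )
    return [stage1, stage2]


for cfg in build_clift_schedule():
    print(f"{cfg.name}: predict {cfg.target_units} (init from {cfg.init_from})")
```

The key design point the sketch captures is that the output vocabulary changes between stages while the adapted encoder weights carry over, which is what lets the phonetic stage inform the later character-level stage.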
Problem

Research questions and friction points this paper is trying to address.

Developing ASR for low-resource Taiwanese Hokkien with limited annotated data
Addressing phonetic-tonal and lexical-syntactic representation gaps in transcriptions
Creating cross-lingual fine-tuning framework integrating multiple annotation types
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-lingual fine-tuning using Mandarin HuBERT models
Two-stage adaptation with phonetic and character annotations
Progressive alignment between speech sounds and orthography
Hung-Yang Sung
National Taiwan Normal University, Taiwan
Chien-Chun Wang
National Taiwan Normal University, Taiwan
Kuan-Tang Huang
National Taiwan Normal University, Taiwan
Tien-Hong Lo
National Taiwan Normal University, Taiwan
Yu-Sheng Tsao
EZAI, Taiwan
Yung-Chang Hsu
EZAI, Taiwan
Berlin Chen
National Taiwan Normal University, Taiwan