Weakly Supervised Data Refinement and Flexible Sequence Compression for Efficient Thai LLM-based ASR

πŸ“… 2025-05-28
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the dual challenges of scarce high-quality labeled data and high computational overhead in low-resource Thai automatic speech recognition (ASR), this paper proposes EThai-ASR, the first large language model (LLM)-driven efficient Thai ASR system. Methodologically, the authors design a self-evolving weak-label refinement strategy to enhance speech encoder robustness; introduce a plug-and-play sequence compression module with three modes that enables flexible-length compression, significantly reducing computation while preserving modeling capacity; and construct an end-to-end architecture comprising a speech encoder, a connector module, and a Thai-specific LLM-based decoder. Evaluated on multiple Thai ASR benchmarks, EThai-ASR achieves state-of-the-art performance. Furthermore, the authors publicly release a high-quality refined transcription dataset, establishing a new paradigm and foundational resource for low-resource speech recognition.

πŸ“ Abstract
Despite remarkable achievements, automatic speech recognition (ASR) in low-resource scenarios still faces two challenges: high-quality data scarcity and high computational demands. This paper proposes EThai-ASR, the first system to apply large language models (LLMs) to Thai ASR and to create an efficient LLM-based ASR system. EThai-ASR comprises a speech encoder, a connection module, and a Thai LLM decoder. To address data scarcity and obtain a powerful speech encoder, EThai-ASR introduces a self-evolving data refinement strategy to refine weak labels, yielding an enhanced speech encoder. Moreover, we propose a pluggable sequence compression module, used in the connection module, with three modes designed to reduce the sequence length, thus decreasing computational demands while maintaining decent performance. Extensive experiments demonstrate that EThai-ASR achieves state-of-the-art accuracy on multiple datasets. We release our refined text transcripts to promote further research.
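The abstract does not specify what the three compression modes are. Purely as an illustrative sketch of the general idea (shortening the encoder's frame sequence before it reaches the LLM decoder), the snippet below shows three common downsampling strategies for speech-LLM connectors: average pooling, frame stacking, and strided subsampling. The function name, mode names, and rates here are hypothetical, not the paper's actual design.

```python
import numpy as np

def compress_sequence(feats: np.ndarray, mode: str = "pool", rate: int = 4) -> np.ndarray:
    """Hypothetical sketch: reduce a (T, D) encoder output to ~T/rate frames.

    Shorter sequences mean fewer tokens for the LLM decoder to attend over,
    which is the computational saving the paper's module targets.
    """
    T, D = feats.shape
    x = feats[: (T // rate) * rate]  # trim so T divides evenly by rate
    if mode == "pool":
        # average each group of `rate` adjacent frames -> (T//rate, D)
        return x.reshape(-1, rate, D).mean(axis=1)
    if mode == "stack":
        # concatenate adjacent frames along the feature axis -> (T//rate, D*rate)
        return x.reshape(-1, rate * D)
    if mode == "stride":
        # keep every `rate`-th frame -> (T//rate, D)
        return x[::rate]
    raise ValueError(f"unknown mode: {mode}")
```

Each mode trades sequence length against feature dimensionality differently: pooling and striding keep the feature size fixed, while stacking preserves all information at the cost of a wider feature vector.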
Problem

Research questions and friction points this paper is trying to address.

Addresses high-quality data scarcity in Thai ASR
Reduces computational demands in LLM-based ASR
Enhances weak labels via self-evolving refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-evolving data refinement for weak labels
Pluggable sequence compression module
LLM-based Thai ASR system
Mingchen Shao
Audio, Speech and Language Processing Group (ASLP@NPU), Northwestern Polytechnical University, China
Xinfa Zhu
Northwestern Polytechnical University
speech generation
Chengyou Wang
Audio, Speech and Language Processing Group (ASLP@NPU), Northwestern Polytechnical University, China
Bingshen Mu
Northwestern Polytechnical University
speech recognition, speech understanding
Hai Li
iQIYI, Inc, China
Yin Yan
iQIYI, Inc, China
Junhui Liu
iQIYI, Inc, China
Danming Xie
iQIYI, Inc, China
Lei Xie
Audio, Speech and Language Processing Group (ASLP@NPU), Northwestern Polytechnical University, China