HENT-SRT: Hierarchical Efficient Neural Transducer with Self-Distillation for Joint Speech Recognition and Translation

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Neural Transducer (NT) models achieve strong performance in automatic speech recognition (ASR) but generalize poorly to speech translation (ST), owing to difficulties with word reordering, degraded performance under joint ASR/ST modeling, and high training overhead. This paper proposes HENT-SRT, a Hierarchical Efficient Neural Transducer framework with: (1) a factorized, task-decoupled architecture that explicitly separates semantic and language-specific representations for ASR and ST; (2) a blank-penalized decoding strategy that suppresses deletion errors; and (3) a CTC-consistency self-distillation mechanism that jointly optimizes both tasks. The model combines a hierarchical downsampling encoder, a stateless predictor, and a pruned transducer loss. Evaluated on conversational datasets in Arabic, Spanish, and Mandarin, it sets a new state of the art among NT-based ST models and substantially narrows the gap with attention-based encoder-decoder (AED) systems.

📝 Abstract
Neural transducers (NT) provide an effective framework for speech streaming, demonstrating strong performance in automatic speech recognition (ASR). However, the application of NT to speech translation (ST) remains challenging, as existing approaches struggle with word reordering and performance degradation when jointly modeling ASR and ST, resulting in a gap with attention-based encoder-decoder (AED) models. Existing NT-based ST approaches also suffer from high computational training costs. To address these issues, we propose HENT-SRT (Hierarchical Efficient Neural Transducer for Speech Recognition and Translation), a novel framework that factorizes ASR and translation tasks to better handle reordering. To ensure robust ST while preserving ASR performance, we use self-distillation with CTC consistency regularization. Moreover, we improve computational efficiency by incorporating best practices from ASR transducers, including a down-sampled hierarchical encoder, a stateless predictor, and a pruned transducer loss to reduce training complexity. Finally, we introduce a blank penalty during decoding, reducing deletions and improving translation quality. Our approach is evaluated on three conversational datasets (Arabic, Spanish, and Mandarin), achieving new state-of-the-art performance among NT models and substantially narrowing the gap with AED-based systems.
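The blank-penalty idea in the abstract amounts to subtracting a fixed constant from the blank logit of the joint network before normalization, so the decoder is less inclined to skip emissions. Below is a minimal illustrative sketch of that adjustment in isolation; the function names, vocabulary size, and penalty value are hypothetical and not taken from the paper.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def apply_blank_penalty(logits, blank_id=0, penalty=1.5):
    """Subtract a fixed penalty from the blank logit before normalization,
    discouraging the transducer from emitting blank (i.e., producing nothing
    at this decoding step), which in ST shows up as deletion errors."""
    out = list(logits)
    out[blank_id] -= penalty
    return out

# Toy joint-network output over a 4-symbol vocabulary (index 0 = blank).
logits = [2.0, 1.8, 0.5, 0.1]
p_before = softmax(logits)
p_after = softmax(apply_blank_penalty(logits, penalty=1.5))
# After the penalty, blank mass is redistributed to real tokens, so a
# greedy or beam decoder emits output symbols more readily.
```

In a full decoder this adjustment would be applied at every step of greedy or beam search; the penalty strength is a tuning knob trading deletions against insertions.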
Problem

Research questions and friction points this paper is trying to address.

Addresses joint ASR and ST performance gap with AED models
Reduces high computational costs in NT-based ST training
Improves word reordering and translation quality in ST
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical encoder for efficient computation
Self-distillation with CTC consistency regularization
Blank penalty to reduce deletions
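The self-distillation-with-CTC-consistency contribution above can be pictured as a regularizer that pushes the posteriors of two branches of the model toward agreement. The sketch below shows only the generic agreement term (a per-frame KL divergence between two label distributions); the function names and the exact pairing of branches are assumptions for illustration, not the paper's implementation.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions over the label vocabulary."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def consistency_loss(branch_a_posteriors, branch_b_posteriors):
    """Average per-frame KL between two branches' label posteriors
    (e.g., a CTC head and a transducer head), encouraging them to agree.
    Each argument is a list of per-frame probability distributions."""
    total = sum(kl_divergence(p, q)
                for p, q in zip(branch_a_posteriors, branch_b_posteriors))
    return total / len(branch_a_posteriors)

# Toy example: two frames, three labels.
p = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
q = [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]]
loss = consistency_loss(p, q)  # small positive value; zero iff p == q
```

In training, a term like this would be added to the transducer losses with a weight hyperparameter, so the consistency pressure does not overwhelm the primary ASR and ST objectives.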