Dynamic Acoustic Model Architecture Optimization in Training for ASR

📅 2025-06-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
ASR acoustic model architecture design faces two bottlenecks: heavy reliance on manual, experience-driven tuning, and the prohibitive computational cost of neural architecture search. To address these, this paper proposes the Dynamic Model Architecture Optimization (DMAO) framework, which introduces an in-training "grow-and-drop" parameter reallocation mechanism. DMAO features: (i) dynamic sparsification guided by CTC loss; (ii) module importance estimation driven by gradient sensitivity; and (iii) differentiable evolution of the architectural topology, enabling structural self-adaptation with negligible additional computation. Evaluated on LibriSpeech, TED-LIUM-v2, and Switchboard, DMAO achieves up to a 6% relative WER reduction under identical training budgets. Crucially, its gains generalize robustly across diverse architectures, model scales, and datasets, demonstrating strong cross-domain adaptability without architectural constraints.
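The gradient-sensitivity importance estimation mentioned above can be sketched with a first-order Taylor proxy, mean |weight × gradient| per module. This is a common saliency heuristic, not necessarily the paper's exact criterion; the module names and toy values below are illustrative assumptions.

```python
# Hypothetical sketch of gradient-sensitivity module importance.
# Proxy: mean |w * g| over each module's parameters (first-order Taylor
# saliency). The paper's actual criterion may differ.

def module_importance(weights, grads):
    """Per-module saliency: mean |w * g| over the module's parameters."""
    scores = {}
    for name, w in weights.items():
        g = grads[name]
        scores[name] = sum(abs(wi * gi) for wi, gi in zip(w, g)) / len(w)
    return scores

# Toy example: two modules with illustrative weights and gradients.
weights = {"ffn": [0.5, -0.2], "attn": [1.0, 0.8]}
grads = {"ffn": [0.01, 0.02], "attn": [0.30, -0.25]}
scores = module_importance(weights, grads)
# attn's parameters move the loss more per unit weight, so it scores higher
assert scores["attn"] > scores["ffn"]
```

A score like this ranks modules cheaply from quantities already computed during backpropagation, which is consistent with the paper's claim of negligible training overhead.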

📝 Abstract
Architecture design is inherently complex. Existing approaches rely on either handcrafted rules, which demand extensive empirical expertise, or automated methods like neural architecture search, which are computationally intensive. In this paper, we introduce DMAO, an architecture optimization framework that employs a grow-and-drop strategy to automatically reallocate parameters during training. This reallocation shifts resources from less-utilized areas to those parts of the model where they are most beneficial. Notably, DMAO only introduces negligible training overhead at a given model complexity. We evaluate DMAO through experiments with CTC on LibriSpeech, TED-LIUM-v2 and Switchboard datasets. The results show that, using the same amount of training resources, our proposed DMAO consistently improves WER by up to 6% relatively across various architectures, model sizes, and datasets. Furthermore, we analyze the pattern of parameter redistribution and uncover insightful findings.
Problem

Research questions and friction points this paper is trying to address.

Optimizes ASR model architecture dynamically during training
Reduces reliance on manual rules or costly automated searches
Improves speech recognition accuracy with efficient resource allocation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Grow-and-drop strategy for parameter reallocation
Negligible training overhead at a given model complexity
Consistent WER improvement across architectures
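The grow-and-drop reallocation behind these points can be sketched as a budget-preserving transfer from the least- to the most-important module. The single donor/receiver rule, the 25% fraction, and the module names below are assumptions for illustration; the paper's actual schedule is richer.

```python
# Hypothetical sketch of DMAO-style grow-and-drop parameter reallocation.
# Assumption: importance scores (e.g. from gradient sensitivity) are given;
# we drop parameters from the least-important module and grow the most
# important one, keeping the total parameter budget fixed.

def grow_and_drop(param_counts, importance, fraction=0.25):
    """Move `fraction` of the least-important module's parameters
    to the most-important module; total budget is unchanged."""
    donor = min(importance, key=importance.get)
    receiver = max(importance, key=importance.get)
    moved = int(param_counts[donor] * fraction)
    new_counts = dict(param_counts)
    new_counts[donor] -= moved
    new_counts[receiver] += moved
    return new_counts

# Toy example: three modules with equal budgets and illustrative scores.
counts = {"ffn_1": 1000, "attn_1": 1000, "conv_1": 1000}
scores = {"ffn_1": 0.02, "attn_1": 0.31, "conv_1": 0.09}
new = grow_and_drop(counts, scores)
assert sum(new.values()) == sum(counts.values())  # budget preserved
```

Because the total count is held constant, model complexity (and hence compute cost per step) stays fixed while capacity shifts to where it is most useful.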
Jingjing Xu
Machine Learning and Human Language Technology Group, RWTH Aachen University, Germany; AppTek GmbH, Germany

Zijian Yang
Machine Learning and Human Language Technology Group, RWTH Aachen University, Germany

Albert Zeyer
Human Language Technology and Pattern Recognition Group, RWTH Aachen University
Deep Learning

Eugen Beck
AppTek.ai
Machine Learning, Automated Speech Recognition

Ralf Schlueter
Machine Learning and Human Language Technology Group, RWTH Aachen University, Germany; AppTek GmbH, Germany

Hermann Ney
RWTH Aachen University
Machine Learning, Speech Recognition, Machine Translation, Computer Vision