Prepending or Cross-Attention for Speech-to-Text? An Empirical Comparison

📅 2025-01-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically investigates the performance trade-offs between dense feature prepending (DFP) and standard encoder-decoder cross-attention architectures in automatic speech recognition (ASR) and speech translation (ST). Under unified conditions (from-scratch training with comparable data and parameter scales), and across multilingual benchmarks (MuST-C v1.0, CoVoST2) and multitask settings (monolingual, bilingual, and multilingual ASR/ST), we conduct a rigorous comparative evaluation covering CTC compression, sequence-level knowledge distillation, inference latency, GPU memory consumption, and robustness. Results show that DFP does not yield consistent advantages, while cross-attention excels in generation efficiency, memory footprint, and cross-lingual generalization. The study also highlights the critical role of the speech encoder in end-to-end speech modeling. Our empirical findings provide principled guidance for architectural design in speech-to-text systems, offering concrete evidence to inform model selection and optimization strategies.

📝 Abstract
Following the remarkable success of Large Language Models (LLMs) in NLP tasks, there is increasing interest in extending their capabilities to speech -- the most common form of communication. To integrate speech into LLMs, one promising approach is dense feature prepending (DFP), which prepends the projected speech representations to the textual representations, allowing end-to-end training with the speech encoder. However, DFP typically requires connecting a text decoder to a speech encoder. This raises questions about the importance of having a sophisticated speech encoder for DFP, and how its performance compares with a standard encoder-decoder (i.e., cross-attention) architecture. To perform a controlled architectural comparison, we train all models from scratch, rather than using large pretrained models, and use comparable data and parameter settings, testing speech recognition (ASR) and speech translation (ST) on the MuST-C v1.0 and CoVoST2 datasets. We study the influence of the speech encoder in DFP. More importantly, we compare DFP and cross-attention under a variety of configurations, such as CTC compression, sequence-level knowledge distillation, generation speed, and GPU memory footprint, on monolingual, bilingual, and multilingual models. Despite the prevalence of DFP over cross-attention, our overall results do not indicate a clear advantage of DFP.
Problem

Research questions and friction points this paper is trying to address.

Speech Recognition
DFP Method
Encoder-Decoder Architecture
Innovation

Methods, ideas, or system contributions that make the work stand out.

DFP Method
Encoder-Decoder Architecture
Speech-to-Text