Do we really need Self-Attention for Streaming Automatic Speech Recognition?

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the high computational cost and latency of Transformers in streaming automatic speech recognition (ASR), primarily caused by the self-attention mechanism. For the first time, we systematically evaluate and demonstrate that the self-attention module can be entirely removed from streaming ASR architectures without causing a significant increase in word error rate (WER). To this end, we propose two lightweight alternatives: replacing self-attention with deformable convolutions or simply omitting the module altogether. Experimental results show that both approaches substantially reduce computational complexity and inference latency while maintaining competitive recognition performance. These findings challenge the prevailing assumption that the Transformer architecture—and specifically its self-attention component—is indispensable for high-quality streaming ASR.

📝 Abstract
Transformer-based architectures are the most widely used architectures in many deep learning fields, such as Natural Language Processing, Computer Vision, and Speech Processing. This ubiquity may encourage the direct use of Transformers in constrained tasks without questioning whether they will yield the same benefits as in standard tasks. Given specific constraints, it is essential to evaluate the relevance of Transformer models. This work questions the suitability of Transformers for such domains: we argue that their high computational requirements and latency do not align well with streaming applications, and we promote the search for alternative strategies that improve efficiency without sacrificing performance. In light of this observation, our paper critically examines the usefulness of the Transformer architecture in such constrained environments. As a first attempt, we show that the computational cost of Streaming Automatic Speech Recognition (ASR) can be reduced by using deformable convolution instead of Self-Attention. Furthermore, we show that Self-Attention can be entirely removed, and not replaced, without significant degradation in Word Error Rate.
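The deformable-convolution alternative mentioned in the abstract can be illustrated with a minimal sketch. The paper's exact formulation is not reproduced here; the function below is a hypothetical single-channel 1-D deformable convolution in NumPy, where each kernel tap samples the input at its regular grid position plus a learned fractional offset, using linear interpolation between the two nearest frames. With all offsets set to zero it reduces to an ordinary convolution:

```python
import numpy as np

def deformable_conv1d(x, weights, offsets):
    """Minimal 1-D deformable convolution (single channel, stride 1).

    x       : (T,)   input feature sequence
    weights : (K,)   kernel taps
    offsets : (T, K) learned fractional offset per position and tap
    Returns : (T,)   output sequence (zero-padded outside the input)
    """
    T, K = offsets.shape
    center = (K - 1) // 2
    y = np.zeros(T)
    for t in range(T):
        for k in range(K):
            # Sampling position = regular grid point + learned offset
            p = t + (k - center) + offsets[t, k]
            lo = int(np.floor(p))
            frac = p - lo
            # Linear interpolation; positions outside the input read as 0
            v_lo = x[lo] if 0 <= lo < T else 0.0
            v_hi = x[lo + 1] if 0 <= lo + 1 < T else 0.0
            y[t] += weights[k] * ((1.0 - frac) * v_lo + frac * v_hi)
    return y
```

Unlike self-attention, whose cost grows quadratically with the sequence length, this operation is linear in T with a small constant (K taps per frame), which is the kind of complexity reduction the abstract targets for streaming ASR.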
Problem

Research questions and friction points this paper is trying to address.

Streaming ASR
Self-Attention
Transformer
Computational Efficiency
Latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Attention
Streaming ASR
Deformable Convolution
Transformer Efficiency
Word Error Rate