Spiralformer: Low Latency Encoder for Streaming Speech Recognition with Circular Layer Skipping and Early Exiting

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high encoding latency of Transformer encoders in block-wise streaming speech recognition, this paper proposes Spiralformer, a low-latency encoder. Its core innovation combines cyclic layer skipping with an early-exit mechanism: the set of computed layers shifts spirally across temporal blocks, so that all layers are covered over successive blocks while end-to-end latency is significantly reduced. Methodologically, it introduces layer dropping and a fine-grained early-exit strategy to keep computation efficient under small chunk shifts. Evaluated on LibriSpeech and CSJ, Spiralformer reduces average token emission latency by 21.6% and 7.0%, respectively, with negligible changes in computational cost and word error rate, achieving a favorable trade-off between real-time performance and recognition accuracy.
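The spiral schedule can be illustrated with a small sketch. This is not the authors' code; the function name `spiral_schedule` and the even rotation of layer indices are assumptions used to show the idea that the computed layers rotate across blocks so every layer is refreshed over a full cycle:

```python
# Illustrative sketch (assumed scheduling, not the paper's implementation):
# in each streaming block only a subset of encoder layers is computed, and
# that subset rotates ("spirals") block by block so all layers are covered
# over consecutive blocks.

def spiral_schedule(num_layers: int, layers_per_block: int, block_idx: int) -> list[int]:
    """Return the encoder layer indices computed for the given block.

    num_layers       -- total encoder layers (e.g. 12)
    layers_per_block -- layers actually run per block (< num_layers)
    block_idx        -- index of the current streaming block
    """
    start = (block_idx * layers_per_block) % num_layers
    return [(start + i) % num_layers for i in range(layers_per_block)]
```

With 12 layers and 4 computed per block, blocks 0, 1, and 2 together cover all 12 layers, and block 3 wraps around to the start, giving the cyclic, spiral-like coverage the summary describes.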

📝 Abstract
For streaming speech recognition, Transformer-based encoders have been widely used with block processing. Although many studies have addressed the emission latency of transducers, little work has explored reducing the encoding latency of block processing itself. We seek to reduce latency by emitting chunks frequently with a small shift rather than with infrequent large-chunk emissions, which would otherwise raise computational cost. To compute efficiently with a small chunk shift, we propose a new encoder, Spiralformer, tailored for block processing by combining layer dropping and early exiting. We skip layer computation in a cyclic manner and shift the computed layers spirally in each block, which completes computation for all layers over the course of block processing. Experimentally, our method achieved a 21.6% reduction in averaged token emission delay on LibriSpeech, and 7.0% on CSJ, compared with the baseline, at similar computational cost and word error rates.
Problem

Research questions and friction points this paper is trying to address.

Reducing encoding latency in streaming speech recognition
Keeping computational cost manageable under small chunk shifts
Designing layer skipping and early exiting that fit block-wise processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Circular layer skipping reduces computational latency
Early exiting mechanism accelerates token emission
Spiral computation pattern optimizes block processing efficiency
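The early-exit idea in the list above can be sketched as a confidence test applied after each computed layer. The exit criterion below (posterior entropy under a threshold) is a common stand-in and an assumption here, not the paper's specific criterion; `should_exit` and `max_entropy` are hypothetical names:

```python
import math

# Hedged sketch of a fine-grained early-exit check: after a layer finishes
# for the current block, estimate how peaked the output token posterior is
# and stop computing further layers once it is confident enough. The actual
# criterion used by Spiralformer is paper-specific.

def entropy(probs: list[float]) -> float:
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def should_exit(token_probs: list[float], max_entropy: float = 0.5) -> bool:
    """Exit early when the posterior is sufficiently peaked (low entropy)."""
    return entropy(token_probs) < max_entropy
```

A near-one-hot posterior triggers an early exit, while a flat posterior keeps the remaining layers running, which is how early exiting trades a little computation for faster token emission.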