LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation

📅 2025-02-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high inference overhead and deployment inefficiency of ASR encoders, this paper proposes a lightweight compression method based on low-rank approximation. The approach jointly optimizes PCA-driven low-rank approximation of activations and a self-attention mechanism that operates in the reduced-dimensional space, and is the first work to combine these two techniques for ASR encoder compression. The method requires only a small, unsupervised calibration dataset and no fine-tuning. Evaluated on Whisper large-v3, it reduces encoder parameter count by over 50%, yielding a model comparable in size to Whisper medium while achieving a lower word error rate (WER), thereby expanding the efficiency-accuracy Pareto frontier. The implementation is publicly available.

📝 Abstract
Modern automatic speech recognition (ASR) models, such as OpenAI's Whisper, rely on deep encoder-decoder architectures, and their encoders are a critical bottleneck for efficient deployment due to high computational intensity. We introduce LiteASR, a low-rank compression scheme for ASR encoders that significantly reduces inference costs while maintaining transcription accuracy. Our approach leverages the strong low-rank properties observed in intermediate activations: by applying principal component analysis (PCA) with a small calibration dataset, we approximate linear transformations with a chain of low-rank matrix multiplications, and further optimize self-attention to work in the reduced dimension. Evaluation results show that our method can compress Whisper large-v3's encoder size by over 50%, matching Whisper medium's size with better transcription accuracy, thereby establishing a new Pareto-optimal frontier of efficiency and performance. The code of LiteASR is available at https://github.com/efeslab/LiteASR.
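The factorization the abstract describes, approximating a linear layer via PCA on its activations from a small calibration set, can be sketched as follows. This is a minimal illustration, not the paper's code: the function name and signature are invented here, and mean-centering/bias handling is omitted for brevity.

```python
import numpy as np

def low_rank_factorize(W, Y_calib, k):
    """Factor a linear layer's weight W (d_in, d_out) into two thin
    matrices A (d_in, k) and B (k, d_out) using the principal subspace
    of its output activations.

    Y_calib: (n_samples, d_out) activations Y = X @ W collected on a
    small calibration set.
    """
    # Right singular vectors of the activations give the principal
    # directions of the outputs (uncentered PCA for simplicity).
    _, _, Vt = np.linalg.svd(Y_calib, full_matrices=False)
    U_k = Vt[:k].T                 # (d_out, k) top-k subspace basis
    # If outputs lie near span(U_k), then y ≈ y @ U_k @ U_k.T,
    # so W ≈ (W @ U_k) @ U_k.T: one matmul becomes two thin ones.
    A = W @ U_k                    # (d_in, k)
    B = U_k.T                      # (k, d_out)
    return A, B
```

When k is well below both d_in and d_out, storing A and B costs k * (d_in + d_out) parameters instead of d_in * d_out, which is where the encoder size reduction comes from.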
Problem

Research questions and friction points this paper is trying to address.

High computational intensity of deep ASR encoders (e.g., Whisper large-v3) bottlenecks efficient deployment
Compressing encoders without sacrificing transcription accuracy or requiring fine-tuning
Keeping self-attention efficient once activations are projected to a reduced dimension
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-rank compression scheme for ASR encoders, with no fine-tuning required
PCA-based activation approximation using a small, unsupervised calibration dataset
Self-attention computed directly in the reduced dimension
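To illustrate the last point: once the query and key projections are low-rank factored, the attention score computation can stay entirely in the rank-r space, since the two output bases fold into a single small matrix. A hedged sketch (names and shapes are illustrative, not LiteASR's actual implementation):

```python
import numpy as np

def reduced_dim_scores(X, A_q, B_q, A_k, B_k, scale):
    """Attention scores with low-rank factored projections
    W_q ≈ A_q @ B_q and W_k ≈ A_k @ B_k (each of rank r).

    Algebraically, (X W_q)(X W_k)^T = (X A_q)(B_q B_k^T)(X A_k)^T,
    so no full-dimension queries or keys are ever materialized.
    """
    M = B_q @ B_k.T          # (r, r): fold the two output bases together
    Q_r = X @ A_q            # (n, r) reduced queries
    K_r = X @ A_k            # (n, r) reduced keys
    return (Q_r @ M) @ K_r.T * scale
```

The identity means the O(n^2) score matrix is computed with head dimension r instead of the original model dimension, which is how the reduced-rank projections translate into faster self-attention rather than just smaller weights.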