D-com: Accelerating Iterative Processing to Enable Low-rank Decomposition of Activations

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high runtime overhead of low-rank decomposition in large language model (LLM) inference, which can increase end-to-end latency (e.g., +38% for Llama2-7B on A100 due to activation decomposition), this work proposes an activation-oriented, efficient low-rank decomposition paradigm. The method introduces: (1) a progressive Lanczos decomposition algorithm that jointly employs compute replication and output-shape preservation to raise computational density while maintaining numerical accuracy; and (2) a hardware-software co-designed accelerator architecture enabling multi-track parallel decomposition that handles outlier channels separately. Experiments demonstrate that, with only a 3% degradation in model quality, the approach reduces end-to-end latency by 22% and improves decomposition throughput by 6.2x, significantly outperforming conventional weight-only decomposition and naive activation decomposition.
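The "output-shape preservation" mentioned above can be read as keeping a decomposed activation X ≈ L @ R in factored form through the next linear layer, so consecutive layers never pay to re-decompose a materialized matrix. A minimal NumPy sketch under that reading (the function name and the bias-as-extra-rank trick are assumptions, not the paper's code):

```python
import numpy as np

def linear_factored(L, R, W, b=None):
    """Apply a linear layer Y = X @ W + b to a rank-k factored X ~= L @ R,
    returning the result still in factored form (L', R').

    Folding W into the small factor R costs O(k*n*d_out) instead of the
    O(m*n*d_out) needed to materialize X first, when k << m.
    """
    R_next = R @ W                      # (k, n) @ (n, d_out) -> (k, d_out)
    if b is not None:
        # A bias breaks the pure rank-k form; absorb it as one extra rank:
        # [L | 1] @ [R_next ; b] == L @ R_next + 1 @ b.
        L = np.hstack([L, np.ones((L.shape[0], 1))])
        R_next = np.vstack([R_next, b[None, :]])
    return L, R_next
```

Because the output keeps the same factored shape as the input, a chain of linear layers can stay in this form end to end, which is presumably what eliminates decomposition costs in consecutive layers.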

📝 Abstract
The computation and memory costs of large language models have kept increasing over the last decade, reaching the scale of 1T parameters. To address the challenges posed by such large-scale models, model compression techniques such as low-rank decomposition have been explored. Previous model decomposition works have focused on weight decomposition to avoid costly runtime decomposition, whose latency often significantly exceeds its benefits (e.g., 38% more end-to-end latency when running Llama2-7b on A100 with a 4K sequence length with activation decomposition compared to no decomposition). In this work, we debunk such observations and report that input decomposition can be significantly beneficial with a proper choice of decomposition algorithm and hardware support. We adopt a progressive decomposition algorithm, the Lanczos algorithm, and design a co-accelerator architecture for it. To address the memory-boundness of the decomposition operation, we introduce a novel compute replication methodology that moves the operation toward the compute-bound region, enabling a 6.2x speedup in our evaluation. We also develop an output shape-preserving computation scheme that eliminates decomposition costs in consecutive layers. To compensate for model quality loss from compression, we introduce a multi-track decomposition approach that separately handles outlier channels for high accuracy and low perplexity with minimal computational costs. Combined, our accelerator, D-com, provides a 22% end-to-end latency improvement over an A100 GPU at the cost of small model quality degradation (e.g., 3% on the AI2 Reasoning Challenge task).
Problem

Research questions and friction points this paper is trying to address.

Accelerating low-rank decomposition of activations for large language models
Reducing memory-bound decomposition operations through compute replication
Minimizing model quality degradation while improving computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive Lanczos algorithm for low-rank decomposition
Compute replication to address memory-bound operations
Multi-track decomposition handling outlier channels separately
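The multi-track idea in the last bullet can be sketched as routing a few high-magnitude "outlier" channels onto an exact track while the remaining channels take the low-rank track. The selection heuristic, the `outlier_frac` parameter, and the use of truncated SVD as a stand-in for the Lanczos routine are all assumptions for illustration:

```python
import numpy as np

def multitrack_decompose(X, k, outlier_frac=0.01):
    """Split outlier channels (columns) from X, keep them exact, and
    rank-k compress the rest. Hypothetical sketch of a multi-track scheme."""
    n_out = max(1, int(X.shape[1] * outlier_frac))
    scores = np.abs(X).max(axis=0)             # per-channel peak magnitude
    outlier_idx = np.argsort(scores)[-n_out:]  # largest channels -> exact track
    mask = np.zeros(X.shape[1], dtype=bool)
    mask[outlier_idx] = True
    X_out = X[:, mask]                         # exact (uncompressed) track
    # Low-rank track via truncated SVD (stand-in for a Lanczos-based routine).
    U, s, Vt = np.linalg.svd(X[:, ~mask], full_matrices=False)
    L, R = U[:, :k] * s[:k], Vt[:k]
    return X_out, mask, L, R

def reconstruct(X_out, mask, L, R):
    """Recombine the two tracks into a full-shape activation matrix."""
    X_hat = np.empty((X_out.shape[0], mask.size))
    X_hat[:, mask] = X_out
    X_hat[:, ~mask] = L @ R
    return X_hat
```

Keeping a handful of outlier channels out of the compressed factors is what protects accuracy: a single rank-k factorization of the whole matrix would spend its limited rank budget chasing those large channels.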