A³: An Analytical Low-Rank Approximation Framework for Attention

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
High deployment costs of large language models (LLMs) are exacerbated by existing low-rank approximation methods, which ignore Transformer architectural characteristics and introduce non-negligible runtime overhead (e.g., extra GEMM kernel launches). Method: This paper proposes an analytical low-rank approximation framework tailored to the Transformer architecture. It decomposes each layer into three functional components (QK, OV, and MLP) and independently minimizes each component's functional loss (e.g., error in attention scores and output fidelity). The method employs mixed-rank allocation and post-training compression to achieve pure low-rank compression without additional GEMM overhead. Contribution/Results: It is the first to shift the optimization target from layer-wise output error to component-level functional fidelity, and it natively supports joint deployment with KV cache compression and quantization. On LLaMA-3.1-70B under matched computation and memory budgets, it achieves a WikiText-2 perplexity of 4.69, substantially outperforming the previous SoTA's 7.87 and thereby significantly improving end-to-end deployment efficiency.

📝 Abstract
Large language models have demonstrated remarkable performance; however, their massive parameter counts make deployment highly expensive. Low-rank approximation offers a promising compression solution, yet existing approaches have two main limitations: (1) they focus on minimizing the output error of individual linear layers, without considering the architectural characteristics of Transformers, and (2) they decompose a large weight matrix into two small low-rank matrices. Consequently, these methods often fall short compared to other compression techniques like pruning and quantization, and introduce runtime overhead such as extra GEMM kernel launches for the decomposed small matrices. To address these limitations, we propose $A^3$, a post-training low-rank approximation framework. $A^3$ splits a Transformer layer into three functional components, namely $QK$, $OV$, and $MLP$. For each component, $A^3$ provides an analytical solution that reduces the hidden dimension size inside the component while minimizing the component's functional loss (i.e., error in attention scores, attention outputs, and MLP outputs). This approach directly reduces model sizes, KV cache sizes, and FLOPs without introducing any runtime overhead. In addition, it reframes the optimization problem, moving from single-layer output loss toward improved end-to-end performance. Through extensive experiments, we show that $A^3$ maintains superior performance compared to SoTAs. For example, under the same reduction budget in computation and memory, our low-rank approximated LLaMA 3.1-70B achieves a perplexity of 4.69 on WikiText-2, outperforming the previous SoTA's 7.87 by 3.18. We also demonstrate the versatility of $A^3$, including KV cache compression, quantization, and mixed-rank assignments for enhanced performance.
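The core idea described in the abstract, reducing the hidden dimension inside a component rather than factoring each weight matrix in two, can be illustrated on the QK component. The sketch below uses a plain truncated SVD of the product $W_Q W_K^\top$; the paper's actual analytical solution additionally minimizes the functional loss on attention scores (e.g., weighting by input statistics), which this simplified, hypothetical example omits. All variable names and dimensions are illustrative.

```python
import numpy as np

# Sketch: compress the QK component by low-rank factorization of W_Q @ W_K^T.
# Plain truncated SVD only -- a simplification of A^3's analytical solution.

rng = np.random.default_rng(0)
d_model, d_head, r = 64, 64, 16  # r: reduced hidden size inside the component

W_Q = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
W_K = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)

# Attention scores depend only on the product W_Q @ W_K^T, so factor
# that product directly instead of each matrix separately.
M = W_Q @ W_K.T                       # (d_model, d_model)
U, S, Vt = np.linalg.svd(M, full_matrices=False)
W_Q_new = U[:, :r] * np.sqrt(S[:r])   # (d_model, r)
W_K_new = Vt[:r].T * np.sqrt(S[:r])   # (d_model, r)

# Queries and keys now live in an r-dimensional space: smaller KV cache and
# fewer FLOPs, with no extra GEMMs at runtime (a pure shape reduction).
X = rng.standard_normal((8, d_model))  # 8 token embeddings
scores_full = (X @ W_Q) @ (X @ W_K).T
scores_low = (X @ W_Q_new) @ (X @ W_K_new).T
rel_err = np.linalg.norm(scores_full - scores_low) / np.linalg.norm(scores_full)
print(f"relative score error at rank {r}: {rel_err:.3f}")
```

Because the replacement matrices have the same number of GEMMs as the originals, only with a smaller inner dimension, this matches the "no runtime overhead" property claimed in the abstract.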
Problem

Research questions and friction points this paper is trying to address.

Reduces Transformer model size and FLOPs without runtime overhead
Optimizes low-rank approximation for Transformer architectural components
Improves end-to-end performance over existing compression techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analytical low-rank approximation for Transformer components
Reduces hidden dimensions without runtime overhead
Optimizes end-to-end performance via functional loss minimization
Jeffrey T. H. Wong
Imperial College London
Efficient Machine Learning, Deep Learning
Cheng Zhang
Department of Electrical and Electronic Engineering, Imperial College London
Xinye Cao
Department of Electrical and Electronic Engineering, Imperial College London
Pedro Gimenes
Imperial College London
Machine Learning
George A. Constantinides
Department of Electrical and Electronic Engineering, Imperial College London
Wayne Luk
Professor of Computer Engineering, Imperial College London
Hardware and Architecture, Reconfigurable Computing, Design Automation
Yiren Zhao
University of Toronto
Computer Networks, Optical Networks, Datacenter Networks