🤖 AI Summary
To address the high computational and memory overhead of deploying large language models (LLMs) in resource-constrained settings, this paper proposes a training-free, fine-grained low-rank compression method that operates in the activation space. Specifically, it applies principal component analysis (PCA) per attention head, truncates the resulting eigenvectors to transform the weights, and introduces an importance-driven dynamic rank allocation mechanism that adaptively optimizes rank configurations across decoder layers. Unlike existing low-rank decomposition approaches, the method requires no fine-tuning (calibration takes only minutes), preserves the original model architecture, substantially accelerates inference, and incurs negligible accuracy degradation. Extensive evaluation across four mainstream LLMs and eleven downstream tasks demonstrates consistent superiority over structural pruning baselines, validating both its efficiency and its strong generalization capability.
📝 Abstract
Large Language Models (LLMs) have enabled remarkable progress in natural language processing, yet their high computational and memory demands pose challenges for deployment in resource-constrained environments. Although recent low-rank decomposition methods offer a promising path toward structural compression, they often suffer from accuracy degradation and expensive calibration procedures, and they yield inefficient model architectures that hinder real-world inference speedups. In this paper, we propose FLAT-LLM, a fast, accurate, and training-free structural compression method based on fine-grained low-rank transformations in the activation space. Specifically, we reduce the hidden dimension by transforming the weights using truncated eigenvectors computed via head-wise Principal Component Analysis (PCA), and employ an importance-based metric to adaptively allocate ranks across decoders. FLAT-LLM achieves efficient and effective weight compression without recovery fine-tuning, completing calibration within a few minutes. Evaluated across 4 models and 11 datasets, FLAT-LLM outperforms structural pruning baselines in generalization and downstream performance, while delivering inference speedups over decomposition-based methods.
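To make the core idea concrete, here is a minimal sketch of the head-wise PCA step the abstract describes: per-head activations from a calibration set are eigendecomposed, and the top eigenvectors form a truncated basis that can then be folded into the weights. This is an illustrative simplification, not the paper's implementation; the function name `headwise_pca_bases`, the uncentered covariance, and all shapes are assumptions.

```python
import numpy as np

def headwise_pca_bases(X, num_heads, rank):
    """Compute a truncated PCA basis per attention head.

    X         : (tokens, num_heads * head_dim) calibration activations
    num_heads : number of attention heads
    rank      : number of principal directions to keep per head

    Returns a list of (head_dim, rank) orthonormal bases, one per head.
    """
    head_dim = X.shape[1] // num_heads
    bases = []
    for h in range(num_heads):
        # Slice out this head's activation block
        Xh = X[:, h * head_dim:(h + 1) * head_dim]
        # Uncentered covariance (a common simplification for activation PCA)
        cov = Xh.T @ Xh
        # eigh returns eigenvalues in ascending order for symmetric matrices
        _, eigvecs = np.linalg.eigh(cov)
        # Keep the top-`rank` eigenvectors (largest eigenvalues are last)
        bases.append(eigvecs[:, -rank:])
    return bases
```

Given such a basis `U` for a head, the corresponding weight block `W` can be replaced by the narrower product `W @ U`, with `U.T` absorbed into the next projection, so the compressed model keeps its original architecture and needs no retraining.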