🤖 AI Summary
Deploying large language models (LLMs) on resource-constrained devices is hindered by high computational overhead, accuracy degradation after compression, and reliance on post-compression recovery fine-tuning.
Method: We propose MoDeGPT, a fine-tuning-free modular structured compression framework that decouples Transformer blocks into matrix-pair modules and applies unified Nyström, Column-Randomization (CR), and SVD decompositions at the module level to reconstruct outputs and reduce hidden dimensionality. This introduces the first gradient-free, module-level decomposition paradigm, theoretically unifying three major matrix approximation techniques.
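To make the module-level idea concrete, here is a minimal numpy sketch (illustrative only, not MoDeGPT's exact algorithm): a "module" is a matrix pair acting in sequence, and a truncated SVD of their composition shrinks the shared intermediate dimension while approximately preserving the module's output, without any gradients. All sizes and names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 16  # original hidden dim and reduced dim (illustrative sizes)

# A module as a matrix pair: y = W2 @ (W1 @ x), intermediate dim d.
W1 = rng.standard_normal((d, d))
W2 = rng.standard_normal((d, d))

# Gradient-free compression sketch: truncated SVD of the composed map
# W2 @ W1 gives a new pair with intermediate dim r < d that best
# reconstructs the module-level output in the least-squares sense.
U, s, Vt = np.linalg.svd(W2 @ W1)
W1_c = np.diag(s[:r]) @ Vt[:r]   # compressed first matrix, shape (r, d)
W2_c = U[:, :r]                  # compressed second matrix, shape (d, r)

# Compare outputs on a random input.
x = rng.standard_normal(d)
y_full = W2 @ (W1 @ x)
y_comp = W2_c @ (W1_c @ x)
rel_err = np.linalg.norm(y_full - y_comp) / np.linalg.norm(y_full)
```

The pair now stores `2*d*r` parameters instead of `2*d*d`, and the reconstruction error is governed by the discarded singular values; the paper's contribution is extending this reconstruction view across three decompositions and all Transformer module types.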
Contribution/Results: MoDeGPT jointly delivers high compression ratios and strong zero-shot performance: a 98% reduction in compression compute cost for a 13B model; 90–95% zero-shot accuracy retention on Llama-2/3 and OPT at 25–30% parameter compression; compression completed within hours on a single GPU; and up to 46% higher inference throughput.
📄 Abstract
Large Language Models (LLMs) have reshaped the landscape of artificial intelligence by demonstrating exceptional performance across various tasks. However, substantial computational requirements make their deployment challenging on devices with limited resources. Recently, compression methods using low-rank matrix techniques have shown promise, yet these often lead to degraded accuracy or introduce significant overhead in parameters and inference latency. This paper introduces Modular Decomposition (MoDeGPT), a novel structured compression framework that does not need recovery fine-tuning while resolving the above drawbacks. MoDeGPT partitions the Transformer block into modules composed of matrix pairs and reduces the hidden dimensions by reconstructing the module-level outputs. MoDeGPT is developed on a theoretical framework that utilizes three well-established matrix decomposition algorithms (Nyström approximation, CR decomposition, and SVD) and applies them to our redefined Transformer modules. Our comprehensive experiments show that MoDeGPT, without backward propagation, matches or surpasses previous structured compression methods that rely on gradient information, and saves 98% of compute costs on compressing a 13B model. On Llama-2/3 and OPT models, MoDeGPT maintains 90-95% zero-shot performance at 25-30% compression rates. Moreover, the compression can be done on a single GPU within a few hours and increases inference throughput by up to 46%.
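Of the three decompositions the abstract names, the Nyström approximation is the least familiar; a minimal numpy sketch (illustrative sizes and variable names, not the paper's setup) shows the idea: approximate a large symmetric positive semidefinite matrix from a subset of its columns, which is exact whenever the sampled columns span the matrix's range.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 40  # matrix size and number of sampled "landmark" columns

# A low-rank PSD matrix (e.g., a covariance of activations); rank 20 < m,
# so the Nyström reconstruction below is (numerically) exact.
A = rng.standard_normal((n, 20))
K = A @ A.T

idx = rng.choice(n, size=m, replace=False)  # sample landmark indices
C = K[:, idx]                               # n x m block of sampled columns
W = K[np.ix_(idx, idx)]                     # m x m intersection block

# Nyström approximation: K ≈ C W^+ C^T, built from m columns only.
K_nys = C @ np.linalg.pinv(W) @ C.T
rel_err = np.linalg.norm(K - K_nys) / np.linalg.norm(K)
```

Because only `C` and `W` are needed, the approximation is cheap relative to a full eigendecomposition; MoDeGPT applies this family of approximations to correlation-style matrices at the module level rather than to raw weights.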