🤖 AI Summary
This work addresses the challenge of efficiently deploying large language models, whose massive parameter counts hinder practical use. Existing PCA-based structured pruning methods operate across module boundaries, which introduces auxiliary parameters and disrupts activation distributions, leading to significant performance degradation. To overcome these limitations, we propose IntraSlice, a framework that performs block-wise approximate PCA pruning within individual Transformer modules. By fully exploiting intra-module structure, IntraSlice fuses the transformation matrices completely into the model without introducing extra parameters. It also incorporates a global sparsity-ratio estimator calibrated to the compressed activation distribution, achieving a superior trade-off between model compression and performance. Experiments on Llama2, Llama3, and Phi series models demonstrate that IntraSlice consistently outperforms existing baselines at identical compression ratios or inference speeds.
📝 Abstract
Large Language Models (LLMs) achieve strong performance across diverse tasks but are difficult to deploy due to their massive size. Structured pruning offers acceleration benefits but leads to significant performance degradation. Recent PCA-based pruning methods alleviate this issue by retaining the principal components of activations, but they are applied only between modules so that the transformation matrix can be fused; this introduces extra parameters and, because of residual connections, severely disrupts activation distributions. To address these issues, we propose IntraSlice, a framework that applies block-wise intra-module PCA pruning. By leveraging the structural characteristics of Transformer modules, we design an approximate PCA method whose transformation matrices can be fully fused into the model without additional parameters. We also introduce a PCA-based global pruning-ratio estimator that accounts for the distribution of compressed activations in addition to conventional module importance. We validate our method on the Llama2, Llama3, and Phi series across various language benchmarks. Experimental results demonstrate that our approach achieves superior compression performance compared to recent baselines at the same compression ratio or inference speed.
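To make the core idea concrete, here is a minimal NumPy sketch (not the paper's exact algorithm; all shapes and names are hypothetical) of intra-module PCA pruning with full matrix fusion: the intermediate activations of a two-layer block are projected onto their top-k principal directions, and the projection is absorbed into the surrounding weight matrices, so the compressed block carries no extra parameters at inference time.

```python
import numpy as np

# Toy illustration (hypothetical shapes, not the paper's method): compress the
# hidden dimension of a two-layer MLP block by projecting its intermediate
# activations onto their top-k principal directions, then fusing that
# projection into both weight matrices.
rng = np.random.default_rng(0)
d, h, k, n = 8, 32, 8, 256   # model dim, hidden dim, kept components, calib samples

W_up = rng.standard_normal((d, h)) / np.sqrt(d)     # first linear map of the block
W_down = rng.standard_normal((h, d)) / np.sqrt(h)   # second linear map

X = rng.standard_normal((n, d))   # calibration inputs
A = X @ W_up                      # intermediate activations, shape (n, h)

# Uncentered PCA: the right singular vectors of A are the principal directions
_, _, Vt = np.linalg.svd(A, full_matrices=False)
Q = Vt[:k].T                      # (h, k) projection onto the top-k directions

# Fuse the projection into the surrounding weights: the compressed block
# computes X @ (W_up Q) @ (Q^T W_down) with hidden width k instead of h.
W_up_c = W_up @ Q                 # (d, k)
W_down_c = Q.T @ W_down           # (k, d)

Y = X @ W_up @ W_down             # original block output
Y_c = X @ W_up_c @ W_down_c       # compressed block output

rel_err = np.linalg.norm(Y - Y_c) / np.linalg.norm(Y)
print(f"relative output error: {rel_err:.2e}")
```

In this toy setup the intermediate activations have rank at most d, so keeping k = d components reconstructs the output almost exactly; with real LLM activations the error grows gracefully as k shrinks, which is the trade-off the pruning-ratio estimator navigates.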