IntraSlice: Towards High-Performance Structural Pruning with Block-Intra PCA for LLMs

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of efficiently deploying large language models, whose massive parameter counts hinder practical application. Existing structured pruning methods—such as cross-module PCA—often introduce auxiliary parameters and disrupt activation distributions, leading to significant performance degradation. To overcome these limitations, we propose IntraSlice, a framework that performs block-wise approximate PCA pruning within individual Transformer modules. By fully leveraging intra-module structure, IntraSlice enables complete fusion of transformation matrices without introducing extra parameters. Furthermore, it incorporates a global sparsity ratio estimator calibrated to the compressed activation distribution, achieving a superior trade-off between model compression and performance. Experiments on Llama2, Llama3, and Phi series models demonstrate that IntraSlice consistently outperforms existing baselines under identical compression ratios or inference speeds.

📝 Abstract
Large Language Models (LLMs) achieve strong performance across diverse tasks but face deployment challenges due to their massive size. Structured pruning offers acceleration benefits but often causes significant performance degradation. Recent PCA-based pruning methods alleviate this issue by retaining the principal activation components, but they are applied only between modules so that the transformation matrix can be fused, which introduces extra parameters and, because of residual connections, severely disrupts activation distributions. To address these issues, we propose IntraSlice, a framework that applies block-wise intra-module PCA pruning. By leveraging the structural characteristics of Transformer modules, we design an approximate PCA method whose transformation matrices can be fully fused into the model without additional parameters. We also introduce a PCA-based global pruning-ratio estimator that builds on conventional module importance and further accounts for the distribution of compressed activations. We validate our method on the Llama2, Llama3, and Phi series across various language benchmarks. Experimental results demonstrate that our approach achieves superior compression performance compared to recent baselines at the same compression ratio or inference speed.
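As a rough illustration of the intra-module idea described above, the sketch below prunes the hidden dimension of a toy two-layer block using PCA on calibration activations, then fuses the projection into both weight matrices so no extra parameters remain. All names, shapes, and the low-rank calibration data are illustrative assumptions, not the paper's actual procedure; the nonlinearity between the layers is omitted, so the fusion here is exact, whereas handling real Transformer modules requires the paper's approximate PCA.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, k, n = 16, 64, 8, 256

# Two linear layers of a toy block (nonlinearity omitted, so the
# fusion below is exact; with an activation in between it would
# only be approximate, which is the regime the paper targets).
W_up = rng.standard_normal((d_hidden, d_in))    # (d_hidden, d_in)
W_down = rng.standard_normal((d_in, d_hidden))  # (d_in, d_hidden)

# Calibration inputs, made low-rank so that the hidden activations
# concentrate in a k-dimensional subspace (an assumption for the demo).
X = rng.standard_normal((n, 4)) @ rng.standard_normal((4, d_in))
H = X @ W_up.T                                  # (n, d_hidden)

# Uncentered PCA via SVD: top-k principal directions of the activations.
_, _, Vt = np.linalg.svd(H, full_matrices=False)
P = Vt[:k].T                                    # (d_hidden, k)

# Fuse the projection into both weights -> hidden dim shrinks to k.
W_up_pruned = P.T @ W_up                        # (k, d_in)
W_down_pruned = W_down @ P                      # (d_in, k)

full = (X @ W_up.T) @ W_down.T
pruned = (X @ W_up_pruned.T) @ W_down_pruned.T
err = np.linalg.norm(full - pruned) / np.linalg.norm(full)
# Near-zero here, since the activations lie in a low-dimensional subspace.
print(f"relative error with {k}/{d_hidden} components: {err:.2e}")
```

The pruned block has the same input/output interface as the original, only a smaller hidden dimension, which is what allows the transformation matrices to be absorbed entirely into the existing weights.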
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Structured Pruning
PCA
Model Compression
Activation Distribution
Innovation

Methods, ideas, or system contributions that make the work stand out.

IntraSlice
structured pruning
module-intra PCA
activation distribution
LLM compression
👥 Authors
Meng Li
Nanjing University
Database · Data stream

Peisong Wang
CASIA
Deep Neural Network Acceleration and Compression

Yuantian Shao
Nanjing University of Science and Technology

Qinghao Hu
Institute of Automation, Chinese Academy of Sciences
Deep Learning · Computer Vision · Network Compression

Hongjian Fang
Beijing National Research Center for Information Science and Technology

Yifan Zhang
Institute of Automation, Chinese Academy of Sciences

Zhihui Wei
Nanjing University of Science and Technology

Jian Cheng
Institute of Automation, Chinese Academy of Sciences