🤖 AI Summary
To address the excessive computational and memory overhead of deploying large language models, this paper proposes a transformer module-level pruning method grounded in the entropy of hidden-layer representations. Unlike conventional redundancy criteria based on cosine similarity, our approach pioneers the use of entropy as the core pruning metric, dynamically identifying and removing redundant computation blocks by modeling the information uncertainty of hidden states, and it uncovers a systematic trend of entropy across network depth: entropy falls in the early blocks and then rises through most subsequent ones. Evaluated across multiple benchmark tasks, the method achieves an average 32% parameter reduction while retaining over 98.5% of the original accuracy, significantly outperforming cosine-similarity-based baselines. Our principal contributions are: (i) establishing a theoretical link between hidden-layer entropy and module redundancy; and (ii) introducing an interpretable, generalizable structured-pruning paradigm. This entropy-driven framework offers both principled insight into transformer internal dynamics and practical efficiency gains for model compression.
📄 Abstract
As large language models continue to scale, their growing computational and storage demands pose significant challenges for real-world deployment. In this work, we investigate redundancy within Transformer-based models and propose an entropy-based pruning strategy to enhance efficiency while maintaining performance. Empirical analysis reveals that the entropy of hidden representations decreases in the early blocks but progressively increases across most subsequent blocks. This trend suggests that entropy serves as a more effective measure of information richness within computation blocks. Unlike cosine similarity, which primarily captures geometric relationships, entropy directly quantifies uncertainty and information content, making it a more reliable criterion for pruning. Extensive experiments demonstrate that our entropy-based pruning approach surpasses cosine similarity-based methods in reducing model size while preserving accuracy, offering a promising direction for efficient model deployment.
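The contrast drawn above between the two criteria can be made concrete with a small sketch. The paper does not specify its entropy estimator, so the histogram-based estimate below, the block-scoring rule (rank blocks by the magnitude of the entropy change they induce), and all function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def hidden_state_entropy(h, bins=32):
    """Histogram-based entropy estimate of a hidden-state matrix h
    of shape (tokens, dim). NOTE: one illustrative estimator; the
    paper does not specify which estimator it uses."""
    counts, _ = np.histogram(h.ravel(), bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # drop empty bins before the log
    return float(-np.sum(p * np.log(p)))

def cosine_redundancy(h_in, h_out):
    """Baseline criterion: mean cosine similarity between a block's
    input and output hidden states (high similarity -> the block
    changed little geometrically, so it looks redundant)."""
    num = np.sum(h_in * h_out, axis=1)
    den = np.linalg.norm(h_in, axis=1) * np.linalg.norm(h_out, axis=1)
    return float(np.mean(num / den))

def rank_blocks_by_entropy_gain(states):
    """Score block i by the entropy change between its input states[i]
    and output states[i+1]; blocks inducing the smallest |change| are
    the first pruning candidates (hypothetical scoring rule)."""
    gains = [hidden_state_entropy(states[i + 1]) - hidden_state_entropy(states[i])
             for i in range(len(states) - 1)]
    return np.argsort(np.abs(gains))
```

Unlike the cosine score, which saturates near 1 whenever a block applies a small residual update regardless of what information that update carries, the entropy criterion responds to how much a block reshapes the distribution of activations, which is the distinction the abstract argues for.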