🤖 AI Summary
This work addresses the high computational cost of deploying large language models (LLMs) by proposing a novel pruning approach grounded in cooperative game theory. Unlike existing layer-wise pruning methods, which rely on static heuristics and neglect inter-layer dependencies, often leading to significant performance degradation, this study introduces a dynamic framework that treats model performance as a utility function and leverages a lightweight surrogate network to efficiently approximate Shapley values for each layer. By integrating stratified Monte Carlo mask sampling, the method explicitly models inter-layer interactions to accurately identify critical layers for retention. Experimental results demonstrate that the proposed technique substantially outperforms current pruning strategies in both perplexity and zero-shot accuracy, achieving substantial model compression while preserving performance.
📝 Abstract
While large language models (LLMs) demonstrate impressive performance across various tasks, their deployment in real-world scenarios is still constrained by high computational demands. Layer-wise pruning, a commonly employed strategy to mitigate inference costs, can partially address this challenge. However, existing approaches generally depend on static heuristic rules and fail to account for the interdependencies among layers, thereby limiting the effectiveness of the pruning process. To address this, this paper proposes a game-theoretic framework that formulates layer pruning as a cooperative game in which each layer acts as a player and model performance serves as the utility. As computing exact Shapley values is computationally infeasible for LLMs, we propose a lightweight surrogate network to estimate layer-wise marginal contributions. This network can predict LLM performance for arbitrary layer combinations at low computational cost. Additionally, we employ stratified Monte Carlo mask sampling to further reduce the cost of Shapley value estimation. This approach captures inter-layer dependencies and dynamically identifies the critical layers to retain during pruning. Extensive experiments demonstrate the consistent superiority of our method in terms of perplexity and zero-shot accuracy, achieving more efficient and effective layer-wise pruning for large language models.
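The Shapley-value machinery the abstract describes can be illustrated with a small Monte Carlo sketch. Note the assumptions: `toy_utility` is a hypothetical stand-in for the paper's surrogate network (which would predict LLM performance for a given set of retained layers), and the paper's stratified mask sampling is simplified here to plain permutation sampling. This is not the authors' implementation, only a minimal illustration of how per-layer Shapley values capture inter-layer dependencies:

```python
import random

def shapley_monte_carlo(utility, n_layers, n_samples=2000, seed=0):
    """Estimate per-layer Shapley values by sampling random layer orderings.

    `utility` maps a frozenset of retained layer indices to a performance
    score. In the paper's setting this would be the surrogate network's
    prediction; here it is a toy function.
    """
    rng = random.Random(seed)
    shapley = [0.0] * n_layers
    layers = list(range(n_layers))
    for _ in range(n_samples):
        rng.shuffle(layers)
        coalition = set()
        prev = utility(frozenset(coalition))
        # Accumulate each layer's marginal contribution along this ordering.
        for layer in layers:
            coalition.add(layer)
            cur = utility(frozenset(coalition))
            shapley[layer] += cur - prev
            prev = cur
    return [v / n_samples for v in shapley]

def toy_utility(kept):
    # Hypothetical 3-layer model: layers 0 and 1 interact (joint bonus),
    # layer 2 contributes little on its own.
    score = 0.3 * (0 in kept) + 0.3 * (1 in kept) + 0.05 * (2 in kept)
    if 0 in kept and 1 in kept:
        score += 0.2  # inter-layer dependency the estimator must capture
    return score

values = shapley_monte_carlo(toy_utility, n_layers=3)
```

In this toy example the interaction bonus is split between layers 0 and 1, so both rank well above layer 2, which a pruning rule would then remove first. The telescoping sum of marginal contributions also guarantees that the Shapley values sum to the full model's utility.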