Compressing LLMs with MoP: Mixture of Pruners

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost of large language models and the limitations of existing pruning methods, which typically operate along only one dimension (either depth or width), making it difficult to balance compression ratio and model performance. To overcome this, the authors propose MoP (Mixture of Pruners), an iterative framework that unifies depth and width pruning within a single structured strategy. At each iteration, MoP explores two parallel pruning branches, one in depth and one in width, and selects the superior candidate to continue the pruning path. On LLaMA-2/3, the method achieves a 39% reduction in end-to-end latency at 40% compression while surpassing competing structured-pruning methods in accuracy. Extended to LLaVA-1.5, it substantially improves computational efficiency, and text-only recovery fine-tuning restores performance even on vision tasks, significantly outperforming current approaches.

📝 Abstract
The high computational demands of Large Language Models (LLMs) motivate methods that reduce parameter count and accelerate inference. In response, model pruning emerges as an effective strategy, yet current methods typically focus on a single dimension (depth or width). We introduce MoP (Mixture of Pruners), an iterative framework that unifies these dimensions. At each iteration, MoP generates two branches (pruning in depth versus pruning in width) and selects a candidate to advance the path. On LLaMA-2 and LLaMA-3, MoP advances the frontier of structured pruning, exceeding the accuracy of competing methods across a broad set of compression regimes. It also consistently outperforms depth-only and width-only pruning. Furthermore, MoP translates structural pruning into real speedup, reducing end-to-end latency by 39% at 40% compression. Finally, extending MoP to the vision-language model LLaVA-1.5, we notably improve computational efficiency and demonstrate that text-only recovery fine-tuning can restore performance even on visual tasks.
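The iterative depth-versus-width selection the abstract describes can be sketched as a simple loop. The sketch below is illustrative only: the toy model representation, the fixed width ratio, and the `evaluate` scoring function are assumptions standing in for the paper's actual pruning granularity and candidate-selection criterion, and recovery fine-tuning is omitted.

```python
import copy

def depth_prune(model):
    """Illustrative depth branch: drop one transformer block (here, the last)."""
    pruned = copy.deepcopy(model)
    pruned["layers"].pop()
    return pruned

def width_prune(model, ratio=0.9):
    """Illustrative width branch: shrink every layer's hidden width by a fixed ratio."""
    pruned = copy.deepcopy(model)
    pruned["layers"] = [max(1, int(w * ratio)) for w in pruned["layers"]]
    return pruned

def num_params(model):
    # Toy parameter count: the sum of per-layer widths.
    return sum(model["layers"])

def mop_prune(model, target_ratio, evaluate):
    """Sketch of MoP's two-branch iteration: generate a depth-pruned and a
    width-pruned candidate, keep whichever `evaluate` scores higher, and
    repeat until the compression target is met."""
    budget = num_params(model) * (1.0 - target_ratio)
    while num_params(model) > budget and len(model["layers"]) > 1:
        candidates = [depth_prune(model), width_prune(model)]
        model = max(candidates, key=evaluate)  # advance the better branch
    return model

# Toy usage: 8 layers of width 100, target ~40% compression, with a
# stand-in criterion that prefers candidates retaining more layers.
base = {"layers": [100] * 8}
compact = mop_prune(base, 0.4, evaluate=lambda m: len(m["layers"]))
print(num_params(compact) / num_params(base))
```

In the paper, candidate selection and the subsequent recovery fine-tuning follow the authors' own criterion; the lambda above merely demonstrates how the choice of evaluation function steers the pruning path toward one dimension or the other.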
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Model Pruning
Structured Pruning
Compression
Inference Acceleration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of Pruners
structured pruning
depth-width pruning
LLM compression
efficiency-latency tradeoff
Bruno Lopes Yamamoto
Universidade de São Paulo, Brazil
Lucas Lauton de Alcantara
Universidade de São Paulo, Brazil
Victor Zacarias
Universidade de São Paulo, Brazil
Leandro Giusti Mugnaini
Universidade de São Paulo, Brazil
Keith Ando Ogawa
Universidade de São Paulo, Brazil
Lucas Pellicer
Instituto de Ciência e Tecnologia Itaú (ICTi), Brazil
Rosimeire Pereira Costa
Instituto de Ciência e Tecnologia Itaú (ICTi), Brazil
Edson Bollis
Instituto de Ciência e Tecnologia Itaú (ICTi), Brazil
Anna Helena Reali Costa
Full Professor of Computer Engineering, Universidade de São Paulo
Artificial Intelligence
Machine Learning
Reinforcement Learning
Intelligent Robotics
Artur Jordao
Universidade de São Paulo (USP)
Machine Learning
Partial Least Squares
Pattern Recognition