Gradually Compacting Large Language Models for Reasoning Like a Boiling Frog

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the severe performance degradation that large language models suffer after one-shot pruning, which typically necessitates extensive post-training to recover lost capabilities. To mitigate this issue, the authors propose the Prune-Tune Loop (PTL), a framework that iteratively applies pruning and fine-tuning in multiple stages to gradually reduce model size while avoiding abrupt performance drops. PTL integrates neuron- and layer-level pruning with continual pre-training and reinforcement learning, and is compatible with diverse pruning strategies and post-training techniques, giving it strong generalizability. Experimental results demonstrate that PTL can compress models to approximately 50% of their original size with only lightweight post-training, while maintaining near-original performance on challenging tasks such as mathematical reasoning and code generation.

📝 Abstract
Large Language Models (LLMs) have demonstrated impressive reasoning capabilities, but their substantial size often demands significant computational resources. To reduce resource consumption and accelerate inference, it is essential to eliminate redundant parameters without compromising performance. However, conventional pruning methods that directly remove such parameters often lead to a dramatic drop in performance on reasoning tasks, and require extensive post-training to recover the lost capabilities. In this work, we propose a gradual compacting method that divides the compression process into multiple fine-grained iterations, applying a Prune-Tune Loop (PTL) at each stage to incrementally reduce model size while restoring performance with fine-tuning. This iterative approach, reminiscent of the "boiling frog" effect, enables the model to be progressively compressed without abrupt performance loss. Experimental results show that PTL can compress LLMs to nearly half their original size with only lightweight post-training, while maintaining performance comparable to the original model on reasoning tasks. Moreover, PTL is flexible and can be applied to various pruning strategies, such as neuron pruning and layer pruning, as well as different post-training methods, including continual pre-training and reinforcement learning. Additionally, experimental results confirm the effectiveness of PTL on a variety of tasks beyond mathematical reasoning, such as code generation, demonstrating its broad applicability.
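The core idea of the abstract, splitting a single aggressive prune into several small prune-then-tune stages, can be sketched as follows. This is a minimal illustration on a toy weight vector, not the paper's implementation: the per-stage keep ratio, the magnitude-pruning criterion, and the stubbed `tune` step are all assumptions for demonstration.

```python
# Toy sketch of an iterative Prune-Tune Loop (PTL): compress gradually over
# several stages instead of pruning to the target size in one shot.
import random

def magnitude_prune(weights, keep_ratio):
    """Keep the top `keep_ratio` fraction of weights by absolute value."""
    k = max(1, int(len(weights) * keep_ratio))
    return sorted(weights, key=abs, reverse=True)[:k]

def tune(weights):
    """Placeholder for lightweight post-training (e.g. continual
    pre-training or RL) that restores performance after each prune."""
    return weights  # a real loop would update the surviving weights here

def prune_tune_loop(weights, target_ratio=0.5, stages=5):
    """Reach `target_ratio` of the original size over `stages` small
    prune + tune iterations (equal geometric shrink per stage)."""
    per_stage = target_ratio ** (1.0 / stages)
    for _ in range(stages):
        weights = tune(magnitude_prune(weights, per_stage))
    return weights

random.seed(0)
model = [random.gauss(0, 1) for _ in range(1000)]
compact = prune_tune_loop(model, target_ratio=0.5, stages=5)
print(len(compact))  # roughly half the original parameter count
```

Because each stage removes only a small fraction of parameters (about 13% per stage here), the intermediate models stay close to the previous ones, which is what lets lightweight tuning recover performance at each step.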
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Model Compression
Pruning
Reasoning
Performance Preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradual Compression
Prune-Tune Loop
Large Language Models
Model Pruning
Reasoning Efficiency