Efficient Mathematical Reasoning Models via Dynamic Pruning and Knowledge Distillation

📅 2025-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost and deployment challenges of large language models (LLMs) in mathematical reasoning tasks, this paper proposes a lightweight optimization method that integrates dynamic attention head pruning with knowledge distillation. The method evaluates head importance in real time using weight norms and attention entropy, enabling fine-grained, adaptive pruning. Concurrently, knowledge distillation transfers the teacher model's reasoning capability to a compact student model, preserving performance with minimal degradation. The key innovation is the first deep integration of dynamic pruning and distillation, supporting runtime adaptation to resource constraints. Evaluated on the Math23k dataset, the optimized model achieves an 18.7% reduction in parameters, a 27.5% inference speedup, and a 19.3% decrease in FLOPs, while sustaining only a 0.7% accuracy drop, demonstrating a strong trade-off between efficiency and performance.
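The summary describes scoring each attention head by combining its weight norm with its attention entropy, then pruning the lowest-scoring heads. A minimal sketch of how such a score and pruning mask might be computed; the mixing weight `alpha`, the linear combination, and the top-k selection are illustrative assumptions, not details from the paper:

```python
import numpy as np

def head_importance(w_o_head, attn_probs, alpha=0.5, eps=1e-9):
    """Score one head by combining the L2 norm of its output-projection
    weights with the mean entropy of its attention distribution.
    The weighting `alpha` is an assumed hyperparameter."""
    norm_score = np.linalg.norm(w_o_head)
    # attn_probs: (batch, seq_q, seq_k) attention weights for this head;
    # entropy is taken over the key dimension, then averaged.
    entropy = -(attn_probs * np.log(attn_probs + eps)).sum(-1).mean()
    return alpha * norm_score + (1 - alpha) * entropy

def prune_mask(scores, prune_ratio=0.3):
    """Return a 0/1 mask keeping the top (1 - prune_ratio) fraction of heads."""
    scores = np.asarray(scores, dtype=float)
    k = max(1, int(len(scores) * (1 - prune_ratio)))
    keep = np.argsort(scores)[-k:]          # indices of the k highest scores
    mask = np.zeros(len(scores))
    mask[keep] = 1.0
    return mask
```

In a dynamic setting, these scores would be recomputed per batch (or on a schedule) so the mask adapts to the current input distribution and resource budget.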

📝 Abstract
With the rapid development of deep learning, large language models have shown strong capabilities in complex reasoning tasks such as mathematical equation solving. However, their substantial computational and storage costs hinder practical deployment. This paper proposes a lightweight optimization method that integrates dynamic attention head pruning with knowledge distillation. The approach dynamically evaluates the importance of each attention head in the multi-head attention mechanism using a combination of weight norms and entropy, and prunes redundant heads in real time to reduce computational overhead. To mitigate performance degradation, knowledge distillation transfers information from the original model to the pruned student, enabling the smaller model to preserve reasoning ability. Experiments conducted on both Math23k and ASDiv-A verify the effectiveness of the proposed method. For example, on Math23k with a 30% pruning ratio, parameters are reduced by 18.7%, inference speed is improved by 27.5%, FLOPs are reduced by 19.3%, and accuracy drops only 0.7% (from 84.4% to 83.7%). These results demonstrate that the method achieves substantial efficiency gains while maintaining strong reasoning performance, providing a practical solution for efficient deployment of large language models in mathematical reasoning tasks.
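The abstract states that knowledge distillation transfers information from the original model to the pruned student. A minimal sketch of a standard distillation objective such a setup could use; the temperature `T`, mixing weight `lam`, and the KL-plus-cross-entropy form are common choices assumed here, not specifics from the paper:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(-1, keepdims=True)        # numerical stability
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, label, T=2.0, lam=0.7):
    """Soft-target KL divergence (scaled by T^2, as in standard
    distillation) blended with hard-label cross-entropy.
    `T` and `lam` are assumed hyperparameters."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum()
    ce = -np.log(softmax(student_logits)[label] + 1e-12)
    return lam * (T ** 2) * kl + (1 - lam) * ce
```

A student whose logits match the teacher's and favor the correct label incurs a lower loss than one that diverges, which is what drives the pruned model to recover the teacher's reasoning behavior.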
Problem

Research questions and friction points this paper is trying to address.

Reducing computational and storage costs of large language models
Optimizing mathematical reasoning models via pruning and distillation
Maintaining reasoning accuracy while improving inference efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic attention head pruning reduces computational overhead
Knowledge distillation transfers reasoning ability to smaller model
Combined weight norms and entropy evaluate attention head importance
Fengming Yu
Harbin Engineering University, Harbin, China
Qingyu Meng
University of Utah
Haiwei Pan
Harbin Engineering University, Harbin, China
Kejia Zhang
Harbin Engineering University, Harbin, China