🤖 AI Summary
This work addresses the performance degradation of large reasoning models (LRMs) under low-bit quantization by proposing a quantization method grounded in fine-tuning-induced weight update signals. The study identifies and leverages a previously unobserved "protecting both ends" phenomenon: weights with either the smallest or largest update magnitudes during fine-tuning are disproportionately critical to reasoning capability. Based on this insight, the authors design a channel importance metric that requires neither activation statistics nor second-order information. The approach applies to both real and pseudo-fine-tuning scenarios and delivers consistent gains across four reasoning benchmarks, including an average improvement of 6.55% on a reinforcement learning fine-tuned model, and it further generalizes to non-fine-tuned LRMs.
📝 Abstract
Weight-only quantization is important for compressing Large Language Models (LLMs). Inspired by the spirit of classical magnitude pruning, we study whether the magnitude of weight updates during reasoning-incentivized fine-tuning can provide valuable signals for quantizing Large Reasoning Models (LRMs). We hypothesize that the smallest and largest weight updates during fine-tuning are more important than those of intermediate magnitude, a phenomenon we term "protecting both ends". After validating this hypothesis, we introduce QuantLRM, which stands for weight quantization of LRMs via fine-tuning signals. We fit simple restricted quadratic functions on weight updates to protect both ends. By multiplying each channel's average quadratic value by its count of zero weight updates, we compute a channel importance metric that is more effective than using activation or second-order information. We run QuantLRM to quantize various fine-tuned models (including supervised, direct preference optimization, and reinforcement learning fine-tuning) over four reasoning benchmarks (AIME-120, FOLIO, temporal sequences, and GPQA-Diamond) and empirically find that QuantLRM delivers a consistent improvement for LRM quantization, with an average improvement of 6.55% on a reinforcement learning fine-tuned model. QuantLRM also supports non-fine-tuned LRMs by gathering effective signals via pseudo-fine-tuning, which greatly broadens its applicability.
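The abstract only sketches the channel-importance computation at a high level, so the following is a minimal illustrative sketch in Python/NumPy of how such a score could be formed: a quadratic over update magnitudes that scores both the smallest and largest updates highly ("protecting both ends"), with the per-channel average multiplied by the channel's count of zero updates. The specific quadratic shape (an upward-opening parabola centred at the mean magnitude) and the `channel_importance` helper are assumptions for illustration, not the authors' actual restricted-fit procedure.

```python
import numpy as np

def channel_importance(delta_w: np.ndarray) -> np.ndarray:
    """Hypothetical per-channel importance in the spirit of the abstract.

    delta_w: fine-tuning weight updates, shape (out_channels, in_channels).
    Returns one importance score per output channel.
    """
    mags = np.abs(delta_w)                      # magnitudes of weight updates
    importance = np.empty(mags.shape[0])
    for c, row in enumerate(mags):
        center = row.mean()
        # Assumed "restricted quadratic": an upward-opening parabola centred
        # at the mean magnitude, so very small and very large updates
        # (both ends) receive high values while intermediate ones score low.
        quad = (row - center) ** 2
        # Abstract: average quadratic value times the count of zero updates.
        zero_count = np.count_nonzero(row == 0)
        importance[c] = quad.mean() * zero_count
    return importance

# Toy usage: random "weight updates" for a 4x8 layer, with some exact zeros.
rng = np.random.default_rng(0)
dw = rng.normal(size=(4, 8))
dw[np.abs(dw) < 0.2] = 0
print(channel_importance(dw))
```

In this sketch, channels whose updates spread toward both ends (and that contain many untouched, zero-update weights) receive higher importance and would be prioritized for protection during quantization; the actual fitting and protection mechanism would follow the paper's method.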