AI Summary
This work addresses the challenge that long reasoning chains generated by large reasoning models often contain redundant or irrelevant content, causing supervised fine-tuning (SFT) to learn inefficient or even detrimental reasoning patterns. To mitigate this, the paper introduces, for the first time, a paragraph-level attribution metric combined with a selective SFT framework. Leveraging integrated gradients to quantify each token's contribution to the final answer, the method aggregates these signals into paragraph-level measures of attribution strength and directional consistency. It then identifies critical reasoning paragraphs exhibiting high attribution strength but moderate directional consistency, and masks the loss on the remaining segments so that training focuses on these critical paragraphs. Evaluated across multiple models and datasets, this approach significantly improves both reasoning accuracy and output efficiency, enabling focused and effective learning of deep reasoning capabilities from extended reasoning trajectories.
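The two paragraph-level measures described above can be sketched as simple aggregations over per-token integrated-gradient attributions. This is a minimal illustration, not the paper's implementation; the function name and the exact aggregation formulas (mean absolute value for strength, normalized signed sum for consistency) are assumptions consistent with the description.

```python
import numpy as np

def segment_metrics(token_attributions):
    """Aggregate per-token attribution scores for one segment into:
      - attribution strength: mean absolute attribution magnitude
        (how much the segment influences the final answer overall);
      - direction consistency: |sum| / sum(|.|), equal to 1.0 when all
        tokens share one sign, and closer to 0 when positive and
        negative attributions mix within the segment.
    Formulas are illustrative assumptions, not the paper's exact ones.
    """
    a = np.asarray(token_attributions, dtype=float)
    strength = np.abs(a).mean()
    consistency = abs(a.sum()) / (np.abs(a).sum() + 1e-12)
    return strength, consistency

# Mixed-sign attributions -> moderate consistency (reflective reasoning):
print(segment_metrics([0.9, -0.4, 0.7, -0.2]))
# Uniformly positive attributions -> consistency ~1.0 (shallow/uniform):
print(segment_metrics([0.5, 0.6, 0.4]))
```

Under this sketch, segments scoring high on strength but moderate on consistency would be the ones selected as "critical" for training.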
Abstract
Large Reasoning Models (LRMs) achieve strong reasoning performance by generating long chains of thought (CoTs), yet only a small fraction of these traces meaningfully contributes to answer prediction, while the majority contains repetitive or truncated content. Such output redundancy is further propagated after supervised fine-tuning (SFT), as models learn to imitate verbose but uninformative patterns, which can degrade performance. To address this, we incorporate integrated gradient attribution to quantify each token's influence on final answers and aggregate these scores into two segment-level metrics: (1) \textit{attribution strength} measures the overall attribution magnitude; and (2) \textit{direction consistency} captures whether tokens' attributions within a segment are uniformly positive or negative (high consistency), or a mixture of both (moderate consistency). Based on these two metrics, we propose a segment-level selective learning framework that identifies important segments with high attribution strength but moderate consistency, which indicate reflective rather than shallow reasoning. The framework then applies selective SFT on these important segments while masking the loss for unimportant ones. Experiments across multiple models and datasets show that our approach improves accuracy and output efficiency, enabling more effective learning from long reasoning traces~\footnote{Code and data are available at https://github.com/SiyuanWangw/SegmentSelectiveSFT}.
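The selective SFT step described above amounts to zeroing the training loss on tokens belonging to unimportant segments before averaging. The sketch below illustrates that masking logic with NumPy; the function name, the segment-id representation, and the choice to average only over unmasked tokens are assumptions for illustration, not the paper's exact training code.

```python
import numpy as np

def masked_sft_loss(token_losses, segment_ids, important_segments):
    """Selective SFT loss: average per-token loss only over tokens whose
    segment was judged important (high attribution strength, moderate
    direction consistency); loss on all other segments is masked out so
    the model does not learn to imitate them.
    Names and interface here are illustrative assumptions.
    """
    losses = np.asarray(token_losses, dtype=float)
    ids = np.asarray(segment_ids)
    # 1.0 for tokens in important segments, 0.0 elsewhere.
    mask = np.isin(ids, list(important_segments)).astype(float)
    # Average over unmasked tokens; guard against an empty mask.
    return (losses * mask).sum() / max(mask.sum(), 1.0)

# Four tokens from segments 0 and 1; only segment 1 is trained on:
print(masked_sft_loss([2.0, 1.0, 3.0, 1.0], [0, 0, 1, 1], {1}))  # -> 2.0
```

In an actual fine-tuning loop the same mask would multiply the per-token cross-entropy before reduction, leaving the forward pass untouched.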