SED-SFT: Selectively Encouraging Diversity in Supervised Fine-Tuning

📅 2026-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the issue of mode collapse induced by cross-entropy loss in supervised fine-tuning (SFT), which hampers subsequent reinforcement learning (RL) exploration efficiency. To mitigate this, the authors propose an adaptive diversity-encouraging mechanism that integrates a selective entropy regularization term and a masking strategy during SFT, enhancing diversity only at token positions with high exploration potential while preserving generation accuracy. This approach is embedded within a joint SFT–RL training framework, effectively alleviating the trade-off between the two stages. Evaluated on eight mathematical benchmarks, the method significantly improves response diversity; furthermore, when applied to Llama-3.2-3B-Instruct and Qwen2.5-Math-7B-Instruct, it yields average RL performance gains of 2.06 and 1.20 points, respectively, with negligible additional computational overhead.

📝 Abstract
Supervised Fine-Tuning (SFT) followed by Reinforcement Learning (RL) has emerged as the standard post-training paradigm for large language models (LLMs). However, the conventional SFT process, driven by Cross-Entropy (CE) loss, often induces mode collapse, where models over-concentrate on specific response patterns. This lack of distributional diversity severely restricts the exploration efficiency required for subsequent RL. While recent studies have attempted to improve SFT by replacing the CE loss, aiming to preserve diversity or refine the update policy, they fail to adequately balance diversity and accuracy, thereby yielding suboptimal performance after RL. To address the mode collapse problem, we propose SED-SFT, which adaptively encourages diversity based on the token exploration space. This framework introduces a selective entropy regularization term with a selective masking mechanism into the optimization objective. Extensive experiments across eight mathematical benchmarks demonstrate that SED-SFT significantly enhances generation diversity with negligible computational overhead compared with CE loss, yielding average improvements of 2.06 and 1.20 points in subsequent RL performance over standard CE-based baselines on Llama-3.2-3B-Instruct and Qwen2.5-Math-7B-Instruct, respectively. The code is publicly available at https://github.com/pppa2019/SED-SFT
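The abstract describes an objective that combines standard CE loss with an entropy bonus applied only at selectively masked token positions. The sketch below illustrates that general idea in plain Python; the entropy threshold, coefficient `beta`, and function names are illustrative assumptions, not the paper's actual masking criterion or hyperparameters (see the linked repository for the real implementation).

```python
# Illustrative sketch of a selective entropy-regularized SFT loss.
# The masking rule (entropy > threshold) and the coefficient beta are
# assumptions for demonstration; the paper defines its own criterion
# based on the token exploration space.
import math

def token_entropy(probs):
    """Shannon entropy of one token's predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def sed_style_loss(token_probs, target_ids, entropy_threshold=1.0, beta=0.1):
    """Per-token CE loss minus an entropy bonus that is applied only at
    positions whose predictive entropy exceeds a threshold, i.e. tokens
    with high exploration potential."""
    ce, bonus = 0.0, 0.0
    for probs, tgt in zip(token_probs, target_ids):
        ce += -math.log(probs[tgt])          # standard cross-entropy term
        h = token_entropy(probs)
        if h > entropy_threshold:            # selective mask: diversity here only
            bonus += h
    n = len(target_ids)
    return ce / n - beta * bonus / n
```

At peaked (low-entropy) positions the mask is off and the loss reduces to plain CE, preserving accuracy; at near-uniform positions the entropy bonus lowers the loss, discouraging the model from collapsing onto a single continuation.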
Problem

Research questions and friction points this paper is trying to address.

mode collapse
supervised fine-tuning
generation diversity
large language models
reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective Entropy Regularization
Mode Collapse
Supervised Fine-Tuning
Diversity Enhancement
Token Exploration Space