Selective Self-to-Supervised Fine-Tuning for Generalization in Large Language Models

📅 2025-02-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Supervised fine-tuning (SFT) of large language models often degrades generalization—improving downstream task performance at the cost of diminished broad-domain competence. To address this, the paper proposes Selective Self-to-Supervised Fine-Tuning (S3FT), which leverages the model's own correct responses as training targets. S3FT deploys a judge to identify correct model responses on the training set, fine-tunes on those responses directly, and falls back to the gold answer (or a semantically equivalent paraphrase of it) for the remaining samples. This design mitigates overfitting while preserving foundational capabilities. Experiments on mathematical reasoning, Python programming, and reading comprehension show that S3FT significantly outperforms standard SFT on the target tasks while roughly halving generalization degradation: the average performance drop on benchmarks such as MMLU and TruthfulQA falls from 4.4 to 2.5 points. Thus, S3FT achieves concurrent gains in both task-specific accuracy and cross-domain generalization.

📝 Abstract
Fine-tuning Large Language Models (LLMs) on specific datasets is a common practice to improve performance on target tasks. However, this performance gain often leads to overfitting, where the model becomes too specialized in either the task or the characteristics of the training data, resulting in a loss of generalization. This paper introduces Selective Self-to-Supervised Fine-Tuning (S3FT), a fine-tuning approach that achieves better performance than the standard supervised fine-tuning (SFT) while improving generalization. S3FT leverages the existence of multiple valid responses to a query. By utilizing the model's correct responses, S3FT reduces model specialization during the fine-tuning stage. S3FT first identifies the correct model responses from the training set by deploying an appropriate judge. Then, it fine-tunes the model using the correct model responses and the gold response (or its paraphrase) for the remaining samples. The effectiveness of S3FT is demonstrated through experiments on mathematical reasoning, Python programming and reading comprehension tasks. The results show that standard SFT can lead to an average performance drop of up to $4.4$ on multiple benchmarks, such as MMLU and TruthfulQA. In contrast, S3FT reduces this drop by half, i.e. $2.5$, indicating better generalization capabilities than SFT while performing significantly better on the fine-tuning tasks.
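The data-construction step described in the abstract—keep the model's own response when a judge deems it correct, otherwise fall back to the gold answer or a paraphrase—can be sketched as follows. This is a minimal illustration, not the authors' implementation; `model`, `judge`, and `paraphrase` are hypothetical callables standing in for the base LLM, the correctness judge, and the gold-response paraphraser.

```python
def build_s3ft_dataset(model, judge, paraphrase, train_set):
    """Construct S3FT training pairs from (query, gold_response) examples.

    model(query) -> model's generated response
    judge(query, response, gold) -> True if the response is correct
    paraphrase(gold) -> a semantically equivalent restating of the gold answer
    """
    pairs = []
    for query, gold in train_set:
        response = model(query)
        if judge(query, response, gold):
            # Correct self-generated response: train on the model's own output,
            # which stays close to its distribution and reduces specialization.
            pairs.append((query, response))
        else:
            # Incorrect response: fall back to the gold answer (or its paraphrase).
            pairs.append((query, paraphrase(gold)))
    return pairs
```

The resulting pairs would then be used for standard supervised fine-tuning; the selective substitution of self-generated targets is what distinguishes S3FT's data from plain SFT data.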
Problem

Research questions and friction points this paper is trying to address.

Overfitting in fine-tuning LLMs
Loss of generalization in SFT
Improving generalization with S3FT
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective Self-to-Supervised Fine-Tuning
Leverages multiple valid responses
Reduces model specialization