Patch the Distribution Mismatch: RL Rewriting Agent for Stable Off-Policy SFT

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the distributional shift between downstream task data and the pretraining distribution of large language models, a shift that often leads to catastrophic forgetting during supervised fine-tuning (SFT). To mitigate this, the paper formulates data rewriting as a policy learning problem and uses reinforcement learning to optimize the rewriting strategy, jointly aligning the generated question-answering distribution with the backbone model's natural QA-style distribution and enhancing response diversity, all under a hard task-consistency constraint. By moving beyond fixed templates and conditional sampling, the method constructs high-quality fine-tuning data that keeps downstream task performance on par with standard SFT while reducing catastrophic forgetting on non-downstream benchmarks by an average of 12.34%.

📝 Abstract
Large language models (LLMs) have made rapid progress, yet adapting them to downstream scenarios still commonly relies on supervised fine-tuning (SFT). When downstream data exhibit a substantial distribution shift from the model's prior training distribution, SFT can induce catastrophic forgetting. To narrow this gap, data rewriting has been proposed as a data-centric approach that rewrites downstream training data prior to SFT. However, existing methods typically sample rewrites from a prompt-induced conditional distribution, so the resulting targets are not necessarily aligned with the model's natural QA-style generation distribution. Moreover, reliance on fixed templates can lead to diversity collapse. To address these issues, we cast data rewriting as a policy learning problem and learn a rewriting policy that better matches the backbone's QA-style generation distribution while preserving diversity. Since distributional alignment, diversity, and task consistency are automatically evaluable but difficult to optimize end-to-end with differentiable objectives, we leverage reinforcement learning to optimize the rewrite distribution under reward feedback and propose an RL-based data-rewriting agent. The agent jointly optimizes QA-style distributional alignment and diversity under a hard task-consistency gate, thereby constructing a higher-quality rewritten dataset for downstream SFT. Extensive experiments show that our method achieves downstream gains comparable to standard SFT while reducing forgetting on non-downstream benchmarks by 12.34% on average. Our code is available at https://anonymous.4open.science/r/Patch-the-Prompt-Gap-4112.
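The reward structure described in the abstract, where alignment and diversity are optimized jointly but only when a hard task-consistency gate passes, could be sketched as follows. This is a minimal illustration, not the paper's implementation: the scorer functions (`alignment_score`, `diversity_score`, `is_task_consistent`) and the weights are hypothetical placeholders for whatever automatic evaluators the agent uses.

```python
def gated_reward(rewrite, original,
                 alignment_score, diversity_score, is_task_consistent,
                 w_align=0.7, w_div=0.3):
    """Hypothetical reward for an RL rewriting policy.

    A hard task-consistency gate zeroes the reward unless the rewrite
    preserves the original task; otherwise the reward is a weighted sum
    of QA-style distributional alignment and response diversity.
    """
    if not is_task_consistent(rewrite, original):
        return 0.0  # hard gate: inconsistent rewrites earn no reward
    return w_align * alignment_score(rewrite) + w_div * diversity_score(rewrite)


# Illustrative usage with stub scorers in [0, 1]:
r = gated_reward(
    rewrite="Rewritten QA pair",
    original="Original QA pair",
    alignment_score=lambda x: 0.5,
    diversity_score=lambda x: 1.0,
    is_task_consistent=lambda x, o: True,
)
```

In this shape, the gate acts as a reward mask rather than a soft penalty term, which matches the abstract's framing of task consistency as a hard constraint rather than one more weighted objective.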
Problem

Research questions and friction points this paper is trying to address.

distribution mismatch
catastrophic forgetting
data rewriting
generation diversity
supervised fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

reinforcement learning
data rewriting
distribution alignment
catastrophic forgetting
supervised fine-tuning