SkillFactory: Self-Distillation For Learning Cognitive Behaviors

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Base language models often lack cognitive skills such as answer verification and backtracking, which leads to poor generalization and performance degradation during reinforcement learning (RL) on complex tasks. Method: SkillFactory is a teacher-free self-distillation framework that operates during a supervised fine-tuning (SFT) stage prior to RL. It samples reasoning trajectories from the model itself, filters them with correctness signals, and rearranges them into skill-formatted "silver" cognitive-behavior datasets, explicitly injecting inductive biases for verification, backtracking, and other higher-order reasoning skills. Contribution: the resulting initialization makes subsequent RL training markedly more robust and stable. Experiments show that SkillFactory-initialized models generalize better to harder task variants and suffer less performance regression on out-of-domain tasks, all without external supervision or a teacher model.
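
A minimal sketch of how such silver traces might be assembled from the model's own samples is shown below. All function names, the trace wording, and the sample format are illustrative assumptions, not the paper's actual pipeline.

```python
import random

# "model" here is any callable mapping a question string to a dict of the
# form {"reasoning": str, "answer": str}; this interface is an assumption
# made for the sketch, not SkillFactory's real API.
def sample_trajectories(model, question, k=8):
    """Sample k reasoning trajectories from the model itself (no teacher)."""
    return [model(question) for _ in range(k)]

def build_backtracking_trace(question, wrong, right):
    """Rearrange two of the model's own samples into a backtracking-format trace."""
    return (
        f"Question: {question}\n"
        f"Attempt: {wrong['reasoning']}\n"
        f"Check: the answer {wrong['answer']} does not verify; backtracking.\n"
        f"Retry: {right['reasoning']}\n"
        f"Final answer: {right['answer']}"
    )

def make_silver_dataset(model, problems):
    """problems: iterable of (question, gold_answer) pairs."""
    dataset = []
    for question, gold in problems:
        samples = sample_trajectories(model, question)
        right = [s for s in samples if s["answer"] == gold]
        wrong = [s for s in samples if s["answer"] != gold]
        # Only questions where the model produced both a failed and a
        # successful trajectory yield a silver backtracking example.
        if right and wrong:
            trace = build_backtracking_trace(
                question, random.choice(wrong), random.choice(right)
            )
            dataset.append({"prompt": question, "completion": trace})
    return dataset
```

Fine-tuning on such traces is what "primes" the model: per the abstract, the traces may be imperfect, so the inductive bias comes from the skill format rather than from the quality of any individual sample.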

📝 Abstract
Reasoning models leveraging long chains of thought employ various cognitive skills, such as verification of their answers, backtracking, retrying with an alternate method, and more. Previous work has shown that when a base language model exhibits these skills, it can learn to leverage them when trained further with reinforcement learning (RL). How can we get models to leverage skills that aren't exhibited by base models? Our work, SkillFactory, is a method for fine-tuning models to roughly learn these skills during a supervised fine-tuning (SFT) stage prior to RL. Our approach does not rely on distillation from a stronger model, but instead uses samples from the model itself, rearranged to provide training data in the format of those skills. These "silver" SFT traces may be imperfect, but are nevertheless effective for priming a model to acquire skills during RL. Our evaluation shows that (1) starting from the SkillFactory SFT initialization helps a model generalize to harder variants of a task post-RL, despite lower performance pre-RL; (2) the cognitive skills are indeed used by the model; (3) RLed SkillFactory models are more robust to regression on out-of-domain tasks than RLed base models. Our work suggests that inductive biases learned prior to RL help models learn robust cognitive skill use.
Problem

Research questions and friction points this paper is trying to address.

Develop a method for teaching models new cognitive skills
Use self-generated data for supervised fine-tuning prior to reinforcement learning
Improve model generalization and robustness on complex tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

A self-distillation method for cognitive skill acquisition
Generates "silver" SFT traces from the model's own samples (see the template sketch below)
Primes the model for robust skill use before reinforcement learning
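
Read together, these bullets suggest a family of skill-format templates into which the model's own samples are slotted. The sketch below guesses at what two such templates could look like; the connective phrasing and function names are hypothetical, not the paper's actual formats.

```python
def verification_trace(sample):
    """Append an explicit self-check to a correct sample: the 'verification' skill."""
    return (
        f"{sample['reasoning']}\n"
        f"Let me verify this answer before committing... it checks out.\n"
        f"Final answer: {sample['answer']}"
    )

def retry_trace(wrong, right):
    """Splice a failed sample before a successful one: the 'retry with an
    alternate method' skill."""
    return (
        f"{wrong['reasoning']}\n"
        f"That approach didn't work; let me try a different method.\n"
        f"{right['reasoning']}\n"
        f"Final answer: {right['answer']}"
    )
```

Under this reading, a correct sample alone yields a verification trace, while a wrong/right pair yields a retry trace, so even partially failed sampling runs contribute usable silver training data.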