🤖 AI Summary
Small language models (SLMs) lack reliable intrinsic self-correction, which limits their use in reasoning tasks without external tools or assistance from larger models. Method: We propose Self-Taught Self-Correction (STaSC), a self-supervised, iterative fine-tuning algorithm in which an SLM generates its own correction samples, filters them for quality, and fine-tunes on the result. The recipe combines confidence-driven sample filtering, iterative instruction fine-tuning, self-distillation, and lightweight parameter-efficient adaptation, and requires no external supervision or auxiliary models. Contribution/Results: On standard question-answering benchmarks, STaSC significantly improves accuracy, providing empirical evidence that SLMs can learn end-to-end self-correction. All code and the resulting lightweight models are publicly released, offering a resource-efficient path to more robust inference in constrained environments.
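The generate→filter→fine-tune loop described above can be sketched in a few lines. This is an illustrative toy, not the authors' code: `ToyModel`, `stasc_sketch`, and all parameter names are hypothetical, and the "model" simply memorizes filtered correction traces so the loop's dynamics are visible end-to-end.

```python
import random

class ToyModel:
    """Hypothetical stand-in for an SLM: answers from a small vocabulary at
    random unless it has been fine-tuned on a correction trace for that
    question (a real SLM would be a trained network, not a lookup)."""
    def __init__(self, vocab):
        self.vocab = vocab
        self.memory = {}  # question -> answer learned from correction traces

    def answer(self, question):
        return self.memory.get(question, random.choice(self.vocab))

    def correct(self, question, initial):
        # Resample an answer; a real SLM would condition on the first attempt.
        return self.memory.get(question, random.choice(self.vocab))

    def finetune(self, traces):
        # Toy "fine-tuning": memorize the corrected answer for each question.
        for question, _initial, corrected in traces:
            self.memory[question] = corrected

def stasc_sketch(model, train_set, iterations=10, samples=20):
    """One possible STaSC-style loop: sample initial answers, sample
    self-corrections, keep only corrections that fix a wrong initial
    answer, fine-tune on the kept traces, and repeat."""
    for _ in range(iterations):
        kept = []
        for question, gold in train_set:
            initial = model.answer(question)
            for _ in range(samples):
                corrected = model.correct(question, initial)
                # Filter: keep a trace only if it turns a wrong answer right.
                if corrected == gold and initial != gold:
                    kept.append((question, initial, corrected))
        model.finetune(kept)  # the next iteration starts from the updated model
    return model
```

In the paper the fine-tuning step updates an actual SLM (with parameter-efficient adaptation); the lookup-table stand-in here only isolates the filtering-and-iteration dynamic that drives the self-improvement.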
📝 Abstract
Although large language models (LLMs) have achieved remarkable performance across various tasks, they remain prone to errors. A key challenge is enabling them to self-correct. While prior research has relied on external tools or large proprietary models, this work explores self-correction in small language models (SLMs) through iterative fine-tuning using solely self-generated data. We introduce the Self-Taught Self-Correction (STaSC) algorithm, which incorporates multiple algorithmic design choices. Experimental results on a question-answering task demonstrate that STaSC effectively learns self-correction, leading to significant performance improvements. Our analysis further provides insights into the mechanisms of self-correction and the impact of different design choices on learning dynamics and overall performance. To support future research, we release our user-friendly codebase and lightweight models.