🤖 AI Summary
This work addresses the absence of self-reflection and multi-step reasoning in base large language models. The authors propose ThinkTuning, a fine-tuning paradigm that, without knowledge distillation, uses implicit feedback (e.g., corrective comments on the student's thought trace and guiding hints) from a same-size teacher model during rollout generation to set up a classroom-like interactive training loop, eliciting latent reasoning and self-reflective behaviors in the student model. Built on the GRPO framework, ThinkTuning enables interactive reinforcement training, moving beyond conventional RL methods that merely amplify pre-existing capabilities. Experiments show consistent improvements across reasoning benchmarks: an average gain of 3.85% over zero-shot baselines, and gains of 2.08%, 2.23%, and 3.99% over vanilla GRPO on MATH-500, AIME, and GPQA-Diamond, respectively, supporting the method's effectiveness and generality in cultivating reasoning behaviors.
📝 Abstract
Recent advances in test-time scaling have led to the emergence of thinking LLMs that exhibit self-reflective behaviors and multi-step reasoning. While RL drives this self-improvement paradigm, a recent study (Gandhi et al., 2025) shows that RL alone does not truly instill these new reasoning abilities; it merely draws out behaviors already present in the base models. This raises a question: how can we train models that don't exhibit such thinking behaviors to develop them in the first place? To this end, we propose ThinkTuning, a GRPO-based interactive training approach in which we augment the rollouts of a student model with guidance from a teacher model. A simple idea from classroom practice inspires our method: a teacher poses a problem, lets the student try an answer, then gives corrective feedback, enough to point the mind in the right direction, before showing the solution. Each piece of feedback reshapes the student's thoughts, leading them toward the correct solution. Similarly, we find that this kind of implicit supervision through feedback from a teacher model of the same size improves the reasoning capabilities of the student model. In particular, our method shows an average improvement of 3.85% over zero-shot baselines across benchmarks, and on MATH-500, AIME, and GPQA-Diamond it shows improvements of 2.08%, 2.23%, and 3.99% over the vanilla-GRPO baseline. Source code is available at https://github.com/3rdAT/ThinkTuning.
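To make the training loop concrete, below is a minimal sketch of how teacher feedback might be injected into GRPO-style rollouts, under stated assumptions: it is not the authors' implementation, the functions `student_generate`, `teacher_feedback`, and `is_correct` are hypothetical stand-ins for real model and verifier calls, and only the group-normalized advantage computation follows the standard GRPO form.

```python
from statistics import mean, pstdev

# Hypothetical stand-ins for real model calls; in practice these would wrap
# the student and teacher LLMs (e.g., via an inference server).
def student_generate(prompt: str) -> str:
    """Student model produces a reasoning trace plus a final answer."""
    return f"<think>attempt for: {prompt}</think> answer"

def teacher_feedback(prompt: str, attempt: str) -> str:
    """Same-size teacher critiques the attempt and nudges it toward the solution."""
    return f"Feedback: reconsider your steps for '{prompt}'."

def is_correct(attempt: str, gold: str) -> bool:
    """Verifiable reward: check the final answer against the gold label."""
    return gold in attempt

def grpo_advantages(rewards: list[float]) -> list[float]:
    """Standard GRPO group-normalized advantages: (r - mean) / std."""
    mu, sigma = mean(rewards), pstdev(rewards) or 1.0
    return [(r - mu) / sigma for r in rewards]

def thinktuning_rollouts(prompt: str, gold: str, group_size: int = 4):
    """One GRPO group: sample student attempts, then augment incorrect
    ones with teacher feedback and a second student pass."""
    rollouts, rewards = [], []
    for _ in range(group_size):
        attempt = student_generate(prompt)
        if not is_correct(attempt, gold):
            # Classroom step: teacher feedback reshapes the student's
            # thoughts; the student continues from the augmented context.
            fb = teacher_feedback(prompt, attempt)
            attempt = student_generate(f"{prompt}\n{attempt}\n{fb}")
        rollouts.append(attempt)
        rewards.append(1.0 if is_correct(attempt, gold) else 0.0)
    return rollouts, grpo_advantages(rewards)
```

The design choice mirrored here is that the supervision is implicit: the teacher never hands over a distilled solution trace, but only steers the student's own rollout, which is then scored and reinforced like any other GRPO sample.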