🤖 AI Summary
Small language models (up to 3B parameters) suffer from limited reasoning capabilities. Method: This paper proposes a novel paradigm that replaces implicit natural language reasoning with explicit tool invocation. It decouples conventional chain-of-thought reasoning into structured, domain-specific language (DSL)-driven, multi-turn, stateful tool interactions, forming a closed-loop reasoning process with real-time state feedback; the entire pipeline is optimized end-to-end via supervised fine-tuning and reinforcement learning with verifiable reward signals. Contribution/Results: To our knowledge, this is the first approach that enables small models to dynamically extend reasoning depth through external computational resources without increasing parameter count. Evaluated on Python code repair, the method significantly improves both repair accuracy and sampling efficiency, supports efficient learning under dense reward supervision, and establishes a scalable, verifiable pathway for complex reasoning in lightweight models.
📝 Abstract
Recent advances have established a new machine learning paradigm based on scaling up compute at inference time as well as at training time. In that line of work, a combination of Supervised Fine-Tuning (SFT) on synthetic demonstrations and Reinforcement Learning with Verifiable Rewards (RLVR) is used to train Large Language Models to expend extra compute during inference in the form of "thoughts" expressed in natural language. In this paper, we propose to instead format these tokens as a multi-turn interaction trace with a stateful tool. At each turn, the new state of the tool is appended to the context of the model, whose job is to generate the tokens necessary to control the tool via a custom DSL. We benchmark this approach on the problem of repairing malfunctioning Python code, and show that this constrained setup enables faster sampling of experience and a denser reward signal, allowing even models of up to 3B parameters to learn how to proficiently expend additional compute on the task.
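To make the abstract's closed loop concrete, here is a minimal, hypothetical sketch of one turn of such a stateful tool interaction for Python code repair. The `RepairTool` class, the `REPLACE` command, and the test-fraction reward are illustrative assumptions of ours, not the paper's actual DSL or reward design; in the paper the model, not a hardcoded string, would emit the command.

```python
# Illustrative sketch (names and DSL are assumptions, not the paper's API):
# a closed-loop repair turn where code is edited via a DSL command, the
# tool returns its new state, and a verifiable reward is computed.

from dataclasses import dataclass


@dataclass
class RepairTool:
    """Stateful tool holding the source code under repair."""
    lines: list

    def apply(self, command: str) -> str:
        """Apply one DSL command and return the new state as feedback.

        Hypothetical DSL: 'REPLACE <n> <text>' rewrites line n (1-indexed).
        """
        op, rest = command.split(" ", 1)
        if op == "REPLACE":
            n, text = rest.split(" ", 1)
            self.lines[int(n) - 1] = text
        return "\n".join(self.lines)


def verifiable_reward(source: str, tests) -> float:
    """Dense, verifiable reward: fraction of unit tests that pass."""
    env = {}
    try:
        exec(source, env)
    except Exception:
        return 0.0
    return sum(1 for t in tests if t(env)) / len(tests)


# Malfunctioning program: range(n) omits n itself.
buggy = ["def total(n):", "    return sum(range(n))"]
tests = [lambda env: env["total"](3) == 6,
         lambda env: env["total"](1) == 1]

tool = RepairTool(lines=buggy)

# One turn: the model would emit this command; here it is hardcoded.
state = tool.apply("REPLACE 2     return sum(range(n + 1))")
reward = verifiable_reward(state, tests)  # 1.0 once all tests pass
```

In training, the returned `state` would be appended to the model's context before the next turn, and `reward` would supervise the policy via RLVR; because each turn yields test feedback, the signal is denser than a single end-of-trace verdict.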