🤖 AI Summary
To address the limited multi-step logical reasoning capabilities of small language models (SLMs), this paper proposes SMART, a novel SLM–LLM collaborative reasoning framework. Methodologically, SMART introduces three key innovations: (1) an uncertainty-aware dynamic reasoning augmentation mechanism that invokes large language models (LLMs) on demand to generate precise, context-sensitive cognitive scaffolding prompts; (2) a formulation of structured reasoning as optimal policy search, replacing inefficient brute-force sampling with guided exploration; and (3) the integration of policy-guided decoding with math-specific fine-tuning. Evaluated on multiple mathematical reasoning benchmarks, SMART substantially improves SLM performance, approaching the accuracy of standalone LLMs on several tasks. This work is the first to empirically demonstrate that lightweight models can solve complex reasoning problems through targeted external guidance, validating the feasibility of augmenting SLMs with cognitively grounded, LLM-mediated scaffolding.
📝 Abstract
The limited reasoning capabilities of small language models (SLMs) cast doubt on their suitability for tasks demanding deep, multi-step logical deduction. This paper introduces Small Reasons, Large Hints (SMART), a framework that selectively augments SLM reasoning with targeted guidance from large language models (LLMs). Inspired by the concept of cognitive scaffolding, SMART employs a score-based evaluation to identify uncertain reasoning steps and injects corrective LLM-generated reasoning only when necessary. By framing structured reasoning as an optimal policy search, our approach steers the reasoning trajectory toward correct solutions without exhaustive sampling. Our experiments on mathematical reasoning datasets demonstrate that targeted external scaffolding significantly improves performance, paving the way for the collaborative use of SLMs and LLMs to tackle complex reasoning tasks that SLMs alone currently cannot solve.
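The core control loop described above, where a score-based uncertainty check decides step by step whether to accept the SLM's output or request an LLM scaffold, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `slm_step`, `llm_hint`, the confidence threshold, and the toy two-step problem are all assumptions made for the sake of a runnable example.

```python
THRESHOLD = 0.7  # confidence below this triggers LLM scaffolding (assumed value)

def slm_step(problem, history):
    """Hypothetical SLM stub: proposes the next reasoning step plus a
    confidence score. A real system would use model log-probabilities or
    a learned scorer here."""
    table = {
        0: ("Compute 12 * 7 = 84", 0.95),   # confident step: kept as-is
        1: ("Add 84 + 18 = 112", 0.40),     # uncertain (and wrong) step
    }
    return table[len(history)]

def llm_hint(problem, history):
    """Hypothetical LLM stub: returns a corrective scaffold for the
    uncertain step."""
    return "Add 84 + 18 = 102"

def smart_solve(problem, n_steps=2, threshold=THRESHOLD):
    """Selective scaffolding loop: accept confident SLM steps, and only
    query the LLM when the SLM's confidence falls below the threshold."""
    history, hints_used = [], 0
    for _ in range(n_steps):
        step, conf = slm_step(problem, history)
        if conf < threshold:            # uncertainty check
            step = llm_hint(problem, history)
            hints_used += 1
        history.append(step)
    return history, hints_used

history, hints_used = smart_solve("12 * 7 + 18")
print(history, hints_used)
```

In this sketch only the second, low-confidence step incurs an LLM call, which mirrors the paper's goal of injecting external guidance selectively rather than delegating the whole trajectory to the larger model.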