MentorCollab: Selective Large-to-Small Inference-Time Guidance for Efficient Reasoning

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high inference cost of large language models and the limited multi-step reasoning capability of smaller, more efficient models. Existing collaborative approaches often suffer from redundant reasoning and inadequate error correction. To overcome these limitations, the authors propose a sparse, selective inference-time guidance mechanism: at randomly sampled critical positions, a large model generates short lookahead segments, which are then evaluated by a lightweight verifier to decide whether to adopt them. This approach requires only minimal intervention (on average, 18.4% of tokens are generated by the large model) yet substantially enhances the small model's complex reasoning performance. Evaluated across 15 model pairs and three tasks, the method improves accuracy in 12 configurations, yielding an average gain of 3.0% and up to 8.0% in the best case.

📝 Abstract
Large reasoning models (LRMs) achieve strong performance by producing long chains of thought, but their inference costs are high and they often generate redundant reasoning. Small language models (SLMs) are far more efficient, yet struggle on multi-step reasoning tasks. A natural idea is to let a large model guide a small one at inference time as a mentor, yet existing collaboration methods often promote imitation, resulting in verbose reasoning without consistent error correction. We propose MentorCollab, an inference-time collaboration method in which an LRM selectively and sparsely guides an SLM, rather than taking over generation. At randomly sampled token positions, we probe for divergences between the two models and use a lightweight verifier to decide whether the SLM should follow a short lookahead segment from its mentor or continue on its own. Across 15 SLM--LRM pairs and 3 domains (math reasoning, general knowledge, and commonsense reasoning), our method improves performance in 12 settings, with average gains of 3.0% and up to 8.0%, while having only 18.4% of tokens generated by the expensive mentor model on average. We find that short segments and selective probing are sufficient for effective collaboration. Our results show that selective inference-time guidance restores large-model reasoning ability without substantial inference overhead.
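The decoding loop described in the abstract can be sketched roughly as follows. This is a minimal toy illustration, not the paper's implementation: the model and verifier functions, parameter names (`probe_rate`, `lookahead_len`), and the always-accepting verifier are all placeholder assumptions standing in for the real SLM, LRM, and learned verifier.

```python
import random

# Toy stand-ins (hypothetical interfaces, not the paper's API):
def slm_next_token(context):
    return "s"          # small model emits its own next token

def lrm_lookahead(context, k):
    return ["L"] * k    # mentor proposes a short lookahead segment

def verifier_accepts(context, segment):
    return True         # toy verifier: always adopt the mentor segment

def mentor_collab_decode(max_tokens=20, probe_rate=0.2,
                         lookahead_len=4, seed=0):
    """Sparse selective guidance: at randomly sampled positions, ask the
    mentor for a short lookahead; adopt it only if the verifier approves,
    otherwise let the small model continue on its own."""
    rng = random.Random(seed)
    output, mentor_tokens = [], 0
    while len(output) < max_tokens:
        if rng.random() < probe_rate:       # randomly sampled probe position
            segment = lrm_lookahead(output, lookahead_len)
            if verifier_accepts(output, segment):
                output.extend(segment)      # follow the mentor's segment
                mentor_tokens += len(segment)
                continue
        output.append(slm_next_token(output))  # SLM continues on its own
    return output, mentor_tokens / len(output)

tokens, mentor_fraction = mentor_collab_decode()
```

Because probing is sparse (`probe_rate` well below 1) and segments are short, the mentor contributes only a small fraction of tokens, mirroring the 18.4% average reported in the abstract.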
Problem

Research questions and friction points this paper is trying to address.

inference-time guidance
large reasoning models
small language models
efficient reasoning
model collaboration
Innovation

Methods, ideas, or system contributions that make the work stand out.

selective guidance
inference-time collaboration
large-to-small model
sparse prompting
reasoning efficiency