How to Steal Reasoning Without Reasoning Traces

📅 2026-03-07
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Large language models often output only final answers while concealing their full reasoning processes, limiting the reusability and transferability of their reasoning capabilities. This work proposes a trajectory inversion model that synthesizes high-quality reasoning trajectories using only the target model's inputs, answers, and optional summaries, enabling effective supervised fine-tuning of student models. For the first time, this approach demonstrates that the reasoning capabilities of black-box models can be successfully "extracted" and transferred without access to original reasoning chains, thereby eliminating the dependency on visible intermediate reasoning steps. Experimental results show that Qwen-2.5-7B-Instruct fine-tuned with inverted trajectories achieves substantial performance gains, with accuracy improving from 56.8% to 77.6% on MATH500 and from 11.7% to 42.3% on JEEBench.

๐Ÿ“ Abstract
Many large language models (LLMs) use reasoning to generate responses but do not reveal their full reasoning traces (a.k.a. chains of thought), instead outputting only final answers and brief reasoning summaries. To demonstrate that hiding reasoning traces does not prevent users from "stealing" a model's reasoning capabilities, we introduce trace inversion models that, given only the inputs, answers, and (optionally) reasoning summaries exposed by a target model, generate detailed, synthetic reasoning traces. We show that (1) traces synthesized by trace inversion have high overlap with the ground-truth reasoning traces (when available), and (2) fine-tuning student models on inverted traces substantially improves their reasoning. For example, fine-tuning Qwen-2.5-7B-Instruct on traces inverted from the answers and summaries of GPT-5 mini, a commercial black-box LLM, improves its performance from 56.8% to 77.6% on MATH500 and from 11.7% to 42.3% on JEEBench, compared to fine-tuning on just the answers and summaries.
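The pipeline the abstract describes can be sketched as two data-formatting steps: condition a trace inversion model on the target model's visible outputs (input, final answer, optional reasoning summary), then package each synthesized trace as a supervised fine-tuning example for the student. This is a minimal illustrative sketch, not the authors' code; the function names, prompt templates, and the stand-in trace are all hypothetical.

```python
# Hypothetical sketch of the trace-inversion data pipeline.
# Prompt wording and example schema are illustrative assumptions.

def build_inversion_prompt(question: str, answer: str,
                           summary: str = "") -> str:
    """Format the black-box model's exposed outputs as conditioning
    for a trace inversion model."""
    parts = [f"Question: {question}", f"Final answer: {answer}"]
    if summary:
        parts.append(f"Reasoning summary: {summary}")
    parts.append("Reconstruct a detailed step-by-step reasoning trace "
                 "that ends at the final answer.")
    return "\n".join(parts)


def build_sft_example(question: str, synthetic_trace: str,
                      answer: str) -> dict:
    """Package an inverted trace as a supervised fine-tuning example
    for the student model: trace first, final answer last."""
    return {
        "prompt": question,
        "completion": f"{synthetic_trace}\nAnswer: {answer}",
    }


# Toy usage: the "inverted" trace here is a fixed string standing in
# for the output of a trained trace inversion model.
question = "What is 12 * 13?"
prompt = build_inversion_prompt(
    question, "156", summary="Multiply 12 by 10 and by 3, then add."
)
trace = "12 * 10 = 120; 12 * 3 = 36; 120 + 36 = 156."
example = build_sft_example(question, trace, "156")
```

In the paper's setting, a corpus of such examples is what the student (e.g. Qwen-2.5-7B-Instruct) is fine-tuned on, in place of the answer-and-summary-only baseline.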
Problem

Research questions and friction points this paper is trying to address.

reasoning traces
large language models
chain of thought
model stealing
black-box LLM
Innovation

Methods, ideas, or system contributions that make the work stand out.

trace inversion
reasoning extraction
chain-of-thought
model stealing
synthetic reasoning